Let's see how this goes.
- Update references to the new repo
- Use rrsync on the server, meaning make-doc.sh uploads relative to the
website root.
- Bump Gradle wrapper to 7.2. Not related to this change, but possibly
fixes running under Java 16. Possibly.
This spins up a Minecraft instance (much like we do for the server) and
instructs the client to take screenshots at particular times. We then
compare those screenshots and assert they match (modulo some small
delta).
- Add a basic problem matcher for illuaminate errors.
- Add a script (tools/parse-reports.py) which parses the XML reports
generated by checkstyle and junit, extracts source locations, and
emits them in a manner which can be consumed by another set of
matchers.
This should make it a little easier to see problems for folks who just
rely on CI to test things (though also, please don't do this if you can
help it).
This uses pre-commit [1] to check patches are well-formed and to run
several linters on them. We currently do some boring things (check files
are syntactically valid) as well as some project-specific ones:
- Run illuaminate on the Lua files
- Run checkstyle on Java
[1]: https://pre-commit.com/
More importantly, `./gradlew check` actually runs the in-game tests,
which makes the CI steps look a little more sensible again.
Somewhat depressing that one of the longest files (15th) in CC:T is the
build script.
Name a more iconic duo than @SquidDev and over-engineered test
frameworks.
This uses Minecraft's test core[1] plus a home-grown framework to run
tests against computers in-world.
The general idea is:
- Build a structure in game.
- Save the structure to a file. This will be spawned in every time the
test is run.
- Write some code which asserts the structure behaves in a particular
way. This is done in Kotlin (shock, horror), as coroutines give us a
nice way to run asynchronous code while still running on the main
thread.
As with all my testing efforts, I still haven't actually written any
tests! It'd be good to go through some of the historic ones and write
some tests though. Turtle block placing and computer redstone
interactions are probably a good place to start.
[1]: https://www.youtube.com/watch?v=vXaWOJTCYNg
Provides a basic interface for running examples on tweaked.cc. This is probably
janky as anything, but it works on my machine.
This is the culmination of 18 months of me building far too much infrastructure
(copy-cat, illuaminate), so that's nice I guess.
I should probably get out more.
illuaminate does not handle Java files, for obvious reasons. In order to
get around that, we have a series of stub files within /doc/stub which
mirror the Java ones. While this works, it has a few problems:
- The link to source code does not work - it just links to the stub
file.
- There's no guarantee that documentation remains consistent with the
Java code. This change found several methods which were incorrectly
documented beforehand.
We now replace this with a custom Java doclet[1], which extracts doc
comments from @LuaFunction-annotated methods and generates stub files
from them. These also contain a @source annotation, which allows us to
correctly link them back to the original Java code.
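The generated stubs are just Lua files illuaminate can read, with
LDoc-style comments. A rough, hand-written approximation of the shape
(the function, tags and path here are illustrative, not copied from the
real output):

    --- Get the label of this computer.
    --
    -- @treturn string|nil The label, or nil if none has been set.
    -- @source src/main/java/dan200/computercraft/core/apis/OSAPI.java
    function getComputerLabel() end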
There are some issues with this which have yet to be fixed. However, I
don't think any of them are major blockers right now:
- The custom doclet relies on Java 9 - I think it's /technically/
possible to do this on Java 8, but the API is significantly uglier.
This means that we need to run javadoc on a separate JVM.
This is possible, and it works locally and on CI, but is definitely
not a nice approach.
- illuaminate now requires the doc stubs to be generated in order for
the linter to pass, which does make running the linter locally much
harder (especially given the above bullet point).
We could notionally include the generated stubs (or at least a cut
down version of them) in the repo, but I'm not 100% sure about that.
[1]: https://docs.oracle.com/javase/9/docs/api/jdk/javadoc/doclet/package-summary.html
- Use jacoco for Java-side coverage. Our Java coverage is /terrible/
(~10%), as we only really test the core libraries. Still a good thing
to track for regressions though.
- mcfly now tracks Lua-side coverage. This works in several stages:
- Replace loadfile to include the whole path
- Add a debug hook which just tracks filename->(lines->count). This
is then submitted to the Java test runner.
- On test completion, we emit a luacov.report.out file.
As the debug hook is inserted by mcfly, this does not include any
computer startup (such as loading apis, or the root of bios.lua), even
though that code is executed.
This would be possible to do (for instance, inject a custom header
into bios.lua). However, we're not actually testing any of the
behaviour of startup (aside from "does it not crash"), so I'm not
sure whether to include it or not. Something I'll most likely
re-evaluate.
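For the curious, the hook itself is nothing fancy. A minimal sketch of
the idea (not the actual mcfly code, which also ships the counts over
to the Java test runner and writes luacov.report.out):

    -- Sketch of a luacov-style line hook. Relies on loadfile having
    -- been patched so chunk names contain the full path.
    local coverage = {} -- file name -> { line number -> hit count }

    debug.sethook(function(_, line)
        local info = debug.getinfo(2, "S")
        local source = info and info.source or ""
        if source:sub(1, 1) == "@" then -- chunk was loaded from a file
            local file = source:sub(2)
            local lines = coverage[file] or {}
            coverage[file] = lines
            lines[line] = (lines[line] or 0) + 1
        end
    end, "l")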
This adds documentation comments to many of CC's Lua APIs, and
a couple of the Java ones, through the use of stubs. We then
export these to HTML using illuaminate [1] and upload them to our
documentation site [2].
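The comments themselves use illuaminate's LDoc-like syntax. A
simplified, made-up example of the style (not a verbatim excerpt from
the rom):

    --- Pause execution for the given number of seconds.
    --
    -- @tparam number time The number of seconds to sleep for.
    -- @usage sleep(2)
    function sleep(time)
        local timer = os.startTimer(time or 0)
        repeat
            local _, id = os.pullEvent("timer")
        until id == timer
    end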
Uploads currently occur on pushes to master and any release/tag. The
site is entirely static - there is no way to switch between versions,
etc... but hopefully we can improve this in the future.
[1]: https://github.com/SquidDev/illuaminate/
[2]: https://tweaked.cc/
We now use illuaminate[1]'s linting facilities to check the rom and
bios.lua for a couple of common bugs and other problems.
Right now this doesn't detect any especially important bugs, though it
has caught lots of small things (unused variables, some noisy code). In
the future, the linter will grow in scope and features, which should
allow us to be stricter and catch most issues.
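To give a rough idea of the sort of thing it flags today (an invented
snippet, not one of the actual fixes):

    -- Invented example: `label` is assigned but never read, which the
    -- unused-variable lint will report.
    local function describe(item)
        local label = item.name .. "!"
        return ("%d x %s"):format(item.count, item.name)
    end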
As a fun aside, we started off with ~150 bugs, and illuaminate was able
to fix all but 30 of them, which is pretty neat.
[1]: https://github.com/SquidDev/illuaminate
- Add a link to Discord and the forums. Oh goodness, I hope this
doesn't count as making things official.
I've had a couple of people email me with support requests, which is
/fine/, but there are better channels for it!
- Add a couple of PR templates
- Ask people to write tests for CraftOS changes. This is a standard
I'm trying to impose on myself, so seems reasonable to impose on
everyone. Sorry!
- Require the same level of explanation for PRs as we do for issues.
- Try to expand on the feature request "rationale" section a little.
Just trying to explain my process a little bit more.
I'm not entirely a fan of massive templates, but there have
been a couple of lacklustre issues recently, so it's probably
good to formalise my guidelines.