This is simply exposed as a table from tag -> true. While this is less
natural than an array, it allows for easy testing of whether a tag is
present.
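As a rough illustration of the shape involved (the helper below is hypothetical, not the actual implementation), the Java side just assembles a map from tag name to true:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    final class TagTableExample {
        private TagTableExample() {}

        // Build the tag -> true table described above from a set of tag names,
        // so Lua code can simply do `if tags["minecraft:logs"] then ... end`.
        static Map<String, Boolean> toTagTable(Set<String> tagNames) {
            Map<String, Boolean> tags = new HashMap<>();
            for (String tag : tagNames) tags.put(tag, true);
            return tags;
        }
    }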
Closes #461
- Use texture over texture2D - the latter was deprecated in GLSL 1.30.
- Cache the TBO buffer - this saves an allocation when monitors update
  (see the sketch below).
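A minimal sketch of that caching (field and method names are illustrative, not the renderer's actual ones): keep one direct buffer around and only reallocate when it's too small.

    import java.nio.ByteBuffer;

    final class TboBufferCache {
        private ByteBuffer cached;

        // Return a buffer of at least `size` bytes, reusing the previous one where
        // possible so a monitor update does not allocate a fresh buffer every time.
        ByteBuffer getBuffer(int size) {
            if (cached == null || cached.capacity() < size) {
                cached = ByteBuffer.allocateDirect(size);
            }
            cached.clear();
            return cached;
        }
    }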
Closes #455. While the rest of the PR has some nice changes, it
performs significantly worse on my system.
This moves monitor networking into its own packet, rather than serialising
using NBT. This allows us to be more flexible with how monitors are
serialised.
We now compress terminal data using gzip. This reduces the packet size
of a max-sized monitor from ~25kb to as little as 100b.
On my test set of images (what I would consider to be the extreme end of
the "reasonable" case), we have packets from 1.4kb bytes up to 12kb,
with a mean of 6kb. Even in the worst case, this is a 2x reduction in
packet size.
While this is a fantastic win for the common case, it is not abuse-proof.
One can create a terminal with high-entropy (and so incompressible)
contents, which will still be close to the original packet size.
In order to prevent any other abuse, we also limit the amount of monitor
data a client can possibly receive to 1MB (configurable).
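The compression itself is nothing exotic - the sketch below uses plain java.util.zip and is only meant to show the idea, not the mod's actual networking code:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    final class TerminalCompression {
        private TerminalCompression() {}

        // Gzip an already-encoded terminal blob. Highly repetitive terminals (such
        // as a blank max-sized monitor) shrink dramatically, while high-entropy
        // contents stay close to their original size - hence the cap above.
        static byte[] compress(byte[] terminalData) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
                gzip.write(terminalData);
            }
            return out.toByteArray();
        }
    }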
It should be release quality in all honesty[^1], but let's leave it a
few days to see if any issues trickle in.
[^1]: Well, aside from upside-down turtles!
timeout, max_upload, max_download and max_websocket_message may now be
configured on a domain-by-domain basis. This uses the same system that
we use for the block/allow-list from before:
Example:
[[http.rules]]
host = "*"
action = "allow"
max_upload = 4194304
max_download = 16777216
timeout = 30000
This registers IPeripheral as a capability. As a result, all (Minecraft
facing) functionality operates using LazyOptional<_>s instead.
Peripheral providers should now return a LazyOptional<IPeripheral> too.
Hopefully this will allow custom peripherals to mark themselves as
invalid (say, because a dependency has changed).
While peripheral providers are somewhat redundant, they still have their
uses. If a peripheral is applied to a large number of blocks (for
instance, all inventories) then using capabilities does incur some
memory overhead.
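For instance, a provider now looks something like the sketch below. The exact signature is my reading of the change, and ExampleBlocks/ExamplePeripheral are placeholders, so treat this as an outline rather than the definitive API:

    import dan200.computercraft.api.peripheral.IPeripheral;
    import dan200.computercraft.api.peripheral.IPeripheralProvider;
    import net.minecraft.util.Direction;
    import net.minecraft.util.math.BlockPos;
    import net.minecraft.world.World;
    import net.minecraftforge.common.util.LazyOptional;

    public class ExamplePeripheralProvider implements IPeripheralProvider {
        @Override
        public LazyOptional<IPeripheral> getPeripheral(World world, BlockPos pos, Direction side) {
            // Only attach to our (hypothetical) block. An empty LazyOptional
            // means "no peripheral here"; a populated one can later be
            // invalidated if the peripheral's dependencies change.
            if (world.getBlockState(pos).getBlock() != ExampleBlocks.EXAMPLE_BLOCK) {
                return LazyOptional.empty();
            }
            return LazyOptional.of(() -> new ExamplePeripheral(world, pos));
        }
    }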
We also make the following changes based on the above:
- Remove the default implementation for IWiredElement, migrating the
definition to a common "Capabilities" class.
- Remove IPeripheralTile - we'll exclusively use capabilities now.
Absurdly this is the most complex change, as all TEs needed to be
migrated too.
I'm not 100% sure of the correctness of these changes so far - I've
tested it pretty well, but blocks with more complex peripheral logic
(wired/wireless modems and turtles) are still a little messy.
- Remove the "command block" peripheral provider, attaching a
capability instead.
When creating a peripheral or custom Lua object, one must implement two
methods:
- getMethodNames(): String[] - Returns the names of the methods
- callMethod(int, ...): Object[] - Invokes the method using an index in
the above array.
This has a couple of problems:
- It's somewhat unwieldy to use - you need to keep track of array
indices, which leads to ugly code.
- Functions which yield (for instance, those which run on the main
thread) are blocking. This means we need to spawn new threads for
each CC-side yield.
We replace this system with a few changes:
- @LuaFunction annotation: One may annotate a public instance method
with this annotation. This then exposes a peripheral/lua object
method.
Furthermore, this method can accept and return a variety of types,
which often makes functions cleaner (e.g. one can return an int rather
than an Object[], and specify an int argument rather than an
Object[]). See the sketch after this list for an example.
- MethodResult: Instead of returning an Object[] and having blocking
yields, functions return a MethodResult. This either contains an
immediate return, or an instruction to yield with some continuation
to resume with.
MethodResult is then interpreted by the Lua runtime (i.e. Cobalt),
rather than our weird bodgey hacks before. This means we no longer
spawn new threads when yielding within CC.
- Methods accept IArguments instead of a raw Object array. This has a
few benefits:
- Consistent argument handling - people no longer need to use
ArgumentHelper (as it doesn't exist any more!), or even be aware of its
existence - you're simply forced into using IArguments.
- More efficient code in some cases. We provide a Cobalt-specific
implementation of IArguments, which avoids the boxing/unboxing when
handling numbers and binary strings.
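A sketch of what a peripheral might look like under the new system. The peripheral itself is made up, but it shows the pieces described above - @LuaFunction, IArguments and MethodResult - fitting together:

    import dan200.computercraft.api.lua.IArguments;
    import dan200.computercraft.api.lua.LuaException;
    import dan200.computercraft.api.lua.LuaFunction;
    import dan200.computercraft.api.lua.MethodResult;
    import dan200.computercraft.api.peripheral.IPeripheral;

    public class ExamplePeripheral implements IPeripheral {
        @Override
        public String getType() {
            return "example";
        }

        @Override
        public boolean equals(IPeripheral other) {
            return this == other;
        }

        // Plain Java types are converted automatically: no Object[] juggling
        // and no array indices to keep track of.
        @LuaFunction
        public final int add(int x, int y) {
            return x + y;
        }

        // IArguments gives consistent argument handling, and MethodResult can
        // express either an immediate return or a yield with a continuation.
        @LuaFunction(mainThread = true)
        public final MethodResult greet(IArguments args) throws LuaException {
            String name = args.getString(0);
            return MethodResult.of("Hello, " + name);
        }
    }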
- cc.pretty.pretty now accepts two additional options:
- function_args: Show function arguments.
- function_source: Show where functions are defined.
- Expose the two options as lua.* settings (defaulting function_args to
true, and function_source to false).
These are then used in the Lua REPL.
Closes #361
- Use jacoco for Java-side coverage. Our Java coverage is terrible
(~10%), as we only really test the core libraries. Still a good thing
to track for regressions though.
- mcfly now tracks Lua-side coverage. This works in several stages:
- Replace loadfile to include the whole path
- Add a debug hook which just tracks filename->(lines->count). This
is then submitted to the Java test runner (sketched below).
- On test completion, we emit a luacov.report.out file.
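The data the hook submits is roughly a map of filename to per-line hit counts. A sketch of the aggregation on the Java side (the real runner and the luacov report writer are not shown):

    import java.util.HashMap;
    import java.util.Map;

    final class LuaCoverage {
        // filename -> (line number -> hit count), as reported by the debug hook.
        private final Map<String, Map<Integer, Integer>> files = new HashMap<>();

        // Merge one test's counts into the aggregate, which is written out as
        // luacov.report.out once all tests have finished.
        void add(String file, int line, int count) {
            files.computeIfAbsent(file, f -> new HashMap<>())
                .merge(line, count, Integer::sum);
        }
    }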
As the debug hook is inserted by mcfly, this does not include any
computer startup (such as loading APIs, or the root of bios.lua), even
though those are executed.
This would be possible to do (for instance, inject a custom header
into bios.lua). However, we're not actually testing any of the
behaviour of startup (aside from "does it not crash"), so I'm not
sure whether to include it or not. Something I'll most likely
re-evaluate.
`local varname = value` results in `varname` being inaccessible in
the next REPL input. This is often unintended and can lead to confusing
behaviour. We produce a warning when this occurs.
This uses the system described in #409 to render monitors in a more
efficient manner.
Each monitor is backed by a texture buffer object (TBO) which contains
a relatively compact encoding of the terminal state. This is then
rendered using a shader, which consumes the TBO and uses it to index
into the main font texture.
As we're transmitting significantly less data to the GPU (only 3 bytes
per character), this effectively reduces any update lag to 0. FPS appears
to be up by a small fraction (10-15fps on my machine, to ~110), possibly
as we're now only drawing a single quad (though doing much more work in
the shader).
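To make "3 bytes per character" concrete, here's a rough sketch of the sort of encoding involved. The cell layout (character index, then foreground and background colour) is my summary of the scheme, and the types are deliberately generic rather than the renderer's real ones:

    import java.nio.ByteBuffer;

    final class TerminalEncoder {
        private TerminalEncoder() {}

        // Pack a width x height terminal into 3 bytes per cell: the character's
        // index in the font texture, the foreground colour and the background
        // colour. The shader reads this buffer via the TBO to draw each cell.
        static ByteBuffer encode(int width, int height, char[] chars, byte[] fg, byte[] bg) {
            ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 3);
            for (int i = 0; i < width * height; i++) {
                buffer.put((byte) chars[i]);
                buffer.put(fg[i]);
                buffer.put(bg[i]);
            }
            buffer.flip();
            return buffer;
        }
    }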
On my laptop, with its Intel integrated graphics card, I'm able to draw
120 full-sized monitors (with an effective resolution of 3972 x 2330) at
a consistent 60fps. Updates still cause a slight spike, but we always
remain above 30fps - a significant improvement over VBOs, where updates
would go off the chart.
Many thanks to @Lignum and @Lemmmy for devising this scheme, and helping
test and review it! ♥
Unknown blit colours, such as " ", will be translated to black for the
background or white for the foreground. This restores the behaviour from
before #412.
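A hedged sketch of the fallback (the method is illustrative): blit colours are the hex digits 0-f, where "0" is white and "f" is black, and anything else now falls back to a default rather than erroring.

    final class BlitColours {
        private BlitColours() {}

        // Parse a single blit colour character, falling back to `def` for anything
        // which is not a hex digit - e.g. parse(' ', 0xF) gives black for the
        // background, and parse(' ', 0x0) gives white for the foreground.
        static int parse(char c, int def) {
            int digit = Character.digit(c, 16);
            return digit >= 0 ? digit : def;
        }
    }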
- Write to a PacketBuffer instead of generating an NBT tag. This is
then converted to an NBT byte array when we send it across the network.
- Pack background/foreground colours into a single byte (see the sketch
  below).
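Both colours fit into one byte because each is one of 16 values - a sketch of the packing (not necessarily the exact bit layout used):

    final class ColourPacking {
        private ColourPacking() {}

        // High nibble = background, low nibble = foreground.
        static byte pack(int background, int foreground) {
            return (byte) ((background << 4) | (foreground & 0xF));
        }

        static int background(byte packed) {
            return (packed >> 4) & 0xF;
        }

        static int foreground(byte packed) {
            return packed & 0xF;
        }
    }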
This derives from some work I did back in 2017, and some of the changes
made/planned in #409. However, this patch does not change how terminals
are represented; it simply makes the transfer more compact.
This makes the patch incredibly small (100 lines!), but also limited in
what improvements it can make compared with #409. We send 26626 bytes
for a full-sized monitor. While this is a 2x improvement over the
previous 58558 bytes, there's still a lot of room for improvement.