- Bump Forge version to latest RB.
- Generate an 8-bit audio stream again, as we no longer need to be
compatible with MC's existing streams.
No functionality changes, just mildly less hacky.
GlStateManager.glDeleteBuffers clears a buffer before deleting it on
Linux - I assume otherwise there's memory leaks on some drivers? - which
clobbers BufferUploader's cache. Roll our own version which resets the
cache when needed.
Also always reset the cache when deleting/creating a DirectVertexBuffer.
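Roughly, the replacement looks like the sketch below. This is a minimal illustration only: the class name, the Runnable hook and the exact way BufferUploader's cached bindings get cleared are assumptions, not the mod's actual code.

```java
import org.lwjgl.opengl.GL15;

final class DirectBuffers {
    /**
     * Hypothetical hook that clears BufferUploader's cached VAO/VBO ids; in
     * practice this would poke the real class (e.g. via an access transformer).
     */
    static Runnable invalidateBufferUploaderCache = () -> { };

    /** Delete a buffer without clearing it first, then drop the cached bindings. */
    static void deleteBuffer(int buffer) {
        GL15.glDeleteBuffers(buffer);
        invalidateBufferUploaderCache.run();
    }
}
```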
This /significantly/ improves performance of the VBO renderer (3fps to
80fps with 120 constantly changing monitors) and offers some minor FPS
improvements to the TBO renderer.
This also makes the 1.16 rendering code a little more consistent with
the 1.18 code, cleaning it up a little in the process.
Closes #1065 - this is a backport of those changes for 1.16. I will
merge these changes into 1.18, as with everything else (oh boy, that'll
be fun).
Please note this is only tested on my machine right now - any help
testing on other CPU/GPU configurations is much appreciated.
Historically I've been reluctant to do this as people might be running
Optifine for performance rather than shaders, and the VBO renderer was
significantly slower when monitors were changing.
With the recent performance optimisations, the difference isn't as bad.
Given how many people ask/complain about the TBO renderer and shaders, I
think it's worth doing this, even if it's not as granular as I'd like.
Also changes how we do the monitor backend check. We now only check for
compatibility if BEST is selected - if there's an override, we assume
the user knows what they're doing (a bold assumption, if I may say so
myself).
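As a rough sketch of that selection logic (the enum values and capability checks here are illustrative, not the mod's exact code):

```java
enum MonitorRenderer {
    BEST, TBO, VBO;

    /** Pick the renderer to use: only probe capabilities when BEST is selected. */
    static MonitorRenderer current(MonitorRenderer configured, boolean textureBuffersSupported, boolean shaderModPresent) {
        // An explicit override is trusted as-is - the user knows what they're doing.
        if (configured != BEST) return configured;
        // BEST: prefer TBOs, but fall back to VBOs when texture buffers are
        // unavailable or a shader mod (e.g. Optifine) is present.
        return textureBuffersSupported && !shaderModPresent ? TBO : VBO;
    }
}
```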
- For TBOs, we now pass cursor position, colour and blink state as
variables to the shader, and use them to overlay the cursor texture
in the right place.
As we no longer need to render the cursor separately, we can skip the
depth blocker, meaning we do one fewer upload+draw cycle.
- For VBOs, we bake the cursor into the main VBO, and switch between
rendering n and n+1 quads. We still need the depth blocker, but can
save one upload+draw cycle when the cursor is visible.
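For the VBO case the idea is roughly the following (names are illustrative): the cursor quad sits at the end of the baked buffer, and only the number of vertices drawn changes.

```java
final class MonitorVboSketch {
    static final int VERTICES_PER_QUAD = 4; // rendering with GL_QUADS

    /** How many vertices to draw this frame: n quads, or n+1 when the cursor shows. */
    static int vertexCount(int terminalQuads, boolean cursorVisible) {
        return (terminalQuads + (cursorVisible ? 1 : 0)) * VERTICES_PER_QUAD;
    }
}
```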
This saves significant time on the TBO renderer - somewhere between 4
and 7ms/frame, which bumps us up from 35 to 47fps on my test world (480
full-sized monitors, changing every tick). [Taken on 1.18, but should be
similar on 1.16]
Like #455, this sets our uniforms via a UBO rather than having separate
ones for each value. There are a couple of small differences:
- Have a UBO for each monitor, rather than sharing one and rewriting it for
every monitor. This means we only need to update the buffer when the
monitor changes.
- Use std140 rather than the default layout. This means we don't have
to care about location/stride in the buffer.
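The Java-side setup looks roughly like the sketch below. The block name, binding index and field layout are illustrative; the point is that with std140 the offsets are fixed by the layout rules, so nothing needs to be queried.

```java
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL15;
import org.lwjgl.opengl.GL30;
import org.lwjgl.opengl.GL31;

final class MonitorUniforms {
    static final int BINDING = 1;
    private final int buffer = GL15.glGenBuffers();

    MonitorUniforms(int program) {
        // Tie the shader's uniform block to our binding point once, up front.
        int blockIndex = GL31.glGetUniformBlockIndex(program, "MonitorData");
        GL31.glUniformBlockBinding(program, blockIndex, BINDING);
    }

    /** Upload only when the monitor changes, not every frame. */
    void update(int width, int height, int cursorX, int cursorY) {
        // std140: four ints at fixed 4-byte offsets, padded to 16 bytes.
        ByteBuffer data = BufferUtils.createByteBuffer(16);
        data.putInt(0, width).putInt(4, height).putInt(8, cursorX).putInt(12, cursorY);
        GL15.glBindBuffer(GL31.GL_UNIFORM_BUFFER, buffer);
        GL15.glBufferData(GL31.GL_UNIFORM_BUFFER, data, GL15.GL_STATIC_DRAW);
    }

    /** Bind before drawing this monitor. */
    void bind() {
        GL30.glBindBufferBase(GL31.GL_UNIFORM_BUFFER, BINDING, buffer);
    }
}
```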
Also like #455, this doesn't actually seem to result in any performance
improvements for me. However, it does make it a bit easier to handle a
large number of uniforms.
Also cleans up the generation of the main monitor texture buffer:
- Move buffer generation into a separate method - just ensures that it
shows up separately in profilers.
- Explicitly pass the position when setting bytes, rather than
incrementing the internal one. This saves some memory reads/writes (I
thought Java optimised them out, evidently not!). Saves a few fps
when updating.
- Use DSA when possible. Unclear if it helps at all, but nice to do :).
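To illustrate the second point, the difference is essentially relative vs absolute puts (the cell layout here is made up):

```java
import java.nio.ByteBuffer;

final class TextureBufferWriter {
    static void writeCell(ByteBuffer buffer, int pos, byte character, byte fg, byte bg) {
        // Relative form - buffer.put(character).put(fg).put(bg) - reads and
        // writes the buffer's internal position field on every call.
        // Absolute form: the index is passed explicitly, so the position
        // field is never touched.
        buffer.put(pos, character);
        buffer.put(pos + 1, fg);
        buffer.put(pos + 2, bg);
    }
}
```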
This takes a non-trivial amount of time on the render thread[^1], so
worth doing.
I don't actually think the allocation is the heavy thing here -
VisualVM says it's toWorldPos being slow. I'm not sure why - possibly
just all the block property lookups? [^2]
[^1]: To be clear, this is with 120 monitors and no other block entities
with custom renderers, so not really representative.
[^2]: I wish I could provide a narrower range, but it varies so much
between me restarting the game. Makes it impossible to benchmark
anything!
The VBO renderer needs to generate a buffer with two quads for each
cell, and then transfer it to the GPU. For large monitors, generating
this buffer can get quite slow. Most of the issues come from
IVertexBuilder (VertexConsumer under MojMap) having a lot of overhead.
By emitting a ByteBuffer directly (and doing so with Unsafe to avoid
bounds checks), we can improve performance 10-fold, going from
3fps/300ms for 120 monitors to 111fps/9ms.
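The rough shape of the direct emission is below. This is a sketch only: the vertex format and helper names are illustrative, and it uses LWJGL's MemoryUtil (which is backed by Unsafe) rather than whatever the real code does.

```java
import java.nio.ByteBuffer;
import org.lwjgl.system.MemoryUtil;

final class DirectVertexEmitter {
    // Illustrative format: position (3 floats), packed RGBA, texture coords.
    static final int VERTEX_SIZE = 3 * Float.BYTES + 4 + 2 * Float.BYTES;

    /** Write one vertex at a raw address and return the address just past it. */
    static long writeVertex(long addr, float x, float y, float z, int packedRgba, float u, float v) {
        MemoryUtil.memPutFloat(addr, x);
        MemoryUtil.memPutFloat(addr + 4, y);
        MemoryUtil.memPutFloat(addr + 8, z);
        MemoryUtil.memPutInt(addr + 12, packedRgba);
        MemoryUtil.memPutFloat(addr + 16, u);
        MemoryUtil.memPutFloat(addr + 20, v);
        return addr + VERTEX_SIZE;
    }

    static void example(ByteBuffer buffer) {
        // Write one vertex at the buffer's current position, then keep the
        // buffer's own bookkeeping in sync for the eventual upload.
        writeVertex(MemoryUtil.memAddress(buffer), 0, 0, 0, 0xFFFFFFFF, 0, 0);
        buffer.position(buffer.position() + VERTEX_SIZE);
    }
}
```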
See 41fa95bce4 and #1065 for some more
context and other exploratory work. The key thing to note is we _need_ a
separate version of FWFR for emitting to a ByteBuffer, as introducing
polymorphism to it comes with a significant performance hit.
- Move all RenderType instances into a common class.
Cherry-picked from 41fa95bce4:
- Render GL_QUADS instead of GL_TRIANGLES.
- Remove any "immediate mode" methods from FWFR. Most use-cases can be
replaced with the global MultiBufferSource and a proper RenderType
(which we weren't using correctly before!).
Only the GUI code (WidgetTerminal) needs to use the immediate mode.
- Pre-convert palette colours to bytes, storing both the coloured and
greyscale versions as a byte array.
Cherry-picked from 3eb601e554:
- Pass lightmap variables around the various renderers. Fixes #919 for
1.16!
"Instead, it is a standard program, which its API into the programs that it launches."
becomes
"Instead, it is a standard program, which injects its API into the programs that it launches."
A little shorter and more explicit than constructing the Vector3d
manually. Fixes an issue where sounds were centered on the bottom left
of speakers, not the middle (see cc-tweaked/cc-restitched#85).
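The change is roughly the before/after below, assuming the 1.16 helper Vector3d.copyCentered (MojMap calls it Vec3.atCenterOf; import paths are approximate).

```java
import net.minecraft.util.math.BlockPos;
import net.minecraft.util.math.vector.Vector3d;

final class SpeakerPosition {
    static Vector3d soundOrigin(BlockPos pos) {
        // Before: the block's corner, so audio appeared to come from the
        // bottom-left of the speaker.
        // return new Vector3d(pos.getX(), pos.getY(), pos.getZ());

        // After: the centre of the block.
        return Vector3d.copyCentered(pos);
    }
}
```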
See #1061, closes #1064.
Nobody ever seems to implement this correctly (though it's better than
1.12, at least we've not seen any crashes), and this isn't a fight I
care enough about fighting any more.
- Remove the POSITION_COLOR render type. Instead we just render a
background terminal quad as the pocket computer light - it's a little
(lot?) more cheaty, but saves having to create a render type.
- Use the existing position_color_tex shader instead of our copy. I
looked at using RenderType.text, but had a bunch of problems with GUI
terminals. It's possible we can fix it, but I didn't want to spend too
much time on it.
- Remove some methods from FixedWidthFontRenderer, inlining them into
the call site.
- Switch back to using GL_QUADS rather than GL_TRIANGLES. I know Lig
will shout at me for this, but the rest of MC uses QUADS, so I don't
think best practice really matters here.
- Fix the TBO monitor backend not rendering monitors with fog.
Unfortunately we can't easily do this to the VBO one without writing
a custom shader (which defeats the whole point of the VBO backend!),
as the distance calculation of most render types expects an
already-transformed position (camera-relative I think!) while we pass
a world-relative one.
- When rendering to a VBO we push vertices to a ByteBuffer directly,
rather than going through MC's VertexConsumer system. This removes
the overhead which comes with VertexConsumer, significantly improving
performance.
- Pre-convert palette colours to bytes, storing both the coloured and
greyscale versions as a byte array. This allows us to remove the
multiple casts and conversions (double -> float -> (greyscale) ->
byte), offering noticeable performance improvements (multiple ms per
frame).
We're using a byte[] here rather than a record of three bytes, as it
notionally provides better performance when writing to a ByteBuffer
directly compared to calling .put() four times. [^1] (See the palette
sketch after this list.)
- Memoize getRenderBoundingBox. This was taking about 5% of the total
time on the render thread[^2], so worth doing.
I don't actually think the allocation is the heavy thing here -
VisualVM says it's toWorldPos being slow. I'm not sure why - possibly
just all the block property lookups? [^2]
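The palette pre-conversion mentioned above looks roughly like this (class and method names, and the greyscale weighting, are illustrative):

```java
final class PaletteBytes {
    byte[][] colour;
    byte[][] greyscale;

    /** Convert each palette entry once, so per-vertex writes become a single array copy. */
    void rebuild(double[][] palette) {
        colour = new byte[palette.length][];
        greyscale = new byte[palette.length][];
        for (int i = 0; i < palette.length; i++) {
            double r = palette[i][0], g = palette[i][1], b = palette[i][2];
            colour[i] = new byte[]{ toByte(r), toByte(g), toByte(b), (byte) 255 };
            // Greyscale computed once here instead of per vertex per frame.
            double grey = 0.3 * r + 0.59 * g + 0.11 * b;
            greyscale[i] = new byte[]{ toByte(grey), toByte(grey), toByte(grey), (byte) 255 };
        }
    }

    private static byte toByte(double channel) {
        return (byte) (int) (channel * 255);
    }
}
```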
Note that none of these changes improve compatibility with Optifine.
Right now there are some serious issues where monitors are writing _over_
blocks in front of them. To fix this, we probably need to remove the
depth blocker and just render characters with a z offset. Will do that
in a separate commit, as I need to evaluate how well that change will
work first.
The main advantage of this commit is the improved performance. In my
stress test with 120 monitors updating every tick, I'm getting 10-20fps
[^3] (still much worse than TBOs, which manages a solid 60-100).
In practice, we'll actually be much better than this. Our network
bandwidth limits mean only 40 monitors change in a single tick - and so
FPS is much more reasonable (60+ fps).
[^1]: In general, put(byte[]) is faster than put(byte) multiple times.
Just not clear if this is true when dealing with a small (and loop
unrolled) number of bytes.
[^2]: To be clear, this is with 120 monitors and no other block entities
with custom renderers, so not really representative.
[^3]: I wish I could provide a narrower range, but it varies so much
between me restarting the game. Makes it impossible to benchmark
anything!
Forge 40.0.18 deprecated a lot of methods and moved where
RegistryEvent.NewRegistry lives, so we needed to update. This does break
the CC API a little bit (sorry!) though given Forge 1.18.2 is still in
flux, that's probably inevitable.
There are a couple of alternative ways to solve this. Ideally we'd send
our network messages at the same time as MC does
(ChunkManager.playerLoadedChunk), but this'd require a mixin.
Instead we just rely on the fact that if the chunk isn't loaded,
monitors won't have done anything and so we don't need to send their
contents!
Fixes #1047, probably doesn't cause any regressions. I've not seen any
issues on 1.16, but I also hadn't before so ¯\_(ツ)_/¯.
It's now impossible to run the client tests (tests are still there, but
none of the other infrastructure is). We've not run these for months now
due to their severe flakiness :(.
This was added in the 1.13 update and I'm still not sure why. Other mods
seem to get away without it, so I think it's fine to remove.
Also remove the fake net manager, as that's part of Forge nowadays.
Fixes #1044.
- Fixes #1026
- The remaining bytes counter wasn't being decremented, so the code that
splits off smaller packets was unreachable. Thus all file slices were
being put into a single UploadFileMessage packet.
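The fix is roughly the pattern below (packet size limit and names are illustrative); the important line is the decrement, without which the split branch never runs.

```java
import java.util.ArrayList;
import java.util.List;

final class UploadSplitter {
    static final int MAX_PACKET_SIZE = 30 * 1024;

    /** Group file slices into packets, starting a new one when the budget runs out. */
    static List<List<byte[]>> split(List<byte[]> slices) {
        List<List<byte[]>> packets = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        int remaining = MAX_PACKET_SIZE;
        for (byte[] slice : slices) {
            if (slice.length > remaining) {
                // No room left: start a new packet. Unreachable before the fix.
                packets.add(current);
                current = new ArrayList<>();
                remaining = MAX_PACKET_SIZE;
            }
            current.add(slice);
            remaining -= slice.length; // the missing decrement
        }
        if (!current.isEmpty()) packets.add(current);
        return packets;
    }
}
```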