Ever since 1.17, turtle and pocket upgrades have been loaded from
datapacks, rather than being hard-coded in Java. However, upgrades have
always been stored in our own registry-like structure, rather than using
vanilla's registries.
This has become a bit of a problem with the introduction of components.
The upgrade components now hold the upgrade object (rather than just its
id), which means we need the upgrades to be available much earlier (e.g.
when reading recipes).
The easiest fix here is to store upgrades in proper registries instead.
This means that upgrades can no longer be reloaded (it requires a world
restart), but otherwise is much nicer:
- UpgradeData now stores a Holder<T> rather than a T.
- UpgradeSerialiser has been renamed to UpgradeType. This now just
provides a Codec<T>, rather than JSON and network reading/writing
functions.
- Upgrade classes no longer implement getUpgradeID(), but instead have
a getType() function, which returns the associated UpgradeType.
- Upgrades are now stored in turtle_upgrade (or pocket_upgrade) rather
than turtle_upgrades (or pocket_upgrades). This will break existing
datapacks, sorry!
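To illustrate the new UpgradeType/getType() shape described above, here is a heavily simplified sketch (names and exact signatures are illustrative, not the real API):

    // Hedged sketch: an upgrade type is little more than a holder for the Codec
    // used to load the upgrade from a datapack, and each upgrade points at its type.
    public interface UpgradeType<T> {
        Codec<T> codec();
    }

    public interface UpgradeBase {
        // Replaces the old getUpgradeID(): the upgrade's identity now comes from
        // the registry it lives in, so the upgrade only reports its type.
        UpgradeType<?> getType();
    }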
This adds a new "recipe function" system that allows transforming the
result of a recipe according to some datapack-defined function.
Currently, we only provide one function: computercraft:copy_components,
which copies components from one of the ingredients to the result. This
allows us to replace several of our existing recipes:
- Turtle overlay recipes are now defined as a normal shapeless recipe
that copies all (non-overlay) components from the input turtle.
- Computer conversion recipes (e.g. computer -> turtle, normal ->
advanced) copy all components from the input computer to the result.
This is more complex (and thus more code), but also a little more
flexible, which hopefully is useful for someone :).
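As a rough sketch of what a recipe function amounts to (the interface name and parameter types here are assumptions, not the actual API):

    // Hypothetical sketch: a datapack-defined transformation applied to a recipe's
    // result, given access to the crafting inputs.
    public interface RecipeFunction {
        ItemStack apply(CraftingContainer inputs, ItemStack result);
    }

    // computercraft:copy_components would then be one implementation, copying the
    // relevant data components from a chosen ingredient onto the result.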
In 1.20.1, Forge and Fabric have different "common" tag conventions (for
instance, Forge uses forge:dusts/redstone, while Fabric uses
c:redstone_dusts). This means the generated recipes (and advancements)
will be different for the two loader projects. As such, we run data
generators for each loader, and store the results separately.
However, aside from some recipes and advancements, most resources /are/
the same between the two. This means we end up with a lot of duplicate
files, which make the diff even harder to read. This gets worse in
1.20.5, when NeoForge and Fabric have (largely) unified their tag names.
This commit now merges the generated resources of the two loaders,
moving shared files to the common project.
- Add a new MergeTrees command, to handle the de-duplication of files.
- Change the existing runData tasks to write to
build/generatedResources.
- Add a new :common:runData task, that reads from the
build/generatedResources folder and writes to the per-project
src/generated/resources.
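The de-duplication itself is conceptually simple. Roughly (a sketch, not the actual MergeTrees implementation):

    // Sketch: any file that is byte-for-byte identical in both loaders' generated
    // output moves to the common tree; everything else stays loader-specific.
    static void mergeTrees(Path forge, Path fabric, Path common) throws IOException {
        List<Path> files;
        try (Stream<Path> stream = Files.walk(forge)) {
            files = stream.filter(Files::isRegularFile).toList();
        }
        for (Path file : files) {
            Path relative = forge.relativize(file);
            Path other = fabric.resolve(relative);
            if (!Files.exists(other)) continue;
            if (!Arrays.equals(Files.readAllBytes(file), Files.readAllBytes(other))) continue;

            Path target = common.resolve(relative);
            Files.createDirectories(target.getParent());
            Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(other);
        }
    }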
NeoForge now loads mods from neoforge.mods.toml rather than mods.toml, so CC
wasn't actually being loaded. Tests all passed, because they didn't get
run in the first place!
This fixes several issues with @Nullable fields not being checked. This
is great in principle, but a little annoying in practice as MC's
@Nullable annotations are sometimes a little overly strict -- we now
need to wrap a couple of things in assertNonNull checks.
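The pattern is essentially just the following (a minimal sketch; the real helper may live elsewhere and be named differently):

    // Turn an "impossible" null from an overly-strict @Nullable method into an
    // immediate, descriptive failure rather than a NullPointerException later on.
    static <T> T assertNonNull(@Nullable T value) {
        if (value == null) throw new IllegalStateException("Impossible: value is null");
        return value;
    }

    // e.g. var blockEntity = assertNonNull(level.getBlockEntity(pos));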
This tells Create that modems will pop off if their neighbour is moved,
and so changes the order that the block is moved in.
We possibly should use BlockMovementChecks.AttachedCheck instead, to
properly handle the direction modems are facing in. However, this
doesn't appear to be part of the public API, so probably best avoided.
Fixes #948
The two mod loaders expose different methods for this (Forge's method
takes an ItemPropertyFunction, Fabric's a ClampedItemPropertyFunction).
This is fine in a Gradle build, as the methods are compatible. However,
when running from IntelliJ, we get crashes as the common code tries to
reference the wrong method.
We now pass in the method reference instead, ensuring we use the right
method on each loader.
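In other words, the common code now receives the registration function from the loader rather than naming it itself. A hedged sketch (the interface name and parameter types are made up for illustration):

    // Each loader passes its own registration method (or a thin wrapper around it)
    // into the common client-setup code, so the reference is resolved against the
    // correct loader-specific signature.
    @FunctionalInterface
    public interface RegisterItemProperty {
        void register(Item item, ResourceLocation id, ClampedItemPropertyFunction function);
    }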
Double chest peripherals were getting reattached every time there was a
block update, as the inventories were not comparing equal (despite being
so!). We now check for a couple of common cases, which should be enough
for vanilla/vanilla-like inventories.
I actively Do Not Like This Code, but do not see a good alternative.
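The checks are roughly of this flavour (a simplified sketch, not the actual code):

    // Two inventories are assumed to be "the same" if they are the same object, or
    // are both block entities at the same position in the same level. Double chests
    // need a further (uglier) check, comparing the two halves of the compound
    // container they hand back.
    static boolean isSameInventory(Container a, Container b) {
        if (a == b) return true;
        if (a instanceof BlockEntity beA && b instanceof BlockEntity beB) {
            return beA.getLevel() == beB.getLevel()
                && beA.getBlockPos().equals(beB.getBlockPos());
        }
        return false;
    }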
This should never happen, but apparently it does!? We now log an error
(rather than crashing), and include the original BE (and associated
block), as the BE type isn't very useful.
See #1750. Technically this fixes it, but want to do some more poking
there first.
Our GatedPredicate hack was clever, but also fundamentally didn't work.
The predicate is called before extraction, so if extraction fails (for
instance, canTakeItemThroughFace returns false), then we still think an
item has been removed.
To fix that, we inline StorageUtil.move, specialising it for what we
need.
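Roughly, the specialised move looks like this (a hedged sketch against the Fabric transfer API; the real code differs):

    // Only count items as moved once extraction (and a full insertion) has actually
    // succeeded, so a refusal from canTakeItemThroughFace no longer looks like a
    // successful transfer.
    static long moveItems(Storage<ItemVariant> from, Storage<ItemVariant> to, long maxAmount) {
        long moved = 0;
        try (Transaction outer = Transaction.openOuter()) {
            for (StorageView<ItemVariant> view : from) {
                if (moved >= maxAmount) break;
                if (view.isResourceBlank()) continue;
                ItemVariant resource = view.getResource();
                try (Transaction nested = Transaction.openNested(outer)) {
                    long extracted = view.extract(resource, maxAmount - moved, nested);
                    if (extracted == 0) continue; // extraction refused: nothing moved
                    if (to.insert(resource, extracted, nested) != extracted) continue; // roll back partial fits
                    moved += extracted;
                    nested.commit();
                }
            }
            outer.commit();
        }
        return moved;
    }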
This adds support for computer selectors, in the style of entity
selectors. The long-term goal here is to replace our existing ad-hoc
selectors. However, to aid migration, we currently support both - the
previous one will most likely be removed in MC 1.21.
Computer selectors take the form @c[<key>=<value>,...]. Currently we
support filtering by id, instance id, label, family (as before) and
distance from the player (new!). The code also supports computers within
a bounding box, but there's no parsing support for that yet.
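For example (assuming the final syntax mirrors vanilla's entity selectors), something like @c[family=advanced,distance=..10] would match every advanced computer within ten blocks of the command's source.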
This commit also (finally) documents the /computercraft command. Well,
sort of - it's definitely not my best work, but I couldn't find better
words.
Minecraft sometimes keeps chunks in-memory, but not actively loaded. If
we schedule a block entity to be ticked and that chunk is then
transitioned to this partially-loaded state, then the block entity is
never actually ticked.
This is most visible with monitors. When a monitor's contents changes,
if the monitor is not already marked as changed, we set it as changed
and schedule a tick (see ServerMonitor). However, if the tick is
dropped, we don't clear the changed flag, meaning subsequent changes
don't requeue the monitor to be ticked, and so the monitor is never
updated.
We fix this by maintaining a list of block entities whose tick was
dropped. If these block entities (or rather their owning chunk) is ever
re-loaded, then we reschedule them to be ticked.
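The bookkeeping is roughly as follows (a sketch with hypothetical names):

    // Remember block entities whose scheduled tick was dropped because their chunk
    // was only partially loaded, and re-schedule them when the chunk loads again.
    final class DroppedTickTracker {
        private final Map<ChunkPos, List<BlockEntity>> dropped = new HashMap<>();

        void onTickDropped(BlockEntity blockEntity) {
            dropped.computeIfAbsent(new ChunkPos(blockEntity.getBlockPos()), p -> new ArrayList<>())
                .add(blockEntity);
        }

        void onChunkLoaded(ChunkPos pos, Consumer<BlockEntity> reschedule) {
            List<BlockEntity> pending = dropped.remove(pos);
            if (pending != null) pending.forEach(reschedule);
        }
    }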
An alternative approach here would be to add the scheduled tick directly
to the LevelChunk. However, getting hold of the LevelChunk for unloaded
blocks is quite nasty, so I think best avoided.
Fixes #1146. Fixes #1560 - I believe the second one is a duplicate, and
I noticed too late :D.
Forge doesn't run client-side commands from sendUnsignedCommand, so we
still require a mixin there.
We do need to change the command name, as Fabric doesn't properly merge
the two command trees.
Rather than mixing-in to CachedOutput, we just wrap our DataProviders to
use a custom CachedOutput which reformats the JSON before writing. This
allows us to drop mixins for common+non-client code.
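A hedged sketch of the wrapper (the reformatting is just parse-and-re-serialise; the real code differs):

    // Delegate to the real CachedOutput, pretty-printing any JSON before it is
    // written and re-hashing the new contents (vanilla's cache uses SHA-1).
    record ReformattingOutput(CachedOutput delegate) implements CachedOutput {
        @Override
        public void writeIfNeeded(Path path, byte[] data, HashCode hash) throws IOException {
            if (path.toString().endsWith(".json")) {
                data = reformat(data);
                hash = Hashing.sha1().hashBytes(data);
            }
            delegate.writeIfNeeded(path, data, hash);
        }

        private static byte[] reformat(byte[] data) {
            JsonElement json = JsonParser.parseString(new String(data, StandardCharsets.UTF_8));
            return new GsonBuilder().setPrettyPrinting().create().toJson(json)
                .getBytes(StandardCharsets.UTF_8);
        }
    }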
Originally we exposed a single registerTurtleUpgradeModeller method which
could be called from both Fabric (during a mod's client init) and Forge
(during FMLClientSetupEvent).
This was fine until we allowed upgrades to specify model dependencies,
which would then be automatically loaded, as this means model loading now
depends on upgrade modellers being loaded. Unknown to me, this is not
guaranteed to be the case on Forge - mod setup happens at the same time
as resource reloading!
Unfortunately there's not really a salvageable way of fixing this with
the current API. Forge now uses a registration event-based system,
meaning we can guarantee all modellers are loaded before models are
baked.
Historically we used Forge's SimpleChannel methods (and
PacketDistributor) to send the packets to the client. However, we don't
need to do that - it is sufficient to convert it to a vanilla packet,
and send the packet ourselves.
Given we need to do this on Fabric, it makes sense to do this on Forge
as well. This allows us to unify (and thus simplify) a lot of how packet
sending works.
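On Forge this boils down to something like the following (a sketch against the 1.20.1 SimpleChannel API):

    // Convert the message to a vanilla packet and hand it straight to the player's
    // connection, rather than going through PacketDistributor.
    static void sendToPlayer(SimpleChannel channel, Object message, ServerPlayer player) {
        Packet<?> packet = channel.toVanillaPacket(message, NetworkDirection.PLAY_TO_CLIENT);
        player.connection.send(packet);
    }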
At the same time, we also remove the handling of speaker audio during
decoding. We originally did this to avoid the additional copy of audio
data. However, this doesn't work on 1.20.4 (as packets aren't
encoded/decoded on singleplayer), so it makes sense to do this
Correctly(TM).
This also allows us to get rid of ClientNetworkContext.get(). We do
still need to service load this class (as Forge's networking isn't split
up in the same way Fabric's is), but we'll be able to drop that in
1.20.4.
Finally, we move the record playing code from ClientNetworkContext to
ClientPlatformHelper. This means the network context no longer needs to
be platform-specific!
After that embarrassment, let's do some proper work.
Rather than passing the level and position each time we call
ComponentAccess.get(), we now pass them at construction time (in the
form of the BE). This makes the consuming code a little cleaner, and is
required for the NeoForge changes in 1.20.4.
Everything old is new again!
CC's network message implementation has gone through several iterations:
- Originally network messages were implemented with a single class,
which held a packet id/type and opaque blobs of data (as
string/int/byte/NBT arrays), and a big switch statement to decode and
process this data.
- In 42d3901ee3, we split the messages
into different classes all inheriting from NetworkMessage - this bit
we've stuck with ever since.
Each packet had a `getId(): int` method, which returned the
discriminator for this packet.
- However, getId() was only used when registering the packet, not when
sending, and so in ce0685c31f we
removed it, just passing in a constant integer at registration
instead.
- In 53abe5e56e, we made some relatively
minor changes to make the code more multi-loader/split-source
friendly. However, this meant when we finally came to add Fabric
support (8152f19b6e), we had to
re-implement a lot of Forge's network code.
In 1.20.4, Forge moves to a system much closer to Fabric's (and indeed,
Minecraft's own CustomPacketPayload), and so it makes sense to adapt to
that now. As such, we:
- Add a new MessageType interface. This is implemented by the
loader-specific modules, and holds whatever information is needed to
register the packet (e.g. discriminator, reader function).
- Each NetworkMessage now has a type(): MessageType<?> function. This
is used by the Fabric networking code (and for NeoForge's on 1.20.4)
instead of a class lookup.
- NetworkMessages now creates/stores these MessageType<T>s (much like
we'd do for registries), and provides getters for the
clientbound/serverbound messages. Mod initialisers then call these
getters to register packets.
- For Forge, this is relatively unchanged. For Fabric, we now use
`FabricPacket`s.
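In simplified form, the new types look roughly like this (a sketch; the real interfaces carry a little more):

    // MessageType is deliberately opaque to common code: each loader's
    // implementation stores whatever it needs (discriminator, reader, etc.).
    public interface MessageType<T extends NetworkMessage<?>> {
    }

    public interface NetworkMessage<T> {
        void write(FriendlyByteBuf buf);

        void handle(T context);

        // Used (instead of a class lookup) by the Fabric networking code, and by
        // NeoForge's on 1.20.4, to find the registration info for this message.
        MessageType<?> type();
    }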
While ComputerFamily is still useful, there's definitely some places
where it adds an extra layer of indirection. This commit attempts to
clean up some places where we no longer need it.
- Remove ComputerFamily from AbstractComputerBlock. The only place this
was needed is in TurtleBlock, and that can be replaced with normal
Minecraft explosion resistance!
- Pass in the fuel limit to the turtle block entity, rather than
deriving it from current family.
- The turtle BERs now derive their model from the turtle's item, rather
than the turtle's family.
- When creating upgrade/overlay recipes, use the item's name, rather
than {pocket,turtle}_family. This means we can drop getFamily() from
IComputerItem (it is still needed to handle the UI).
- We replace IComputerItem.withFamily with a method to change to a
different item of the same type. ComputerUpgradeRecipe no longer
takes a family, and instead just uses the result's item.
- Computer blocks now use the normal Block.asItem() to find their
corresponding item, rather than looking it up via family.
The above means we can remove all the family-based XyzItem.create(...)
methods, which have always felt a little ugly.
We still need ComputerFamily for a couple of things:
- Permission checks for command computers.
- Checks for mouse/colour support in ServerComputer.
- UI textures.
Previously we prevented our published full jar depending on any of the
other projects by excluding the whole cc.tweaked group. However, as Cobalt
also now lives in that group, this meant we were missing the Cobalt
dependency.
Rather than specifying a wildcard, we now exclude the dependencies when
adding them to the project.
This commit adds abstract classes to describe the interface for our
mod-loader-specific generic peripherals (inventories, fluid storage,
item storage).
This offers several advantages:
- Javadoc to illuaminate conversion no longer needs the Forge project
(just core and common).
- Ensures we have a consistent interface between Forge and Fabric.
Note, this does /not/ implement fluid or energy storage for Fabric. We
probably could do fluid without issue, but not something worth doing
right now.
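For example, the inventory one is shaped roughly like this (a hedged sketch; the real class has more methods and stricter signatures):

    // The common module declares the Lua-facing methods; the Forge and Fabric
    // modules subclass this against their own item-handling APIs.
    public abstract class AbstractInventoryMethods<T> implements GenericSource {
        @LuaFunction(mainThread = true)
        public abstract int size(T inventory);

        @LuaFunction(mainThread = true)
        public abstract int getItemLimit(T inventory, int slot);

        // ... list, getItemDetail, pushItems, pullItems, etc.
    }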
This adds a new "java_allocation" metric, which tracks the number of
bytes allocated while executing the computer (as measured by Java). This
is not a 100% reliable number, but hopefully gives some insight into
what computers are doing.
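The measurement itself leans on HotSpot's ThreadMXBean extension (hence the reliability caveat). Roughly, as a sketch rather than the actual implementation:

    // com.sun.management.ThreadMXBean exposes per-thread allocation counters on
    // HotSpot. Sampling it before and after a computer's work gives the metric.
    final class AllocationMeter {
        private static final com.sun.management.ThreadMXBean THREADS =
            (com.sun.management.ThreadMXBean) java.lang.management.ManagementFactory.getThreadMXBean();

        static long allocatedBytes() {
            return THREADS.isThreadAllocatedMemorySupported()
                ? THREADS.getThreadAllocatedBytes(Thread.currentThread().getId())
                : -1;
        }
    }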
In practice, we're never going to change this to true by default. The
old Tekkit Legends pack enabled this[^1], and that caused a lot of
problems, though admittedly back in 2016 so things might be better now.
If people do want this functionality, it should be fairly easy to
replicate with a datapack, adding a file to rom/autorun.
[^1]: See https://www.computercraft.info/forums2/index.php?/topic/27663-
Hate that I remember this, why is this still in my brain?
Allows registering arbitrary block lookup functions instead of a
platform-specific capability. This is roughly what Fabric did before,
but generalised to also take an invalidation callback.
This callback is a little nasty - it needs to be a NonNullConsumer
on Forge, but that class isn't available on Fabric. For now, we make the
lookup function (and thus the generic peripheral provider) generic on
some <T extends Runnable> type, then specialise that on the Forge side.
Hopefully we can clean this up when NeoForge reworks capabilities.
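As a rough sketch, the registered lookup is shaped something like this (names and parameters are illustrative):

    // A block lookup resolves a peripheral at a position, and is handed an
    // invalidation callback whose type is left generic so the Forge module can
    // require a NonNullConsumer-compatible one.
    public interface BlockLookup<T extends Runnable> {
        @Nullable
        IPeripheral find(ServerLevel level, BlockPos pos, T invalidate);
    }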
Previously we had the invariant that if we had a server monitor, we also
had a terminal. When a monitor multiblock shrank, we deleted the
monitor, and then recreated it when a peripheral was requested.
As of ab785a0906 this has changed
slightly, and we now just delete the terminal (keeping the ServerMonitor
around). However, we didn't adjust the peripheral code accordingly,
meaning we didn't recreate the /terminal/ when a peripheral was
requested.
The fix for this is very simple - most of the rest of this commit is
some additional code for ensuring monitor invariants hold, so we can
write tests with a little more confidence.
I'm not 100% sold on this approach. It's tricky having a double layer of
nullable state (ServerMonitor, and then the terminal). However, I think
this is reasonable - the ServerMonitor is a reference to the multiblock,
and the Terminal is part of the multiblock's state.
Even after all the refactors, monitor code is still nastier than I'd
like :/.
Fixes #1608
We can't use FriendlyByteBuf.readCollection to read to a
pre-allocated/array-backed NonNullList, as that doesn't implement
List.add. Instead, we just need to do a normal loop.
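The loop is just (a minimal sketch):

    // NonNullList.withSize is array-backed and rejects add(), so fill it slot by
    // slot instead of using FriendlyByteBuf.readCollection.
    static NonNullList<ItemStack> readItems(FriendlyByteBuf buf) {
        int size = buf.readVarInt();
        NonNullList<ItemStack> items = NonNullList.withSize(size, ItemStack.EMPTY);
        for (int i = 0; i < size; i++) items.set(i, buf.readItem());
        return items;
    }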
We add a couple of tests to round-trip our recipe specs. Unfortunately
we can't test the recipes themselves as our own registries aren't set
up, so this'll have to do for now.
This attempts to reduce some duplication in recipe serialisation (and
deserialisation) by moving the structure of a recipe (group, category,
ingredients, result) into separate types.
- Add ShapedRecipeSpec and ShapelessRecipeSpec, which store the core
properties of shaped and shapeless recipes. There's a couple of
additional classes here for handling some of the other shared or
complex logic.
- These classes are now used by two new Custom{Shaped,Shapeless}Recipe
classes, which are (mostly) equivalent to Minecraft's
shaped/shapeless recipes, just with support for nbt in results.
- All the other similar recipes now inherit from these base classes,
which allows us to reuse a lot of this serialisation code. Alas, the
total code size has still gone up - maybe there's too much
abstraction here :).
- Mostly unrelated, but fix the skull recipes using the wrong UUID
format.
This allows us to remove our mixin for nbt in recipes (as we just use
our custom recipe now) and simplify serialisation a bit - hopefully
making the switch to codecs a little easier.
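For a feel of the "spec" idea, a shaped recipe spec is roughly a record like this (field names are illustrative, not the exact class):

    // The shared properties of a shaped recipe, (de)serialised in one place and
    // reused by every recipe that is "shaped plus a bit extra".
    public record ShapedRecipeSpec(
        String group,
        CraftingBookCategory category,
        int width,
        int height,
        NonNullList<Ingredient> ingredients,
        ItemStack result
    ) {
    }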
Rather than having a mess of lambdas, we now move the bulk of the
implementation to their own methods. The lambdas now just do argument
extraction - it's all stringly typed, so good to keep that with the
argument definition.
This also removes a couple of exception keys (and thus their translation
keys) as we no longer use them.
I removed this in aa0d544bba, way back in
late 2021. Looks like it's been updated in the meantime and I hadn't
noticed, so add it back.
I've simplified the code a little bit, to make use of our new capability
helpers, but otherwise it's almost exactly the same :D.
- Split buttons.png into individual textures.
- Split corners_xyz.png into the following:
- borders_xyz.png: A nine-sliced texture of the computer borders.
- pocket_bottom_xyz.png: A horizontally 3-sliced texture of the
bottom part of a pocket computer.
- sidebar_xyz.png: A vertically 3-sliced texture of the computer
sidebar.
While not splitting the sliced textures into smaller ones may seem a
little odd, it's consistent with what vanilla does in 1.20.2, and I
think will make editing them easier than juggling 9 textures.
I do want to make this more data-driven in the future, but that will
have to wait until the changes in 1.20.2.
This also adds a tools/update-resources.py program, which performs this
transformation on a given resource pack.