- Add a new computercraft:turtle_overlay dynamic registry, which stores
turtle overlays. Turtle overlays are just a model id and an
(optional) boolean flag, which specifies whether this overlay is
compatible with the elf/christmas model.
- Change the computercraft:overlay component to accept a
Holder<TurtleOverlay> (instead of just a model ID). This accepts either
an overlay ID or an inline overlay object (e.g. you can do
cc:turtle_normal[computercraft:overlay={model:"foo"}]).
- Update turtle model and BE rendering code to render both the overlay
and elf (if compatible). Fixes #1663.
- Ideally we'd automatically load all models listed in the overlay
registry. However, resource loading happens separately to datapacks,
so we can't link the two.
Instead, we add a new assets/computercraft/extra_models.json file
that lists any additional models that should be loaded and baked.
This file includes all built-in overlay models, but external resource
packs and/or mods can easily extend it.
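To make the overlay format concrete, here is a rough sketch of the data
expressed as a codec (field names such as "show_elf_overlay" are
assumptions, not necessarily the final format):

    import com.mojang.serialization.Codec;
    import com.mojang.serialization.codecs.RecordCodecBuilder;
    import net.minecraft.resources.ResourceLocation;

    // A turtle overlay: the model to render, plus whether the elf/Christmas
    // model may be rendered alongside it.
    public record TurtleOverlay(ResourceLocation model, boolean showElfOverlay) {
        public static final Codec<TurtleOverlay> DIRECT_CODEC = RecordCodecBuilder.create(instance -> instance.group(
            ResourceLocation.CODEC.fieldOf("model").forGetter(TurtleOverlay::model),
            Codec.BOOL.optionalFieldOf("show_elf_overlay", false).forGetter(TurtleOverlay::showElfOverlay)
        ).apply(instance, TurtleOverlay::new));
    }

A codec like this can back both the registry JSON and the inline form
accepted by the component.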
API Changes:
- Minecraft had updated ModelResourceLocation to no longer inherit from
ResourceLocation.
To allow referencing both already-baked models
(ModelResourceLocation) and loading new models (via ResourceLocation)
in turtle model loaders, we add a new "ModelLocation" class that acts
as a union of the two.
I'm not entirely convinced by the design here, so this might end up
changing again before a stable release.
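For reference, a minimal sketch of the union idea (the real class has a
richer API):

    import javax.annotation.Nullable;
    import net.minecraft.client.resources.model.ModelResourceLocation;
    import net.minecraft.resources.ResourceLocation;

    // Either an already-baked model (ModelResourceLocation) or a model to load
    // from resources (ResourceLocation); exactly one of the two is present.
    public final class ModelLocation {
        private final @Nullable ModelResourceLocation modelId;
        private final @Nullable ResourceLocation resourceId;

        private ModelLocation(@Nullable ModelResourceLocation modelId, @Nullable ResourceLocation resourceId) {
            this.modelId = modelId;
            this.resourceId = resourceId;
        }

        public static ModelLocation ofModel(ModelResourceLocation id) { return new ModelLocation(id, null); }
        public static ModelLocation ofResource(ResourceLocation id) { return new ModelLocation(null, id); }
    }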
- Merge IMedia.getAudioTitle and IMedia.getAudio into a single
IMedia.getAudio method, which now returns a JukeboxSong rather than a
SoundEvent.
Other update notes:
- Minecraft had rewritten how buffers are managed again. This is a
fairly minor change for us (vertex -> addVertex, normal -> setNormal,
etc...), with the exception that you can no longer use
MultiBufferSource.immediate with the tesselator.
I've replaced this with GuiGraphics.bufferSource, which appears to be
fine, but it's worth keeping an eye on in case there are any odd render
state issues.
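For instance, a vertex that was previously built with
vertex(...).color(...).uv(...).uv2(...).normal(...).endVertex() now
looks roughly like this (a sketch of the renamed API):

    import com.mojang.blaze3d.vertex.VertexConsumer;

    static void putVertex(VertexConsumer buffer, float x, float y, float z, float u, float v, int light) {
        buffer.addVertex(x, y, z)   // was vertex(...)
            .setColor(0xFFFFFFFF)   // was color(...)
            .setUv(u, v)            // was uv(...)
            .setLight(light)        // was uv2(...)
            .setNormal(0, 1, 0);    // was normal(...); there's no longer an endVertex() call
    }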
- Crafting now uses a CraftingInput (a list of items) rather than a
CraftingContainer, which allows us to simplify turtle crafting code.
Rather than having one single hard-coded recipe, we now have separate
recipes for printed pages and printed books. These recipes are defined
in terms of
- A list of ingredients (like shapeless recipes).
- A result item.
- An ingredient defining the acceptable page items (so printed page(s),
but not books). This cannot overlap with any of the main ingredients.
- The minimum number of printouts required.
We then override the shapeless recipe crafting logic to allow for
multiple printouts to appear.
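A rough sketch of the data such a recipe carries (names are
illustrative, not the actual JSON format):

    import net.minecraft.core.NonNullList;
    import net.minecraft.world.item.ItemStack;
    import net.minecraft.world.item.crafting.Ingredient;

    // The fixed ingredients behave like a shapeless recipe; "printout" matches the
    // variable-count page items and must not overlap with the fixed ingredients.
    record PrintoutRecipeSpec(
        NonNullList<Ingredient> ingredients,
        ItemStack result,
        Ingredient printout,
        int minPrintouts
    ) {}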
It feels like it'd be nice to generalise this to a way of defining
shapeless recipes with variable-count ingredients (for instance, the
disk recipe could also be defined this way), but I don't think it's
worth it right now.
This solves some of the issues in #1755. Disk recipes have not been
changed yet.
- Update EMI and REI integration, and fix some issues with the upgrade
crafting hooks.
- Just use smooth stone for recipes, not #c:stone. We're mirroring
redstone's crafting recipes here.
- Some cleanup to printouts.
- Remove upgrade data generators - these can be replaced with the
standard registry data generators.
- Remove the API's PlatformHelper - we no longer have any
platform-specific code in the API.
- Use TinyRemapper to remap mixins on Fabric. Mixins in the common
project weren't being remapped correctly.
- Update to latest NeoForge
- Switch to the new tick events.
- Call refreshDimensions() in the fake player constructor.
Ever since 1.17, turtle and pocket upgrades have been loaded from
datapacks, rather than being hard-coded in Java. However, upgrades have
always been stored in our own registry-like structure, rather than using
vanilla's registries.
This has become a bit of a problem with the introduction of components.
The upgrade components now hold the upgrade object (rather than just its
id), which means we need the upgrades to be available much earlier (e.g.
when reading recipes).
The easiest fix here is to store upgrades in proper registries instead.
This means that upgrades can no longer be reloaded (a world restart is
required), but otherwise this is much nicer:
- UpgradeData now stores a Holder<T> rather than a T.
- UpgradeSerialiser has been renamed to UpgradeType. This now just
provides a Codec<T> (sketched after this list), rather than JSON and
network reading/writing functions.
- Upgrade classes no longer implement getUpgradeID(), but instead have
a getType() function, which returns the associated UpgradeType.
- Upgrades are now stored in turtle_upgrade (or pocket_upgrade) rather
than turtle_upgrades (or pocket_upgrades). This will break existing
datapacks, sorry!
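To make that concrete, a simplified sketch of the UpgradeType shape
(the real interface may differ):

    import com.mojang.serialization.Codec;
    import dan200.computercraft.api.upgrades.UpgradeBase;

    // An upgrade type just exposes a codec for its upgrade class; JSON and network
    // handling now fall out of vanilla's registry/codec machinery.
    interface UpgradeType<T extends UpgradeBase> {
        Codec<T> codec();
    }

Upgrade implementations then return their associated UpgradeType from
getType().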
This adds a new "recipe function" system, that allows transforming the
result of a recipe according to some datapack-defined function.
Currently, we only provide one function: computercraft:copy_components,
which copies components from one of the ingredients to the result. This
allows us to replace several of our existing recipes:
- Turtle overlay recipes are now defined as a normal shapeless recipe
that copies all (non-overlay) components from the input turtle.
- Computer conversion recipes (e.g. computer -> turtle, normal ->
advanced) copy all components from the input computer to the result.
This is more complex (and thus more code), but also a little more
flexible, which hopefully is useful for someone :).
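A minimal sketch of what a recipe function looks like (simplified; the
exact interface is an assumption):

    import com.mojang.serialization.MapCodec;
    import net.minecraft.world.item.ItemStack;
    import net.minecraft.world.item.crafting.CraftingInput;

    // Applied after the base recipe assembles its result, letting the result be
    // rewritten based on the crafting input (e.g. copying components across).
    interface RecipeFunction {
        MapCodec<? extends RecipeFunction> codec();

        ItemStack apply(CraftingInput input, ItemStack result);
    }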
In 1.20.1, Forge and Fabric have different "common" tag conventions (for
instance, Forge uses forge:dusts/redstone, while Fabric uses
c:redstone_dusts). This means the generated recipes (and advancements)
will be different for the two loader projects. As such, we run data
generators for each loader, and store the results separately.
However, aside from some recipes and advancements, most resources /are/
the same between the two. This means we end up with a lot of duplicate
files, which make the diff even harder to read. This gets worse in
1.20.5, when NeoForge and Fabric have (largely) unified their tag names.
This commit now merges the generated resources of the two loaders,
moving shared files to the common project.
- Add a new MergeTrees command (sketched below) to handle the
de-duplication of files.
- Change the existing runData tasks to write to
build/generatedResources.
- Add a new :common:runData task, that reads from the
build/generatedResources folder and writes to the per-project
src/generated/resources.
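The core of the de-duplication in MergeTrees is roughly this (a sketch,
not the actual implementation):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Files which are byte-for-byte identical in both loaders' generated resources
    // are moved into the shared (common) tree instead.
    static void mergeTrees(Path forge, Path fabric, Path common) throws IOException {
        try (var files = Files.walk(forge)) {
            for (var file : (Iterable<Path>) files.filter(Files::isRegularFile)::iterator) {
                var relative = forge.relativize(file);
                var other = fabric.resolve(relative);
                if (Files.exists(other) && Files.mismatch(file, other) == -1) {
                    var target = common.resolve(relative);
                    Files.createDirectories(target.getParent());
                    Files.move(file, target);
                    Files.delete(other);
                }
            }
        }
    }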
NF now loads mods from neoforge.mods.toml rather than mods.toml, so CC
wasn't actually being loaded. Tests all passed, because they didn't get
run in the first place!
This tells Create that modems will pop off if their neighbour is moved,
and so changes the order that the block is moved in.
We possibly should use BlockMovementChecks.AttachedCheck instead, to
properly handle the direction modems are facing in. However, this
doesn't appear to be part of the public API, so probably best avoided.
Fixes #948
This was added in 4675583e1c to handle
Forge no longer supporting RUN_COMMAND for client-side commands.
However, the mixins are still present on NF/1.20.4, so we don't need
this!
The two mod loaders expose different methods for this (Forge's method
takes an ItemPropertyFunction, Fabric's a ClampedItemPropertyFunction).
This is fine in a Gradle build, as the methods are compatible. However,
when running from IntelliJ, we get crashes as the common code tries to
reference the wrong method.
We now pass in the method reference instead, ensuring we use the right
method on each loader.
BYTECODE WAS NOT SUPPOSED TO BE REWRITTEN
YEARS OF DEBUGGING REMAPPING FAILURES yet NO ACTUAL SOLUTION FOUND.
Wanted to use Mixins anyway for a laugh? We had a tool for that: it
was called "FABRIC LOOM".
"Yes, please produce completely broken jars for no discernable reason"
Statements dreamed up by the utterly Deranged.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
This removes our two mixins used on Forge:
- Breaking progress for cabled/wired modems.
- Running client commands from chat click events. We now suggest the
command on Forge instead.
Occasionally we get issues where the mixin annotation processor doesn't
write its tsrg file in time for the reobfJar/reobfJarJar task. I thought
we'd fixed that in cb8e06af2a, but sometimes
we still produce missing jars - I have a feeling this might be to do
with incremental compilation.
We can maybe re-evaluate this on 1.20.4, where we don't need to worry
about remapping any more.
This should never happen, but apparently it does!? We now log an error
(rather than crashing), and include the original BE (and associated
block), as the BE type isn't very useful.
See #1750. Technically this fixes it, but I want to do some more
poking there first.
This adds support for computer selectors, in the style of entity
selectors. The long-term goal here is to replace our existing ad-hoc
selectors. However, to aid migration, we currently support both - the
previous one will most likely be removed in MC 1.21.
Computer selectors take the form @c[<key>=<value>,...]. Currently we
support filtering by id, instance id, label, family (as before) and
distance from the player (new!). The code also supports computers within
a bounding box, but there's no parsing support for that yet.
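For example, something like the following should shut down every
advanced computer (the selector follows the form above, combined with
the existing /computercraft shutdown subcommand):

    /computercraft shutdown @c[family=advanced]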
This commit also (finally) documents the /computercraft command. Well,
sort of - it's definitely not my best work, but I couldn't find better
words.
Minecraft sometimes keeps chunks in-memory, but not actively loaded. If
we schedule a block entity to be ticked and that chunk is then
transitioned to this partially-loaded state, then the block entity is
never actually ticked.
This is most visible with monitors. When a monitor's contents changes,
if the monitor is not already marked as changed, we set it as changed
and schedule a tick (see ServerMonitor). However, if the tick is
dropped, we don't clear the changed flag, meaning subsequent changes
don't requeue the monitor to be ticked, and so the monitor is never
updated.
We fix this by maintaining a list of block entities whose tick was
dropped. If those block entities (or rather their owning chunks) are
ever re-loaded, we reschedule them to be ticked.
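A sketch of that bookkeeping (names are illustrative, not the actual
implementation):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Consumer;
    import net.minecraft.core.BlockPos;
    import net.minecraft.world.level.ChunkPos;
    import net.minecraft.world.level.block.entity.BlockEntity;
    import net.minecraft.world.level.chunk.LevelChunk;

    final class DroppedTickTracker {
        // Block entities whose scheduled tick was dropped, grouped by owning chunk.
        private final Map<ChunkPos, Set<BlockPos>> dropped = new HashMap<>();

        void onTickDropped(BlockPos pos) {
            dropped.computeIfAbsent(new ChunkPos(pos), c -> new HashSet<>()).add(pos);
        }

        // Called when a chunk is (re-)loaded: reschedule any block entities we dropped.
        void onChunkLoaded(LevelChunk chunk, Consumer<BlockEntity> reschedule) {
            var positions = dropped.remove(chunk.getPos());
            if (positions == null) return;
            for (var pos : positions) {
                var blockEntity = chunk.getBlockEntity(pos);
                if (blockEntity != null) reschedule.accept(blockEntity);
            }
        }
    }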
An alternative approach here would be to add the scheduled tick directly
to the LevelChunk. However, getting hold of the LevelChunk for unloaded
blocks is quite nasty, so I think it's best avoided.
Fixes #1146. Fixes #1560 - I believe the second one is a duplicate, and
I noticed too late :D.
- Mark our core test-fixtures jar as part of "cctest", rather than as
a separate library. I'm fairly sure this was actually using the
classpath version of CC rather than the legacyClasspath version!
- Add a new "testMinecraftLibrary" configuration, instead of trying to
infer it from the classpath. We have to jump through some hoops to
avoid having multiple versions of a library on the classpath at once,
but it's not too bad.
I'm working on a patch to bsl which might allow us to kill off
legacyClasspath instead. Please, anything is better than this.
Forge doesn't run client-side commands from sendUnsignedCommand, so we
still require a mixin there.
We do need to change the command name, as Fabric doesn't properly merge
the two command trees.
Rather than mixing-in to CachedOutput, we just wrap our DataProviders to
use a custom CachedOutput which reformats the JSON before writing. This
allows us to drop mixins for common+non-client code.
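Roughly, the wrapping looks like this (a sketch; reformatJson stands in
for the actual formatter):

    import java.io.IOException;
    import java.nio.file.Path;
    import java.util.concurrent.CompletableFuture;
    import com.google.common.hash.HashCode;
    import com.google.common.hash.Hashing;
    import net.minecraft.data.CachedOutput;
    import net.minecraft.data.DataProvider;

    static DataProvider reformatted(DataProvider provider) {
        return new DataProvider() {
            @Override
            public CompletableFuture<?> run(CachedOutput output) {
                // Intercept each write, reformat the JSON, then delegate to the real output.
                return provider.run(new CachedOutput() {
                    @Override
                    public void writeIfNeeded(Path path, byte[] data, HashCode hash) throws IOException {
                        var formatted = reformatJson(data); // stand-in for the real formatter
                        output.writeIfNeeded(path, formatted, Hashing.sha1().hashBytes(formatted));
                    }
                });
            }

            @Override
            public String getName() {
                return provider.getName();
            }
        };
    }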
Originally we exposed a single registerTurtleUpgradeModeller method, which
could be called from both Fabric (during a mod's client init) and Forge
(during FMLClientSetupEvent).
This was fine until we allowed upgrades to specify model dependencies,
which would then be loaded automatically, as this means model loading now
depends on upgrade modellers being loaded. Unknown to me, this is not
guaranteed to be the case on Forge - mod setup happens at the same time
as resource reloading!
Unfortunately there's not really a salvageable way of fixing this with
the current API, so Forge now uses a registration-event-based system
instead, meaning we can guarantee all modellers are loaded before
models are baked.
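On Forge, registration now looks something like this (the event name,
registration method and MyMod.MY_TURTLE_UPGRADE are placeholders and
assumptions on my part; TurtleUpgradeModeller.flatItem() is the existing
API helper):

    // Subscribed on the mod event bus, so all modellers are known before baking.
    @SubscribeEvent
    public static void onRegisterTurtleModellers(RegisterTurtleModellersEvent event) {
        event.register(MyMod.MY_TURTLE_UPGRADE.get(), TurtleUpgradeModeller.flatItem());
    }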
Historically we used Forge's SimpleChannel methods (and
PacketDistributor) to send the packets to the client. However, we don't
need to do that - it is sufficient to convert it to a vanilla packet,
and send the packet ourselves.
Given we need to do this on Fabric, it makes sense to do this on Forge
as well. This allows us to unify (and thus simplify) a lot of how packet
sending works.
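Concretely, sending a clientbound message now boils down to something
like this (helper names are illustrative):

    import net.minecraft.server.level.ServerPlayer;

    static void sendToPlayer(NetworkMessage<ClientNetworkContext> message, ServerPlayer player) {
        // Wrap the message in a vanilla custom-payload packet and send it directly,
        // rather than going through SimpleChannel/PacketDistributor.
        player.connection.send(PlatformHelper.get().createPacket(message));
    }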
At the same time, we also remove the handling of speaker audio during
decoding. We originally did this to avoid the additional copy of audio
data. However, this doesn't work on 1.20.4 (as packets aren't
encoded/decoded on singleplayer), so it makes sense to do this
Correctly(TM).
This also allows us to get rid of ClientNetworkContext.get(). We do
still need to service load this class (as Forge's networking isn't split
up in the same way Fabric's is), but we'll be able to drop that in
1.20.4.
Finally, we move the record playing code from ClientNetworkContext to
ClientPlatformHelper. This means the network context no longer needs to
be platform-specific!
After that embarrassment, let's do some proper work.
Rather than passing the level and position each time we call
ComponentAccess.get(), we now pass them at construction time (in the
form of the BE). This makes the consuming code a little cleaner, and is
required for the NeoForge changes in 1.20.4.
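In other words, roughly (method names are illustrative):

    // Before: componentAccess.get(level, pos.relative(direction), direction)
    // After: the level/position come from the block entity we were constructed with.
    var peripherals = ComponentAccess.create(blockEntity, this::onNeighbourInvalidated);
    var peripheral = peripherals.get(direction);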
Everything old is new again!
CC's network message implementation has gone through several iterations:
- Originally network messages were implemented with a single class,
which held a packet id/type and opaque blobs of data (as
string/int/byte/NBT arrays), and a big switch statement to decode and
process this data.
- In 42d3901ee3, we split the messages
into different classes all inheriting from NetworkMessage - this bit
we've stuck with ever since.
Each packet had a `getId(): int` method, which returned the
discriminator for this packet.
- However, getId() was only used when registering the packet, not when
sending, and so in ce0685c31f we
removed it, just passing in a constant integer at registration
instead.
- In 53abe5e56e, we made some relatively
minor changes to make the code more multi-loader/split-source
friendly. However, this meant when we finally came to add Fabric
support (8152f19b6e), we had to
re-implement a lot of Forge's network code.
In 1.20.4, Forge moves to a system much closer to Fabric's (and indeed,
Minecraft's own CustomPacketPayload), and so it makes sense to adapt to
that now. As such, we:
- Add a new MessageType interface. This is implemented by the
loader-specific modules, and holds whatever information is needed to
register the packet (e.g. discriminator, reader function).
- Each NetworkMessage now has a type(): MessageType<?> function. This
is used by the Fabric networking code (and for NeoForge's on 1.20.4)
instead of a class lookup.
- NetworkMessages now creates/stores these MessageType<T>s (much like
we'd do for registries), and provides getters for the
clientbound/serverbound messages. Mod initialisers then call these
getters to register packets.
- For Forge, this is relatively unchanged. For Fabric, we now use
`FabricPacket`s.
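A simplified sketch of the resulting shape (the real interfaces carry a
little more):

    // Holds whatever loader-specific information is needed to register a packet
    // (discriminator, reader function, ...).
    interface MessageType<T extends NetworkMessage<?>> {
    }

    interface NetworkMessage<C> {
        // Handle the message with the given (client- or server-side) context.
        void handle(C context);

        // Used by the networking code instead of a class lookup.
        MessageType<? extends NetworkMessage<C>> type();
    }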
While ComputerFamily is still useful, there's definitely some places
where it adds an extra layer of indirection. This commit attempts to
clean up some places where we no longer need it.
- Remove ComputerFamily from AbstractComputerBlock. The only place this
was needed is in TurtleBlock, and that can be replaced with normal
Minecraft explosion resistance!
- Pass in the fuel limit to the turtle block entity, rather than
deriving it from current family.
- The turtle BERs now derive their model from the turtle's item, rather
than the turtle's family.
- When creating upgrade/overlay recipes, use the item's name, rather
than {pocket,turtle}_family. This means we can drop getFamily() from
IComputerItem (it is still needed to handle the UI).
- We replace IComputerItem.withFamily with a method to change to a
different item of the same type. ComputerUpgradeRecipe no longer
takes a family, and instead just uses the result's item.
- Computer blocks now use the normal Block.asItem() to find their
corresponding item, rather than looking it up via family.
The above means we can remove all the family-based XyzItem.create(...)
methods, which have always felt a little ugly.
We still need ComputerFamily for a couple of things:
- Permission checks for command computers.
- Checks for mouse/colour support in ServerComputer.
- UI textures.
- Add a check to ensure declared dependencies in the :core project, and
those inherited from Minecraft are the same.
- Compute the next Cobalt version, rather than specifying it manually.
- Add the gradle versions plugin (and version catalog update), and
update some versions.
Previously we prevented our published full jar from depending on any
of the other projects by excluding the whole cc.tweaked group. However,
as Cobalt now also lives in that group, this meant we were missing the
Cobalt dependency.
Rather than specifying a wildcard, we now exclude the dependencies when
adding them to the project.
This commit adds abstract classes to describe the interface for our
mod-loader-specific generic peripherals (inventories, fluid storage,
item storage).
This offers several advantages:
- Javadoc to illuaminate conversion no longer needs the Forge project
(just core and common).
- Ensures we have a consistent interface between Forge and Fabric.
Note, this does /not/ implement fluid or energy storage for Fabric. We
probably could do fluid without issue, but not something worth doing
right now.
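The shape of these abstract classes is roughly as follows (a sketch;
the actual classes and method sets differ):

    import java.util.Map;
    import dan200.computercraft.api.lua.LuaFunction;

    // A loader-agnostic description of the "inventory" generic peripheral: the
    // Lua-facing methods live here, while the Forge/Fabric projects implement them
    // against their own item-transfer APIs (T being the loader's inventory type).
    public abstract class AbstractInventoryMethods<T> {
        @LuaFunction(mainThread = true)
        public abstract int size(T inventory);

        @LuaFunction(mainThread = true)
        public abstract Map<Integer, Map<String, ?>> list(T inventory);
    }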
This adds a new "java_allocation" metric, which tracks the number of
bytes allocated while executing the computer (as measured by Java). This
is not a 100% reliable number, but hopefully gives some insight into
what computers are doing.
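For reference, HotSpot exposes per-thread allocation counters, which is
the kind of measurement this relies on (a sketch of the idea; CC's
actual plumbing may differ):

    import java.lang.management.ManagementFactory;

    static long measureAllocation(Runnable task) {
        // com.sun.management.ThreadMXBean extends the standard bean with allocation counters.
        var threads = (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        long threadId = Thread.currentThread().getId();
        long before = threads.getThreadAllocatedBytes(threadId);
        task.run();
        return threads.getThreadAllocatedBytes(threadId) - before;
    }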
In practice, we're never going to change this to true by default. The
old Tekkit Legends pack enabled this[^1], and that caused a lot of
problems, though admittedly back in 2016 so things might be better now.
If people do want this functionality, it should be fairly easy to
replicate with a datapack, adding a file to rom/autorun.
[^1]: See https://www.computercraft.info/forums2/index.php?/topic/27663-
Hate that I remember this, why is this still in my brain?
Allows registering arbitrary block lookup functions instead of a
platform-specific capability. This is roughly what Fabric did before,
but generalised to also take an invalidation callback.
This callback is a little nasty - it needs to be a NonNullConsumer on
Forge, but that class isn't available on Fabric. For now, we make the
lookup function (and thus the generic peripheral provider) generic on
some <T extends Runnable> type, then specialise that on the Forge side.
Hopefully we can clean this up when NeoForge reworks capabilities.
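A rough sketch of the lookup-function shape (simplified; the actual
signature is an assumption):

    import javax.annotation.Nullable;
    import net.minecraft.core.BlockPos;
    import net.minecraft.core.Direction;
    import net.minecraft.server.level.ServerLevel;

    // Looks up a component (e.g. a peripheral) at a position. T is the loader-specific
    // invalidation callback type (specialised further on the Forge side); it is invoked
    // when the component at this position should be re-queried.
    @FunctionalInterface
    interface ComponentLookup<C, T extends Runnable> {
        @Nullable
        C find(ServerLevel level, BlockPos pos, Direction side, T invalidate);
    }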