The current design for OctoberProject is that the client produces mutations directed at its own entity, which are then sent to the server. The server applies or rejects these mutations, potentially creating follow-up mutations (against the world or other entities), then sends them back to all the clients along with an incremented commit number for the original sender. The sender keeps its mutations in its "speculative projection", removing them once their commit numbers fall below the commit number from the server or once they become invalid. This works well and allows for fairly concise descriptions of potentially complex changes across the wire while remaining highly flexible.
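The pruning rule for the speculative projection can be sketched roughly as follows. This is an illustrative sketch only, and the class and method names are hypothetical, not the project's actual API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch (hypothetical names): the client keeps each
// locally-applied mutation, tagged with its commit number, until the
// server acknowledges a commit level at or beyond it.
public class SpeculativeProjection {
	// A local mutation tagged with the commit number the client assigned it.
	record PendingMutation(long commitNumber, String description) {}

	private final Deque<PendingMutation> pending = new ArrayDeque<>();
	private long nextCommitNumber = 1L;

	// Apply a mutation locally and remember it until the server confirms it.
	public long applyLocally(String description) {
		long commit = nextCommitNumber++;
		pending.addLast(new PendingMutation(commit, description));
		return commit;
	}

	// Called when the server reports the latest commit number it has
	// included: everything at or below that level is now authoritative
	// state on the client, so the speculative copy can be dropped.
	public void serverCommitted(long includedCommitNumber) {
		while (!pending.isEmpty() && (pending.peekFirst().commitNumber() <= includedCommitNumber)) {
			pending.removeFirst();
		}
	}

	public int pendingCount() {
		return pending.size();
	}
}
```

(Invalidated mutations would be removed by a similar scan; it is omitted here to keep the sketch small.)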
However, I realized that there is a problem when the clients don't have the entire data model. This is actually the common case once the system gets beyond the current "tech demo" phase: the world is effectively infinite, and it could contain an effectively infinite number of entities, while the client will only have loaded the parts of the world and the entities directly around it.
While each mutation only changes a single block or entity, it can theoretically read anything. This means that filtering on the "write" side is trivial (if the client doesn't know about a block, don't send them mutations targeting it) but the "read" side can't reasonably be done (they may have the targeted block loaded but not the block next to it which is read by the mutation). This missing input data would mean that the client and server could come to different conclusions when executing the mutation: "not loaded" is a reasonable observation for the server when a cuboid genuinely isn't loaded, but the client can hit the same "not loaded" state simply because the cuboid isn't in its subset of the world, even though the server has it.
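To make the divergence concrete, here is a minimal sketch of a mutation that writes one block but reads a neighbour. All types and the "water flows" rule are hypothetical, invented purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch (hypothetical types): a mutation that WRITES one
// block but READS another diverges when the reader's world subset is
// missing the block it reads.
public class ReadDivergence {
	public record Location(int x, int y, int z) {
		public Location east() { return new Location(x + 1, y, z); }
	}

	// A partial view of the world: an absent key means "not loaded".
	public static Optional<String> read(Map<Location, String> world, Location location) {
		return Optional.ofNullable(world.get(location));
	}

	// Example mutation: water flows into the target block only if the
	// block to its east is also water.
	public static boolean flowWater(Map<Location, String> world, Location target) {
		Optional<String> east = read(world, target.east());
		if (east.isPresent() && east.get().equals("water")) {
			world.put(target, "water");
			return true;
		}
		// "Not loaded" and "not water" are indistinguishable here, so a
		// client missing the eastern block rejects a mutation the server
		// (which has that block loaded) would apply.
		return false;
	}
}
```

Running the same mutation against the server's full view and a client's view which lacks the eastern block produces opposite results, which is exactly the inconsistency described above.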
I think that there are two solutions to this: (1) any data which is read, and known to not exist on the client, could be sent with the mutation so that the client's read operations give the same results, or (2) we could stop sending mutations to the clients, instead only sending the actual write operations produced by them.
While (1) fits better within the current model, it is pretty complicated and may result in sending lots of data. (2) is probably the better answer, even though it means changing the stream of mutations from server to client into "update write" operations. This is a non-trivial change but is almost certainly conceptually simpler on the client and doesn't actually change much (in fact, this would probably still be realized as "mutations", just not the same ones the clients sent or scheduled - they still have the same rules, after all). Additionally, this means that the writes could be saturated within, or even across, logical server ticks (which is an interesting scaling possibility, since repeated writes to the same location collapse into one update).
The data update would likely still be small (although how to detect and encode the meaning of the write might be tricky). Further, detecting the delta at the end of a tick shouldn't be too difficult as the entire data model is copy-on-write, so shallow instance comparisons handle the common case.
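The shallow-comparison idea can be sketched very simply. In a copy-on-write model, an untouched element is literally the same instance in the start-of-tick and end-of-tick snapshots, so reference equality finds the writes without any deep comparison (types here are hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (hypothetical types): end-of-tick delta detection
// in a copy-on-write data model.  Unchanged elements are the SAME object
// in both snapshots, so identity comparison (==, not equals()) is enough.
public class DeltaDetection {
	public record BlockData(String type) {}

	// Compare the start-of-tick and end-of-tick snapshots by identity;
	// the indices where the references differ are the writes to broadcast.
	public static List<Integer> changedIndices(List<BlockData> before, List<BlockData> after) {
		List<Integer> changed = new ArrayList<>();
		for (int i = 0; i < before.size(); i++) {
			if (before.get(i) != after.get(i)) {
				changed.add(i);
			}
		}
		return changed;
	}
}
```

The harder part, as noted above, would be encoding the *meaning* of each detected write, not finding it.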
I am still thinking about this change and it won't be made until after the initial play test demo, as it is only required to support the infinite world which isn't yet integrated.
At least this project is providing some interesting technical challenges to consider,
Jeff.