rootbeer23 wrote:no adjacent areas are processed in parallel. you divide the map into horizontal zones and in the first pass you simulate
all even zones, in the second pass you simulate all odd zones.
processing order is strict within a zone (as a precondition), interaction between units of adjacent zones is in sequence, because adjacent zones are not simulated in parallel.
The concept of adjacency is why this is far more complex than it first appears. You must divide the nodes along both axes, or else there *will* be race conditions at the extreme edges. That means there are eight nodes adjacent to every node:
000
010
000
So, assuming a 10x10 grid for simplicity, the first pass would look like:
0000000000
0101010101
0000000000
0101010101
0000000000
0101010101
0000000000
0101010101
0000000000
0101010101
And the second pass:
0101010101
0000000000
0101010101
0000000000
0101010101
0000000000
0101010101
0000000000
0101010101
0000000000
And the third pass:
1010101010
0000000000
1010101010
0000000000
1010101010
0000000000
1010101010
0000000000
1010101010
0000000000
And so on until every node is updated. Assuming you aren't doing all of this in one tick, that is a *lot* of added latency in an engine where latency is already an issue. Furthermore, there would be an obvious "stepping", with units in one zone being updated sooner than those in the adjacent zone. This could of course be covered up by interpolating between a set of timestamped prior states in each unit in order to maintain visual temporal coherency for the player, but the displayed state would then necessarily differ dramatically from the internal positions of the units: it is no longer one tick behind, but 1-N ticks behind, where N is the number of passes needed to process all nodes without adjacency.
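To make the pass structure concrete, here is a minimal sketch (all names hypothetical, not from any actual engine) of the scheme the grids above describe: zones are coloured by the parity of their row and column, so with 8-way adjacency four passes are needed before every zone has been updated once, and no two zones inside a single pass are ever adjacent.

```python
# Hypothetical sketch of the four-pass checkerboard scheduling shown
# in the grids above. Zones are coloured by (row % 2, col % 2), giving
# four colours; zones of one colour are never adjacent, so each pass
# could in principle be simulated in parallel without border races.

def pass_colors():
    # Order matching the grids above; any fixed order would work.
    return [(1, 1), (0, 1), (0, 0), (1, 0)]

def zones_for_pass(size, color):
    r_par, c_par = color
    return [(r, c) for r in range(size) for c in range(size)
            if r % 2 == r_par and c % 2 == c_par]

def adjacent(a, b):
    # 8-way adjacency: distinct zones within one step on both axes.
    return a != b and abs(a[0] - b[0]) <= 1 and abs(a[1] - b[1]) <= 1

for color in pass_colors():
    batch = zones_for_pass(10, color)
    # The safety property the whole scheme rests on:
    assert not any(adjacent(a, b) for a in batch for b in batch)
```

Note that four passes is the price of 8-way adjacency; a plain even/odd split only gives two, which is exactly why both axes have to be involved.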
Also note that you still have to account for literal corner cases. Remember that planets are spherical. Assuming they are generated by taking the six faces of a cube and projecting them onto a sphere, each node field has four adjacent node fields, which then need to communicate with each other.
rootbeer23 wrote:you can enforce access rules (like for example a tank in an even zone can only query the position of a tank in an odd zone or its own zone etc.) in the lua interpreter, or whatever your scripting language would be. you need this only in a debug build. this kind of unallowed access is trivial to detect, since each entity is associated with a zone.
that comes in addition to the fact that the target selection function for a tank will ignore targets that are out of range naturally.
So now there are limitations on what you can query based on an arbitrary spatial location? As a rule of thumb, if a feature only works some of the time, it may as well not be there. Therefore, this would break a *lot* of functionality, or worse, it would function inconsistently, for reasons that are clear only to the developer of the spatial division system.
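For reference, the debug-build rule from the quote is simple to state in code. This is only an illustrative sketch of the even/odd-zone restriction as described (the function name and zone indexing are hypothetical), not an endorsement of the limitation:

```python
# Sketch of the debug-only access rule from the quote, assuming
# horizontal zones indexed top to bottom (indexing is hypothetical).
def can_query(src_zone: int, dst_zone: int) -> bool:
    # Zones of the same parity run in the same pass and may be
    # mutating concurrently, so cross-queries between them are racy.
    # Your own zone is safe because processing within it is sequential.
    return src_zone == dst_zone or (src_zone % 2) != (dst_zone % 2)

assert can_query(2, 3)      # even zone reading an odd (frozen) zone
assert can_query(2, 2)      # own zone
assert not can_query(2, 4)  # two even zones run in parallel: racy
```

Which is precisely the problem: whether a query is legal depends on where the target happens to be standing, and that third assert is the inconsistency the player-facing features would run into.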
rootbeer23 wrote:and correctness is a priori easier to ensure than for the precondition: that is making a single thread deterministic.
that is because the spatial independence is not a thing that you have to impose on the code, it's a natural property of an RTS.
It's only easier if you are willing to accept the staggering limitations this method imposes on inter-unit communication, along with the fact that it will be extremely unoptimized in exactly the cases that *need* optimization: large-scale unit engagements, which tend to occur near the same place. Add to that the myriad of subtle edge cases that would need to be handled in order for it to function deterministically.
I do not see any advantages whatsoever over a simple tasklet wrapper utilizing the update-cache-finalize pattern.