It seemed like you noticed right away that the “3Dness of the environment and perception” V1 senses is different from the downstream “2Dness of the worldview,” but I was not fully sure. In either case I needed to explain more about the model I’m developing. To help everyone out, I went into additional detail about the V1-related things in your thread instead of mine, since mine covers the other end of the cortical sheet.
I can add that while trying different signaling rules, I have in the past formed what looked like ocular dominance columns, exactly two places wide, where each was in the opposite state of the other(s). I did not experiment much with it, but it was like a 2D-environment version of V1 (cortical signal only, no retinal input); for a 3D environment a range of angles would exist in between the two opposite states, the extremes. Signal-wise it was at least a stable signal geometry for a blurry forest with far more connections than necessary, ready to be pruned down. HTM has that type of sparsification process in it. I now wonder what kind of traveling waves might have been produced by throwing in some retinal signals, but I doubt I saved a copy. It is, though, something worth mentioning as a possible clue for modeling V1 traveling waves. In that case you would look for rules that sort out the chaos going on in a newborn V1, and from what I recall that kind of signal jitter was included to force the network to settle into its most stable geometry, instead of settling into whatever came first and staying that way.
The model I now have uses the rules for mapping and navigation, but there are other ways to use them than that. Changing the rules that each place uses in a given area of the brain may work for modeling the entire cortical sheet. I sense that in the best-case scenario there will be, much like the wheel example, a reinventing of HTM theory. Matt’s new visual aid should have the same or very similar variables to work from, and be as useful as before or more so.
The best way I know of to get a sense of the network behavior is to try everything possible, including signal thrust/radiation patterns that favor pairing or other geometry, and see what happens. Starting with a V1 model for a 2D environment instead of 3D will greatly reduce the possibilities, while still containing edges of lines. In flatland only one point of an edge line is seen, unless it lies exactly across the 2D plane, in which case it is like a wall of light at all points along it through that portion of its world. It’s similar to a “slice” but has zero thickness. Two eyes with no 3D intermediate angles should only need the two-wide ocular dominance column structure seen before. When the eyes see nothing, the network goes quiet. When something bright moves by, the columns (by signaling like an attractor) make waves that travel at least along the length of each dominance column to V2; time of arrival can be expected to influence what at that point ends up drawn out as a traveling wave where information from both eyes is combined.
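The time-of-arrival idea can be made concrete with a toy sketch. Everything here is a hypothetical stand-in (the column length, the one-place-per-tick speed, and the `arrival_at_v2` helper are all my assumptions, not a committed design): each eye injects a pulse into its own dominance column, and V2 at the far end records when each pulse arrives.

```python
def arrival_at_v2(onsets, length=8):
    """Toy pulse propagation down each ocular dominance column.

    onsets maps column name -> tick at which that eye injects a pulse at
    place 0 of its column. The pulse advances one place per tick and
    'arrives' at V2 when it reaches place length - 1.
    """
    arrivals = {}
    for col, t0 in onsets.items():
        pos, t = 0, t0
        while pos < length - 1:   # one place per tick along the column
            pos += 1
            t += 1
        arrivals[col] = t
    return arrivals

# Right eye sees the moving bright spot 3 ticks after the left eye:
times = arrival_at_v2({"L": 0, "R": 3}, length=8)
# times == {'L': 7, 'R': 10}
delay_at_v2 = times["R"] - times["L"]   # -> 3
```

The point of the sketch: with equal propagation speed, the 3-tick interocular delay survives intact at V2, which is exactly the raw material needed when the two streams get combined into one traveling wave.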
Starting off with a stable pattern makes something like the surface of a pond, with what seem like canals feeding waves into it. It’s the sort of information stream HTM cells were made for, where in this case the straw does not have to move.