Experimenting with stacking Spatial Poolers

It is assumed here that grid cell signals are part of self-location in each 2D map/module. Program-calculated X,Y location variables for all body parts already exist, so we can start with exact head/body angles and coordinates for the body center and mouth. Some might call that an easy way to cheat (on a most baffling part), but we can call it a “machine learning”-enabled gift that grants self-location super-powers, instead of only mortal cellular approximations like ours.
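For concreteness, here is roughly how I picture that given-for-free self-location state in Python (the `BodyState` name and fields are only my illustration, not existing code):

```python
from dataclasses import dataclass
import math

@dataclass
class BodyState:
    """Exact self-location handed to the model 'for free'.

    Illustrative names only; real code would read these straight from the
    simulator's existing body-part variables.
    """
    x: float        # body-center X in arena coordinates
    y: float        # body-center Y in arena coordinates
    heading: float  # head/body angle in radians

    def mouth_xy(self, body_radius: float = 1.0):
        """Mouth location projected ahead of the body center along the heading."""
        return (self.x + body_radius * math.cos(self.heading),
                self.y + body_radius * math.sin(self.heading))
```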

Starting with the standard animal-cognition upper motor commands of Left/Right and Forward/Reverse simplifies things further while remaining true to biology. Going straight to text output would make a chatbot, not something new of possible interest to neuroscience.
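The whole upper-motor interface then stays as small as this (again only a sketch of what I have in mind):

```python
from enum import Enum

class MotorCommand(Enum):
    """The four upper motor commands: steering plus thrust, nothing else."""
    LEFT = "left"        # rotate heading counterclockwise
    RIGHT = "right"      # rotate heading clockwise
    FORWARD = "forward"  # move along the current heading
    REVERSE = "reverse"  # back up along the current heading
```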

The smallest (1D or) 2D map grid would represent an entire unique room, while the most detailed map further resolves the boundaries of objects to navigate around or over. The sparse data would be one bit per place, marking the surfaces of solids that can be touched or bumped into, in 2D maps that stack into a 3D representation. Almost everything else in each map is the empty space around the object, all 0’s. At first the representation would be a 2D flatland world view of invisible shock zones and invisible wall locations; bits for color and other properties can be added later.
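A minimal sketch of that stacked one-bit mapping, assuming NumPy arrays for the layers and an arena size picked only for illustration:

```python
import numpy as np

ARENA_SHAPE = (64, 64)  # places per layer (illustrative size)
NUM_LAYERS = 4          # 2D maps stacked into a 3D representation

# One bit per place: True where a solid surface was touched or bumped into,
# False for the empty space around objects (which is nearly everything).
maps = np.zeros((NUM_LAYERS,) + ARENA_SHAPE, dtype=bool)

def mark_surface(layer: int, x: int, y: int) -> None:
    """Set the single surface bit for a place that was bumped into."""
    maps[layer, y, x] = True

# Later, extra property planes (color, etc.) could sit alongside the surface bit.
color_maps = np.zeros((NUM_LAYERS,) + ARENA_SHAPE, dtype=np.uint8)  # 0 = none recorded
```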

The map geometry must contain hexagonal places (each centered on the intersection of the surrounding 2D or 3D triangles), but it may be that the exact geometry of the mapped data does not matter. In that case each Y row can simply be shifted one place radius to the right of the previous row, or each hexagonal place/column/subpopulation/group can use 6 cells that sense and memorize the navigational traveling waves received at its one input.
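If that row shift is folded back onto a square array it becomes the usual “odd-r” offset hex layout, which is how I would code it; the six-neighbor table below is the standard one for that layout:

```python
# Every other row sits half a place (one place radius) to the right, so each
# place has exactly six surrounding places: a hexagonal packing on a square array.
ODD_R_NEIGHBORS = {
    0: [(+1, 0), (0, -1), (-1, -1), (-1, 0), (-1, +1), (0, +1)],  # even rows: (dx, dy)
    1: [(+1, 0), (+1, -1), (0, -1), (-1, 0), (0, +1), (+1, +1)],  # odd rows
}

def hex_neighbors(x: int, y: int):
    """The six places surrounding (x, y) in the row-shifted grid."""
    return [(x + dx, y + dy) for dx, dy in ODD_R_NEIGHBORS[y & 1]]
```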

Less detailed maps would, at some level, fill the gaps remaining in the most detailed map. This way there is already an articulation mechanism where, at the very tip, the entire arena circle can be seen as one place. Pooling horizontally as well as vertically should generalize in a way that predicts a connected shape from a limited number of points. This would add something missing from the behavior when using only one 2D map, which has to bash into the wall everywhere before seeing itself as fully enclosed.
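Here is a sketch of the vertical half of that pooling: OR-pooling the one-bit surface map into coarser maps, so a few scattered bump points already read as a connected stretch of wall at the coarse scale (sizes and names are illustrative):

```python
import numpy as np

def pool_coarser(surface: np.ndarray, factor: int = 2) -> np.ndarray:
    """OR-pool (max-pool) a one-bit surface map into a coarser map."""
    h, w = surface.shape
    h2, w2 = h // factor, w // factor
    blocks = surface[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.any(axis=(1, 3))

# Three scattered wall bumps along the top row of an 8x8 fine map...
fine = np.zeros((8, 8), dtype=bool)
fine[0, [0, 3, 6]] = True
coarse = pool_coarser(pool_coarser(fine))  # 8x8 -> 4x4 -> 2x2
print(coarse[0])  # ...already read as a solid top edge at the coarsest scale: [ True  True]
```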

One question is whether (without adding code that instructs it to do so), after bashing into the invisible walls enough times, the virtual critter predicts the wall locations it hasn’t bashed into yet, and (when not overly hungry) tests its predictions/guesses/hypotheses by slowing down to a pleasant bump where it expects solid object surfaces. If a prediction turns out true, then the bit for that place gets set to 1 (where necessary) in all map layers; otherwise nothing was really there and the falsely predicted place remains 0.
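In code the confirm-or-discard step could be as simple as this (the `predicted` plane and all names are illustrative):

```python
import numpy as np

def test_predicted_wall(maps: np.ndarray, predicted: np.ndarray,
                        bumped_solid: bool, x: int, y: int) -> None:
    """Resolve one predicted-but-unvisited wall place after a slow, gentle approach.

    `maps` is the stack of one-bit 2D layers; `predicted` is a same-shaped
    boolean plane of guessed wall locations.
    """
    if not predicted[y, x]:
        return                  # nothing was predicted here, nothing to resolve
    if bumped_solid:
        maps[:, y, x] = True    # confirmed: set the bit in all map layers
    # else: false prediction, the place stays 0 in every layer
    predicted[y, x] = False     # either way this guess has now been tested
```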

Jeff recently mentioned how he thinks this older part of the brain relates to the later-added neocortex:

If cortical columns repeat the same overall methodology in miniature, then HTM spatial pooling can be expected to work, in some way, for both.
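To make the stacking in the title concrete, here is a bare-bones spatial-pooler-style sketch (overlap scores, global k-winner inhibition, Hebbian permanence nudges) with one pooler feeding the next. It leaves out boosting, local inhibition, and the other refinements of Numenta’s full algorithm, so treat it as a starting point only:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinySpatialPooler:
    """Minimal HTM-style spatial pooling over a dense bit vector."""

    def __init__(self, n_inputs: int, n_columns: int, k_active: int):
        self.k = k_active
        self.threshold = 0.5  # permanence value at which a synapse counts as connected
        self.permanence = rng.uniform(0.3, 0.7, size=(n_columns, n_inputs))

    def compute(self, input_bits: np.ndarray, learn: bool = True) -> np.ndarray:
        connected = self.permanence >= self.threshold
        overlap = connected @ input_bits        # active connected synapses per column
        active = np.argsort(overlap)[-self.k:]  # global k-winner inhibition
        if learn:
            # Winning columns grow permanence toward active bits, shrink elsewhere.
            self.permanence[active] += np.where(input_bits > 0, 0.03, -0.015)
            np.clip(self.permanence, 0.0, 1.0, out=self.permanence)
        return active

# Stacked: the sparse output of one pooler becomes the input of the next.
sp1 = TinySpatialPooler(n_inputs=256, n_columns=128, k_active=8)
sp2 = TinySpatialPooler(n_inputs=128, n_columns=64, k_active=4)

sensed = (rng.random(256) < 0.05).astype(float)  # a sparse sensed pattern
level1 = np.zeros(128)
level1[sp1.compute(sensed)] = 1.0
print("level-2 active columns:", sp2.compute(level1))
```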

I wrote more here about how the modeling has become easier, along with a new Torch code example to help get things started:

I’m hoping what I described makes better sense to you at the HTM coding level. Grid module signals became good clues to an underlying memory organization, where a machine learning approach may better demonstrate the fundamental basics of how it works.

What is now most needed is the horizontal interconnection geometry of “grid” cell sized modules, each differing in scale from the next by a factor of roughly 1.4 to 1.8. Bitking?
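Just for scale, picking a ratio of 1.6 out of that range gives a progression of module spacings like:

```python
# Illustrative only: each module ~1.6x the spacing of the previous one.
base_spacing = 1.0
ratio = 1.6
print([round(base_spacing * ratio ** i, 2) for i in range(5)])  # [1.0, 1.6, 2.56, 4.1, 6.55]
```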
