Animation of spatial pooler and temporal memory



Just another visualization, except this one deviates from the standard HTM model. I wanted to make it to see how cells work together without any column structure.

The field on the left contains the ‘spatial’ cells, which connect directly to the input cells surrounding the field. The field on the right contains the ‘temporal’ cells, which connect to the spatial cells. The blue box surrounding the active cells marks their neighborhood: cells can only interact with other cells within it. The temporal cells share the same neighborhoods as the spatial cells, as if the two fields were layered, so temporal cells can only connect to spatial cells within their neighborhood.

Each spatial cell has a number of segments, each representing a union of various input patterns. Through Hebbian learning the spatial cells gradually become tuned to specific patterns, similar to the SP but without the column structure.
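To make that concrete, here is a minimal toy sketch of one cell with several segments being tuned by a Hebbian rule. All names and parameters here are my own assumptions for illustration, not the actual model from the visualization:

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUT, N_SEGMENTS = 64, 4
LEARN_RATE, DECAY = 0.1, 0.02

# Each segment is a weight vector over the input bits (small random start)
segments = rng.random((N_SEGMENTS, N_INPUT)) * 0.3

def present(input_bits):
    """Find the best-matching segment and strengthen its match (Hebbian)."""
    overlaps = segments @ input_bits
    best = int(np.argmax(overlaps))
    # Reinforce weights on active inputs, decay weights on inactive ones
    segments[best] += LEARN_RATE * input_bits
    segments[best] -= DECAY * (1 - input_bits) * segments[best]
    np.clip(segments[best], 0.0, 1.0, out=segments[best])
    return best

# Repeatedly presenting distinct sparse patterns gradually tunes
# different segments to different patterns
pattern_a = (rng.random(N_INPUT) < 0.2).astype(float)
pattern_b = (rng.random(N_INPUT) < 0.2).astype(float)
for _ in range(50):
    present(pattern_a)
    present(pattern_b)
```

After training, the winning segment for a pattern ends up with strong weights on exactly that pattern's active bits, which is the "tuning" effect described above.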

Every cell functions locally and learns from local activity, using STDP. The only difference between the cell types is what they connect to. The spatial cells connect to other spatial cells to form temporal memory (though I’ve removed temporal memory for now, just to watch temporal pooling on the right field). The spatial and temporal cells function the same way but behave differently because the spatial cells connect to the input while the temporal cells connect to the spatial cells.
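For readers unfamiliar with STDP: in the standard pair-based form, a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened when the order is reversed. A tiny sketch of that rule (amplitudes and time constant are my own toy choices, not taken from the model above):

```python
import math

A_PLUS, A_MINUS = 0.05, 0.06   # potentiation / depression amplitudes
TAU = 20.0                     # time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fired before pre -> depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

print(round(stdp_dw(10.0, 15.0), 4))  # 0.0389  (causal pairing strengthens)
print(round(stdp_dw(15.0, 10.0), 4))  # -0.0467 (anti-causal pairing weakens)
```

This is why "who they connect to" is the only distinction needed: the same local rule produces spatial tuning when the inputs are sensor bits and temporal pooling when the inputs are spatial cells.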

The temporal cells gradually pool the spatial cells, and what I like is that the pooling is distributed because of local fields.


The spatial cells also connect to other spatial cells within their neighborhoods that were active on the previous step. Although this gives only first-order memory, that might not be an issue for this model, thanks to feedback depolarization from a higher region via this region’s temporal cells. Hopefully I can visualize that process one day.
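A first-order transition memory like this can be sketched in a few lines: each cell strengthens synapses to whichever cells were active on the previous step, and a cell becomes depolarized (predicted) when enough of those learned presynaptic cells are active. Names and thresholds here are illustrative assumptions only:

```python
from collections import defaultdict

perm = defaultdict(float)   # (prev_cell, cell) -> permanence
THRESHOLD, INC = 0.5, 0.2

def learn_step(prev_active, active):
    """Strengthen synapses from previously active cells to currently active ones."""
    for post in active:
        for pre in prev_active:
            perm[(pre, post)] = min(1.0, perm[(pre, post)] + INC)

def depolarized(active):
    """Cells predicted (depolarized) by the current activity."""
    return {post for (pre, post), p in perm.items()
            if pre in active and p >= THRESHOLD}

# Learn a repeating 4-step sequence A -> B -> C -> D -> A -> ...
seq = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]
for _ in range(3):
    for i in range(len(seq)):
        learn_step(seq[i - 1], seq[i])

print(sorted(depolarized({0, 1})))  # [2, 3]
```

Because each step of a single repeating sequence has a unique predecessor, the depolarized set is unambiguous, which matches the "no union of predictions" situation described below for the 4-step sequence.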

In this case the first-order depolarized cells are colored pink. They do not form a union of predictions here, because this is just a single 4-step repeating sequence after learning (notice the input connections become fairly specific after learning, thanks to STDP).

I’m not sure if these deviant experiments are of interest to anyone. I just love doing this, everything about it, and I enjoy sharing it. Hopefully these visualizations and ideas may be useful to someone.


I personally think it’s great! For HTM to succeed, I think we need all kinds of tools because, you know, we’re all humans here, so even with pure mathematics, logic, and data analysis, we still need something vivid, beautiful, and cool.


Upon rereading @jhawkins’ book On Intelligence, I’ve decided to try to visualize the learning and inference mechanisms in all 6 cortical layers. Along with that, I want to visualize a small hierarchy to show the HTM in action.

Although the pseudocode for layers 2/3, 5, and 6 has not yet been published by NuPIC, I will try to implement them myself based on the available theory from On Intelligence and HTM. So I’d really appreciate any feedback as I progress.

I will be hosting the progress of the visualizations and theory here. I’ve begun the write-up of the theory here. Again, any feedback on the theory is very welcome.

My hope is to produce a visualization that gives an intuitive sense of how the cortex works, almost like watching the basic processes in vivo. For visual people like me, it’s very useful.


Welcome to the club :slight_smile: and good luck on the slightly ambitious attempt