Some animations explaining columns and SP

I’ve been working on some animations explaining how HTM works in a simple way. They focus on how the concepts in HTM are implemented, not on the general concepts like why we use the SP, TM, and TP, or where the theory comes from. I’m looking for feedback about inaccuracies in the columns and ugliness in the animation, for example if you think something is misleading as shown or could be shown better.

The animations are made with a Python library based on Pyglet/OpenGL that I wrote, called Paper. I wrote it for this purpose, but it could be used for other simple educational and explanatory animations, so I put it on GitHub.

The two animations are (1) HTM columns and (2) the spatial pooler, without boosting and using global inhibition.
HTM Columns
HTM Spatial Pooler

Let me know what you think!
Sam


Sam, this is great stuff. I’m also starting to think about SP visualizations, and I like the direction you are taking. I hope you don’t mind if I end up borrowing some design ideas for my own work.

Also, since you are using the terms “potential pool” and “receptive field”…

This topic recently came up in a meeting, and I think @subutai and I agree that there is a subtle difference between these terms.

  • potential pool: Possible connections from a mini-column in the SP to an input space. Could be represented as a list of integer indices of the input space.
  • receptive field: Within the potential pool, the synapses with permanence values above a connection threshold. Activity within the receptive field causes the mini-column to become active. (A short code sketch of the distinction follows below.)
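
To make the distinction concrete, here is a minimal Python sketch for a single mini-column. The names (`potential_pool`, `permanences`, `CONNECTED_THRESHOLD`) and all of the sizes are placeholders I made up, not from any particular implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

INPUT_SIZE = 1024          # number of bits in the input space
POTENTIAL_PCT = 0.5        # fraction of inputs this mini-column could connect to
CONNECTED_THRESHOLD = 0.2  # permanence at which a synapse counts as connected

# Potential pool: a fixed subset of input indices, chosen once at initialization.
potential_pool = rng.choice(INPUT_SIZE, size=int(INPUT_SIZE * POTENTIAL_PCT),
                            replace=False)

# One permanence value per potential synapse, initialized near the threshold.
permanences = rng.uniform(0.0, 0.4, size=potential_pool.size)

def receptive_field(potential_pool, permanences):
    """The connected subset of the potential pool: synapses whose permanence
    has reached the connection threshold."""
    return potential_pool[permanences >= CONNECTED_THRESHOLD]

print("potential pool size:", potential_pool.size)
print("receptive field size:", receptive_field(potential_pool, permanences).size)
```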

Interesting, I hadn’t picked up on this distinction between the definitions before (I have used the two terms interchangeably in the past). Another way to think of the distinction, then, would be that the potential pool is static, while the receptive field changes over time with SP learning.
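
That is easy to see in a sketch: run an SP-style permanence update (increment active potential synapses, decrement the rest) and the connected subset drifts while the pool never changes. Something like the following, assuming a single mini-column, a fake input SDR, and arbitrary placeholder learning rates:

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_SIZE = 1024
CONNECTED_THRESHOLD = 0.2
PERM_INC, PERM_DEC = 0.05, 0.02   # arbitrary placeholder learning rates

# Fixed potential pool and learnable permanences for one mini-column.
potential_pool = rng.choice(INPUT_SIZE, size=512, replace=False)
permanences = rng.uniform(0.0, 0.4, size=potential_pool.size)

def learn(active_input_bits):
    """When this mini-column wins, strengthen potential synapses whose input
    bit was active and weaken the rest. The pool itself never changes."""
    active = np.isin(potential_pool, active_input_bits)
    permanences[active] = np.minimum(permanences[active] + PERM_INC, 1.0)
    permanences[~active] = np.maximum(permanences[~active] - PERM_DEC, 0.0)

for step in range(20):
    fake_sdr = rng.choice(INPUT_SIZE, size=40, replace=False)  # stand-in input
    learn(fake_sdr)
    receptive_field = potential_pool[permanences >= CONNECTED_THRESHOLD]
    # The pool size stays fixed while the receptive field size drifts.
    print(step, potential_pool.size, receptive_field.size)
```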


That’s funny, I always thought the receptive field was the region of the input space from which the potential pool is chosen, i.e. a radius from the column center in the input space that can be used to select potential connections. I couldn’t say now where I got that idea, haha. I’ll have to update my thinking and animations.

This exactly lines up with my thinking as well, and it’s how my code has used the terms in function and variable names. So basically the receptive field size is static, but the inhibition radius for each minicolumn is computed from the “average connected receptive field radius”, which varies over time as new synapses become connected and others are disconnected throughout the minicolumns. I think these ideas originate from the first white paper that included pseudocode?
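
For what it’s worth, here is a rough 1D sketch of how an inhibition radius could be derived from the average connected span, in the spirit of that pseudocode. The helper names and the exact formula are my own simplification, not the reference implementation:

```python
import numpy as np

CONNECTED_THRESHOLD = 0.2

def avg_connected_span(potential_pools, permanence_arrays):
    """Average width (in input bits) covered by each mini-column's connected
    synapses. potential_pools and permanence_arrays are parallel lists, one
    entry per mini-column."""
    spans = []
    for pool, perms in zip(potential_pools, permanence_arrays):
        connected = pool[perms >= CONNECTED_THRESHOLD]
        spans.append(connected.max() - connected.min() + 1 if connected.size else 1)
    return np.mean(spans)

def inhibition_radius(potential_pools, permanence_arrays, num_inputs, num_columns):
    """Convert the average connected span from input coordinates into column
    coordinates; it widens or shrinks as synapses connect and disconnect."""
    columns_per_input = num_columns / num_inputs
    radius = avg_connected_span(potential_pools, permanence_arrays) * columns_per_input
    return int(max(1.0, round((radius - 1) / 2.0)))
```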

Although, now that I think about it more, maybe the distinction is useful when you consider that the cells in a minicolumn might share the exact same “potential pool” of feedforward inputs while having similar, but not identical, receptive fields within that pool.

If there is topology, mini-columns that are “far away” from each other might have completely non-overlapping potential pools, and therefore entirely different receptive fields.
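
A quick sketch of that case, using a made-up 1D topology where each mini-column draws its potential pool from a neighborhood around its natural center in the input space (all names and numbers are illustrative):

```python
import numpy as np

def potential_pool_with_topology(column_index, num_columns, num_inputs,
                                 radius, potential_pct, rng):
    """Pick a mini-column's potential pool from a neighborhood of its
    natural center in a 1D input space."""
    center = int(column_index * num_inputs / num_columns)
    lo, hi = max(0, center - radius), min(num_inputs, center + radius + 1)
    neighborhood = np.arange(lo, hi)
    size = max(1, int(neighborhood.size * potential_pct))
    return rng.choice(neighborhood, size=size, replace=False)

rng = np.random.default_rng(1)
# Two mini-columns far apart get neighborhoods that do not overlap at all,
# so their potential pools (and therefore receptive fields) are disjoint.
pool_a = potential_pool_with_topology(0, num_columns=128, num_inputs=1024,
                                      radius=64, potential_pct=0.5, rng=rng)
pool_b = potential_pool_with_topology(127, num_columns=128, num_inputs=1024,
                                      radius=64, potential_pct=0.5, rng=rng)
print(np.intersect1d(pool_a, pool_b).size)  # 0 -> completely non-overlapping
```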

If there is global inhibition, one mini-column’s potential pool might randomly be the same as another’s (very low probability of that, but possible). Even if that happens, as the SP learns, each column will narrow in on particular features, and the receptive fields will diverge to focus on the specific features each mini-column is receptive to.