Micro/Macro SDRs?

http://neurosci.info/courses/vision2/Temporal/tanaka_2003_col-for-complex-objs.pdf

This is a very interesting paper I came across a while back that confirmed an idea I had about topological activity.

This seems to suggest that, at least in associative cortices, activity isn’t just sparse on the neuron level, but also on the column level. In other words, much like how individual minicolumns learn to represent specific patterns, individual macrocolumns appear to learn to represent specific classes of objects.
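To make that concrete, here's a toy sketch in plain NumPy of what "sparse at both levels" could look like. The matrix shape, column counts, and cell counts are all made up for illustration; nothing here comes from the paper:

```python
# Toy sketch (not from the paper): measuring sparsity at two levels.
# Activity is a made-up binary matrix of shape (n_columns, cells_per_column),
# where 1 = active cell. All numbers are illustrative, not biological.
import numpy as np

rng = np.random.default_rng(0)
n_columns, cells_per_column = 200, 1000

# Only a few macrocolumns respond to a given object class...
active_columns = rng.choice(n_columns, size=10, replace=False)

# ...and within an active column, only a small fraction of cells fire.
activity = np.zeros((n_columns, cells_per_column), dtype=np.uint8)
for col in active_columns:
    active_cells = rng.choice(cells_per_column, size=20, replace=False)
    activity[col, active_cells] = 1

cell_sparsity = activity.mean()                      # fraction of all cells active
column_sparsity = (activity.sum(axis=1) > 0).mean()  # fraction of columns active

print(f"cell-level sparsity:   {cell_sparsity:.4f}")    # ~0.001
print(f"column-level sparsity: {column_sparsity:.3f}")  # ~0.05
```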

I have a lot more ideas on this subject, though I’m a bit busy at the moment. I’ll add to this when I have time. For now, I’ll leave it open for some discussion.

4 Likes

So, some thoughts:

This seems to suggest that Numenta's current idea about displacement cells is not entirely accurate. Individual macrocolumns do not appear to be modelling complex objects on their own. Rather, compound objects seem to be represented by ensembles of active macrocolumns.

Numenta’s current understanding of displacement cells and object composition, as I understand it, suggests that a composite object is represented in the cortex as a union of SDRs for each of the subcomponents, plus their relative locations. The exact mechanisms for this don’t seem to be figured out yet, other than that unions are involved.

On the other hand, the paper I linked suggests that different subobjects are modelled by separate macrocolumns.

Rather than every macrocolumn modelling everything in parallel, different macrocolumns appear to specialize, creating these “Macro SDRs” of sparse macrocolumn-level activity. Compound objects are still represented by unions, but these unions exist on the macrocolumn level, not within a macrocolumn.
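As a rough way to picture the difference between the two schemes, here's a sketch using plain Python sets as stand-in SDRs. The cell indices and column names are invented, and this isn't Numenta's implementation, just the data-structure contrast I'm describing:

```python
# Toy contrast of the two schemes (illustrative data structures only).
# SDRs are plain Python sets of active-cell indices; all values are made up.
cup_sdr    = {3, 41, 87, 120}
handle_sdr = {5, 62, 90, 140}
logo_sdr   = {9, 77, 101, 155}

# Scheme A (union within a column): one macrocolumn represents the whole
# mug as a union of the subcomponent SDRs.
within_column_union = cup_sdr | handle_sdr | logo_sdr

# Scheme B ("Macro SDR"): the union lives at the macrocolumn level.
# Each specialized column keeps its own clean SDR; the compound object
# is the *set of active columns*.
macro_sdr = {
    "column_17": cup_sdr,     # a column specialized for cup-like shapes
    "column_42": handle_sdr,  # a column specialized for handles
    "column_63": logo_sdr,    # a column specialized for the logo
}
```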

Say you look at a coffee mug. The cup part of the mug may be represented by one set of active macrocolumns, and the handle by a separate set. If there is a logo on the cup, yet another set of macrocolumns will be modelling it. If, rather than looking at a mug with that logo, you instead look at the same logo on a piece of paper, the macrocolumns that represented the logo on the mug will still be active, but the others will be replaced by a set of macrocolumns specialized in modelling the paper.

This is exactly what was observed in the paper above.
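Here's the same mug example as a toy set calculation, with made-up macrocolumn IDs, just to show the reuse of the logo-specialized columns across the two scenes:

```python
# Toy illustration of the reuse described above (hypothetical macrocolumn
# IDs, not measured data). The point is only that the "logo" columns stay
# active across contexts while the rest swap out.
mug_with_logo = {"cup_cols": {1, 4, 9}, "handle_cols": {12, 15}, "logo_cols": {22, 27}}
logo_on_paper = {"paper_cols": {30, 33, 38}, "logo_cols": {22, 27}}

active_mug   = set().union(*mug_with_logo.values())
active_paper = set().union(*logo_on_paper.values())

shared = active_mug & active_paper   # {22, 27} -> the logo-specialized columns
print("columns active in both scenes:", shared)
```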

It seems to me that there are likely a lot of interesting dynamics that emerge from HTM at large scales, across a large number of macrocolumns, that Numenta is currently ignoring. I understand that there are computational limits on simulations at that scale, but I can’t help but think that a combination of extrapolating properties from smaller-scale simulations and looking more closely at the neuroscience (like this paper) would push research in a useful direction.

1 Like

Are you referring to macrocolumns or minicolumns or both here? Perhaps you can edit to clarify.

1 Like

Fixed. I’m mostly talking about effects across macrocolumns here.

In other words, I’m saying that the effects Numenta thinks are going on at the minicolumn level are actually going on at the macrocolumn level, at least in terms of modelling complex objects.

This would also remove any need for a mechanism that links SDRs in a union to grid cell locations in a union, which seems to be the biggest leap needed to make the standard idea of displacement cells work. If the columns are modelling sub-objects or features separately, each can track its own location, with no unions needed within a column.

Voting mechanisms between columns could easily allow columns modelling different objects to share data. If you pick up a coffee mug by the handle, you expect the rest of it to move along with it, and you need some way for those columns to communicate that to each other.
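As a very loose sketch of what I mean by sharing data, imagine columns bound to the same compound object broadcasting any displacement one of them observes. The class names and structure here are entirely mine, not anything from Numenta:

```python
# Hypothetical sketch of inter-column message passing: columns bound to the
# same compound object share a displacement that one of them observes, so
# all parts "move together" in their predictions.
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    location: tuple  # predicted (x, y) location of this column's sub-object

@dataclass
class CompoundObject:
    parts: list = field(default_factory=list)

    def broadcast_displacement(self, dx: float, dy: float) -> None:
        # One column senses movement; every bound column updates its prediction.
        for col in self.parts:
            x, y = col.location
            col.location = (x + dx, y + dy)

cup    = Column("cup_columns",    location=(0.0, 0.0))
handle = Column("handle_columns", location=(4.0, 0.0))
logo   = Column("logo_columns",   location=(1.0, 2.0))

mug = CompoundObject(parts=[cup, handle, logo])

# Picking the mug up by the handle: the handle's columns report a shift,
# and the other columns expect their sub-objects to have moved too.
mug.broadcast_displacement(dx=0.0, dy=5.0)
print(cup.location, handle.location, logo.location)  # all shifted by (0, 5)
```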

The only part that isn’t obvious to me is what mechanism causes columns to specialize, though clearly one exists, or else we’d be seeing very different macrocolumn-level activity patterns in the cortex.

From a resource-requirements point of view (synapses), this seems like the right thing to do. “Reusing” the same set of resources in many contexts saves a lot of resources. The voting idea seems like the opposite.

My hunch is that the multiple representations (plus voting) of the same “sequence/object” in different mini-columns exist for fault/noise tolerance (including the stochastic behaviour of LTP/LTD).