SDR theoretical properties and HTM

I have read the foundation papers listed on the Numenta.com site. Some of the HTM papers draw heavily on SDR theory, and some focus on the theoretical properties of SDRs and sparse representation.

Throughout this forum, I see repeated references to the properties of SDRs and the assumption that HTM automatically gains from all the possible advantages of SDRs.

Here is where I think this goes wrong: the biology does not work that way.

  1. In most HTM models the cell bodies can make connections with any input bit without restrictions. This does allow an HTM model to gain some of the advantages of SDRs, but NOT a global union property unless some outside agent examines the connections.
  2. In pyramidal cells, the SDR is actually formed by the dendrites emanating from the various regions (proximal or apical). They pass by a fixed set of axonal projections and columns and certainly extend no more than 8 mini-columns distant from the cell body. This makes the SDR interactions local to the cell body.
  3. I cannot see how a global SDR union can be formed by this biological arrangement; I have never seen any mechanism like this in the brain. The closest it gets is a single dendrite that has learned two or more SDRs: partial activation of each of the learned SDRs' synapses could sum to firing potential. This would still be local to the single dendrite, with the input feature space limited to the small set of mini-columns/axon bundles that dendrite can reach (see the sketch after this list).
  4. If a given cell body is to form a relationship between features, the encoder will have to distribute the feature bits across a wide variety of receptive fields so that the mini-columns can sample those features. This complication is usually ignored in encoder creation, and it also means that this important property is overlooked when examining what is going on in the biology.
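To make points 3 and 4 concrete, here is a minimal Python sketch of the argument (all names, sizes, and thresholds are hypothetical, not taken from any published HTM model): a single dendritic segment can match a union of the SDRs it has learned, but only within the small window of input bits it can physically reach.

```python
import numpy as np

rng = np.random.default_rng(42)

INPUT_BITS = 2048          # full input space (hypothetical size)
SEGMENT_WINDOW = 64        # bits a single dendrite can physically reach (hypothetical)
SDR_ON_BITS = 8            # active bits per learned pattern within that window
MATCH_THRESHOLD = 6        # active synapses needed for the segment to fire

# The segment's receptive field: one local window of the input, standing in
# for the handful of nearby mini-columns/axon bundles the dendrite can touch.
window = np.arange(512, 512 + SEGMENT_WINDOW)

# Two SDRs learned on this one segment, both confined to its window.
sdr_a = rng.choice(window, SDR_ON_BITS, replace=False)
sdr_b = rng.choice(window, SDR_ON_BITS, replace=False)
segment_synapses = np.union1d(sdr_a, sdr_b)   # the "union" lives on one segment

def segment_matches(active_input_bits):
    """Count overlapping synapses; fire if the overlap reaches threshold."""
    overlap = np.intersect1d(segment_synapses, active_input_bits).size
    return overlap >= MATCH_THRESHOLD

# Partial activation of both learned patterns can sum past threshold.
partial = np.concatenate([sdr_a[:4], sdr_b[:4]])
print(segment_matches(partial))   # expected True: a purely local union effect

# A pattern outside the segment's window can never be seen at all,
# no matter how it relates to anything "globally".
far_away = rng.choice(np.arange(0, 256), SDR_ON_BITS, replace=False)
print(segment_matches(far_away))  # False: bits outside the window cannot contribute
```

The only point of the sketch is that the union property holds over the segment's local window; nothing in this arrangement lets the cell take unions over the full input space.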

I see that breaking away from the biology makes the models easier to work with. For example, allowing the reach of a cell to be the entire input field makes the encoder task easier, as you don’t have to worry about the spatial distribution density of the encoded values. This is what I often refer to as “well shuffled” data.

I have to balance that with the guiding principle that making models of the biology informs research into the biology, and findings in the biology inform model making. I read many papers on cell biology, and over and over I find myself thinking: HTM does not work that way. The restricted geographic scope of both the mini-column and the encoder means that there are important biological features that cascade into computational impacts.

An example: the original ANNs were simple summation/threshold units. Minsky showed in the book “Perceptrons” that this had important theoretical limits; the first AI winter was the result. When researchers recognized that cells saturate and are arranged in layers, these theoretical limits fell away. The PDP books were some of the first to show that adding the sigmoid response curve and layers made the models vastly more powerful.
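As a reminder of what that limit actually was, here is a toy sketch in plain NumPy (hand-picked weights, purely illustrative, not a reconstruction of the PDP models): a brute-forced single summation/threshold unit never reproduces XOR, while a small two-layer network with a nonlinearity computes it exactly.

```python
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
XOR = np.array([0, 1, 1, 0])

# A single summation/threshold unit: out = step(w1*x1 + w2*x2 + b).
# Brute-force a grid of weights; none of them reproduces XOR
# (XOR is not linearly separable, so no weights can).
def single_unit_can_do_xor():
    grid = np.linspace(-2, 2, 41)
    for w1, w2, b in itertools.product(grid, grid, grid):
        out = (X @ np.array([w1, w2]) + b > 0).astype(int)
        if np.array_equal(out, XOR):
            return True
    return False

print(single_unit_can_do_xor())   # False: the Minsky/Papert limit

# Two layers with a nonlinearity lift the limit. Hand-picked weights:
# hidden unit 1 computes OR, hidden unit 2 computes AND, output fires on OR-and-not-AND.
def two_layer_xor(x):
    h = (x @ np.array([[1.0, 1.0], [1.0, 1.0]]).T + np.array([-0.5, -1.5]) > 0).astype(float)
    return int(h @ np.array([1.0, -1.0]) - 0.5 > 0)

print([two_layer_xor(x) for x in X])   # [0, 1, 1, 0]
```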

Lest you think I am bagging on HTM - NO - I think that HTM and the Deep Leabra models offer the strongest way forward in the search for a platform to deliver strong AI.

I am a firm believer in the Numenta published philosophy of faithfully reverse engineering the brain. I am encouraging other HTM researchers to spend some time learning how the biology works and try to gain from the lessons nature has to offer. When it comes to making a functional intelligence, the brain is the only system around that has the “been there, done that” tee-shirt.

4 Likes

Within all our simulated layers, in all our papers, we use global inhibition. I think this is what you are referring to. We do have a mechanism for local inhibition in the SP. If you are talking about something else, please clarify.

Unions will work locally within the confines of the cortical column, even if their properties are not useful from the outside. Another way I think about this is that each cortical column has an “internal representation”, or a way it labels things for itself. Some layer outputs are used for local computations, so their SDR semantics don’t make sense from the outside. For example, in our two-layer object recognition models, the activations in the lower layer are not useful outside the column, but the activations in the pooling layer are.

This reads like more support for #1 above. You can enforce local inhibition in NuPIC, and it must be applied to properly process any topological input.

If you are again referring to our pervasive use of global inhibition in all our published code, this is a moot point.

I have been arguing for years now that the encoder space is going to be huge. Proper topological encoding with an SP set up with proper local inhibition is an area ripe for innovation, IMO.
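For anyone who wants to experiment with that, here is a minimal sketch of one way to set up NuPIC's SpatialPooler with topology and local inhibition; the parameter values are illustrative guesses rather than tuned recommendations, and the import path assumes a recent NuPIC release.

```python
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler

# 2D input and column topology so "local" actually means something spatially.
sp = SpatialPooler(
    inputDimensions=(32, 32),        # e.g. the output of a small topological encoder
    columnDimensions=(32, 32),
    potentialRadius=4,               # each column samples only a nearby patch of input
    potentialPct=0.8,
    globalInhibition=False,          # local inhibition: columns compete only with neighbors
    localAreaDensity=0.02,           # ~2% of columns active in each neighborhood
    numActiveColumnsPerInhArea=-1,   # disabled when localAreaDensity is used
    wrapAround=False,
    seed=42,
)

input_vector = np.zeros(32 * 32, dtype=np.uint32)
input_vector[100:120] = 1            # a spatially clustered (unshuffled) input patch

active_columns = np.zeros(32 * 32, dtype=np.uint32)
sp.compute(input_vector, True, active_columns)
print(np.nonzero(active_columns)[0])  # active columns stay near the input patch
```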

This hurts, Mark. I have spent an awful lot of time over the past few years reading neuroscience papers and trying to comprehend the biology at a level where I can keep up with our research team. I can tell you from my ringside seat that they also spend an awful lot of their time “learning how the biology works”.

As a fellow hobby scientist searching for the same answers I am, I expected you to realize that people can see the same evidence and draw different conclusions.

4 Likes

Thank you for your thoughtful feedback.

I will be traveling to/from Uganda next week. I will print this out and meditate on your answer during the 30-hour trips. It may take some time to digest this and maybe rearrange some mental furniture.

I will toss back to you that I do see people on the forum assert that SDRs are global and that SDR math can extract features on that global scale. Do you see that as the correct way to interpret the intersection of HTM and SDR theories?

I should add that this criticism is not specifically addressed at Numenta but at my fellow participants on the forum. I have been thinking this for a while, but some recent posts prompted me to offer this thread. Numenta has been careful to respect biological plausibility, and I would offer that this should be a guiding principle in HTM work. This does impose the barrier that the HTM researcher must spend some time learning the basics of the biological underpinnings of the cortex.

2 Likes

I see people on the forum throw out a lot of ideas; I’m sure this has been discussed. But the Thousand Brains Theory does not require this idea at all. In fact, it presents an opposing concept: that local computations are much more important than originally suspected.

2 Likes

No hard feelings. We try really hard not to say too much when theorizing about what might be happening at high levels. Saying something wrong is more damaging than holding off until we understand more. To think at this global level you have to think about hierarchy, and we have said very little about hierarchy in our papers (except that it is not as important as we once believed for object recognition).

2 Likes

Since you bring up the global level and hierarchy, I am very interested in these high-level representations.

I have asked this before and did not get any response.

Assume that it’s a given that much object recognition is done at the macro-column level, exactly as proposed by the Thousand Brains model.

The map-to-map connections have been mapped out, and there does seem to be a theme of paths splitting, skipping, and recombining.

What is happening as the representation courses up what is usually called the hierarchy?

Does Numenta have a theory on this that helps shape understanding of the environment that the Thousand Brains model is part of?

An example: plugging the Thousand Brains model into the known visual path from V1 forward, there are well-known learning and activation patterns that result from visual stimulation. Has the Thousand Brains model progressed to where it can explain these observed patterns?

The “three streams” paper applied the Deep Leabra model to this problem with some success.

It’s really, really messy. It is a valid question to ask, but we’re not asking it. We’re saying it doesn’t matter as much as we thought.

Are you talking about the H&W orientation bands? If so, I have responded to that already.

No. I am talking about the more complex higher-level representations such as textures and contours.

And these transformations are only one map away. There are at least 4 layers between V1 and the central association region. They are doing something. Example: (image attachment not reproduced here)

I like the portrayal in the “three streams” paper better but I don’t have it on my phone.

I know of at least one map that is critical to processing vision into “words” between the association area and the temporal lobe.
The take-away from that one is that some sort of mapping or transformation is happening in that map.

What transformation is performed by a map full of Thousand Brains columns?

Honestly I don’t know how this fits into the higher level theory you are coming from. We are looking at this problem from a different angle.