How efficient would it be to use a recurrent SP?

Using a recurrent version of the spatial pooler (receiving the previous output as part of the new input) sounds like it has obvious potential as a sequence classifier, so I thought someone here has probably already tried it.
Has anyone here tried it? Or any ideas about how efficient it would be compared to the standard temporal pooler?

Do you mean passing the active columns of the SP up into another SP as the feedforward input? Or as part of the FF input? If so, what would the other FF input be? From what I understand layers to be doing, I don’t think this happens in the brain without cells activating within each mini-column, in which case the active cells are the output, not the active columns.

Remember, the SP doesn’t deal with sequences at all, only spatial patterns; the active columns from the SP contain no sequence information. Or am I misunderstanding your idea?

I think it’s just a straightforward application of recurrent networks in traditional ML. So the FF input to the spatial pooler is the regular input concatenated with the set of previously active columns. Could make sense. And you can make up an explanation for how the brain could do this (a cell in L5 that always activates when any cell in its column activates, and sends an efference copy back to its own region through the thalamic FF pathway).

But how would those active columns be represented as bits in an SDR?

In the simplest case, the set of active columns is just a sparse binary vector like any other SDR.
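For example (the sizes are arbitrary):

```python
import numpy as np

NUM_COLS = 2048                      # SP column count (arbitrary)
active_indices = [3, 57, 412, 1980]  # indices of the active columns

# The set of active columns as a flat binary SDR: 1 = active, 0 = inactive.
columns_sdr = np.zeros(NUM_COLS, dtype=np.uint8)
columns_sdr[active_indices] = 1
```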

The input of this region would be the new input from the system, concatenated with the output of this same region at the previous timestep. This is essentially how traditional recurrent ANNs learn sequences.
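For concreteness, here is a minimal sketch of that wiring, assuming the classic NuPIC SpatialPooler API; the sizes and the `step` helper are illustrative assumptions, not a definitive implementation:

```python
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler

ENC_WIDTH = 1024  # width of the encoder output (assumption)
NUM_COLS = 2048   # number of SP columns (assumption)

# The SP's feedforward input is the encoder bits concatenated with the
# active columns from the previous timestep.
sp = SpatialPooler(
    inputDimensions=(ENC_WIDTH + NUM_COLS,),
    columnDimensions=(NUM_COLS,),
    potentialRadius=ENC_WIDTH + NUM_COLS,  # let every column see the whole input
    globalInhibition=True,
    numActiveColumnsPerInhArea=40,
)

prev_active = np.zeros(NUM_COLS, dtype=np.uint32)

def step(encoder_bits, learn=True):
    """One timestep of the hypothetical recurrent SP."""
    global prev_active
    ff_input = np.concatenate([encoder_bits, prev_active]).astype(np.uint32)
    active = np.zeros(NUM_COLS, dtype=np.uint32)
    sp.compute(ff_input, learn, active)  # fills `active` with the winning columns
    prev_active = active
    return active
```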

It sounds as though you would be reusing the SP to also perform TM, right? I assume the way to implement this would be to have one cell per column, with just a whole lot more columns to ensure there is enough capacity (or to eliminate the concept of “columns” altogether and simply use the SP to activate individual cells)? Sorry if that is a dumb question – I’m not familiar with this strategy in traditional ANNs :slight_smile:


True. I’m thinking of this from a biological point of view, and just trying to understand if this type of arrangement might be possible between two layers in a column. I think it is worthwhile to point out which ideas are biologically plausible and which are not. I don’t think this one is, but it is obviously useful to apply what we have learned from machine learning. If anyone builds something like this, I would be interested in seeing what it can do.

@ali_m I am moving this into #htm-hackers forum to get more of the hackers’ eyes on it. :wink:

As a jumping-off point for a biological implementation, just imagine that (1) one particular excitatory cell in each column fires every time its column is active, and (2) all the cells in a layer receive both proximal and distal inputs from their neighbors in the region.

(1) is plausible enough, and it certainly doesn’t need to fire precisely every time the column is active, just most of the time.
(2) is very plausible; in fact, I would argue more plausible than assuming that recurrent connections happen magically only on distal segments.

It would just be tweaked slightly if you wanted it to happen e.g. between two different layers.

The intuition for this kind of architecture is that the patterns a region is detecting are partly feedforward and partly temporal, very much analogous to the temporal memory algorithm in HTM. Liquid state machines, echo state networks, and LSTMs all do this.
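For comparison, here is the standard echo-state-style state update (a generic sketch, not tied to any particular library); a recurrent SP would play the same role, with binary SDRs and k-winner inhibition in place of tanh:

```python
import numpy as np

N_IN, N_RES = 64, 500  # input and reservoir sizes (arbitrary)
W_in = 0.1 * np.random.randn(N_RES, N_IN)
W_rec = 0.9 * np.random.randn(N_RES, N_RES) / np.sqrt(N_RES)  # spectral radius ~0.9

def esn_step(u, x_prev):
    # x_t = tanh(W_in @ u_t + W_rec @ x_{t-1}): the new state mixes the
    # current input with the previous state, just like a recurrent SP would.
    return np.tanh(W_in @ u + W_rec @ x_prev)
```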


I like the direction of your idea, but what would the learning mechanics be in this case?

Yes, and it classifies every input+state. But if you want prediction as output, there are several options that seem plausible; putting a TM on top of it, without looping its result back, is one of them. The probably more efficient one, I think, is to pass the result of the recurrent SP to a 2-layer supervised learner that tries to predict the next input.

Well, the SP classifies input, the same way a recurrent SP would classify input+state (that’s just what recurrence does in traditional ANNs: it turns a non-Markov decision problem into a Markov one). But getting output is a different story; you should probably pass the result into something else, like a TM or a 2-layer supervised learner.
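As a rough sketch of that supervised-learner option (the sizes and scikit-learn’s MLPRegressor are stand-ins I am assuming; the data here is random, just to show the shapes):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

NUM_COLS, ENC_WIDTH, T = 2048, 1024, 500  # assumed sizes

# sp_states[t]: the recurrent SP's active-column SDR at time t.
# enc_bits[t]:  the encoder output at time t. Random placeholders here.
sp_states = (np.random.rand(T, NUM_COLS) < 0.02).astype(float)
enc_bits = (np.random.rand(T, ENC_WIDTH) < 0.02).astype(float)

# A simple 2-layer supervised learner: state at time t -> input bits at t+1.
predictor = MLPRegressor(hidden_layer_sizes=(256,), max_iter=200)
predictor.fit(sp_states[:-1], enc_bits[1:])

predicted_next = predictor.predict(sp_states[-1:]) > 0.5  # thresholded bits
```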

My question was not about how it would work, but about how to train it.
Is the idea to update the SP’s connection permanences based on the encoder output + the SP’s previous states? The main result of such a process would just be reinforcing the most permanent connections in the current sequence, without capturing any information about the sequence itself.
RNNs use a completely different approach (backpropagation through time), but I don’t see how it could be applied to the SP.
Could you elaborate on your understanding of it?

Not necessarily. What the SP is supposed to do is classify in such a way that more similarity in the input results in more similarity in the output, right? So if a somewhat similar input+state has been observed before, there will be similarities in the output: columns that don’t burst. Besides that, yes, the better option here is probably a spatial supervised learner that receives that input+state and tries to predict the next input.
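To make the learning mechanics concrete: nothing changes relative to the normal SP rule; the recurrent state bits are learned like any other input bits. A minimal sketch of a Hebbian permanence update in that spirit (not NuPIC’s actual code):

```python
import numpy as np

def update_permanences(perms, ff_input, active_cols, inc=0.05, dec=0.008):
    """For each winning column, strengthen synapses to active input bits and
    weaken synapses to inactive ones. Because ff_input already contains the
    previous timestep's active columns, temporal context gets learned by the
    same Hebbian rule as any spatial input."""
    for c in active_cols:
        perms[c] += np.where(ff_input == 1, inc, -dec)
        np.clip(perms[c], 0.0, 1.0, out=perms[c])
```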

That’s what the SP does by default: it learns to recognize patterns it has seen before, and every new encounter with a similar pattern improves its ability to do so.
From the SP’s perspective, the input (the output of the encoder) + state (the state of the links to the encoder) don’t contain any information about the sequence of patterns; it’s the TM’s job to memorize that.

Nevertheless, it’s important to understand that it’s wrong to talk about the SP and TM as different parts of the model. They are separated artificially, just for the convenience of the software implementation; the SP simply provides the feedforward input for the TM’s columns.