Getting predictions about one region from another

Hello NuPIC, I’m testing out the new site and thought I’d ask something I asked some time ago, but which was unresolved at the time. Say I have a model like this:

A meaningful “sequence of outputs” from TM1 is accumulated in one array (i.e., the set of activeCells corresponding to each input is OR’ed with the previous ones) and fed to the GeneralTM. The same is done for TM2. When a sequence is fed from TM1 to the GeneralTM, it passes down its predictiveCells to bias the activation of cells in TM2.
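For concreteness, here is a minimal sketch of that accumulation, assuming the older NuPIC TemporalMemory where compute(activeColumns, learn=...) updates an activeCells set of cell indices (the helper name is mine):

```python
import numpy as np

def accumulate_sequence(tm, column_patterns, num_cells):
    """OR together the activeCells SDRs produced across one input sequence.

    Hypothetical helper: assumes `tm.compute(activeColumns, learn=...)`
    updates `tm.activeCells` (a set of cell indices), as in the older
    NuPIC TemporalMemory.
    """
    accumulated = np.zeros(num_cells, dtype=np.uint8)
    for active_columns in column_patterns:
        tm.compute(active_columns, learn=True)
        accumulated[list(tm.activeCells)] = 1  # OR with previous activations
    return accumulated
```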

That’s an attempt at a simple hierarchy. Now, the application is supposed to predict the values that would enter sensor2, given a sequence of inputs from sensor1. I tried training a CLAClassifier by feeding it the inputs from sensor1 and then the inputs from sensor2. The prediction works sometimes, but it isn’t restricted to values from sensor2, so sometimes it also predicts values from sensor1. How can I “restrict” the predictions?

I think that if there was a way to map columns to input bits, sort of like the mapCellsToColumns in the TM, I could use the decode method of my encoder. But I think the inner mechanics of the SP would make this rather difficult, yes?


Hi larvasapiens,

I’m not sure I understand this.

Is TM1 thought of as the “training” sequence, and TM2 the “test”?

And is the “simple hierarchy” the fact of OR’ing all the inputs?

Naively, it reminds me of the “concatenated”/“context space” stuff I did together with @floybix late last year. In that too, we had an SDR with all successive sequences collapsed together. And this was used to give more context for predictions.

I’ve since decided you don’t need to OR anything (if that is the equivalent). All the information you get from OR’ing your columns is left in the transition memory anyway. You only need the connections, and they are in the TM. You don’t need to leave the columns switched on.

You do get more information. Felix used it to demonstrate perfect recall of sequences:

http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj

You might see a way to map columns to input bits in that. I think Felix did that. He wanted to recover inputs too.

The trick is then to structure this “OR’ed” representation to get true hierarchy, which is the same thing as trying to structure the connection matrix of the TM, and which is what I am trying to do at the moment.


Rob, this is the NuPIC section of the forum. Unless you are commenting about the NuPIC codebase, it’s off-topic. This is basically our new development mailing list, not a place for theory discussions. Feel free to “Reply as a linked topic” into HTM Theory if you wish.

Hi,

I would guess that if you want a prediction of input into Sensor 2 as a result of what is input into Sensor 1, you would have to make sure you always feed the input in an interleaved fashion (see the sketch after this list):

  1. Input Sensor 1
  2. Input Sensor 2
  3. Input Sensor 1
  4. Input Sensor 2
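Something like this, where `model` and its `run()` method are stand-ins rather than actual NuPIC API; the point is only the feeding schedule:

```python
def run_interleaved(model, sensor1_records, sensor2_records):
    """Strictly alternate Sensor 1 and Sensor 2 inputs.

    `model.run()` is a hypothetical stand-in for whatever drives your
    network one step.
    """
    for rec1, rec2 in zip(sensor1_records, sensor2_records):
        model.run(rec1)  # odd steps: input from Sensor 1
        model.run(rec2)  # even steps: input from Sensor 2
```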

But there is a problem with the “theoretical” part of the problem (at least in my mind): why should the inputs to Sensor 1 be at all correlated with the inputs to Sensor 2? (i.e., what does the neocortical matter of Sensor 1 “know” about that of Sensor 2?) They don’t share the same column/cell matrix, do they? (Incidentally, you could do that using HTM.java, which might be worth a try.) What I mean is, use the Layer.using() method to take the Connections of one and insert them into the other Layer hierarchy: https://github.com/numenta/htm.java/blob/master/src/main/java/org/numenta/nupic/network/Layer.java#L712

I’ve never tried any of this, so no guarantees - but it might be worth a try? With the Python version, the Connections (and therefore the column/cell matrix) are not separate from the algorithms, so I’m thinking it won’t work in that version. (By “work”, I mean sharing columns and cells across Network constructs.)

Additionally, with the HTM.java NAPI, if you use the rx Observable methods you can combine the two hierarchies. zip() is the operator that guarantees you always take one item from each input before combining the outputs of previous stages.

See: ReactiveX - Zip operator
and maybe…
See: ReactiveX - Merge operator
See: ReactiveX - Switch operator
See: ReactiveX - Join operator

Here’s the full list of operators: ReactiveX - Operators

HTM.java is designed to use these operators as first-class objects that can be inserted directly into the Network: you simply take the Observable you get from subscribing (the Layer.observe() or Network.observe() method returns an Observable) and combine it with other Observables using Zip, Join, Merge, etc. :wink:
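HTM.java uses RxJava for this; purely as a rough Python analogue (RxPY, not HTM.java itself), the pairing guarantee looks like:

```python
import rx  # RxPY 3.x; a rough Python analogue of the RxJava operators above

sensor1 = rx.from_(["word-a", "word-b", "word-c"])
sensor2 = rx.from_(["action-1", "action-2", "action-3"])

# zip() waits for one emission from EACH source before emitting a pair,
# which is the "one from each input" guarantee mentioned above.
rx.zip(sensor1, sensor2).subscribe(lambda pair: print(pair))
```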

Ok, now I’m getting excited. I had forgotten about the extensibility built into the NAPI! I think I’m going to experiment too! :wink:

I’d love to try HTM.java, but I already have everything done in Python, and I’m just experimenting to see what comes out. Sensor1 receives words and Sensor2 receives an action associated with those words. I do feed the input to the GeneralTM in an interleaved fashion, but as input sequences rather than individual inputs, since they vary in length. The trick I use for the representation is that the GeneralTM receives a vector of size (n + m), where n is the number of cells in TM1 and m the number of cells in TM2.
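In NumPy terms, that concatenation is just (the sizes here are made-up examples):

```python
import numpy as np

n, m = 2048 * 32, 2048 * 32  # example sizes: total cells in TM1 and TM2
tm1_cells = np.zeros(n, dtype=np.uint8)  # accumulated activeCells from TM1
tm2_cells = np.zeros(m, dtype=np.uint8)  # accumulated activeCells from TM2

# The GeneralTM sees a single vector of length n + m:
general_input = np.concatenate([tm1_cells, tm2_cells])
assert general_input.size == n + m
```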

robf, both TM1 and TM2 are trained, but only TM1 is fed once the training phase is done. I use the OR’ing trick because it approximates a representation of a sentence (the individual inputs being words). I know it’s naive, but I’ve gotten good results so far, even without optimizing parameters and with the CLAClassifier blindly predicting inputs from both TM1 and TM2.

Concatenating the vectors is totally legit (at least as far as I know - we do it in the MultiEncoder to combine the outputs of more than one Encoder into a single Encoding).

The idea of using the same column/cell matrix was so that the predictions of TM1 are correlated to the predictions of TM2 because TM1 is actually “seeing” the same input. On a theoretical level, I really can’t help you any more than this… sorry. :slight_smile:

Except to say, I’m suspicious and intrigued - but that I’m suspicious because I’m not sure that it should work?

I don’t understand what you mean.

I’m having a hard time finding that in the entry, probably because I’ve never seen Clojure code before :stuck_out_tongue:

Anyway, I think I’ll just keep track of the frequency of simultaneous activation for each column and input bit. If you know a better way of doing it, though, I’d appreciate it if you told me.

In our Algorithms, there is a class called Connections that is used to contain and work with column and cell information (such as Dendrites and Synapses etc.)

see here: https://github.com/numenta/nupic/blob/master/src/nupic/research/temporal_memory.py#L99

The design of the Python algorithms is “stateful”, meaning that every instance of a TemporalMemory or SpatialPooler has its own Connections object. This was decoupled in HTM.java for a number of advantages, one of them being the ability to re-use the Connections object independently of the algorithms. You can think of it like this: the algorithms are “verbs/actions” and they act on the columns/cells etc. to update their state. The SpatialPooler and TemporalMemory in HTM.java can act on any Connections object (they don’t care what neocortical construct they’re operating on). As a result, you can have one or more parts of a given network “operate” on the same Connections object if you like…
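Rendered as a hypothetical Python-style sketch (this illustrates the HTM.java pattern; it is not the NuPIC Python API):

```python
class Connections:
    """The data: columns, cells, segments, synapses."""

class TemporalMemory:
    """The behavior: stateless with respect to the column/cell matrix."""
    def compute(self, connections, active_columns, learn=True):
        pass  # reads and updates `connections`, never `self`

shared = Connections()
tm_a = TemporalMemory()
tm_b = TemporalMemory()
tm_a.compute(shared, {3, 17, 42})  # both TMs operate on the SAME
tm_b.compute(shared, {3, 17, 42})  # learned column/cell matrix
```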


The way I did it was to trace back connected synapses from columns to input bits. Nothing fancy about it. The hardest bit is decoding the input bits back to your domain values.
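A minimal NumPy sketch of that trace-back, assuming you can get at the SP’s proximal permanence matrix and its connected-permanence threshold (both names here are hypothetical):

```python
import numpy as np

def columns_to_input_bits(active_columns, permanences, connected_perm):
    """Trace active columns back to the input bits they connect to.

    `permanences` is assumed to be a (numColumns x numInputs) array of
    proximal permanence values; `connected_perm` is the SP's
    connected-synapse threshold.
    """
    input_bits = np.zeros(permanences.shape[1], dtype=np.uint8)
    for col in active_columns:
        connected = permanences[col] >= connected_perm
        input_bits |= connected.astype(np.uint8)  # OR in this column's field
    return input_bits
```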

To trace back from higher layers (like your accumulated GeneralTM, if I understand it, @larvasapiens), I used learning on apical dendrites, i.e. top-down feedback from higher layers. That’s described in the text (not code) of the link Rob sent. I won’t go into it here as it’s off-topic for the NuPIC category.


Ok, I get it now. I’m not sure, however, how you could take advantage of this feature. You say that a TemporalMemory can act on any Connections object, but what makes two TMs different if it’s not their Connections? I guess it’d be very useful when testing different parameters. That way you don’t lose your trained Connections.

Yeah, that’s what I was thinking. I’ll probably skip the encoder and trace the actual values directly from the columns. I don’t need the reverse process to be accurate here :stuck_out_tongue:

I read that, and it has been bugging me since I read the paper about the sequence memory. As the GeneralTM is fed with the OR of all the active cells in a sequence, its prediction (which is also a whole sequence) is included in the predictiveCells of TM2 for every input. This biases the cells that are going to be activated, but in the same way that a prediction generated by TM2 itself from a feedforward input would.

What makes them different is their algorithmic process. You can vary the algorithm of the TM or SP (such as in paCLA) without losing the trained data, as you say. Plus there are many advantages when used in an infrastructure where decoupling can be taken advantage of. The methods of the HTM.java algorithms are also fully functional, which has an impact on the ability to do concurrency, or to reuse the same instance of the TM throughout an entire Hierarchy and Network. Not pooh-poohing the Python version, though! Steps were taken to isolate the functions of the Connections object into its own object after we discussed this, so taking it a step further to “functional readiness” would be pretty trivial now that the Connections class has been isolated within the Python version. Serialization could also be improved by pulling the Connections object completely out, so that no algorithms need to be serialized. But that’s a topic for another day; I was just saying you could experiment with it today, within HTM.java!

Actually, the short answer is: the algorithms have data, state, and behavior. You can change behavior without changing the data or the initial state. You can change data (swap Connections) without changing state (parameters) or behavior. The degree to which you can do this in an isolated fashion is relative to the amount of coupling in the design. Chetan was (I believe) the first person to think of making the TM compute() method functional. I applied it throughout the entire TM and then to the SP in its entirety as well, because I really liked his idea and it made sense that the algorithms be separate from the data (the column/cell matrix) and the state (the Parameters).
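In the same hypothetical Python-style shape as before, swapping behavior while keeping the data and state might look like (class names are made up; paCLA is only named as an example of a variant rule):

```python
class Connections:
    """The data: columns, cells, segments, synapses."""

class TemporalMemory:
    """Default behavior plus parameters (the state)."""
    def compute(self, connections, active_columns, learn=True):
        pass  # standard activation/learning rules

class PACLATemporalMemory(TemporalMemory):
    """Changed behavior (e.g. a paCLA-style rule); data and state untouched."""
    def compute(self, connections, active_columns, learn=True):
        pass  # alternative rule over the same synapses

conns = Connections()                          # the trained data survives...
TemporalMemory().compute(conns, {3, 17})       # ...under one behavior
PACLATemporalMemory().compute(conns, {3, 17})  # ...or another
```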

Yeah, it’s a pity. It seems I shouldn’t have helped larvasapiens here by saying a solution exists elsewhere, because the solution is elsewhere and not in the NuPIC code.

I think this will discourage posting. Who needs to sit and worry about whether they’re committing a category error before they post or reply to a question?

Matt’s category is wrong anyway. The question is about NuPIC. If I post my answer to a NuPIC question outside the NuPIC section, just because the answer is outside NuPIC, then I’m committing a category error there too.

The question was about NuPIC, your answer was not. You could have linked it into another category, as I have instructed in several other posts.

We’re all just finding our way here. Who knows what will serve the community best? Starting out very rigorously with our categorization is probably a good thing; we can always “loosen things up” later if we see it doesn’t serve us. But we’re all after the same thing, which is a fun, easy way to communicate that also serves our research, investigation, and archiving/search needs. I’m sure we can work together to improve the interface, but we’re going to need to let things “flush out” over time.

Agreed, but again, this is not the place for this discussion.

Hi larvasapiens,
Can you please provide Python code showing how I can set predictiveCells in a TM? I want to pass one TM’s predictiveCells down to bias the activation of cells in another TM.

Can you explain this a bit more? You can simply change the state of the cells in memory, but that will not affect what the structure has already learned.

Are you referring to the activation of one TM region biasing another, like apical dendrites do? If so, I think the Network API is the way to go.
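If you do just want to poke the transient state, here is a hypothetical sketch. It assumes an older NuPIC TemporalMemory where predictiveCells is a plain, writable set-of-cell-indices attribute; newer versions derive predictive cells from segments, so check your version. None of this is a supported API, and it changes transient state only, not learned synapses:

```python
def bias_tm(downstream_tm, upstream_tm, active_columns):
    """Inject the upstream TM's predictions before the downstream compute().

    Hypothetical helper: assumes `predictiveCells` is a writable set
    attribute, which is version-dependent.
    """
    downstream_tm.predictiveCells = set(upstream_tm.predictiveCells)
    downstream_tm.compute(active_columns, learn=True)
```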