Need help creating a Network with multiple asynchronous inputs

I’m trying to use NuPIC to set up my first HTM project.
I have two sets of inputs that I want to send into an SP-TM network, but update them alternately.
For instance, I feed in a frame of values for input A and look at the TM’s predicted values. Then I get a value for B, feed it in, and look at the TM’s predicted values again. I don’t want the TM to think that the values for A are being repeated when I submit the update for B, and vice versa.

How would I set up a network to support this?

Can I set up two separate Spatial Poolers? Can I link two SPs into one TM?


How are the A and B values related? What do they represent?

They are scalar values. Inputs and outputs of a continuous process.

Need more info! How are they related? Is B an output of A?

I’m asking because if they are not related, you should create a complete SP/TM model for each separately. If you believe that one field’s data contains information that would affect the data within the other, you might want to combine them to try and get better predictions for one of the fields.

Are you looking for predictions or anomaly indications?

Thanks!

I think I need to make separate SP/TM models for each one.

But what I want to do is pull the active columns from SP/TM-A and feed them as an input into the other network’s SP-B (and vice versa).

What would be the proper way of pulling the active columns from the SP/TM network to feed into the B network?
The B input comes in through an Encoder. Would I just append the active column data to the Encoder output and send it all to the SP?

In the end, I’m looking for predictions coming out of each of the TMs.
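Something like this is what I have in mind (just a rough, untested sketch; it assumes NuPIC’s ScalarEncoder and SpatialPooler APIs, and the sizes and names like encoderB and numColumnsA are placeholders I made up):

```python
import numpy as np
from nupic.encoders.scalar import ScalarEncoder
from nupic.algorithms.spatial_pooler import SpatialPooler

# Placeholder encoder for the B value and a placeholder size for A's SP output.
encoderB = ScalarEncoder(n=400, w=21, minval=0.0, maxval=100.0)
numColumnsA = 2048

# B's Spatial Pooler sees the concatenation of B's encoding and A's active columns.
spB = SpatialPooler(inputDimensions=(encoderB.getWidth() + numColumnsA,),
                    columnDimensions=(2048,),
                    globalInhibition=True)

def encode_b_input(valueB, activeColumnsA):
    """Concatenate B's encoded value with a binary vector of A's active columns."""
    encodedB = encoderB.encode(valueB)              # 0/1 numpy array, length 400
    fromA = np.zeros(numColumnsA, dtype=np.uint8)
    fromA[list(activeColumnsA)] = 1                 # mark network A's active columns
    return np.concatenate([encodedB, fromA])

# Usage: after stepping network A, pass its active columns along with B's new value.
inputVector = encode_b_input(42.0, activeColumnsA=[3, 17, 250])
activeColumnsB = np.zeros(2048, dtype=np.uint8)
spB.compute(inputVector, learn=True, activeArray=activeColumnsB)
```

I don’t know whether appending the raw active columns like this is the right approach, or whether they should go through some encoder/weighting of their own first.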

Why would you want to do that?

I believe I’ve developed a novel use for an HTM, but it requires this dependency between the two inputs.

Let me explore what I think might be a quirk/feature of the Temporal Memory algorithm.
Is the TM dependent only on changes in the inputs?
In the algorithm, the cells in a TM are in one of three states: inactive, active, or predictive. If it gets the same input multiple steps in a row, the active cells stay active and the inactive cells stay inactive. The predictive-state cells would remain predictive or go inactive.
Or maybe I don’t fully understand the algorithm. When one cell in a column is active, could other cells in that column be predictive? Then, when the same input is received, that column is still active, but a different cell becomes active because it was in the predictive state. This way a sequence with repeated values could still be identified.
Can a TM distinguish between BAAAAB and BAAB sequences?
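If it helps, this is roughly how I was planning to poke at it (an untested sketch, assuming the nupic.algorithms.temporal_memory.TemporalMemory API; the column ranges and parameters are arbitrary):

```python
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(64,), cellsPerColumn=4, seed=1)

A = list(range(0, 16))   # placeholder column SDR for input A
B = list(range(16, 32))  # placeholder column SDR for input B

# Feed BAAAAB and watch how many cells are active vs. predictive at each step.
for step, cols in enumerate([B, A, A, A, A, B]):
    tm.compute(cols, learn=True)
    print("step", step,
          "active:", len(tm.getActiveCells()),
          "predictive:", len(tm.getPredictiveCells()))
```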


That’s not always true. If the same input is a part of a sequence, different cells in each column might fire to indicate where in the sequence that common spatial pattern is located.

I will be publishing a new HTM School video on exactly this topic tomorrow. Stay tuned to watch it. I think it will help you understand.

Thanks! I always enjoy your HTM School videos. :grinning: :+1:


I have contemplated this question quite a bit myself as well. As my understanding of the TM currently stands, I believe it can learn up to three variations of a repeated input in a row, but not more. So it could learn the difference between these patterns:

BAB
BAAB
BAAAB

but could not learn to distinguish between these patterns:
BAAAB
BAAAAAB
…etc

The reason is that bursting causes cells representing earlier points in the sequence to become predictive and be reused at later points in the same sequence.

Let me explain by considering a fresh system that hasn’t learned anything. We attempt to teach it the sequence BAAAAB.

Step 1 (B): Columns for B burst.
Step 2 (A): Columns for A burst, and A' cells are chosen to learn the new sequence (BA').
Step 3 (A): Columns for A burst, and A'' cells are chosen to learn the new sequence (BA'A'').
Step 4 (A): Columns for A burst, and A''' cells are chosen to learn the new sequence (BA'A''A'''). Note that at this step the A'' cells from step 3 become predictive, because they have distal connections to cells in the columns for A, which are currently bursting.
Step 5 (A): Cells for sequence BA'A'' become active (these cells now also represent BA'A''A'''A'').
Step 6 (B): Columns for B burst, and B' cells are chosen to learn the new sequence BA'A''A'''A''B'.

You can see that the encoding for BA'A''A'''A''B' is equivalent to BA'A''B', BA'A''A'''A''A'''A''B', etc. The same representation of A ends up being reused at multiple points in the sequence.
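A quick way to sanity-check this (another untested sketch against nupic’s TemporalMemory; the sizes, parameters, and number of training passes are arbitrary) would be to train on BAAAAB and then compare the cell activity at the final B for BAAAAB versus BAAB:

```python
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(128,), cellsPerColumn=8, seed=42)

A = list(range(0, 20))   # placeholder column SDR for A
B = list(range(20, 40))  # placeholder column SDR for B

def run(sequence, learn):
    """Feed a sequence of column SDRs, returning the set of active cells at each step."""
    tm.reset()
    trace = []
    for cols in sequence:
        tm.compute(cols, learn=learn)
        trace.append(set(tm.getActiveCells()))
    return trace

# Train on BAAAAB, then compare the final-step cell activity for BAAAAB vs. BAAB.
# If the overlap is (near) total, the TM is reusing the same A representations
# and cannot tell the two sequences apart.
for _ in range(50):
    run([B, A, A, A, A, B], learn=True)

long_trace = run([B, A, A, A, A, B], learn=False)
short_trace = run([B, A, A, B], learn=False)
overlap = long_trace[-1] & short_trace[-1]
print("overlap at final B:", len(overlap), "of", len(long_trace[-1]))
```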


@Paul_Lamb, that is similar to what I see on repeated inputs, though in a slightly different way.

I have to emphasize what you said again.

In addition, we assume that a single transition between two inputs is enough to form connected distal synapses.

It seems my understanding produces the same results in a different way.
Step 1 (B): Columns for B burst. No predictive cells.
Step 2 (A): Columns for A burst, and A' cells are chosen to learn the new sequence (BA'). No predictive cells.
Step 3 (A): Columns for A burst, and A'' cells are chosen to learn the new sequence (BA'A''). A' cells are predictive because of the bursting A columns.
Step 4 (A): Cells for sequence BA' become active (these cells now also represent BA'A''A'). A'' cells are predictive because of the A' cells.
Step 5 (A): Cells for sequence BA'A'' become active (these cells now also represent BA'A''A'A''). A' cells are predictive because of the A'' cells.
Step 6 (B): Columns for B burst, and B' cells are chosen to learn the new sequence BA'A''A'A''B'. A' cells are predictive because of the bursting B columns.

You can see that the encoding for BA'A''A'A''B' is equivalent to BA'A''B', BA'A''A'A''A'A''B', etc.

I believe the algorithm computes distal depolarization at the end of an iteration after adapting and forming new synapses on chosen distal segments. So freshly created segments would successfully depolarize target cells in the same step.
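If I’m reading nupic’s temporal_memory.py correctly, that ordering shows up in the public API: compute() is essentially activateCells() followed by activateDendrites(). Roughly (a sketch, not a verbatim copy of the library code):

```python
from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(64,), cellsPerColumn=4, seed=1)
activeColumns = list(range(16))  # placeholder column SDR

# One call to compute()...
tm.compute(activeColumns, learn=True)

# ...should be roughly equivalent to running the two phases explicitly:
# learning happens while activating cells, and the predictive (depolarized)
# state for the next step is derived afterwards, from the just-updated segments.
tm.activateCells(sorted(activeColumns), learn=True)
tm.activateDendrites(learn=True)
```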

Yes. Implementation-wise, an active cell can even be predictive at the same time, if I am not wrong. On the biological front I am not sure how that could be, but as far as I am aware there is nothing in the implementation preventing a cell from being both active and predictive at the same time.

To extend this question, a cell can even have distal connections onto itself and there aren’t any explicit checks for that. Are there? Should there be any?

That is a subtle difference from my current TM implementation, which looks at states from the previous timestep (based on my interpretation of the TM whitepaper pseudocode). Thanks for pointing this out… I am sure that change will have some interesting behavioral differences (besides what you just pointed out here).

Going slightly off topic for a minute, another interesting thought is that the use of hierarchies should in theory eliminate some of the ambiguity of reused representations. For example, imagine a simplistic/naive implementation of hierarchy which treats two outputs from a lower region as a single input to a higher region. The sequence BA'A''A'''A''A'''A''B' would in this implementation be encoded as the higher-level sequence 1-2'-2''-3', whereas the sequence BA'A''B' would be encoded as the higher-level sequence 1-3''. Adding a third region would give you sequences XY' versus Z (i.e. completely different sets of columns involved in representing the two sequences).
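As a toy illustration of that naive pairing scheme (plain Python; the symbols are just strings standing in for the lower region’s representations, not real SDRs):

```python
def chunk_pairs(sequence, vocabulary):
    """Map each consecutive pair of lower-level symbols to a higher-level symbol."""
    higher = []
    for i in range(0, len(sequence) - 1, 2):
        pair = (sequence[i], sequence[i + 1])
        if pair not in vocabulary:
            vocabulary[pair] = str(len(vocabulary) + 1)
        higher.append(vocabulary[pair])
    return higher

vocab = {}
long_seq  = ["B", "A'", "A''", "A'''", "A''", "A'''", "A''", "B'"]  # BA'A''A'''A''A'''A''B'
short_seq = ["B", "A'", "A''", "B'"]                                # BA'A''B'

print(chunk_pairs(long_seq, vocab))   # ['1', '2', '2', '3']
print(chunk_pairs(short_seq, vocab))  # ['1', '3'] -- a different pattern one level up
```

The primes at the higher level (2' versus 2'') would come from the higher region’s own temporal memory distinguishing where in its sequence each chunk occurs; the toy only shows that the two sequences already involve different higher-level "columns".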

This gives me some encouragement that the ambiguity in HTM caused by reused input representations will not be as big of a problem in the future once hierarchies have been implemented.

Can you elaborate on this a bit? Temporal pooling of some sort?

I’ll start another thread to talk about hierarchy and temporal pooling since it is a bit off the original topic.
