Experiment: Signal generation multiple steps ahead of the present

I tried an interesting experiment and wanted to share it here. I was thinking about how to predict more than one step ahead; obviously, predicting the signal only one step ahead is exactly what the predictive cells in a TM do. But what about more than that? What about generating an entire signal after training on that signal previously?

My initial thought was:

  1. Take the predictive cells at the end of the training sequence, and make them active.
  2. Calculate the new predictive cells.
  3. Return to step 1.

The problem with this is that if the TM is making multiple predictions, you’ll end up with more than the proper number of active cells, and no way to determine which ones to cut. So instead I came up with this method:
  1. Train an ML regressor on the predictive cells during the training sequence.
  2. Use the regressor to translate the predictive cells into a raw input value (i.e. a scalar).
  3. Re-encode that value and submit it to the spatial pooler and temporal memory, generating new predictive cells.
  4. Return to step 2. (A rough sketch of this loop is below.)
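
Here is roughly what that loop looks like in code. This is a minimal sketch rather than my actual script: I'm assuming NuPIC's ScalarEncoder, SpatialPooler and TemporalMemory, with an sklearn SGDRegressor standing in as the "ML regressor". The toy sine signal, the parameter values, and the helper names (predictive_vector, step) are just illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from nupic.encoders.scalar import ScalarEncoder
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory

N_COLUMNS = 1024
CELLS_PER_COLUMN = 8

encoder = ScalarEncoder(w=21, minval=0.0, maxval=1.0, n=400, clipInput=True)
sp = SpatialPooler(inputDimensions=(encoder.getWidth(),),
                   columnDimensions=(N_COLUMNS,),
                   globalInhibition=True)
tm = TemporalMemory(columnDimensions=(N_COLUMNS,),
                    cellsPerColumn=CELLS_PER_COLUMN)
regressor = SGDRegressor()


def predictive_vector():
    """Dense 0/1 vector over all cells, marking the TM's current predictive cells."""
    vec = np.zeros(N_COLUMNS * CELLS_PER_COLUMN)
    vec[list(tm.getPredictiveCells())] = 1.0
    return vec


def step(value, learn):
    """Encode a scalar, run it through the SP and TM, return the predictive-cell vector."""
    active_columns = np.zeros(N_COLUMNS, dtype=np.uint32)
    sp.compute(encoder.encode(value), learn, active_columns)
    tm.compute(sorted(np.nonzero(active_columns)[0]), learn=learn)
    return predictive_vector()


# Training: feed the real signal, and fit the regressor on (predictive cells -> next value).
signal = 0.5 + 0.4 * np.sin(np.linspace(0, 20 * np.pi, 2000))  # toy stand-in signal
for t in range(len(signal) - 1):
    pred_cells = step(signal[t], learn=True)
    # The predictive cells at time t are the TM's guess about the input at t+1.
    regressor.partial_fit(pred_cells.reshape(1, -1), [signal[t + 1]])

# Generation: turn off the real input and let the loop feed itself.
pred_cells = step(signal[-1], learn=False)
generated = []
for _ in range(200):
    value = float(regressor.predict(pred_cells.reshape(1, -1))[0])  # step 2: decode
    generated.append(value)
    pred_cells = step(value, learn=False)                           # step 3: re-encode and run
```

The important design choice is that the TM never sees its own predictive cells directly; everything is forced back through the encoder and spatial pooler, so the generated input always has the proper sparsity.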

I had some successes and some failures with this method. It hinges heavily on the ability of the TM to accurately predict the sequence (obviously), and also on the regressor to accurately translate the TM’s predictions. The more extraneous predictions the TM makes, the harder it is for the regressor to make its translation. I noticed that, when it does fail, the system has a tendency to get caught in small loops, repeating the same mini-sequence over and over, or sometimes just a single value.

I’ll post a couple of example figures below. The red signal indicates the true inputs, and the green dots are the predictions made during training. The black dots are the predictions made after I turned off the input and made the system generate the signal on its own. Note that the range of the fourth axis in the figures doesn’t extend beyond the training period.

I’m not sure how much use this can have in practical applications since it seemed heavily dependent on a perfect, noiseless representation of the input. But I’ll keep testing and find out if I can make it work in noisy environments!

Here’s an example of a successful test run:

And here’s a failed attempt:


Interesting!!! But, if I’m not mistaken, the network will stop learning new data streams while forecasting, correct?


That is correct!

Just FYI, there is a parameter for this in NuPIC, called predictionSteps. I don’t know the details here, like whether it makes the system predict every step on the way to predictionSteps. It is said to slow the system down, which makes me suspect it does.

I do know that it can be used for swarming, so you can get a config optimized for forecasting the predictedField predictionSteps steps ahead instead of just 1.
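
For reference, this is roughly where it shows up in a swarm description. This is adapted from memory of the hotgym swarm examples; the field name, value range, and file path are placeholders, so treat it as a sketch rather than a working config.

```python
from nupic.swarming import permutations_runner

SWARM_DESCRIPTION = {
    "includedFields": [
        {"fieldName": "value", "fieldType": "float",
         "minValue": 0.0, "maxValue": 1.0},
    ],
    "streamDef": {
        "info": "signal",
        "version": 1,
        "streams": [
            {"info": "signal.csv", "source": "file://signal.csv", "columns": ["*"]},
        ],
    },
    "inferenceType": "TemporalMultiStep",
    "inferenceArgs": {
        "predictedField": "value",
        # Ask the swarm to optimize for forecasting 5 steps ahead instead of 1.
        "predictionSteps": [5],
    },
    "swarmSize": "medium",
}

# Returns model params tuned for the requested prediction horizon.
model_params = permutations_runner.runWithConfig(
    SWARM_DESCRIPTION, {"maxWorkers": 4, "overwrite": True})
```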


Ohh, interesting. Does that affect the learning updates as well, or does it just provide more long-term prediction?



Right, good question; I’m not sure.

I searched the repo for predictionSteps and didn’t find any usages in TM functions. It seemed mostly related to swarming.

If I can bug certain people like @subutai and @Scott, I bet they’d know.


I don’t know the details here, like whether it makes the system predict every step on the way to predictionSteps.

Yes, this is what it does.
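
To make that concrete, here is roughly how those multi-step predictions surface through the OPF API once the model params request them. I'm going from the hotgym-style examples; the field name and step list are placeholders, and MODEL_PARAMS would come from a swarm or a params file, so take it as a sketch.

```python
from nupic.frameworks.opf.model_factory import ModelFactory

# MODEL_PARAMS: output of a swarm run with predictionSteps=[1, 5].
model = ModelFactory.create(MODEL_PARAMS)
model.enableInference({"predictedField": "value"})

result = model.run({"value": 0.42})

# One prediction per horizon listed in predictionSteps.
one_ahead = result.inferences["multiStepBestPredictions"][1]
five_ahead = result.inferences["multiStepBestPredictions"][5]
```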

It is said to slow the system down, which makes me suspect it does.

Yes, it does, because it uses vector-matrix multiplication extensively.


Does that affect the learning updates as well

It does not impact the learning that is happening in the HTM region.
