Misunderstandings about the SDR Classifier

Hello,

I checked the documentation and the forum messages, but there are still some points about the role of the classifier that I am not sure I have understood.
As I understand it, the role of the classifier is mostly to interpret the activation pattern coming from the Temporal Memory region (that is, in some way, to convert it into a dense representation that we can use outside the HTM). But as far as I can see, we also use it to make associations between the current activation pattern and future values at different time steps. Why do we use the classifier to make predictions? I thought it was the role of the Temporal Memory to make predictions about the next activation pattern at time step t+1. I'm confused because when using the classifier, I have the impression of bypassing what the Temporal Memory region is already doing for the next time step.

My second question is: can we use the classifier to interpret only the activation and prediction patterns returned by the Temporal Memory region, without predicting future time steps? If we initialize it with 0 for the steps parameter, do we get an interpretation of what the Temporal Memory is currently returning at time step t?

3 Likes

Using an SDRClassifier (or just assigning different columns of a TM different meanings) is only for the convenience of discussion. It is easier to say the generated patterns are A → B → D → C instead of saying columns [0, 1] → [2, 3] → [6, 7] → [4, 5] are active.

You’ll have to run the prediction algorithm before you get the prediction pattern in the first place… So… yeah

By resetting the TM, the contextual information is also cleared. Thus, the TM will try its best to predict what the next time step will be (this ability is also called First Order Memory) without any context. Also, the TM only predicts what comes next. If you are at time t, the TM will predict what will be at t+1. I.e., if t=0, the TM will predict what is possible at t=1.
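As a toy illustration of "first order" (not HTM code; the real TM uses distal dendrites and per-cell context), a pure first-order memory just remembers, for each pattern seen at time t, the pattern that followed it at t+1. The class below is a hypothetical sketch using the A → B → D → C column example from earlier in the thread:

```python
# Toy first-order memory: for each pattern seen at time t, remember the
# pattern that followed it at t+1. Patterns are frozensets of active
# column indices. Illustrative only, not the TM algorithm.
class FirstOrderMemory:
    def __init__(self):
        self.transitions = {}   # pattern -> pattern that came next
        self.previous = None    # pattern from the last time step

    def compute(self, pattern):
        pattern = frozenset(pattern)
        if self.previous is not None:
            self.transitions[self.previous] = pattern  # learn t -> t+1
        self.previous = pattern
        # predict t+1, if this pattern has been seen before
        return self.transitions.get(pattern)

    def reset(self):
        # clearing context: the next compute() learns no transition
        self.previous = None

fom = FirstOrderMemory()
for p in [{0, 1}, {2, 3}, {6, 7}, {4, 5}]:   # A -> B -> D -> C
    fom.compute(p)
fom.reset()
predicted = fom.compute({2, 3})
print(predicted)  # after B the memory predicts D: frozenset({6, 7})
```

Note that after the reset, presenting B still yields a prediction for the very next step only; there is no deeper context, which is the point of "first order".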

5 Likes

Yes. If you set the steps parameter to [0] the SDRClassifier should classify just what it is seeing right at that moment.
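To make "classify just what it is seeing right at that moment" concrete, here is a minimal sketch of a steps=[0] style classifier. It is not the NuPIC implementation (which trains a softmax over per-bit weights); this toy version just counts label votes per active bit, which captures the same idea of mapping the current SDR to the current label with no history buffering:

```python
# Minimal sketch of a steps=[0] classifier: it learns to map the SDR it
# sees *now* to the label it sees *now*. Illustrative only.
from collections import defaultdict

class StepZeroClassifier:
    def __init__(self):
        # votes[bit][label] = how often `bit` was active when `label` occurred
        self.votes = defaultdict(lambda: defaultdict(int))

    def learn(self, sdr, label):
        for bit in sdr:
            self.votes[bit][label] += 1

    def infer(self, sdr):
        totals = defaultdict(int)
        for bit in sdr:
            for label, count in self.votes[bit].items():
                totals[label] += count
        return max(totals, key=totals.get) if totals else None

clf = StepZeroClassifier()
clf.learn({0, 1, 2}, "A")
clf.learn({3, 4, 5}, "B")
best = clf.infer({0, 1, 9})
print(best)  # overlaps mostly with A's bits -> "A"
```

Because SDRs are noise-tolerant, inferring on a slightly different pattern (bit 9 instead of bit 2) still recovers the right label by overlap.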

It’s confusing, but in NuPIC the SDRClassifier does both classification and prediction.

Also, the TemporalMemory does predictions, but it only predicts exactly one timestep into the future.

4 Likes

Because the SDR classifier can! Joking aside, as far as I can tell the SDR classifier generates a distribution over the class labels (using a NN) for a given step and input, assuming inference is enabled. Consequently, prediction by the classifier can be done by repeating this operation (generating distributions) at every encountered step. So when the TM is at time step S, and steps S + (2|3|5) were previously encountered (and the classifier generated distributions for them), the classifier can be used as a predictor for steps 2, 3, and 5. It is confusing, and I’m afraid the code for this is also a bit difficult to comprehend.
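The mechanics can be sketched like this (hypothetical class, not the NuPIC API): a k-steps-ahead classifier buffers recent input patterns and, when a label arrives at time t, trains on the pattern from time t-k. At inference time, the current pattern then yields a normalized distribution over labels for t+k, which is what lets the classifier express "multiple possible answers":

```python
# Sketch of "a distribution over class labels for a given step":
# buffer the pattern from k steps ago, associate it with the label seen
# now, and at inference return a normalized distribution over labels.
# Illustrative only, not the NuPIC implementation.
from collections import deque, defaultdict

class DistributionClassifier:
    def __init__(self, k):
        self.history = deque(maxlen=k + 1)   # patterns from t-k .. t
        self.votes = defaultdict(lambda: defaultdict(int))

    def learn(self, sdr, label):
        self.history.append(frozenset(sdr))
        if len(self.history) == self.history.maxlen:
            for bit in self.history[0]:      # pattern from k steps ago
                self.votes[bit][label] += 1  # ...predicted the label seen now

    def infer(self, sdr):
        totals = defaultdict(int)
        for bit in sdr:
            for label, n in self.votes[bit].items():
                totals[label] += n
        z = sum(totals.values())
        return {label: n / z for label, n in totals.items()} if z else {}

clf = DistributionClassifier(k=1)
for _ in range(3):                            # branching sequence: A -> B or A -> C
    for sdr, label in [({0, 1}, "A"), ({2, 3}, "B"),
                       ({0, 1}, "A"), ({4, 5}, "C")]:
        clf.learn(sdr, label)
dist = clf.infer({0, 1})
print(dist)  # {'B': 0.5, 'C': 0.5}: both follow A equally often
```

The branching input is the interesting case: a single best answer would have to pick B or C arbitrarily, while the distribution reports both continuations with their observed frequencies.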

As I study the TM and SP, I have realized (though this is my own opinion) that it is useful, or at least sometimes helpful, to treat them as solution spaces. What I mean by this is that these algorithms create an abstract model of a particular classification/prediction problem in an agnostic manner. The model is the set of input/output relationships they have built so far (e.g. encoder SDR → active columns, and active columns → active cells), and it is agnostic because they mostly care about outputting sets of columns/cells regardless of the problem domain. Because they are solution spaces, they can be traversed, or perhaps run, with an algorithm, for example a classifier.

The TM builds a solution space consisting of the sequences it has predicted so far per time step. So there are at least three variables here: the current pattern, the next pattern, and the time step. The transition from the current pattern to the next pattern is, intuitively, a non-linear function coupled with the time step, so for a given question or inference there will potentially be multiple answers. This is where the classifier becomes handy, because it can use a probability distribution.

2 Likes

Thanks for the answer. As I understand it, the Temporal Memory carries information about the past history and information about possible futures. Using a classifier is more a way to interpret the information given by the Temporal Memory in an intelligible way. Is that correct?

3 Likes

Yes. You are right.

In some cases we may even simply run the output through a CategoryDecoder/ScalarDecoder (not sure what NuPIC calls it) to interpret the result. It’s easier to set up and less prone to noise.

4 Likes

Yes I believe that is correct.

I have not authored any core nupic code (I’m only a nupic user), by the way, so just to let you know that I have a slightly different perspective on interpreting what these algorithms do. When you search this forum for answers on this topic, you will likely get more of a mainstream ML perspective, which is favorable for most people, but sometimes you end up asking WHY. The tradeoff, though, is that there is always a chance of ignoring the bigger picture or the abstract model these algorithms are using; this is a common scenario in mainstream DL, where the whole NN is conveniently treated as a black box.

I have been studying these algorithms in depth, though, and I’m particularly interested in how and why they work, and in them as a computational machine (are we dealing with a new computational model?). This perspective, in my opinion, helps me to see these algorithms as solution-space builders and to give attention to their internal operations, which is a bonus in HTM (one cannot do this easily in DL). Most importantly, with this perspective I can potentially apply different algorithms to these solution spaces.

Sorry to digress, I just would like to share my thoughts.

Based on my understanding, the classifier is just one of those algorithms that can interpret the TM as a solution space, so it does not really bypass it.

Update: By the way, I saved some of my ideas in a Medium article on how the SP/TM can be viewed as a solution space: A Spatial Pooler Model — Part 1. This article is all about an… | by Jose Cueto | Medium

4 Likes

Hello all,
I really appreciate this thread (@MaJ, @marty1885, @dmac, @Jose_Cueto). I finally got my code running and the TM trained. However, I was struggling to interpret its output, especially since I have 2048 columns with 10 cells each. I will now use the SDRClassifier to make predictions for my sequence-prediction problem. However, in the hotgym.py example, they use an object called Predictor to predict n steps ahead. I’m wondering, though: what is the difference between these two ways of predicting? I understand I can test the code and determine which is more accurate, but conceptually, is there a difference? It seems to me they both create a NN to predict. I also wonder whether they use vanilla NNs, vanilla RNNs, or LSTM-RNNs?

Thanks again for sharing your thoughts/questions/answers/comments!

1 Like

First of all, we must remember that the SDRclassifier is not an HTM algorithm. It is not biologically plausible. It is added to the htm.core library as a tool to visualize the SDRs that the SP/TM algorithm has produced by relating them back to the original input values. The brain does not need such a tool because it “thinks” in terms of the SDR pattern and not in terms of its input values.

This SDRclassifier is a vanilla NN tool and it has its own drawbacks. It requires a lot of data (like 1000’s of data points) to train it to recognize a single SDR pattern as having originated from a specific input. The HTM algorithms do not need that much training and settle into a result rather quickly (more like a few dozen data points, depending on parameters).

So you may be able to get the SP/TM algorithm to quickly learn to predict its next input but you will not be able to reliably relate that prediction to a specific input unless the SDRclassifier has been previously trained by looking at many samples of SDR’s that could be produced by that SP/TM algorithm from every possible input value.

The “Predictor” is just a lot of SDRclassifiers stuck together. Don’t try to read too much into it.
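"Stuck together" can be sketched as a container holding one per-step classifier for each requested horizon, fanning learn/infer calls out to all of them. Everything below is an illustrative toy (the per-step classifier is a simple vote counter, and none of the names are the NuPIC API):

```python
# Sketch of a Predictor as several per-step classifiers stuck together.
# Each PerStepClassifier pairs the pattern from k steps ago with the
# label seen now; the predictor fans calls out to one per horizon.
from collections import deque, defaultdict

class PerStepClassifier:
    def __init__(self, k):
        self.history = deque(maxlen=k + 1)              # patterns t-k .. t
        self.votes = defaultdict(lambda: defaultdict(int))

    def learn(self, sdr, label):
        self.history.append(frozenset(sdr))
        if len(self.history) == self.history.maxlen:
            for bit in self.history[0]:                 # pattern k steps ago
                self.votes[bit][label] += 1

    def infer(self, sdr):
        totals = defaultdict(int)
        for bit in sdr:
            for label, n in self.votes[bit].items():
                totals[label] += n
        return max(totals, key=totals.get) if totals else None

class ToyPredictor:
    def __init__(self, steps):
        # one classifier per prediction horizon, e.g. steps=(1, 2)
        self.classifiers = {k: PerStepClassifier(k) for k in steps}

    def learn(self, sdr, label):
        for clf in self.classifiers.values():
            clf.learn(sdr, label)

    def infer(self, sdr):
        # {horizon: best label} for every requested step
        return {k: clf.infer(sdr) for k, clf in self.classifiers.items()}

pred = ToyPredictor(steps=(1, 2))
for _ in range(3):                                      # repeating A -> B -> C
    for sdr, label in [({0, 1}, "A"), ({2, 3}, "B"), ({4, 5}, "C")]:
        pred.learn(sdr, label)
predictions = pred.infer({0, 1})
print(predictions)  # {1: 'B', 2: 'C'}: after A comes B, then C
```

Each horizon is independent, which is why asking for more steps just means more classifiers (and more training data), not a different algorithm.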

3 Likes