Extending predictions further back in time

I have another question in a similar vein to my last one, related to predictive states in the temporal memory process, which I had overlooked and left out of my test implementation. In the HTM whitepaper, the pseudocode describing the TM process includes a step in which a distal dendrite on a predictive cell is chosen to match the cells that were active at t-1. If I am reading the pseudocode correctly, doing so at that point in the process effectively connects the cell so that it becomes predictive two time steps before it is expected to become active in a sequence. For example, having learned the sequence “ABC”, after input “A” both “B” and “C” would become predictive. This understanding is further supported by a comment in the whitepaper:

However, we always want to extend predictions further back in time if possible. Thus, we pick a second dendrite segment on the same cell to train. For the second segment we choose the one that best matches the state of the system in the previous time step.
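To make the effect concrete, here is a toy sketch (not NuPIC code; the cell labels and data structures are made up for illustration) of what training that second segment implies. Each cell carries a list of dendrite “segments”, each a set of presynaptic cells; a cell becomes predictive when any of its segments matches the currently active cells. With the extra segment trained on t-2 activity, the C cell fires predictively on A alone:

```python
# Toy model: cells labeled by the symbol they represent; each segment
# is a set of presynaptic cells learned from some earlier time step.
# After learning "ABC" with the whitepaper's second-segment step, the
# C cell has one segment matching B (t-1) and one matching A (t-2).
segments = {
    "B": [{"A"}],          # B learned from A at t-1
    "C": [{"B"}, {"A"}],   # C: first segment for t-1 (B), second for t-2 (A)
}

def predictive_cells(active):
    """Cells with at least one segment fully contained in the active set."""
    return {cell for cell, segs in segments.items()
            if any(seg <= active for seg in segs)}

# Feeding "A" alone puts BOTH B and C into the predictive state:
print(sorted(predictive_cells({"A"})))  # -> ['B', 'C']
```

Without the second segment on C, the same input would leave only B predictive, which is the one-step-ahead behavior described below.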

I have not identified specifically where this is implemented in NuPIC (the code layout is a bit different from that of the pseudocode in the whitepaper; I need to study it some more to fully trace the different timesteps referenced in the process). That said, I assume NuPIC performs this step as well (correct me if I am wrong).

It seems to me that for things like predicting objects from input features (i.e. cases where sequential order is not as important), this is a very good property. However, in use cases where specific sequential order does matter, it seems undesirable to have cells predict that they will become active more than one step into the future, since that makes it harder to distinguish what specifically will happen next. I know of course that in biological systems there isn’t really a concept of discrete time, so my guess is that this is designed to make the system better match biology.

Just wondering if there are insights into the benefits of putting cells into predictive state further and further in advance of when they will actually become active. In particular, what sort of negative repercussions would be expected if this step in the process is not done.

The white paper is outdated. In the current temporal memory implementation, we no longer extend predictions further back in time. Instead, the predictions are “implicit” and require a classifier to map them to a prediction value. In fact, you don’t need to extend predictions further back to make predictions many steps ahead, because the SDR representation already contains the relevant information.

Take the ABCD vs. XBCY sequences as an example. In the current TM, a different set of cells in the C columns is predicted after AB than after XB. Although D and Y are not explicitly predicted at this time point, one can easily decode them with the SDRClassifier. With the old TM, I guess both CD and CY would be predicted; the disadvantage of the old approach is that the prediction is much less specific. You won’t be able to distinguish CD vs. CY, since both D and Y are predicted at B.
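A toy sketch of the “implicit prediction” idea (this is not the real SDRClassifier, and the cell names are hypothetical): because the C columns use different cells depending on whether the context was AB or XB, a classifier trained to map predicted cell sets back to next inputs can recover D vs. Y even though neither was explicitly predicted:

```python
# Hypothetical cell sets: the same "C" columns, but different cells
# active depending on the preceding context (AB vs. XB).
c_after_ab = frozenset({"C.cell3", "C.cell7"})
c_after_xb = frozenset({"C.cell1", "C.cell9"})

# Stand-in for a trained classifier: a learned mapping from the
# context-specific predicted cell set to the input that follows it.
decode = {c_after_ab: "D", c_after_xb: "Y"}

print(decode[c_after_ab])  # -> D
print(decode[c_after_xb])  # -> Y
```

The key point is that the two cell sets are disjoint even though they occupy the same columns, so the mapping is unambiguous; with the old TM, where both D and Y cells would already be predictive at B, no such one-to-one decoding is possible.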


@Paul_Lamb The pseudocode for the TM algorithm is here.


Cool… the three relevant lines in Phase 2 have been removed. Guess I should have compared them more closely before posting. :blush: