Can HTM predict previous inputs?

Other than entering the data in reverse order, can HTM predict previous inputs? A case where this would be useful is question answering. Taking the “what does the fox eat” example further, we may wish to ask questions like “What eats mice?”, where the data contains factoids like “foxes eat mice”: input “eats mice” and predict “foxes”.

Good point! Technically yes, by walking the synapses backward - not that I’m saying anyone has done it. And it seems to be biologically impossible.

I don’t know how this could be solved in HTM. Maybe someone else here can?

I think a hierarchy is needed to solve this in a biologically plausible way. Reversing the sequence order would not be a solution. For example: you learned the sequence ABCD and want to figure out the first element of ?BCD, so you would need to learn DCBA in order to predict A after DCB, but I don’t think that’s what you want.
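To make that point concrete, here is a toy sketch (plain Python, not NuPIC’s actual Temporal Memory) of what “just reverse the order” would entail: a second model trained separately on the reversed sequence. The `ToySequenceMemory` class is invented purely for illustration.

```python
from collections import defaultdict

class ToySequenceMemory:
    """First-order toy stand-in for a sequence memory: it just remembers
    which element followed each element during training."""
    def __init__(self):
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev].add(nxt)

    def predict(self, element):
        return self.transitions[element]

forward = ToySequenceMemory()
forward.learn("ABCD")           # learns A->B, B->C, C->D

backward = ToySequenceMemory()
backward.learn("ABCD"[::-1])    # learns D->C, C->B, B->A

# The forward model can only answer "what comes after X?" ...
print(forward.predict("A"))     # {'B'}
# ... so answering "? B C D" requires the separately trained reversed model:
print(backward.predict("B"))    # {'A'}  -- the missing first element
```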


Does the following method of predicting previous inputs work? … The active cells in the TM all lie in the columns of the last input, but the activation within each column is determined by the full sequence of prior inputs. Thus we could train a classifier acting on the TM’s active cells to recognise labelled full sequences. What I am not clear about is this: if we then apply an input sequence which is missing the first element, will the TM and its classifier provide a probability for each of the possible full sequences that have previously been learned, so that, if suitably labelled by the classifier, the missing first input (or more previous inputs) could be deduced?
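For what it’s worth, here is a rough sketch of that idea, with Python sets standing in for TM active-cell SDRs; the cell indices and sequence labels are invented for illustration, and the real thing would read the active cells out of a trained Temporal Memory rather than hard-coding them.

```python
def overlap(sdr_a, sdr_b):
    """Number of active cells shared by two representations."""
    return len(sdr_a & sdr_b)

# Pretend these active-cell sets were captured from the TM after feeding each
# full, labelled sequence (the cell indices are invented for illustration).
learned_states = {
    "foxes eat mice":    frozenset({3, 17, 42, 90, 155}),
    "owls eat mice":     frozenset({3, 17, 48, 91, 160}),
    "foxes eat rabbits": frozenset({5, 17, 42, 99, 155}),
}

def classify(query_state, learned_states):
    """Rank the learned sequences by overlap with the query state and
    normalise the scores so they behave like rough probabilities."""
    scores = {label: overlap(query_state, state)
              for label, state in learned_states.items()}
    total = sum(scores.values()) or 1
    return sorted(((label, score / total) for label, score in scores.items()),
                  key=lambda item: item[1], reverse=True)

# Active cells produced by feeding only "eat mice" (first element missing):
# the context differs, so it overlaps partially with several learned states.
query_state = frozenset({3, 17, 42, 91})
for label, p in classify(query_state, learned_states):
    print(label, round(p, 2))
```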

During sleep, sequences of firing replay in the hippocampus and, if I remember correctly, also in the cortex. They can play in reverse. It probably has something to do with locations and paths in environments.


Yes, I did not want to learn the sequence DCBA just to predict A from DCB. Your point about using a hierarchy is an interesting one – can you explain further? Perhaps one way to do it is to copy the SDR of the first term to the “upper level”, providing persistence of the “subject” as the subsequent terms in the sequence are learned, with apical feedback of the subject as each successive term is learned? But I am not sure how to actually do this with the NuPIC API. This seems to need “research code”, for example the Apical Temporal Memory. I have also found discussion of “Extended Temporal Memory”, but the links to the code are broken, so I am presuming that Apical Temporal Memory has superseded it? Has anyone got any experience of using Apical Temporal Memory (and is there other research code that is relevant to this conversation)?

Also, I am not really clear how apical feedback differs from simply adding a new basal input that holds the “upper level” data (the “subject” in this case, but also other contextual information). This would seem not to require any new NuPIC code, just a bit of plumbing.
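To illustrate what I mean by “plumbing”, something like the following sketch, where the subject SDR is simply unioned (in its own index range) with each later input before it is fed to the sequence memory. The encodings and offset here are invented for illustration and are not how NuPIC’s encoders or external basal input actually work.

```python
CONTEXT_OFFSET = 1000   # keep the context bits in their own index range

def with_subject_context(input_sdr, subject_sdr):
    """Union an input SDR with a persistent 'subject' SDR, shifted so the two
    sets of bits cannot collide."""
    shifted_subject = {bit + CONTEXT_OFFSET for bit in subject_sdr}
    return frozenset(input_sdr) | shifted_subject

# Toy encodings of the terms in "foxes eat mice" (invented bit indices).
encodings = {
    "foxes": {1, 7, 23},
    "eat":   {4, 9, 31},
    "mice":  {2, 8, 40},
}

subject = encodings["foxes"]     # captured from the first term and held fixed
sequence = ["eat", "mice"]

# Every input fed to the sequence memory now carries the subject along, so
# "eat mice" in the context of "foxes" is a different input stream from
# "eat mice" on its own.
combined = [with_subject_context(encodings[term], subject) for term in sequence]
for term, sdr in zip(sequence, combined):
    print(term, sorted(sdr))
```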

My opinion that this issue needs a hierarchy comes from the fact that, if I have correctly understood your question, the sequence memory does not seem to be sufficient to solve it by itself. I think it is a process that involves semantic memory and, as far as I know, semantic knowledge is stored in distinct associative cortices and its evocation depends on the prefrontal cortex. I started studying HTM theory a short time ago and I’m not familiar with NuPIC, so I cannot help you much more; it would be better for someone with more experience to comment on it.
This conversation may be relevant:

While not specifically a NuPIC answer, I think this could be done relatively easily without a hierarchy by using a classifier strategy similar to the one used for predicting multiple timesteps into the future (see this video for an example of how that is done). The difference would be to have it track the frequency of inputs from some number of timesteps in the past.
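As a rough sketch of that backward-looking classifier (plain Python, not NuPIC’s SDRClassifier; the exact-match keying on active cells and the example cell sets are deliberately simplistic and invented):

```python
from collections import deque, defaultdict

class BackwardClassifier:
    """Pairs the current state with the input seen `steps_back` steps earlier
    and counts how often each pairing occurs."""

    def __init__(self, steps_back):
        self.steps_back = steps_back
        self.history = deque(maxlen=steps_back + 1)    # recent raw inputs
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, active_cells, raw_input):
        """active_cells: iterable of cell indices active for the current input."""
        self.history.append(raw_input)
        if len(self.history) == self.history.maxlen:
            past_input = self.history[0]               # input `steps_back` ago
            self.counts[frozenset(active_cells)][past_input] += 1

    def infer(self, active_cells):
        """Relative frequencies of inputs seen `steps_back` before this state."""
        votes = self.counts.get(frozenset(active_cells), {})
        total = sum(votes.values()) or 1
        return {value: count / total for value, count in votes.items()}

# Usage sketch: feed "foxes", "eat", "mice" through a sequence memory, calling
# learn() with its active cells at each step (invented cell sets shown here).
clf = BackwardClassifier(steps_back=2)
clf.learn({1, 7, 23}, "foxes")
clf.learn({4, 9, 31}, "eat")
clf.learn({2, 8, 40}, "mice")
print(clf.infer({2, 8, 40}))   # {'foxes': 1.0} -- the input two steps back
```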


Thanks for the link to the “alternative sequence memory” topic - a very interesting discussion there.
