Say I have trained a Temporal Memory instance with letters (encoded as SDRs) to distinguish sequences like “GEC” or “DEF”, so that it predicts the letter C after GE and the letter F after DE, similar to what is shown in the HTM School episode “High Order Memory vs. Single Order Memory”. How can I use the recognized sequences as input to another Spatial Pooler or Temporal Memory?
My first guess would be some sort of SDR hashing: concatenate the SDRs that make up the sequence, compute a hash value over them, and in turn encode that hash as an SDR.
Another idea would be to let a Spatial Pooler solve this task by feeding it the whole sequence as input - which unfortunately runs into the problem that the sequences can have different lengths.
IMO hashing is a bad idea. It is certainly not what the brain does. A problem with hashing an SDR is that the hash depends on the exact value of the SDR, so if even a single cell’s activity is different then the hash will be totally different. Hashes don’t account for the semantic similarities between their inputs.
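A tiny example makes this concrete. Here SHA-256 stands in for any hash function; the point is only that two SDRs sharing almost all of their active cells produce completely unrelated hashes:

```python
import hashlib

# Two SDRs, given as tuples of active-cell indices, differing in a single cell.
sdr_a = tuple(range(0, 40))   # cells 0..39 active
sdr_b = tuple(range(1, 41))   # cells 1..40 active

# Semantic similarity: 39 of the 40 active cells are shared.
overlap = len(set(sdr_a) & set(sdr_b))

# Hash similarity: the digests of these nearly identical SDRs share no structure.
hash_a = hashlib.sha256(str(sdr_a).encode()).hexdigest()
hash_b = hashlib.sha256(str(sdr_b).encode()).hexdigest()

print(overlap)           # 39
print(hash_a == hash_b)  # False - one changed cell, a totally different hash
```

So any downstream SP or TM that received the hash-derived SDRs would see two almost-identical sequences as completely unrelated inputs.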
I’ve personally tested an alternative solution - low-pass filtering the TM’s output - and you can see my results at the end of the video I recorded about it. In the video I created a Temporal Memory (TM) and fed words into it, one letter at a time. Then I used this method to build an SDR that represents the current word, given only the output of the TM.
Hi,
many thanks! This experiment - a Temporal Memory (TM) fed words letter by letter, with a low-pass filter over its output - is very interesting. Do you have a script at hand showing how the experiment is set up?
Hi,
The experiment shown in that lecture uses my own implementation of an HTM, and I’ve got to warn you that it’s experimental (not production-ready) and that I no longer maintain that implementation at all.