Was just watching Numenta’s research meeting of May 10, in which Jeff talks about object representations on top of temporal memory (box 2): https://youtu.be/etSNEYeC01Q?t=5m12s
Is there any more info on how that works? It may be the missing link in my understanding of the hierarchy, because if you have some aggregation over time into a stable pattern, you have a representation of a sequence right there. If you feed that into the next TM, you can recognize sequences of sequences, and from there you can climb to arbitrarily high levels.
Very curious to find out more on that topic.
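To make the "sequences of sequences" idea concrete, here is a minimal sketch of union-style temporal pooling over TM outputs. All names here (`pool_over_time`, the threshold parameter) are invented for illustration and are not nupic's actual API; SDRs are represented as plain sets of active cell indices.

```python
# Hypothetical sketch (not nupic's API): pool TM outputs over time
# into a stable SDR that a higher-level TM could consume as one step.

def pool_over_time(tm_outputs, threshold=0.5):
    """Keep cells active in at least `threshold` fraction of the
    recent TM outputs (each output is a set of active cell indices)."""
    counts = {}
    for sdr in tm_outputs:
        for cell in sdr:
            counts[cell] = counts.get(cell, 0) + 1
    n = len(tm_outputs)
    return {cell for cell, c in counts.items() if c / n >= threshold}

# A sequence A, B, C seen by the lower TM...
lower_tm_outputs = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
stable = pool_over_time(lower_tm_outputs)  # -> {2, 3, 4}
# ...yields one stable code for "ABC", which the next TM level can
# treat as a single input, learning sequences of pooled codes.
```

The point is just that the pooled code changes much more slowly than the raw TM output, which is what lets the next level see whole sequences as single items.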
Also: why does it feed into the TM? I would say it should be the output of the TM, no?
I think I read somewhere that they’re working on a write-up. Have you seen the Cosyne poster? There’s a thread with it: Published Talks On Cortical Columns Yet?
I think that this role is filled by temporal pooling.
This is an interesting read, although I’m unsure if there is any newer information available.
You can see implementations in nupic and nupic.research.
Also: why does it feed into the TM? I would say it should be the output of the TM, no?
I’d think it is bidirectional. The TP learns stable sequences from the TM’s output, and the TM uses the TP’s knowledge of the current sequence to improve its prediction at each step. I’m unsure whether it’s actually implemented that way, though.
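A minimal sketch of that bidirectional loop, with invented class and method names (this is not the nupic implementation): the TP pools recent TM outputs into a stable code, and the TM intersects its predictions with that code to bias them toward the sequence currently being played out.

```python
# Hypothetical sketch of the TP<->TM loop; names are made up for
# illustration. SDRs are sets of cell indices.

class PooledTM:
    def __init__(self):
        self.pooled = set()   # TP's stable representation
        self.outputs = []     # recent TM outputs

    def tm_step(self, active, predictions):
        # TM uses TP: keep only predictions consistent with the
        # pooled sequence context, sharpening mid-sequence prediction.
        biased = predictions & self.pooled if self.pooled else predictions
        output = active | biased
        self.outputs.append(output)
        return output

    def tp_step(self, window=3):
        # TP learns from TM: pool recent TM outputs into a stable
        # code for the sequence currently unfolding.
        recent = self.outputs[-window:]
        self.pooled = set.union(*recent) if recent else set()
        return self.pooled
```

For example, once `tp_step` has built a pooled code, a prediction cell outside that code (a cell belonging to some other sequence) gets filtered out on the next `tm_step`, which is the "improve prediction on the current step" direction of the loop.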