I'm trying to better understand how the Layers and Columns theory builds on Temporal Memory theory, and I'm unclear on three basic points.
- The temporal sequence layer and the sensorimotor layer are traversed by the same minicolumns. Does that mean that when a burst happens, it activates every cell of that minicolumn in both layers? And conversely, if a predicted cell exists, does it inhibit every other cell in its minicolumn in both layers?
(In another part of the article you say that a single layer, where half the cells receive a location input and the other half receive context input from adjacent cells, would also work. That sounds as if the layers are independent.)
- There is a difference between incorrect predictions and no predictions. Is the article saying that the temporal sequence layer simply makes very few predictions when faced with a sequence generated by feeling an object?
- Does the object layer have any effect on the temporal sequence layer?
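To make my first question concrete, here is how I currently picture the per-minicolumn rule, assuming the two layers have independent minicolumns (the function and parameter names are mine, not from the paper):

```python
def activate_minicolumn(cells_predicted, cells_per_column=32):
    """Return the set of active cell indices in one minicolumn of one layer.

    cells_predicted: indices of cells that were predicted on the previous step.
    If no cell was predicted, the minicolumn bursts (every cell becomes
    active); otherwise only the predicted cells fire and inhibit the rest.
    """
    if not cells_predicted:
        return set(range(cells_per_column))  # burst: all cells active
    return set(cells_predicted)              # predicted cells win the column

# No prediction -> burst of all 32 cells
assert len(activate_minicolumn(set())) == 32
# Prediction -> only the predicted cells are active
assert activate_minicolumn({3, 7}) == {3, 7}
```

My question is whether this rule runs once per layer independently, or once across both layers at the same time.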
As a far-fetched application, maybe you could use just the sensorimotor part of this for encryption. You could have two patterns: one as the context (location) pattern in the sensorimotor layer, and the other as the sensory pattern feeding that layer. Only with the correct context pattern and the correct sensory pattern would you get a significant output, so one pattern would be the key that unlocks the other.
Anyway, hoping for answers before I do a re-read.
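The encryption idea above is just a toy thought, but it can be sketched with sparse patterns and overlap matching. Everything here (the `learn`/`recall` names, the 40-of-2048 sparsity, the 0.9 threshold) is invented for illustration, not taken from the paper:

```python
import random

def learn(context, sensory):
    # The stored "memory" is the pair of learned sparse patterns.
    return (frozenset(context), frozenset(sensory))

def recall(memory, context, sensory, threshold=0.9):
    """Significant output only when BOTH the context (location) pattern and
    the sensory pattern overlap the learned patterns strongly enough."""
    ctx, sens = memory
    ctx_match = len(ctx & set(context)) / len(ctx)
    sens_match = len(sens & set(sensory)) / len(sens)
    return ctx_match >= threshold and sens_match >= threshold

random.seed(0)  # deterministic toy example
key = random.sample(range(2048), 40)        # context/location SDR: the "key"
msg = random.sample(range(2048), 40)        # sensory SDR: the "message"
mem = learn(key, msg)
wrong_key = random.sample(range(2048), 40)  # a random wrong key

assert recall(mem, key, msg)           # correct key + message -> output
assert not recall(mem, wrong_key, msg) # wrong key -> no significant output
```

A random wrong key almost never overlaps the learned context pattern enough, so recall fails without the right key, which is the "one pattern unlocks the other" behavior.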
Yes, in this paper the minicolumns in the two layers are independent.
It can make a reasonable number of predictions, but they will be mostly random; the number of correct predictions will be small. You can see this difference in Figure 5A.
Not in this paper. I thought about adding it (it should work fine), but I didn't have time to experiment with it (I was under a deadline). We have done this elsewhere in a paper that is in progress. It would mainly impact noise robustness, which I didn't really tackle in this paper.
I was reading a paper with content related to this topic.
In the paper, the authors say that the separation of the layers supports prediction and feedback on the success of that prediction, and they cite considerable research to support their model.
They go on to build an intriguing system where the perception error and the related learning are distributed along the what and where streams using local processing. Both the bottom-up and top-down information flows are described, with an excellent exposition of how it all fits together. This model is more in line with plausible biological function than anything coming out of the deep learning camp.
The part relevant to this discussion starts on page 4, left-hand column: "How are the prediction and actual outcome separately represented, and how is the timing of the prediction and outcome coordinated & organized?"
Deep Predictive Learning: A Comprehensive Model of Three Visual Streams
The HTM-scheme untangling_sequences project implements the temporal sequence and sensorimotor inference algorithms combined in one minicolumn (@gidmeister's point 1 above), apical feedback to both parts of layer 4, and multiple cortical columns.
It uses the HTM-scheme translations of column_pooler.py and apical_tiebreak_temporal_memory.py (which aim to be "htmresearch compatible"; attm is extended to allow unified minicolumn bursting).
The ASCII-art network diagram shows the connections.
I've put example output in a separate topic.
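One possible reading of "unified minicolumn bursting" can be sketched as follows: a minicolumn shared by the two cell populations bursts only when neither population predicted a cell. This is my own illustration with invented names, not code from the project:

```python
def unified_activate(seq_predicted, sm_predicted, cells_per_population=16):
    """Active cells for one minicolumn spanning the temporal-sequence and
    sensorimotor populations (indices 0..2*cells_per_population-1).

    seq_predicted / sm_predicted: predicted cell indices in each population.
    If either population predicted a cell, only the predicted cells fire;
    otherwise the whole minicolumn bursts across BOTH populations.
    """
    predicted = set(seq_predicted) | set(sm_predicted)
    if predicted:
        return predicted
    # unified burst: all cells in both populations become active
    return set(range(2 * cells_per_population))

# A prediction in either population suppresses the burst in both
assert unified_activate({1}, set()) == {1}
# No prediction anywhere -> all 32 cells across both populations burst
assert len(unified_activate(set(), set())) == 32
```

The contrast with per-layer bursting is that here a successful prediction in one population is enough to keep the other population's cells from bursting in that minicolumn.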