Is there an HTM equivalent of the Transformer's self-attention? I have been following the object detection papers with their emphasis on features and location/movement. In the code, the Column Pooler, Apical Tiebreak Memory, and Grid Cell classes all make sense to me. From what I can tell, though, each sensation carries equal weight.
Of course, connections and activations are at play: a sensation that generates weak support among the active cells may not drive the Column Pooler to change its representation much. Still, we are walking through each sensation/movement sequentially, and each step is treated the same.
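To make concrete what I mean by equal weighting, here is a toy caricature in plain Python. The names are hypothetical, not the actual htmresearch classes:

```python
# Toy caricature of the sequential loop: hypothetical names, not the
# actual htmresearch API. Each (feature, location) sensation contributes
# to the pooled object representation with the same weight, one step at a time.
def pool_object(sensations):
    pooled = set()
    for feature, location in sensations:      # strictly sequential walk
        active_cells = {(feature, location)}  # stand-in for TM output
        pooled |= active_cells                # no per-sensation weighting
    return pooled

# Ten sensations of a heads-only set; all ten land in the pool equally.
print(len(pool_object([("H", i) for i in range(1, 11)])))  # -> 10
```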
I suspect there are many reading these forums who have a better grasp of these concepts and can explain where/how the idea of self-attention might be found in HTM. To help the discussion, I propose the following experiment definition.
Given sets of 10 identical coins, with samples arranged in the following specific patterns, identify the set.
1. H H H H H H H H H H
2. T T T T T T T T T T
3. H H H H H T T T T T
4. T T T T T H H H H H
The existing L2L4Experiment from the "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World" paper would likely have no problem with this. Heck, a simple TM and classifier would ace it, too. But I want to point out that we do not need all the sensations to make this work. Really we just need positions 5 and 6. Of course, those positions are specific to this example. In another, it might be positions 100, 250, and 381.
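Here is a quick sanity check in plain Python (no HTM machinery) that positions 5 and 6 alone identify the set, and that any pair combining one position from the first half with one from the second would do just as well:

```python
from itertools import combinations

# The four coin patterns from the experiment above.
sets = {
    1: "HHHHHHHHHH",
    2: "TTTTTTTTTT",
    3: "HHHHHTTTTT",
    4: "TTTTTHHHHH",
}

def identifies(positions):
    """True if sampling only these 1-indexed positions distinguishes all sets."""
    samples = [tuple(s[p - 1] for p in positions) for s in sets.values()]
    return len(set(samples)) == len(sets)

print(identifies((5, 6)))  # -> True
# Count the pairs that work: exactly the 25 pairs crossing the midpoint.
print(sum(identifies(p) for p in combinations(range(1, 11), 2)))  # -> 25
```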
Is there something in HTM that would tell the model to focus on positions 5 and 6, or at least to give them more weight than the other sensations?
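To pin down the behaviour I am asking about, here is a sketch in plain Python. This is not an existing HTM mechanism, just a greedy "attend to the most informative sensation next" policy over the same four sets:

```python
from collections import Counter

sets = {1: "HHHHHHHHHH", 2: "TTTTTTTTTT", 3: "HHHHHTTTTT", 4: "TTTTTHHHHH"}

def identify(observed):
    """Sense as few positions as possible by attending to the best splitter."""
    candidates, visited = dict(sets), ()
    while len(candidates) > 1:
        # Attention step: score each unvisited position by how evenly it
        # splits the remaining candidates (smaller largest bucket = better).
        best = min(
            (p for p in range(10) if p not in visited),
            key=lambda p: max(Counter(s[p] for s in candidates.values()).values()),
        )
        visited += (best,)
        candidates = {k: s for k, s in candidates.items()
                      if s[best] == observed[best]}
    return next(iter(candidates)), visited

label, sensed = identify(sets[3])
print(label, [p + 1 for p in sensed])  # -> 3 [1, 6]: two sensations suffice
```

Something like that scoring step, expressed in cortical terms, is what I am hoping someone here can point me to.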