Inefficiency of HTM topology from an ML standpoint

The difference between “stacking” and “connecting” cortical columns is the difference between hierarchical connections and lateral voting, and the latter is what drives object pooling. If CCs all emit a representation to their neighbors, each column can use the others’ representations as contextual input to its internal computation over its own feed-forward input. It helps to think in terms of attractor dynamics. In TBT, this lateral voting is not strictly necessary to build an object model, but it helps a lot in practice.
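
To make the contextual-input idea concrete, here’s a toy Python sketch (my own illustration, not Numenta code — the function name and the distribution-over-objects representation are assumptions): each CC holds a belief over candidate objects from its feed-forward input, and repeatedly folding in neighbors’ beliefs pulls the whole network toward a consensus attractor.

```python
import numpy as np

def lateral_vote(column_beliefs, neighbors, steps=10):
    """Toy lateral voting between cortical columns (CCs).

    column_beliefs: (n_columns, n_objects) array; each row is one CC's
        distribution over candidate objects, from its own feed-forward input.
    neighbors: list of neighbor-index lists, one per column.
    Each step, a column multiplies its belief by the average of its
    neighbors' beliefs and renormalizes -- contextual input nudging
    everyone toward a shared attractor (the consensus object).
    """
    beliefs = column_beliefs.copy()
    for _ in range(steps):
        new = beliefs.copy()
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            context = beliefs[nbrs].mean(axis=0)  # neighbors' pooled vote
            new[i] = beliefs[i] * context         # combine with own evidence
            new[i] /= new[i].sum()                # renormalize
        beliefs = new
    return beliefs

# Three columns, two candidate objects; column 2 is ambiguous on its own.
beliefs = np.array([[0.8, 0.2],
                    [0.7, 0.3],
                    [0.5, 0.5]])
neighbors = [[1, 2], [0, 2], [0, 1]]
print(lateral_vote(beliefs, neighbors))  # every row converges toward object 0
```

Note how the ambiguous column gets disambiguated purely by context — no extra hierarchy level needed, which is the point of “connecting” rather than “stacking.”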

Stacking into a hierarchy is another story altogether. I’m still trying to figure it out. I think it involves the thalamus and object abstraction, but that’s just me spit-balling on a Monday.

In any case, where does topology matter? It matters a lot at the lower levels of the hierarchy, but as you ascend, the topology gets mangled and mashed so that abstractions can be made (right, @Bitking?)
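
As a loose ML analogy (my own framing, not from TBT): low-level features depend on which inputs are neighbors, while higher-level abstractions can be invariant to arrangement. A toy numpy sketch of that contrast:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Low-level, topology-dependent feature: 3x3 local averaging.
# Its output depends on which pixels are adjacent.
def local_avg(x):
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = x[max(0, i - 1):i + 2, max(0, j - 1):j + 2].mean()
    return out

# Higher-level, topology-free abstraction: a histogram of values.
# It doesn't care where any pixel sits.
def histogram(x):
    return np.histogram(x, bins=4, range=(0, 1))[0]

perm = rng.permutation(64)
scrambled = img.ravel()[perm].reshape(8, 8)  # same values, mangled topology

print(np.allclose(local_avg(img), local_avg(scrambled)))    # False
print(np.array_equal(histogram(img), histogram(scrambled)))  # True
```

Scrambling the pixel layout destroys the local feature but leaves the abstraction untouched — which is roughly what I mean by topology mattering below and getting mangled above.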
