It makes sense that local algorithms are only useful for a certain set of problems, but those are important problems. Still, I believe it would be worth optimizing, or at least exploring different ways of implementing things, because only being able to work on a 64x64 image limits experimenting with hierarchies on detailed spatial data.
There’s a way to optimize an HTM implementation by using a ‘propagation algorithm’. There is very little high-level operation; it’s almost all local interactions between Cell and Dendrite/Segment objects. All the cells and dendrites are connected by object instance references. When a cell receives a feedforward propagation from a dendrite (segment activation), it then propagates forward to all the segments that connect to it (via the ‘axon’). When those segments reach their threshold, they in turn propagate forward to their target cells.
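A minimal sketch of that event-driven propagation in Python (the class and method names here — Cell, Segment, fire, receive — are my own illustration, not an actual HTM library API):

```python
class Segment:
    """A dendrite segment: accumulates activations, fires its target at threshold."""

    def __init__(self, target, threshold=2):
        self.target = target        # the Cell this segment feeds into
        self.threshold = threshold  # activations needed before it fires
        self.activation = 0

    def receive(self):
        # A presynaptic cell propagated to this segment.
        self.activation += 1
        if self.activation == self.threshold:
            self.target.fire()      # segment reached threshold: drive target cell


class Cell:
    """A cell: when it fires, it propagates to every segment on its 'axon'."""

    def __init__(self):
        self.axon_segments = []     # segments this cell's axon connects to
        self.active = False

    def connect(self, segment):
        self.axon_segments.append(segment)

    def fire(self):
        if self.active:
            return                  # already fired this step; don't re-propagate
        self.active = True
        for seg in self.axon_segments:
            seg.receive()           # forward propagation along the axon
```

So a feedforward step is just a cascade of object-to-object calls: nothing iterates over the whole region, only over whatever actually fired.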
The same general idea is used for local inhibition. When a cell propagates to the segments connected to its ‘axon’, it can also propagate negative feedforward values to neighboring cells, which causes sparsity and competition. The structure of columns emerges from the schematic of cell classes/layers and local connectivity.
The benefit of local computation is that the limit on the number of cells and segments you can have is set by memory capacity, not CPU/GPU processing capacity. No matter how many object instances you have (even gigabytes’ worth), the propagation algorithm computes very fast because it only processes the sparse set of activated cells. Most cells are inactive (and therefore most segments too), so there is very little iteration within each feedforward step. If there were 4096 cells in a region, only ~164 cells (4% sparsity) would need to propagate (be processed) in each step.
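To make the scaling argument concrete, here is a back-of-the-envelope sketch using the numbers above (`active` here just stands in for whichever cells happened to fire):

```python
import random

N_CELLS = 4096
SPARSITY = 0.04

# Pick a random 4% of cells to represent the active set for one step.
random.seed(0)
active = random.sample(range(N_CELLS), round(N_CELLS * SPARSITY))

work = 0
for cell_id in active:   # iterate only the active cells, never all 4096
    work += 1            # stand-in for propagating this cell's axon

print(work)              # ~164, regardless of how many cells sit in memory
```

The per-step cost tracks the size of the active set, so growing the region mostly costs memory, not compute.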
I found this works quite well, except that initializing the region takes some time to construct all the objects. But from there, it’s light.
Anyway, Sunday morning blabber. Need to eat breakfast!