"Untangling Sequences" revisited


The Numenta preprint “Untangling Sequences: Behavior vs. External Causes” (@subutai and @jhawkins, 2017) described above reports experiments run with the htmresearch combined_sequences project. That project used independent temporal memory and sensorimotor inference models (as described in Numenta’s earlier “neurons” and “columns” papers).

The HTM-scheme untangling_sequences project makes several modifications: L4 temporal memory and sensorimotor inference are combined in a unified minicolumn structure, apical feedback from the L2 column pooler reaches both components of that layer, and multiple cortical columns are supported. The L4 and L2 algorithms are Scheme translations of htmresearch’s apical_tiebreak_temporal_memory.py and column_pooler.py, with the same function structure, aiming to replicate Numenta’s computations. Connections between layers are defined by the compute procedure of the “higher-order” algorithm L2objL4locL4seq.ss; an ASCII-art diagram there shows the model layers and connections.
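The wiring just described can be sketched as a single compute timestep. This is a hedged illustration only: the real compute procedure lives in L2objL4locL4seq.ss, and the class and function names below are invented for the sketch, not the actual HTM-scheme or htmresearch API.

```python
# Hypothetical sketch of the per-timestep wiring: the unified L4 gets both
# sequence context (its previous activity) and sensorimotor context (the
# location signal), plus apical feedback from the L2 column pooler; L2 then
# pools over the resulting L4 activity. Names here are illustrative.

class Layer:
    """Stand-in for a temporal-memory or pooler layer: it just records the
    signals it receives and activates exactly its feedforward input."""
    def __init__(self, name):
        self.name = name
        self.active_cells = set()
        self.last_context = set()
        self.last_apical = set()

    def compute(self, feedforward, context=frozenset(), apical=frozenset()):
        # A real layer would grow and reinforce dendritic segments; this
        # stub only models which signals reach which layer.
        self.active_cells = set(feedforward)
        self.last_context = set(context)
        self.last_apical = set(apical)
        return self.active_cells


def timestep(l4, l2, feature, location, prev_l4):
    """One step: feature is the feedforward (proximal) input, location and
    the previous L4 activity are the basal context, and L2's current
    activity is the apical feedback to L4."""
    l4_active = l4.compute(feature,
                           context=set(location) | set(prev_l4),
                           apical=l2.active_cells)
    l2.compute(l4_active)  # column pooler pools over L4 output
    return l4_active
```

Running two timesteps shows the feedback path: on the second step L4's apical input is the L2 activity pooled from the first step.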

When run without the connection modifications, and with the htmresearch/combined_sequences.py parameters (1024 minicolumns, 500 features), HTM-scheme produces plots comparable to the headline figure of the paper above:


Swapping in the modified model, with all other parameters unchanged, produces:


The modified model also works with 150 minicolumns, as used in the columns-paper experiments (sparsity 4%: 6 active input bits of 150):
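The sparsity figure quoted above is just active bits divided by total minicolumns; a one-line helper makes the arithmetic explicit (only the 6-of-150 case is taken from the text, the function name is illustrative):

```python
# Sparsity = active input bits / total minicolumns.
def sparsity(active_bits, n_minicolumns):
    return active_bits / n_minicolumns

# 6 active bits of 150 minicolumns gives the 4% sparsity used in the
# columns-paper experiments.
```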


And with 7 cortical columns of 150 minicolumns each (plotting the mean across columns of each value):


(This plot also used interleaved training of objects and sequences.)
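"Interleaved training" here means alternating object and sequence presentations rather than training on all of one kind before the other. A minimal sketch of such a presentation schedule, assuming simple alternation (the function is illustrative, not from the HTM-scheme source):

```python
# Alternate object and sequence presentations; whichever list is longer
# contributes its leftover presentations at the end.
from itertools import chain

def interleave(objects, sequences):
    paired = chain.from_iterable(zip(objects, sequences))
    leftover = objects[len(sequences):] or sequences[len(objects):]
    return list(paired) + list(leftover)
```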