HTM Mini-Columns into Hexagonal Grids!

Ok - biological justification for a proposed learning rule:


We have a stream of spikes coming at us from the senses or from a lower level.

  • We try to fire but we are suppressed in the grid competition - no spikes fired to learn from, so no learning at all.
  • We are the winner of the grid competition so we are not suppressed - our firing in response to the input triggers spike-timing learning as we freely respond to the input.
  • Stimulation from successful grid formation increases the firing rate. We now fire even faster in response to the incoming spike train.
  • At some point we are firing at the same rate and in phase with the inputs, perhaps even faster with the grid drive - and if the hex-grid rate exceeds the rate of the input there can be negative learning. A very local form of negative feedback - cool your jets hotshot!

Note that even though we have described the L5 input bounced through thalamic relays as axonal projections, lateral axonal projections, and apical projections, we assumed that those apical projections come from lower levels. In fact, we know that L2/3 has reciprocal projections with related maps in the hierarchy. This implies that as we go into grid resonance, we will provide a significant spike train that activates the related area in the next map, and it will respond by projecting a similar spike train back to our general vicinity. Since we know that map-2-map fiber bundles maintain topology, this should work to cement the bond between the hex pattern in this map and whatever related pattern is forming in the next map. This is what I was getting at when I mentioned hierarchy in the main hex-grid post.

Since we are not excited about embracing a full-on spike-based system (at least I am not), we will use an activation value to stand for the spike rate. Likewise, the synapse values could be scalars. 8-bit values should be more than sufficient to capture real neuron behavior. (Actually, 4-bit values should be sufficient!)
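To make that concrete, here is a minimal sketch of the kind of state I have in mind, in Python. The names and sizes (`N_CELLS`, `N_INPUTS`, `activation`, `permanence`) are purely illustrative placeholders, not a fixed spec:

```python
import numpy as np

N_CELLS = 1024    # cells in this map (illustrative size)
N_INPUTS = 4096   # afferent axons from the senses / lower level (illustrative)

# Activation stands in for spike rate: one 8-bit scalar per cell.
activation = np.zeros(N_CELLS, dtype=np.uint8)

# Synapse strength as an 8-bit scalar per (cell, input) pair.
# 4 bits would really be enough; uint8 is just the convenient machine size.
permanence = np.zeros((N_CELLS, N_INPUTS), dtype=np.uint8)
```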

So, here is a simplified learning rule that can be used to write code:

Note: I envision this pooling algorithm running at gamma rate, so four rounds of this competition for every alpha-rate cycle.

Tally activation inputs and drive the lateral axonal outputs that activate the local inhibitor field. If you are running map-2-map connections, update these at the same time. Tally the resulting cell inputs, including inhibition. Repeat 3x.
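Here is one way that loop might look, as a sketch only: I am assuming simple linear tallies and dense weight matrices (`permanence`, `lateral_w`, `inhib_w`) as stand-ins for whatever connection model you actually use:

```python
import numpy as np

def gamma_competition(inputs, permanence, lateral_w, inhib_w, rounds=4):
    """Run `rounds` gamma-rate competition rounds (one alpha cycle).

    inputs     : afferent drive for this cycle (the incoming spike train)
    permanence : (cells x inputs) feed-forward synapse strengths
    lateral_w  : (cells x cells) lateral axonal connection strengths
    inhib_w    : (cells x cells) strengths of the local inhibitor field
    Returns the per-cell activation after the final round.
    """
    activation = np.zeros(permanence.shape[0], dtype=np.float32)
    for _ in range(rounds):
        drive = permanence.astype(np.float32) @ inputs   # tally feed-forward inputs
        lateral = lateral_w @ activation                 # lateral axonal output
        inhibition = inhib_w @ activation                # local inhibitor field
        # Map-2-map connections, if you are running them, would be tallied here too.
        activation = np.maximum(drive + lateral - inhibition, 0.0)
    return activation
```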

On the final round …
If suppressed to silence, learn nothing.
If our activation is above some threshold, also do nothing, as we clearly don’t need to learn anything else.
Otherwise, strengthen all active inputs.
This will boost any cell that wins its local competition, whether it is part of a grid or not. The outputs from these winners will learn to hook up grid connections later, when they get strong enough.
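A possible sketch of that final-round rule, assuming the 8-bit synapse values from above; the threshold and increment are placeholders you would have to tune:

```python
import numpy as np

def final_round_learning(activation, inputs, permanence,
                         done_threshold=200.0, increment=1):
    """Apply the simplified learning rule on the last gamma round."""
    active_inputs = inputs > 0
    for c in range(permanence.shape[0]):
        if activation[c] <= 0:
            continue   # suppressed to silence: learn nothing
        if activation[c] >= done_threshold:
            continue   # already well trained: nothing more to learn
        # Otherwise strengthen all currently active inputs (saturating at 255).
        bumped = permanence[c, active_inputs].astype(np.int32) + increment
        permanence[c, active_inputs] = np.clip(bumped, 0, 255).astype(np.uint8)
```

Running `gamma_competition` and then this on the last round's activations would be one full alpha cycle of the pooler.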

Variation #1: tally inputs and, if not suppressed, apply learning based on this formula:
(some maximum learning amount) minus (the activation tally). A slightly trained cell will learn fast, an iffy cell will get a boost, and a well-trained cell learns nothing.
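As code, with an illustrative `max_learning` constant (the actual value is a free parameter):

```python
def variation1_increment(activation_tally, max_learning=32):
    """Variation #1: learning amount = max learning minus the activation tally,
    floored at zero. A barely trained cell learns a lot, an iffy cell gets a
    boost, and a well-trained cell (tally >= max_learning) learns nothing."""
    return max(0, max_learning - activation_tally)
```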
