I’ve been spending some time breaking down cortical theory into small pieces. I believe Hebbian learning is the basis of cortical function, so I’ve produced a simulation to illustrate this fundamental operation.
The idea is that when a pattern of activity (shown in the top row of cells) consistently activates the post-synaptic cell (the single cell below), the synaptic connections between them strengthen, while those that do not contribute to the post-synaptic activity weaken.
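That rule can be sketched in a few lines of Python. This is just a minimal illustration, not the code behind the simulation: the function name, the permanence values, and the increment/decrement sizes are all my own assumptions.

```python
INC, DEC = 0.05, 0.03  # assumed learning increments, not HTM canon

def hebbian_update(permanences, active_inputs, post_fired):
    """One Hebbian step: if the post-synaptic cell fired, strengthen
    synapses from active pre-synaptic cells and weaken the rest."""
    if not post_fired:
        return permanences
    return [min(1.0, p + INC) if active else max(0.0, p - DEC)
            for p, active in zip(permanences, active_inputs)]

# 8 synapses, all at permanence 0.3; the last 4 inputs are active
perms = [0.3] * 8
active = [False] * 4 + [True] * 4
perms = hebbian_update(perms, active, post_fired=True)
# the first four permanences drop, the last four rise
```

Repeating this step with a consistent input pattern is all it takes for the synapses to drift toward that pattern.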
As I understand it, dendrites grow synapses to any cell within their target layer, without any bias or preference. The potential pool of synaptic connections is therefore arbitrary, and the cell can initially activate to any arbitrary pattern of cells. But with enough repetition of a consistent pattern (as you will notice with the activation of the last 4 cells), the synaptic connections tune to that specific pattern. By the end of the learning you’ll notice that the cell activates only to that pattern and no other. In the visualization, the opacity of each connection represents the permanence of that synapse. Although all 4 cells are shown activating the target cell, only a subset would be needed due to sub-sampling, and sub-sampling becomes more efficient in larger pools.
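The tuning process above can be simulated end to end. The sketch below is my own toy version under stated assumptions: a connection threshold of 0.5, an activation threshold of 3 active connected synapses (the sub-sampling), and a training loop where the post-synaptic cell is assumed to be driven to fire along with the pattern.

```python
import random

N_SYNAPSES = 8
CONNECTED = 0.5    # assumed permanence threshold for a connected synapse
ACTIVATION = 3     # sub-sampling: 3 of 4 connected synapses suffice
INC, DEC = 0.1, 0.05

# Arbitrary initial pool: weak permanences to all 8 pre-synaptic cells
random.seed(0)
perms = [random.uniform(0.3, 0.5) for _ in range(N_SYNAPSES)]

def cell_fires(pattern):
    """The cell fires when enough connected synapses see active input."""
    overlap = sum(1 for p, a in zip(perms, pattern) if a and p >= CONNECTED)
    return overlap >= ACTIVATION

pattern = [0, 0, 0, 0, 1, 1, 1, 1]   # the consistent pattern (last 4 cells)

# Repeated Hebbian steps: active synapses strengthen, the rest decay
for _ in range(20):
    perms = [min(1.0, p + INC) if a else max(0.0, p - DEC)
             for p, a in zip(perms, pattern)]

print(cell_fires([0, 0, 0, 0, 1, 1, 1, 1]))  # True: the learned pattern
print(cell_fires([1, 1, 1, 1, 0, 0, 0, 0]))  # False: any other pattern
print(cell_fires([0, 0, 0, 0, 1, 1, 1, 0]))  # True: a subset still works
```

The last line is the sub-sampling point: once tuned, the cell still fires on 3 of the 4 learned inputs, so it tolerates noisy or partial presentations of the pattern.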
I wanted to post this to provide a visual analogy for one of the fundamental principles in HTM. I would also appreciate any corrections or additions to my explanation.
(The dendrite diverges into 8 connections to represent the 8 synapses of the segment; this model dendrite has 1 segment with 8 synapses.)