A Theory of How Columns in the Neocortex Enable Learning the Structure of the World

I have a question about the HTM algorithm (based on the one described in the paper "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World"). Suppose we want to build a hierarchical model with at least two sets of columns (the output of a column A in the first set can be a source of input for a column B in the second set). The proximal input and basal input for the input layer in A can be provided by us (we can use encoders to encode real-world data, or otherwise). But what are the proximal input and basal input for the input layer in B?

Does the proximal input for the input layer in B come from the output layer in A? And what about the basal input, which acts as the location signal?

@jinws Just to clarify, this paper says nothing about the hierarchical structure. We are not making any theories right now involving hierarchy.

Reading through the paper it seems obvious to me (please correct me if I am wrong) that the Input Layer of a column corresponds to the (mini)Column, Cells, Connections, etc. objects of the NUPIC library and is acted on by the standard SP and TM algorithms:

In the first set of simulations the input layer of each column consists of 150 mini-columns, with 16 cells per mini-column, for a total of 2,400 cells.

I do not see, though, a corresponding data structure/algorithm in the NUPIC code for the Output layer of the column as described in the paper:

The output layer of each column consists of 4,096 cells, which are not arranged in mini-columns.
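To keep the dimensions straight, here is roughly the structure I picture so far (plain Python; the variable names are mine, only the numbers come from the quoted text):

# Per-column layer sizes as quoted above (names are mine, not NuPIC's)
NUM_MINICOLUMNS = 150        # input layer mini-columns
CELLS_PER_MINICOLUMN = 16
INPUT_LAYER_CELLS = NUM_MINICOLUMNS * CELLS_PER_MINICOLUMN   # 2,400 cells

OUTPUT_LAYER_CELLS = 4096    # output layer cells, no mini-column arrangement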

Is there code for this paper that illustrates the Output Layer? If not, is there any guidance/pseudocode for connecting the Output Layer to the Input Layer and for growing/removing connections to other Output cells in the same column or in other columns?

Thanks.

Following the links from the numenta/htmpapers subtree for this paper will eventually lead to the algorithms used: the input layer runs ApicalTiebreakTemporalMemory (an extension of the standard TM algorithm) and the output layer runs ColumnPooler.
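For orientation, here is my own rough summary (not anything from the repo) of which algorithm serves which layer and where each layer's inputs come from, written as a plain-Python dict:

# My own reading of the paper, expressed as data; not an htmresearch API.
COLUMN_LAYOUT = {
    "input_layer": {
        "algorithm": "ApicalTiebreakTemporalMemory",   # TM extension
        "proximal_input": "sensory/feature SDR",
        "basal_input": "location signal SDR",
        "apical_input": "feedback from the output layer",
    },
    "output_layer": {
        "algorithm": "ColumnPooler",
        "feedforward_input": "activeCells of this column's input layer",
        "lateral_input": "output layers of other columns",
    },
}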

Thanks, @rogert. I wasn’t aware of that tree. And thanks for going to the extra effort of highlighting the specific files.

Numenta code for this is in l2_l4_network_creation.py which has some layer/column diagrams (l2 = output layer, l4 = input layer). The HTM-scheme equivalent has a diagram which labels the connections between layers (the HTM-scheme project can replicate some of the figures in the “Columns” paper and also has a TM layer for the experiments in Numenta’s Untangling Sequences preprint).

Hi all,

I’m reading this paper and trying to understand how exactly the regions, layers, and connections are created and organized.

Here is the short explanation of the region setup in the experiments:

… In the first set of simulations the input layer of each column consists of 150 mini-columns, with 16 cells per mini-column, for a total of 2,400 cells. The output layer of each column consists of 4,096 cells, which are not arranged in mini-columns. The output layer contains inter-column and intra-column connections via the distal basal dendrite …

I’m looking for a more detailed illustration, or the code that creates the regions. I tried to find it in the repo, but I’m not sure whether I found the right place.

Thanks in advance
Damir

These should get you closer:

This should help with the region/area/map part of your question (these are all the same thing).

Thank you all for your answers. I’m looking for a more concrete example, or a specific point in the code, which shows:

  1. Where cells are created without a mini-column arrangement.
  2. What exactly the inputs/outputs between all of the layers and regions are.

You can see this in some of the experiment examples, like this:

VS

In this example, L2 has no minicolumn structure, so follow that code.

I think this block of code gets at what you’re looking for with inter-region links. There are several scripts in numenta/htmresearch/frameworks/layers which construct multi-region networks along these lines.

This is from: (https://github.com/numenta/htmresearch/blob/master/htmresearch/frameworks/layers/l2_l4_network_creation.py)

  # Link L4 to L2
  network.link(L4ColumnName, L2ColumnName, "UniformLink", "",
               srcOutput="activeCells", destInput="feedforwardInput")
  network.link(L4ColumnName, L2ColumnName, "UniformLink", "",
               srcOutput="predictedActiveCells",
               destInput="feedforwardGrowthCandidates")

  # Link L2 feedback to L4
  if networkConfig.get("enableFeedback", True):
    network.link(L2ColumnName, L4ColumnName, "UniformLink", "",
                 srcOutput="feedForwardOutput", destInput="apicalInput",
                 propagationDelay=1)

In this case an L4 region (running basically the usual SP+TM process) activates an L2 region. The L2 region does Spatial Pooler-style activation on the activeCells from L4 and learning on the predictedActiveCells, the same way L4 does Spatial Pooling on an encoding vector from raw sensory data.

L4 is also partially depolarized by the activeCells from L2. This means that winner cells in bursting L4 columns form Apical segments to cells in L2, just as they form the usual Basal segments to previousWinnerCells from L4.

The Apical segments are different in that they cannot put their respective cells into the predictive state alone (without an active Basal segment). However, if multiple cells in a column are predictive at once (have active Basal segments), any cells which also have active Apical segments will inhibit the Basal-only others.
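To make that rule concrete, here is a minimal sketch in plain Python of the tiebreak within one mini-column (an illustration of the rule as described above, not the htmresearch implementation):

def predicted_cells(cells_with_active_basal, cells_with_active_apical):
    """Return the cells of one mini-column that end up predictive.

    Apical input alone never makes a cell predictive, but among the
    basally-predicted cells, any that also have apical support inhibit
    the Basal-only ones.
    """
    basal = set(cells_with_active_basal)
    apical = set(cells_with_active_apical)
    both = basal & apical
    return both if both else basal

# Cells 3 and 7 both have active Basal segments, but only cell 7 also has
# an active Apical segment (grown toward L2), so 7 inhibits 3:
print(predicted_cells({3, 7}, {7, 12}))   # -> only cell 7 remains predictive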

So L2 cells are learning which cells from L4 to be activated by (like in SP), and L4 cells are learning which cells from L2 to be Apically depolarized by – just as they are Basally depolarized by the previousWinnerCells. Here’s an attempt to capture this in a simple diagram:

Here’s another attempt to capture the signals passed between regions, this one on the L2456 network from:

I am very interested in how sensor and motor data come into the macro cortical columns in L2456 after encoding.
In other words: do all macro CCs share the same SDR from the sensor/motor, or does every CC own its own encoder of the sensor data?

My understanding is that each CC is responsible for modeling a certain piece of sensory space – like the tip of the thumb or a patch on the retina – which is input through layer 4.

Each CC is also linked to a set of other CCs through L2/3 (and L5 I think - not an expert on this level), but the idea is that each CC is building its own model of the world. To do this they take as input some combination of raw sensory input and the output of other CCs – it seems basically like “what am I seeing?” + “what do my neighboring brains think?”

@sheiser1 I totally agree with you that each CC is responsible for a piece of the sensor data. But in Numenta’s current htmresearch experiments there is only one stream of data from one sensor. Each of their CCs has its own encoder, which connects to that same sensor data!
I have implemented both versions: a CC with an embedded encoder, and a CC with an exclusive encoder. But I really do not know which model is more biological.

Again, not the expert here, though my sense is that there is overlap in the receptive fields of adjacent columns. So if you imagine your retina as a grid of columns (each modeling a certain slice of the visual space), it seems that groups of nearby columns would have disproportionate overlap between their receptive fields, and would share disproportionate amounts of information with each other (as compared to with all columns in the population).
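As a toy illustration of that overlap (entirely my own sketch, not code from htmresearch), you could carve an encoded sensor SDR into overlapping per-column patches:

# Toy overlapping receptive fields: each column sees a patch of the sensor,
# and neighbouring patches overlap because the stride is smaller than the
# patch size. All numbers are illustrative.
SENSOR_BITS = 1024     # size of the encoded sensor SDR
NUM_COLUMNS = 8
PATCH_SIZE = 256       # bits each column receives
STRIDE = 96            # < PATCH_SIZE, so neighbouring patches overlap

def receptive_field(column_index):
    start = column_index * STRIDE
    return set(range(start, min(start + PATCH_SIZE, SENSOR_BITS)))

def column_input(column_index, active_sensor_bits):
    """Active sensor bits that fall inside this column's patch."""
    return sorted(receptive_field(column_index) & set(active_sensor_bits))

# Adjacent columns 2 and 3 share PATCH_SIZE - STRIDE = 160 input bits,
# while distant columns share none:
print(len(receptive_field(2) & receptive_field(3)))   # -> 160
print(len(receptive_field(0) & receptive_field(7)))   # -> 0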

I agree that the CCs as implemented in the Network API appear to have only a handful of different sensor regions. I remember ‘Sensor’, ‘coarseSensor’, & ‘locationInput’ from the L2456 network. Expanding on that seems like a critical piece for testing the full sensory capacity of multi-column networks.

I think you are referring to how in our experiments we hard-code a movement encoding into the system, right? This certainly is a short-cut and a simplification of what is really probably happening.
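For anyone curious what that shortcut can look like in practice, here is a hedged sketch (my own names and numbers, not Numenta’s code): each sensed location simply gets a fixed, unique random SDR, which is then fed to the input layer as its location (basal) input.

import random

LOCATION_SDR_SIZE = 1024      # illustrative values
LOCATION_SDR_ON_BITS = 20

_location_sdrs = {}

def location_sdr(location_id):
    """Return a fixed random SDR for a given location identifier."""
    if location_id not in _location_sdrs:
        _location_sdrs[location_id] = sorted(
            random.sample(range(LOCATION_SDR_SIZE), LOCATION_SDR_ON_BITS))
    return _location_sdrs[location_id]

# The same location always maps to the same SDR:
assert location_sdr("cup-handle") == location_sdr("cup-handle")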
