Functional poolers

In current HTM theory, there are spatial and temporal poolers. In this post, I propose the introduction of functional poolers, and describe how they lead to the emergence of meaning (full paper).

According to HTM theory, basal dendrites represent expectations (about which element of the sequence will be perceived next) and have the function of contextualizing the feedforward input to the region along the temporal dimension - in other words, as an element of a sequence.

Similarly, I believe that apical dendrites, while representing top-down expectations, have the function of contextualizing the feedforward input to the region along a non-temporal dimension - in other words, of adding meaning to it.

Note: in this post I do not mention grid cells at all. That is because the process described here is fully compatible with them, but does not require their presence to be explained.

How does a functional pooler work?

A functional pooler works exactly like a temporal pooler, except that neurons enter a predictive state not based on the activation status of neurons in the same region (detected through their basal dendrites), but based on the signals received on their apical dendrites from other parts of the cortex. This information is used as “context”. Those not familiar with HTM theory can find a full description of a functional pooler in the paper linked at the top of this post.

In other words, the process is as follows:

  1. Information enters the region from its receptive field.
  2. The spatial pooler “selects” which columns to activate. The pattern of columns which activates represents the features of whatever object was observed.
  3. The temporal pooler “selects” some of the neurons in the active columns which will fire. The pattern of neurons which fire represents the position in the sequence occupied by the features recognized during the previous step.
  4. The functional pooler “selects” some of the neurons in the active columns which will fire. The pattern of neurons which fire represents “contextualized features”.
    It is important to understand that these contextualized features represent all associations between a pattern observed in the receptive field of the column (the content) and a pattern observed somewhere else in the cortex (the context). All associations lead to the activation of neurons, regardless of relevance. The next two steps will take care of weeding out the irrelevant ones.
  5. The neurons of the region become part of the receptive field of the next region in the hierarchy. Consequently, the spatial pooler of the next region tries to recognize patterns of activation in all the neurons of the region of steps 1-4: in other words, it tries to recognize patterns of contextualized features.
  6. The neurons of the first region (the one representing contextualized features) which were active and consistently led to the activation of a column in the second region will form strong synapses, whereas the neurons of the first region which were active but did not consistently lead to the activation of columns in the second region will not form strong synapses.
    The consequence is that only relevant contextualized features will form synapses, and will be “taken into account” in future pattern recognitions.

I acknowledge that the 6 steps described above are not enough to explain the process in full; the link at the top of this post points to a paper with an extensive description.
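As a very rough illustration of steps 1-6, here is a minimal Python sketch. All sizes, the overlap-based column selection, and the function names are my own illustrative assumptions - this is not code from the paper, just a sketch of the flow.

```python
import numpy as np

rng = np.random.default_rng(42)

N_INPUT = 1024        # bits in the receptive field (step 1)
N_COLUMNS = 256       # columns in the region
CELLS_PER_COLUMN = 8
N_CONTEXT = 512       # bits of apical (contextual) input

def spatial_pooler(input_sdr, proximal, n_active=40):
    """Step 2: activate the columns whose proximal synapses best overlap the input."""
    overlaps = proximal @ input_sdr
    return np.sort(np.argsort(overlaps)[-n_active:])

def functional_pooler(active_columns, apical_context, apical_synapses):
    """Step 4: in each active column, fire the cell whose apical synapses best
    match the context coming from elsewhere in the cortex."""
    winners = []
    for col in active_columns:
        cell_overlaps = apical_synapses[col] @ apical_context
        winners.append((int(col), int(np.argmax(cell_overlaps))))
    return winners   # (column, cell) pairs = contextualized features

# Toy data: one input pattern, random synapses, two different contexts.
input_sdr = (rng.random(N_INPUT) < 0.05).astype(int)
proximal = (rng.random((N_COLUMNS, N_INPUT)) < 0.10).astype(int)
apical = (rng.random((N_COLUMNS, CELLS_PER_COLUMN, N_CONTEXT)) < 0.05).astype(int)
context_a = (rng.random(N_CONTEXT) < 0.05).astype(int)
context_b = (rng.random(N_CONTEXT) < 0.05).astype(int)

cols = spatial_pooler(input_sdr, proximal)              # same columns for both contexts...
print(functional_pooler(cols, context_a, apical)[:3])
print(functional_pooler(cols, context_b, apical)[:3])   # ...but mostly different cells
```

Steps 5 and 6 would then be a second spatial pooler reading these (column, cell) pairs as its receptive field.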

The emergence of meaning

As information passes from region to region, it undergoes an alternation of pattern recognitions (performed by spatial poolers) and of contextualizations (performed by temporal and functional poolers).

Spatial poolers compress information. After all, they use a smaller number of bits (the number of columns) to represent information which was encoded with a large number of bits (the number of neurons in the receptive field).

Temporal and functional contextualizations expand information. After all, they use a larger number of bits (the number of neurons in the region) to represent information which was encoded with a smaller number of bits (the number of columns in the same region).

As information passes from region to region, it undergoes an alternation of expansions and compressions. The expansions provide additional information which tentatively expands the meaning of the information being fed forward. The compressions take care of removing the additional information which was coincidental but not relevant. How exactly they do so is described in the next section. The information that gets retained - the part that forms patterns recognized by the next region - is meaning.

In other words, as information passes from region to region, an expansion followed by a compression allows for meaning to get added - to emerge.
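To make the bookkeeping concrete, here is a tiny sketch of the alternation. The sizes are illustrative assumptions, not values from the paper; only the direction of the changes matters.

```python
# Illustrative only: made-up sizes showing the compression/expansion alternation.
n_receptive_field = 1024                   # bits encoding the input to the region
n_columns = 256                            # after the spatial pooler: compression
cells_per_column = 8
n_cells = n_columns * cells_per_column     # after the temporal/functional pooler: expansion

print(f"receptive field: {n_receptive_field} bits")
print(f"after SP       : {n_columns} bits (compression)")
print(f"after TP/FP    : {n_cells} bits (expansion)")

# The expansion is where tentative meaning is added: the same 40 active columns
# can be expressed as 8**40 different cell-level patterns, one per context.
n_active_columns = 40
print(f"possible contextualized patterns: {cells_per_column ** n_active_columns}")
```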

How is relevance of context determined?

Before information enters a region, it is encoded by a large number of neurons, forming the receptive field. When it enters the region and gets processed by the spatial pooler, the information gets represented by a much smaller number of items: the columns. As fewer bits are available to encode the information, a compression takes place.

After the spatial pooler “decides” which columns to activate, the functional pooler and the temporal pooler “decide” which neurons in those columns activate. Because the number of bits increases (the number of neurons in a region is larger than the number of columns), an expansion takes place.

The dimension along which the compression takes place depends on the dimension(s) used by the previous region(s) to encode information. For example, information coming from regions receiving input from the retina is represented on visual dimension(s).
The dimensions along which the expansion takes place depend on the dimension(s) used by the regions providing the context through the apical dendrites.

From the paragraphs above, it emerges that a single region cannot represent the full process. A full iteration of information processing consists of a spatial pooler, followed by a functional pooler (and possibly a temporal one), followed by another spatial pooler. The focus is on the second spatial pooler, whose importance lies in neglecting the neurons activated in the previous step which represent irrelevant information - in other words, in letting only signal pass.
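A minimal sketch of how this weeding-out could work, assuming a simple Hebbian-style permanence update between the two regions; the increments, decrement and threshold below are my own assumptions, not values from the paper.

```python
import numpy as np

def update_permanences(perm, region1_active_cells, region2_winner_columns,
                       inc=0.05, dec=0.02, threshold=0.5):
    """perm has shape (n_region2_columns, n_region1_cells).
    Winner columns in region 2 strengthen synapses to the region-1 cells that were
    active and weaken synapses to the rest; only synapses above the threshold are
    'taken into account' in future pattern recognitions."""
    all_cells = np.arange(perm.shape[1])
    inactive = np.setdiff1d(all_cells, region1_active_cells)
    for col in region2_winner_columns:
        perm[col, region1_active_cells] = np.minimum(1.0, perm[col, region1_active_cells] + inc)
        perm[col, inactive] = np.maximum(0.0, perm[col, inactive] - dec)
    return perm >= threshold   # boolean matrix of connected (relevant) synapses

# Usage sketch: after many presentations, only the contextualized features that
# consistently co-occur with a region-2 column stay above the threshold.
perm = np.full((128, 2048), 0.3)            # 128 columns above, 2048 cells below
connected = update_permanences(perm, np.array([5, 17, 42]), [3, 9])
```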

9 Likes

Hi @DellAnnaLuca, this sounds interesting, thanks for sharing. What you’re describing in the functional pooler sounds like what I know as Temporal or Union Pooling. As I understand it, a second TM-like region is added on top of the first. This Region 2’s SP columns use the predictive cells (or some other subset) from the lower region as their receptive fields, instead of the encoding bits used by Region 1’s columns. Am I in the ballpark there? I know you may answer this precisely in the paper, just wanted to see how close I am to understanding the concept.

2 Likes

Yes, the higher region uses the nearby cells of the lower region in its receptive field (not only the columns).

Precisely as it would work if two normal regions with only spatial and temporal poolers were put one after the other: the second would use the cells of the first as its receptive field.

3 Likes

Ok very cool. I can’t help but think that some people here have done some real experimenting with this setup. I’m certainly curious how the results of this come out, or even how to design an evaluation strategy for this situation. I think you’ve done some pretty deep work on this @Paul_Lamb? (if you don’t mind me dropping your name).

1 Like

Very cool work, @DellAnnaLuca. I’ve read through the paper, and I drew some diagrams to make sure I am understanding your architecture. Have a look:

Did I label all the pathways correctly?

2 Likes

Thank you for the sketch.

I did not understand the dark blue basal connection. I’d also add that the apical dendrites of the cells in the lowest SP do not receive (context) data only from the SP above it in the hierarchy, but also from other regions of the brain. I made this very rough sketch - I hope it clarifies. Later I’ll try to produce clearer sketches on the laptop.

One of the roles of apical dendrites - and functional contextualization - is to allow integration of information across channels.
In the image above, I depicted a few channels across which sensory data is received.
The lower SP on the left receives some information from the results of processing a different channel. This allows a region to represent two perceptual objects differently when they look the same in-channel but have different ex-channel signatures.

(For the purposes of this example, please imagine that there are many levels, not only 2. I do not think that the first region to receive sensory input already performs across-channel integration - this is probably reserved to the upper areas. Such lowest regions probably mostly use apical input from the region above them.)

1 Like

To clarify:

A spatial pooler takes some patterns as input; its role is to take those which are morphologically similar and to represent them as morphologically identical.

A functional pooler takes those patterns that come out of the spatial pooler and are morphologically similar but semantically different (because they have different contexts, as perceived through the apical dendrites) and makes them morphologically different, so that the next spatial pooler to process them (in the upper region) does not conflate them.
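A toy way to state that contrast in code; the tolerance, the way the context is reduced to a number, and the cells-per-column value are my own illustrative choices, not anything from the paper.

```python
def spatial_pool(input_bits, prototypes, tolerance=2):
    """Morphologically similar inputs -> the same column pattern: any input within
    `tolerance` differing bits of a stored prototype maps to that prototype's columns."""
    for columns, proto in prototypes.items():
        if sum(a != b for a, b in zip(input_bits, proto)) <= tolerance:
            return columns
    return None

def functional_pool(columns, context, cells_per_column=8):
    """Same columns + different context -> morphologically different cell patterns,
    so the next spatial pooler does not conflate them. Reducing the context string
    to a number is only a toy stand-in for apical-driven cell selection."""
    ctx = sum(map(ord, context))
    return tuple((col, (col + ctx) % cells_per_column) for col in columns)

prototypes = {(3, 7, 12): (1, 0, 1, 1, 0, 0, 1, 0)}
a = spatial_pool((1, 0, 1, 1, 0, 0, 1, 1), prototypes)   # noisy variant of the prototype
b = spatial_pool((1, 0, 1, 1, 0, 0, 1, 0), prototypes)   # exact prototype
print(a == b)                                             # True: the SP makes them identical
print(functional_pool(a, "seen-in-kitchen") != functional_pool(b, "seen-in-garage"))  # True
```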

Here is why I drew that:

This seems to indicate a basal connection between the two SPs?

To be explicit about this, you are talking about hierarchy here. Each SP in the model you present is at a different level of a hierarchy. These SPs are not operating as if they were layers within the same cortical column.

1 Like

Also @DellAnnaLuca do you have any code behind this? Any simulations?

2 Likes

Yes, exactly - I meant levels of the hierarchy.

I am in the process of working on this.
Nothing ready yet.

1 Like

I’m adding two sketches to clarify, based on the feedback received over the last couple of days.

The first sketch clarifies the routes followed by information traversing a region and the types of dendrites through which it enters the region.

The second sketch clarifies how regions are organized.
In particular, it shows where the apical dendrites receive information from.
For the sake of simplicity, basal dendrites are not shown here.

The two charts on the right side explain how the complexity of information and its meaning evolve.

The first represents the complexity* of the information as it traverses the regions upwards in the hierarchy, measured on the dimension(s) of the initial sensory input. For example, if the initial sensory input came from the retina, the chart would measure the complexity of the information contained in the regions on the visual dimension.
It is intuitive to imagine that the first region receiving visual input represents visual information (for example, lines), whereas a few regions later the information might represent, for example, a red ball. The complexity of the visual information has decreased. There are no longer hundreds of small lines forming a circle and a few patches of color, but a single piece of information - a red ball.
Each time information passes through a spatial pooler, its complexity* on the dimension of the input is reduced. In the chart, the steps in the line occur where information enters a region and encounters its spatial pooler.

(*): by complexity I mean the number of “bits” or neurons necessary to describe a piece of information. A parallel could be drawn to the concept of entropy.

The second chart represents the complexity of information as it traverses the regions upwards in the hierarchy, measured on all dimension(s) other than that of the initial sensory input.
Continuing the previous example, whereas the first region receiving sensory input recognized hundreds of lines and a few patches of color, a region further up in the hierarchy might represent “a red ball”. Whereas the quantity / complexity of visual information decreased, the quantity / complexity of contextual information (meaning) increased. The piece of information “a hundred lines and a few patches of color” does not contain (or better, does not lead to) information about how I can interact with the object or what it means for me; conversely, “a red ball” is much more informative on the dimension of meaning.
Each time information passes through a functional pooler, its complexity* on the dimension of meaning is increased. In the chart, the steps in the line occur where information encounters a functional pooler.
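If it helps, the two charts can be read as tracking two counters per level. The numbers below are entirely made up; only the direction of the two curves is meaningful.

```python
# Made-up numbers: only the direction of the two curves is meaningful.
input_complexity = 1000    # "bits" describing information on the input (e.g. visual) dimension
meaning_complexity = 0     # "bits" describing information on the other (meaning) dimensions

for level in range(1, 5):
    input_complexity //= 4      # each spatial pooler compresses on the input dimension
    meaning_complexity += 50    # each functional pooler expands on the meaning dimensions
    print(f"region {level}: input-dimension {input_complexity:4d}, meaning-dimension {meaning_complexity:4d}")
```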

5 Likes

Interesting. This seems comparable to dimensionality reduction in convolutional neural networks. As pooling (e.g. max pooling) operations reduce dimensions, broader features (e.g. lines → shapes) are ascertained.

The crux is of course learning concepts such as the red ball to begin with. CNNs and such supervised learning algorithms require massive quantities of training samples and labels to do this, so I imagine your functional pooler solution is no different in that sense?

I am currently working on a full prototype that can be trained on a broad dataset.

I want to point out that dimensionality reduction is not the main point by itself.
The point is dimensionality reduction alternated with dimensionality increase along another (preferably orthogonal) dimension. This is what allows meaning to emerge.

Based on my understanding, a system which only reduces dimensions would be able to perform well on a classification task or an anomaly detection task, but would perform poorly on proactive tasks (especially those in which meaning cannot be ascertained by a reduction followed by a lookup to a memory address but has to be inferred in other ways).

1 Like