What is the Basic Cortical Column Computation?

I struggle to pin down exactly what it is that a macro-column is doing; there is a lot going on here.

  • Pattern matching (by pattern recall)
  • Novelty detection (by pattern recall mismatch)
  • Pattern learning (by novelty triggering learning)
  • Anomaly signaling (by sequence mismatch mechanism; see the sketch after these lists)
  • Sequence learning (by anomaly signal triggering learning)
  • Prediction (by sequence learning)
  • “Memory” (by the collective action of these mechanisms?)

So we add these ensemble macro behaviors:

  • Classifier (by pattern matching)
  • Filtering (by pattern matching / inhibition of competing patterns)
  • Clustering (by lateral binding)
  • “Labeling” (by position-coding)
  • And possibly pattern-to-pattern matching (between different layers), which may be considered mapping through a higher-dimensional manifold space
  • Transfer / consolidation (by spike-timing comparison between the entorhinal cortex/hippocampus (EC/HC) and connected cortex during spindle waves)
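To make the first list concrete, here is a minimal sketch in Python, treating SDRs simply as sets of active column indices. The function names and thresholds are illustrative assumptions, not any particular HTM implementation; the anomaly score is just the fraction of active columns that were not predicted.

```python
# Toy sketch of pattern matching, novelty detection, and anomaly signaling
# on SDRs represented as sets of active column indices.
# Names and thresholds are illustrative assumptions, not HTM/htm.core APIs.

def overlap(sdr_a: set, sdr_b: set) -> int:
    """Pattern matching: how many active columns two SDRs share."""
    return len(sdr_a & sdr_b)

def is_novel(observed: set, stored_patterns: list, match_threshold: int = 10) -> bool:
    """Novelty detection: no stored pattern overlaps the input well enough."""
    return all(overlap(observed, p) < match_threshold for p in stored_patterns)

def anomaly_score(active: set, predicted: set) -> float:
    """Anomaly signaling: fraction of active columns that the sequence memory
    did not predict (0.0 = fully expected, 1.0 = completely surprising)."""
    if not active:
        return 0.0
    return 1.0 - len(active & predicted) / len(active)

# Example: the sequence memory predicted columns {3, 7, 12, 19}, but the input
# activated {3, 7, 25, 31}; half of the activity was unanticipated.
print(anomaly_score({3, 7, 25, 31}, {3, 7, 12, 19}))   # 0.5
```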

You’ve used the word “filter” in the past, and I think that can also be a fitting description of what the circuit is doing at times. Matching is like filtering. I think there is pattern matching going on in different ways in different layers. Some layers could be representing different unique spaces, which opens up a lot of possibilities.


My 2 cents. When treating the wave of synaptic activity as a simple computing unit, one might say that the macrocolumns organize themselves to represent a pattern (the active columns). Sometimes my intuition tells me that the SP is a classifier already, using groups of columns as labels.
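For intuition, here is a toy sketch of that idea, assuming the SP output is available as a set of active column indices (the sp_output argument is a stand-in I made up, not a real SP API): store labeled column sets and classify new inputs by largest overlap.

```python
# Toy sketch of "the SP as a classifier": treat the set of active columns
# produced by a spatial pooler as a code that can carry a label, and classify
# new inputs by SDR overlap. The SP itself is not implemented here; `sp_output`
# is a stand-in for its active-column output.

class OverlapClassifier:
    def __init__(self):
        self.prototypes = []                      # list of (active_column_set, label)

    def learn(self, sp_output: frozenset, label: str) -> None:
        self.prototypes.append((sp_output, label))

    def classify(self, sp_output: frozenset) -> str:
        # The label whose stored column set overlaps the input the most wins.
        best_label, best_overlap = None, -1
        for columns, label in self.prototypes:
            ov = len(columns & sp_output)
            if ov > best_overlap:
                best_label, best_overlap = label, ov
        return best_label

clf = OverlapClassifier()
clf.learn(frozenset({2, 9, 14, 40, 63}), "A")
clf.learn(frozenset({5, 11, 27, 52, 70}), "B")
print(clf.classify(frozenset({2, 9, 14, 41, 70})))    # "A" (largest overlap)
```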


What they don’t seem to get in the deep learning community is that the biological brain is absolutely brim full of memory, in the form of synapses, trillions of the things. Of course it makes sense to combine memory with deep learning.
I have been thinking about all kinds of complicated gating for reading, writing, erasing memory, etc. However, in a massive memory system, are such complicated processes necessary? If you don’t want to store a memory, write it to the “wrong” place. If you don’t want to recall a memory, read from the “wrong” place. The pattern sensitivity of deep networks means that reading completely unexpected random patterns will produce a weak-to-null response. Do you ever really need to erase memory? Why should that be necessary? You can overwrite data at the same address if you need to.
The Linux file system has a null file (/dev/null) that you can read from or write to with null effect, which is a vaguely similar idea. I guess people in the deep learning community are thinking of memory as a limited resource, instead of understanding that it is a virtually unlimited resource with SSDs, and that memory technology still follows Moore’s Law, unlike CPUs/GPUs, which have already saturated at the top of the S-curve of exponential growth in a limited medium.
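As a rough illustration of the weak-to-null-response point, here is a toy correlation-matrix memory (the sizes and bipolar encoding are arbitrary assumptions): stored keys read back their values almost perfectly, a random never-stored key reads back essentially noise, and there is no gating or erase machinery anywhere.

```python
# Toy correlation-matrix (outer-product) memory: stored patterns read back
# strongly, while a random "unexpected" key reads back as noise.
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                        # dimensionality of the memory

def random_pattern():
    return rng.choice([-1.0, 1.0], size=N)      # bipolar code

keys   = [random_pattern() for _ in range(5)]
values = [random_pattern() for _ in range(5)]

# "Writing" is just summing outer products; no read/write/erase gating needed.
M = sum(np.outer(v, k) for k, v in zip(keys, values))

def read(key):
    return np.sign(M @ key)

# A stored key recalls its value almost perfectly...
print(np.mean(read(keys[0]) == values[0]))          # close to 1.0

# ...while a random, never-stored key gives an essentially uncorrelated response.
print(np.mean(read(random_pattern()) == values[0])) # close to 0.5 (chance)
```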


Instead of asking what the cortex is “doing”, I ask what its “output represents”:

I think that every area of the cortex outputs an SDR representation of:

  1. What it is seeing,
  2. How it is changing.

These are the two fundamental things you need to know in order to interact with the world around you. This hypothesis divides all information into two categories: “static” and “dynamic”. The upper layers of the cortex look for the static information, while the lower layers look for the dynamic information.
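Purely as an illustration of how that split might be read (not a claim about what cortical layers actually compute), here is a toy sketch that divides a short history of SDRs into the bits that persist (roughly “what it is seeing”) and the bits that just changed (roughly “how it is changing”).

```python
# Illustrative only: split a short history of SDRs (sets of active cell indices)
# into a "static" part (bits present in every recent frame) and a "dynamic" part
# (bits that differ between the last two frames).
from functools import reduce

def static_part(history: list) -> set:
    """Bits active in every recent frame: the stable, 'static' information."""
    return reduce(set.intersection, history)

def dynamic_part(history: list) -> set:
    """Bits that changed between the last two frames: the 'dynamic' information."""
    return history[-1] ^ history[-2]

frames = [{1, 4, 7, 9}, {1, 4, 7, 12}, {1, 4, 9, 12}]
print(sorted(static_part(frames)))    # [1, 4]
print(sorted(dynamic_part(frames)))   # [7, 9]
```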

Structure comes first. So what structure can provide that functionality?

A MODEL of objects and concepts:

  1. Reference frame: like in Relativity
  2. Metric: grid
  3. Memory: variable-order sequences of every interaction

Pattern matching (by pattern recall)
> partial sequence match
Novelty detection (by pattern recall mismatch)
> the next item in the sequence didn’t match
Pattern learning (by novelty triggering learning)
> a new variable-order subsequence: 12345..12345... 123789 (see the toy sketch below)
Anomaly signaling (by sequence mismatch mechanism)
Sequence learning (by anomaly signal triggering learning)
> TM (Temporal Memory)
Prediction (by sequence learning)
> variable-order sequences again
“Memory” (by the collective action of these mechanisms?)
> models; every model is in its own metric and reference frame
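As a toy stand-in for the variable-order sequence idea above (not TM’s actual cell-and-segment mechanics), here is a sketch that predicts from the longest previously seen context and raises an anomaly when the next item is not among the predictions; running it on 1234512345123789 flags the 789 tail as a new subsequence.

```python
# Toy variable-order sequence memory: predictions are looked up by the longest
# previously seen context, and a mismatch (novelty / anomaly) triggers learning
# of longer contexts. Illustrative only; class and method names are made up.
from collections import defaultdict

class VariableOrderMemory:
    def __init__(self, max_order: int = 6):
        self.table = defaultdict(set)   # context tuple -> set of observed next items
        self.max_order = max_order

    def predict(self, history: list) -> set:
        # Use the longest stored context that matches the recent history.
        for k in range(min(self.max_order, len(history)), 0, -1):
            ctx = tuple(history[-k:])
            if ctx in self.table:
                return self.table[ctx]
        return set()

    def learn(self, history: list, nxt: str) -> bool:
        anomalous = nxt not in self.predict(history)   # sequence mismatch
        # Store the transition under every context length up to max_order.
        for k in range(1, min(self.max_order, len(history)) + 1):
            self.table[tuple(history[-k:])].add(nxt)
        return anomalous

mem = VariableOrderMemory()
history = []
for symbol in "1234512345123789":
    if history and mem.learn(history, symbol):
        # Early anomalies are just initial learning; after that, only the
        # switch to the new 789 subsequence is flagged.
        print(f"anomaly before '{symbol}' given context {''.join(history[-5:])}")
    history.append(symbol)
```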