Does HTM have an explanation for chunking, magnification, and remapping in the cortex? If so, how do these behaviors come about?

There are three phenomena in the brain that fascinate me: chunking, magnification, and cortical remapping. I’m not sure whether all of them can be ascribed to activity within the neocortex, but from my limited understanding I believe they can.

How does HTM explain them?


I’m still new myself, but I believe chunking can be explained within HTM. Thoughts and inputs are represented as SDRs moving up and down the hierarchy. If a given collection of concepts is presented simultaneously often enough, a subset of minicolumns (neurons) becomes sensitive to that combination, and thereafter those neurons represent the new chunk.
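As a toy sketch of that idea (my own illustration, not Numenta’s actual Spatial Pooler; the SDR width, cell count, permanence threshold, and learning rates below are all made-up values), a pool of cells repeatedly exposed to the union of several concept SDRs can strengthen synapses to exactly those bits, so a stable subset of cells comes to stand for the combination:

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT = 2048      # width of the input SDR (assumed value)
N_CELLS = 512       # pool of cells that could come to represent the chunk (assumed)
SPARSITY = 0.02     # ~2% of bits active per concept SDR (assumed)

def random_sdr():
    """Random sparse binary vector standing in for one concept's SDR."""
    sdr = np.zeros(N_INPUT, dtype=bool)
    sdr[rng.choice(N_INPUT, int(N_INPUT * SPARSITY), replace=False)] = True
    return sdr

# Three concepts that keep being presented at the same time.
concepts = [random_sdr() for _ in range(3)]
combined = np.logical_or.reduce(concepts)   # their simultaneous presentation

# Every cell holds a permanence value toward every input bit.
permanences = rng.uniform(0.0, 0.3, size=(N_CELLS, N_INPUT))
CONNECTED = 0.5                 # permanence needed for a connected synapse (assumed)
LEARN_INC, LEARN_DEC = 0.05, 0.01

def active_cells(input_sdr, k=20):
    """Return the k cells whose connected synapses overlap the input most."""
    overlaps = ((permanences >= CONNECTED) & input_sdr).sum(axis=1)
    return np.argsort(overlaps)[-k:]

# Hebbian-style learning: the winning cells strengthen synapses to the
# combined pattern's active bits and weaken synapses to everything else.
for _ in range(50):                          # repeated co-presentation
    winners = active_cells(combined)
    on_bits = np.where(combined)[0]
    off_bits = np.where(~combined)[0]
    permanences[np.ix_(winners, on_bits)] += LEARN_INC
    permanences[np.ix_(winners, off_bits)] -= LEARN_DEC
    np.clip(permanences, 0.0, 1.0, out=permanences)

# The same small, stable set of cells now wins whenever the combination
# appears, so that subset effectively represents the chunk.
print(sorted(active_cells(combined)))
```

After enough repeats, presenting the combination always activates the same small set of cells, which is the sense in which those cells would “represent the new chunk.”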


I can’t answer most of your question, but I will tackle the chunking part. It is an emergent property of system-level operation.

The current “contents of consciousness” is the collection of interlocking maps in the brain, each holding some part of the parsed here-and-now: a basket of interlocking features that combine to describe some unique collection that we take as a chunk. You have two halves of your brain that are mostly duplicates of each other, and it does not take the entire brain to form a content. Humans seem to be stuck at about four or so simultaneous patterns before they interfere with each other. I think this is related to how we process tuples in the loop of consciousness.
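One way to get a feel for patterns interfering when several are held at once (purely a toy SDR calculation with invented sizes and match threshold, not a claim about where the brain’s limit of about four actually comes from): as you superpose more sparse patterns into a single union, the union becomes dense enough that unrelated patterns start to match it by accident.

```python
import numpy as np

rng = np.random.default_rng(1)
N, ACTIVE = 256, 20      # SDR width and active bits per pattern (invented sizes)
MATCH = 10               # overlap needed to call two patterns a match (invented)

def sdr():
    """Random sparse pattern."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, ACTIVE, replace=False)] = True
    return v

for k in (1, 2, 4, 8, 16):
    # Hold k patterns "at once" by superposing them into one union,
    # then test how often a completely unrelated pattern matches it.
    union = np.logical_or.reduce([sdr() for _ in range(k)])
    false_hits = sum(int((sdr() & union).sum() >= MATCH) for _ in range(10_000))
    print(f"{k:2d} patterns -> union density {union.mean():5.1%}, "
          f"accidental matches {false_hits:5d} / 10000")
```

With these invented numbers the accidental-match rate is negligible for one or two superposed patterns and climbs rapidly somewhere past a handful, which is the flavour of the capacity limit described above.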

I take it as a matter of faith that the brain processes tuples serially.

(Tuple = object-relationship-object)

Thing one is recalled or perceived, and the loop of consciousness projects thing two, together with some relationship, to be perceived in turn, evolving on to thing three.
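To make that concrete (a toy data structure only; the relation names and the rule that the current tuple’s object cues the next recall are invented for illustration), a serial loop could walk a chain of object-relationship-object triples one at a time:

```python
from collections import namedtuple

# A tuple in the sense used above: object-relationship-object.
Link = namedtuple("Link", ["subject", "relation", "obj"])

# A tiny invented store of learned associations.
memory = [
    Link("kettle", "heats", "water"),
    Link("water", "brews", "tea"),
    Link("tea", "fills", "cup"),
]

def loop_of_consciousness(start, steps=3):
    """Serially recall one tuple at a time: the object of the current
    tuple becomes the cue for the next recall."""
    current = start
    for _ in range(steps):
        link = next((l for l in memory if l.subject == current), None)
        if link is None:
            break
        print(f"{link.subject} --{link.relation}--> {link.obj}")
        current = link.obj   # thing two becomes the cue for thing three

loop_of_consciousness("kettle")
```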

See “loop of consciousness”:

See “contents of consciousness”:

and the relation of the “contents of consciousness” and long term memory:


What is the logic of learning exceptions? I ask myself. (See the sketch after this list.)
1/ Exceptions are rarer than the more general thing. By chance you will generally learn them later. You capture the conditional probabilities in a crude way.
2/ There is a process of inclusion and exclusion going on. You learn an exception but it is likely too broad and you must learn what to exclude from that exception.
3/ You get a list of alternative answers by following the chain of exceptions. You don’t have to fully accept what was seen in the data at the most extreme exception.
4/ You may choose to add an exception only after it has been encountered a number of times, like building a synapse only after a number of repeat events.
5/ You can extract sequences of exceptions as trigger patterns that you can perhaps do further learning from, as in going from letter representations to word representations.
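Here is the promised sketch, covering points 1 through 4 (my own toy structure, not HTM machinery; the repeat threshold and the pluralisation example are invented): a general rule carries a chain of more specific exceptions, each exception can carry its own exceptions, a new exception is only attached after its case has been seen a few times, and walking the chain gives the list of alternative answers.

```python
from collections import Counter

REPEATS_NEEDED = 3   # point 4: attach an exception only after repeat encounters (invented)

class Rule:
    """A prediction plus a chain of more specific exceptions (points 2 and 3)."""
    def __init__(self, condition, answer):
        self.condition = condition      # predicate over the input
        self.answer = answer
        self.exceptions = []            # filled in later, since exceptions are rarer (point 1)

    def answers(self, x):
        """Follow the chain of exceptions that apply to x, most general first,
        so the caller can see all the alternative answers (point 3)."""
        chain = [self.answer]
        for exc in self.exceptions:
            if exc.condition(x):
                chain.extend(exc.answers(x))
        return chain

    def predict(self, x):
        return self.answers(x)[-1]      # the most specific applicable answer

class ExceptionLearner:
    def __init__(self, root):
        self.root = root
        self.pending = Counter()        # mistakes seen but not yet promoted

    def observe(self, x, truth, condition):
        """When the current rules get x wrong often enough, attach a new
        exception that covers cases like x (points 2 and 4)."""
        if self.root.predict(x) == truth:
            return
        key = (truth, condition.__name__)
        self.pending[key] += 1
        if self.pending[key] >= REPEATS_NEEDED:
            self.root.exceptions.append(Rule(condition, truth))

# Invented toy domain: forming English plurals.
def ends_in_y(word):
    return word.endswith("y")

root = Rule(lambda w: True, answer="add -s")          # the general rule
learner = ExceptionLearner(root)

for w in ["party", "city", "berry"]:                  # the rarer case shows up later
    learner.observe(w, truth="drop -y, add -ies", condition=ends_in_y)

print(root.predict("dog"))      # add -s              (general rule)
print(root.predict("lady"))     # drop -y, add -ies   (learned exception)
print(root.answers("lady"))     # ['add -s', 'drop -y, add -ies']  alternatives along the chain
```

Point 5 would then sit on top of this: the particular sequence of exceptions that fires for an input is itself a pattern, and that pattern could feed a further round of learning, as in going from letter representations to word representations.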