How are higher concepts represented in 1000 brains theory?

I understand the Thousand Brains Theory fully. I get how macrocolumns build a model of their sensory input by receiving not only the sensory input of their receptive field, but also a copy of a reference frame that gives that sensory input an identity. Obviously, a macrocolumn can integrate temporally and spatially to build precise models of its inputs. Through propagation of these models and whole-cortex voting, the brain not only solves the binding problem but creates unity by accepting a vote within each mental frame. I also understand that, given the locations of these models and their sensory input, you can build a model of yourself (reference frames are your own motor movements and your tactile sensing) or of objects (reference frames are locations on objects, via grid-cell-like machinery in each column, or maybe somewhere else).
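As a toy illustration of the voting idea (my own simplification, not Numenta's actual implementation): imagine each column holds a probability distribution over possible object identities given only its local sensory patch, and "voting" combines those beliefs so that only hypotheses consistent with every column survive.

```python
# Toy sketch of cross-column voting (my simplification, not Numenta code):
# each "column" reports a belief over candidate objects; voting is modeled
# here as an elementwise product of beliefs, renormalized.
import numpy as np

def vote(column_beliefs):
    """Combine per-column beliefs by elementwise product, then renormalize."""
    combined = np.prod(np.stack(column_beliefs), axis=0)
    return combined / combined.sum()

# Three columns, each uncertain on its own (candidate objects: mug, ball, box)
c1 = np.array([0.5, 0.3, 0.2])
c2 = np.array([0.4, 0.1, 0.5])
c3 = np.array([0.6, 0.2, 0.2])
consensus = vote([c1, c2, c3])
print(consensus.argmax())  # -> 0: "mug" is the only hypothesis all columns support
```

No single column is confident, but the product sharpens the shared hypothesis, which is roughly the intuition behind voting resolving ambiguity across columns.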

The part I'm really curious about is what Jeff thinks about how these high-resolution models transition to much higher concepts and abstract ideas. Of course, one can say hierarchy (tiny models and their relationships building up on their way to the PFC, forming higher-order thinking), but every time I truly start to think about higher-order thought, I hit a block on how the mechanism might work.

I’d love to know Jeff’s ideas about how exactly this transition happens. His book provides some quick thoughts, but he never speculates deeply enough. I’d love to know his thinking about the exact mechanism of this transition, even if it’s not a fully formed theory yet, or still at the wild-conjecture stage.


I would like that too.
BTW, you can look up “image schemas” and metaphors… there are clear connections to the sensorimotor loop… the question is how.

I have some thoughts on this topic.

Basically, the cortex is really useful, and I think the brain routes all sorts of non-sensory data into the frontal cortex, because it’s useful to build models of things.

There is a part of the brain that does reinforcement learning (RL): the basal ganglia (BG). Now I’m going to assume you know what RL is and why it’s useful.

The basal ganglia, in the process of implementing an RL algorithm, generates a huge amount of internal data.

  • This is information which is highly relevant to the RL algorithm and which does not exist outside of the BG.
  • The BG contains information about both the concrete world of your senses and your subjective assessment of its value.

My hypothesis is that the frontal cortex is processing the BG’s internal data, as feed forward input.

  • I think that the frontal cortex is building a model of how the world works, except that unlike the model in the sensory cortex, this model only contains things that are relevant to your behavior, because its inputs have been filtered for behavioral relevance by the BG.
  • The frontal cortex and BG are part of a positive feedback loop.
  • To answer your question: Abstract ideas show up in the frontal cortex.
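To make the “internal data” point concrete, here is a minimal tabular TD(0) sketch (my own illustration, not a basal ganglia model from the literature): an RL learner generates quantities like value estimates and TD errors that do not exist in the raw sensory stream, and which could in principle be forwarded to another system as input.

```python
# Minimal tabular TD(0) on a 5-state chain (illustrative only, not a BG
# model): the learner produces internal signals -- per-state value
# estimates and a TD error per transition -- absent from raw sensory data.
values = {s: 0.0 for s in range(5)}   # value estimate per state
alpha, gamma = 0.1, 0.9               # learning rate, discount factor

def td_step(s, s_next, reward):
    """One TD(0) update; returns the TD error (an 'internal' signal)."""
    td_error = reward + gamma * values[s_next] - values[s]
    values[s] += alpha * td_error
    return td_error

# Walk the chain repeatedly; only the final transition pays reward 1.0
for _ in range(200):
    for s in range(4):
        r = 1.0 if s == 3 else 0.0
        td_step(s, s + 1, r)

print(values[3] > values[0])  # -> True: states near reward are valued higher
```

The `values` table and the returned `td_error` are exactly the kind of behaviorally filtered, subjective-value-laden signals the hypothesis imagines being routed onward.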

I heard it openly stated that HTM/TBT has NOT yet figured out the “hierarchy” part (and neither has anyone else on this planet), even though “hierarchy” was strongly emphasized in Jeff Hawkins’ 2004 book “On Intelligence”.

It might be the case that nobody on this planet has figured out how to represent “symbols” (like digits, or letters) in a connectionist neural net, and to establish feasible mechanisms from there up to abstract concept processing, which seems to be what you (and I, and many other curious souls, I believe) are curious about.

Here is a wild guess: Numenta’s research agenda has been kind of kidnapped/hijacked by the explosive success of Deep Learning, so much so that it was pressured to adjust its research roadmap to focus on a large-scale numerical style of pattern detection/processing, instead of exploring abstract thinking (which has to involve symbolic representation… which may never have been on the agenda anyway).

People tend to forget a simple fact: a human mind has never been very good at large-scale numerical processing (try mentally multiplying two 6-digit integers); instead, its specialty has always been abstract thinking, generalization, logical reasoning, intuition, creativity… The mechanism? We don’t know, even though it runs above our shoulders every waking hour.

> kidnapped/hijacked by the explosive success of Deep Learning, so much so that it was pressured to adjust its research roadmap to focus on large scale numerical processing style of pattern detection/processing, instead of exploring abstract thinking

As far as I can tell, this is an adjacent stream of work within Numenta that they hired for, not a replacement of the core research goal of understanding cortical function and building a deliberately “biologically constrained” intelligence algorithm.