Does Numenta have model or implementation for thinking?

Does Numenta have models on how thinking or thought works within the brain or even better, an implementation for that model?

I know about the "Thousand Brains Theory of Intelligence" - https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/

which talks about structures that could support thought/thinking: each cortical column builds a model of the world based on sensory input, and connections in the cortex could allow columns to work together to identify objects, etc.
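(A minimal sketch to make the "columns working together" idea concrete, not Numenta's code: suppose each column keeps a set of objects consistent with the features it has sensed so far, and "voting" is just the intersection of those sets. The object names and column contents below are invented for illustration.)

```python
# Illustrative sketch only (not Numenta's implementation): each cortical
# column tracks the objects that remain consistent with its sensory input,
# and voting across columns narrows that set to a consensus.

# Hypothetical per-column candidate sets after sensing a few features.
column_candidates = [
    {"coffee_mug", "soda_can", "red_shoe"},   # column A
    {"coffee_mug", "red_shoe"},               # column B
    {"coffee_mug"},                           # column C
]

def vote(candidates):
    """Intersect the candidate sets; the shared survivors are the consensus."""
    return set.intersection(*candidates)

print(vote(column_candidates))  # {'coffee_mug'}
```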

This is the definition of thinking from Wikipedia -
‘Thinking allows humans to make sense of, interpret, represent or model the world they experience, and to make predictions about that world’

Does Numenta have any implementations for this process (thinking)? It makes me wonder too: how can neurons be wired to store a sequence in time, or to store a thought like "the shoe is red"?

Maybe in the "shoe is red" case, there are two cortical columns: one for shoe, one for red. When the thought is formed, a link is created between the two cortical columns.
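To make that idea concrete, here is a toy sketch (my own illustration, not an HTM or Numenta implementation): each concept is represented as a sparse distributed representation (SDR), i.e. a small set of active bits, and "forming the thought" is modeled as Hebbian-style links from the active bits of the "shoe" representation to those of the "red" representation, so that cueing one recalls the other. The sizes and names are arbitrary.

```python
import random

N = 2048        # bits in each (hypothetical) column representation
ACTIVE = 40     # active bits per SDR

def random_sdr(seed):
    """A toy SDR: a small random set of active bit indices."""
    rng = random.Random(seed)
    return set(rng.sample(range(N), ACTIVE))

shoe = random_sdr(1)   # representation in the "shoe" column
red = random_sdr(2)    # representation in the "red" column

# "Forming the thought": link every active shoe bit to the red SDR.
links = {bit: set(red) for bit in shoe}

def recall(cue, links):
    """Activate whatever the cue's bits are linked to, counting the votes."""
    votes = {}
    for bit in cue:
        for target in links.get(bit, ()):
            votes[target] = votes.get(target, 0) + 1
    return votes

# Cueing "shoe" brings back exactly the "red" representation.
print(set(recall(shoe, links)) == red)  # True
```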


Thinking is a very generic and confusing term, I think :wink:. But let’s use the Wikipedia definition to continue this discussion.

Thinking allows humans to make sense of, interpret, represent or model the world they experience, and to make predictions about that world

In our theory, the only way to accomplish this is through movement. All our papers and models so far have made some basic assumptions about movement in order to test our theories of object representation storage and retrieval. So we do some hand-waving and assume that movement can be somehow represented in a semantic vector.

Assuming that thinking involves some kind of movement through a mental space, no, we do not have an "implementation of thinking". We do, however, have a biologically inspired theory of sequence memory upon which rich representations can be built through movement. How does the movement work? How is it actually represented? How does the cortex generate movement output? These are open questions.
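As a purely illustrative aside (this is not HTM Temporal Memory, which uses per-cell distal context rather than a first-order table), the "sequence memory plus prediction" idea can be sketched as learning transitions between successive inputs and predicting likely successors. The "movement" tokens below are hypothetical.

```python
from collections import defaultdict

class ToySequenceMemory:
    """First-order sketch of learn-transitions / predict-next."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))

    def learn(self, sequence):
        # Count observed transitions between consecutive elements.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, element):
        # Return possible next elements, most frequent first.
        followers = self.transitions[element]
        return sorted(followers, key=followers.get, reverse=True)

# Hypothetical "movements" over an object, in the order they were sensed.
memory = ToySequenceMemory()
memory.learn(["touch_toe", "touch_heel", "touch_laces", "touch_toe"])
print(memory.predict("touch_toe"))  # ['touch_heel']
```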


No, I don’t think so.
This is such a big question. Numenta is going after understanding the brain at a small scale: modeling the firing of neurons, studying how signals propagate, and building biologically accurate mappings of the cortical layers. This is an enormously challenging problem in its own right, and one that other research organizations are pursuing as well.
There are many other stages even if this one were completely understood. Thinking is such a complicated activity, and there are different types of thinking. For example, suppose you had an axiomatic-level formulation of quantum physics and knew it was complete. Maybe you could derive some new formulas and understanding. I don't know whether that would be sufficient to understand and formulate mathematics, which is far more open, in that it doesn't have to model our reality. I think philosophy is even more difficult, where you have to model Kant's "Pure Reason", adjust the formulation for mistakes and problems, and ask yourself whether that is going to help you. Then come the problems of consciousness. Even all of this would not be enough to understand the world at a higher scale. I watched a video where someone pointed out that QFT does not directly model "tigers". It could, but "tigers" are so specific to our perception. Do you just add "tigers, bowling balls, and WWII history" as observations of reality, or do you try to ground them in some academic framework in which they are just one subset?
Before I ran across Numenta, I was looking at Goertzel's OpenCog project and asked the same question: how to model thinking. I don't think even OpenCog does. This is a hard problem; it really doesn't formulate higher-level concepts.

Thought is autological: it can only be defined in terms of itself. If machines are ever to truly think the way humans do, that thinking will have to arise spontaneously from the parts they are built with. In attempting to understand what thought is, one has to move from dwelling on past thoughts, to dwelling on recent thoughts, to dwelling on the present thought, and that is where everything breaks down. Anything that deals with understanding the "moment" (consciousness, thinking, self-awareness, etc.) we will never figure out, because it is the fabric of what we are, and indeed of the whole universe, which exists within each one of us.

Having said this, I do hope and believe we will be able to understand the components of thought.

This is not the official Numenta position but I have been promoting this model for a while now:

The general model is here and in the following thread:


Zooming in on the core operation experienced as consciousness:

And a little bit of musing on the evolution of the contents of consciousness that you may think of as thinking:

The model does use HTM cortical computation as the computing fabric in the cortex.