Survival, goal-based learning and HTM

In reading this forum and Jeff’s book, and watching rhyolight’s excellent video series, I do not think I came across views on goal-based action, learning and adaptation.

Did I just miss it?

When we think of intelligence in nature it is always coupled to using smarts to survive: navigate, find food, avoid being food etc. These are linked to goals and achieving goals.

Where can I go for info on how Numenta’s past and recent research links to achieving goals, and to adapting/learning in order to achieve goals?

If I wanted to go about creating agents in a survival game based on HTM and related research, agents with survival goals that can “see”, “hear”, eat, die and so on, where would one start gathering info?

Tx for feedback.
Lionel

I think envisioning AI as a program that deals with goals, decides on subgoals, and refines sub-subgoals is one of those ‘obvious’ intuitions we may have, and thus something AI has tried hard to model right from the beginning.

In ‘On Intelligence’, JH relates that, while searching for another definition, it came to him that ‘prediction’ was more probably the heart of the matter. HTM settles on that.

Linking the prediction viewpoint back to a goal-oriented design would, in my view, require both that:

  • HTM reaches a state where somewhat emergent phenomena arise on top of the core prediction engine
  • we have, around this, a model of the older parts of the brain, able to steer toward the things a living being would fundamentally want

(although the two together are maybe not something humanity should want to experiment with too early…)


My take (not at all part of the HTM canon) is that the cortex plays a very indirect role in goals and decisions.

I see the lower brain structures as filling these functions.

The cortex monitors the actions of these older brain structures (back and sides of the cortex) and influences which motor plan is selected (forebrain).

Because of this relationship, researchers have to do the work of framing the problems as training and test data, and of interpreting the outputs in some meaningful way.

You can take away the cortex and you get a lizard brain. You take away the lizard brain and you don’t have much of anything.

I was just chatting this morning with @gmirey about this exact issue and the chasm that has to be bridged to transform a collection of HTM modules into a stand-alone functioning system. It’s a big one.

It is not a trivial problem, and at this time I don’t see a lot of attention in this area.


There are two main systems. The first deals with energy
management, pain and damage management, and pattern finding.

The second is copying surviving adults’ or parents’ behavior, which is found through the “finding patterns” part of the first reward system.
This is imprinted onto the mind automatically, whether you want it or not.
When a human becomes an adult, an anomaly detector is activated: when
the person is not behaving like a clone of a parent, the anomaly detector fires and an
anti-reward is generated.

An SDR learns to detect sub-features first, then objects made from those sub-features.
Skipping to the end, complex temporal pattern detectors come into existence,
constructed from all of the smaller, repurposed detectors. Mom and Dad patterns will be there.

Sometimes I wonder whether L2/3/slender tufted L5 corticostriatal outputs are the real cortical motor outputs and L5 TT cells just modulate things or serve some less action driving function. I think I read somewhere that corticostriatal outputs evolved first, but I’m not sure.

Motor cortex projects directly to the spinal cord, but the sensory input to motor cortex is mostly motor. Primary auditory cortex L5 projects to the structure that provides its input via the thalamus, the inferior colliculus. Barrel cortex projects to the trigeminal sensory nuclei. So maybe motor cortex L5 thick-tufted is no different, fine-tuning inputs to motor cortex which just happen to come indirectly from motor sources (muscles).

Thank you for the replies all.

Apologies, I do not know much of the biology yet.

Yes, my understanding is also that the neocortex is not involved in goal setting. The neocortex is a learning and prediction machine.

Inputs to neocortex:

  • current state of world model and physical position in it
  • current sensory input
  • biological needs input (hungry, tired, thirsty)
  • current state of safety / danger based on combination of current world model and current sensory input
  • current state of social model and social hierarchy position in it - emotional input

and then

  • a current goal in terms of increasing / decreasing the current prioritised metric (get food, beer, impress boss…) - being conscious?

I see emotions as our evolved mechanism of co-opetition, which I jokingly summarise as: cooperation is a game for many, procreation is a game for two… so we must always maintain a fine balance between helping each other and looking out for number one. Envy, shame, disgust, love, hate…

The goal of course is to decrease danger, hunger, thirst, tiredness and increase social standing - in a balanced way. In terms of consciousness it feels to me like it is the mechanism whereby the current priority goal is addressed.

So there is a feedback loop on whether these values were pushed in the right direction by recent actions in terms of the current prioritised goal.

All that being said, as far as the neocortex is concerned all these are just inputs to its learning and prediction machine. If you somehow connected the neocortex to every traffic light in the world and a feedback loop for improving traffic flow, it would happily do that.

Beyond input patterns, predictions and feedback, the neocortex does not know what it is learning and predicting. Hmm… that is a big statement, I guess: “we are just complex biological machines that learn patterns.”

In terms of AGI and us being able to create smarter machines, my layman’s view is that it’s the neocortex that is important. The rest of the brain is specific to the human condition and humans needs - to survive, procreate etc. What we need now in terms of smarter machines is a neocortex that can learn and predict what we want it to.

So in terms of agent based games and a neocortex/HTM approach to playing them well, could one wire up something like I describe above, with the system choosing the right current goal, predicting and executing next action, and learning on whether things got better or not?
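
To make that concrete, here is roughly how I picture such a wiring as code. This is only a toy sketch under my own assumptions: the Drives, Predictor and simulate_world names are all made up for illustration, and the Predictor is just a stand-in for whatever HTM-style prediction model would actually sit there; none of this is Numenta code or API.

```python
import random
from dataclasses import dataclass


@dataclass
class Drives:
    """Biological-needs inputs; higher means more urgent."""
    hunger: float = 0.5
    thirst: float = 0.5
    danger: float = 0.1

    def most_urgent(self) -> str:
        # Goal selection: the prioritised metric is simply the largest drive.
        return max(vars(self), key=lambda name: getattr(self, name))


class Predictor:
    """Stand-in for a learned world model (HTM or otherwise).

    It estimates how much each action will reduce each drive, and is
    nudged by feedback after every step.
    """

    def __init__(self, actions, drives):
        self.estimates = {a: {d: 0.0 for d in drives} for a in actions}

    def predict(self, action: str, drive: str) -> float:
        return self.estimates[action][drive]

    def learn(self, action: str, drive: str, observed_change: float, lr: float = 0.2):
        # Feedback loop: did the last action push the prioritised metric
        # in the right direction? Move the estimate toward what happened.
        current = self.estimates[action][drive]
        self.estimates[action][drive] = current + lr * (observed_change - current)


ACTIONS = ["seek_food", "seek_water", "flee", "rest"]


def simulate_world(drives: Drives, action: str):
    """Stand-in for the survival game; made-up dynamics, for illustration only."""
    drives.hunger = min(1.0, drives.hunger + 0.05)
    drives.thirst = min(1.0, drives.thirst + 0.05)
    if action == "seek_food":
        drives.hunger = max(0.0, drives.hunger - 0.3)
    elif action == "seek_water":
        drives.thirst = max(0.0, drives.thirst - 0.3)
    elif action == "flee":
        drives.danger = max(0.0, drives.danger - 0.2)


def step(drives: Drives, model: Predictor):
    goal = drives.most_urgent()  # choose the right current goal
    # Pick the action predicted to reduce that drive the most,
    # with a little exploration so the estimates keep improving.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: model.predict(a, goal))
    before = getattr(drives, goal)
    simulate_world(drives, action)                 # the environment responds
    improvement = before - getattr(drives, goal)   # positive = things got better
    model.learn(action, goal, improvement)         # learn whether it helped
    return goal, action, improvement


if __name__ == "__main__":
    drives = Drives()
    model = Predictor(ACTIONS, vars(drives))
    for t in range(20):
        print(t, *step(drives, model))
```

The point of the sketch is only the shape of the loop: pick the currently prioritised drive as the goal, pick the action the model predicts will help most, then feed back whether the prioritised metric actually moved in the right direction.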

Apologies if I am rambling or rehashing stuff obvious to the community.


I would extend what you are saying with companion structures that feed stuff into parts of the cortex (encoders) and read related states out (motor drive in the prefrontal cortex).

Note that in babies there are phases of babbling (and the same thing in motor movements) where random pattern explorations fed into the motor drives are paired with the related sensory inputs. In fact, there are massive feedback loops between body sensors and motor drivers that are trained at this point.


Starting from a baby and taking baby steps, I am in much agreement with that.
In my AGI model, the first thing it does is aggressively forecast the world around it from a passive, fetal position, using very little motor movement. This is the same as building a large temporal database and then looking within that database for repeating patterns.
This is where the AGI would get its first reward, which would be embedded in the pattern as metadata: a memory made of dead silicon.
An anomaly detector would be used here; when all patterns are found,
the detector stops activating. Then it is time to move on to the next phase
of using motors to improve the found pattern loops.
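
Just to make the “detector stops activating” part concrete, here is a toy sketch of how I mean it. Everything here is made up for illustration (the passive_phase function, the symbol stream, the threshold); a real system would presumably use an HTM temporal memory’s anomaly score rather than this simple transition-count stand-in.

```python
from collections import defaultdict


def passive_phase(stream, window=50, threshold=0.05):
    """Watch a sensory stream, learn which transitions repeat, and report
    when anomalies become rare enough to move on to the motor phase."""
    seen = defaultdict(set)   # previous symbol -> set of observed next symbols
    recent = []               # sliding window of anomaly flags
    prev = None
    for t, symbol in enumerate(stream):
        anomalous = prev is not None and symbol not in seen[prev]
        if prev is not None:
            seen[prev].add(symbol)      # grow the "temporal database"
        recent.append(anomalous)
        recent = recent[-window:]
        if len(recent) == window and sum(recent) / window < threshold:
            return t                    # patterns found: time for the next phase
        prev = symbol
    return None


if __name__ == "__main__":
    # A repeating "world" with a little noise at the start.
    stream = ["a", "x", "b"] + ["a", "b", "c", "d"] * 100
    print("switch to motor phase at step:", passive_phase(stream))
```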

In the wetware, the real brain, living neurons hold the pattern loops, such as getting
up from the chair, walking over to the fridge, getting a beer, and walking back
to the chair. A bunch of living neurons holds this pattern, and a single neuron holds
a fragment of the pattern. Before this pattern was first learned, the neuron was an unpurposed neuron waiting to be deleted, a third wheel in a collective swarm of cells
always bickering over resources.
So this living cell has a reason to exist, causing reward
chemicals to be generated: a reinforcement celebration for this living cell, which can look
inside itself and see that it holds a fragment of a conscious program.

A synopsis of my complete AGI model based on my complete model of human
psychology:

https://groups.google.com/forum/#!topic/artificial-general-intelligence/0rHVcqNoFG8

They are learning about space itself through motion and sensation. It is amazing.
