Thank you for the replies all.
Apologies, I do not know much of the biology yet.
Yes, my understanding is also that the neocortex is not involved in goal setting. The neocortex is a learning and prediction machine.
Inputs to neocortex:
- current state of world model and physical position in it
- current sensory input
- biological needs input (hungry, tired, thirsty)
- current state of safety / danger based on combination of current world model and current sensory input
- current state of social model and one's position in the social hierarchy (emotional input)
and then
- a current goal, expressed as increasing or decreasing the currently prioritised metric (get food, get beer, impress the boss…) - is this where being conscious comes in?
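The input bundle above could be sketched as a simple data structure. This is purely illustrative: every field name here is my own invention, not an established HTM or neuroscience interface.

```python
from dataclasses import dataclass

# Hypothetical bundle of the inputs listed above; names are invented
# for illustration, not drawn from any real HTM API.
@dataclass
class CortexInputs:
    world_model_state: dict   # current world model and physical position in it
    sensory_input: list       # current sensory input
    biological_needs: dict    # e.g. {"hunger": 0.7, "thirst": 0.2, "fatigue": 0.4}
    danger_level: float       # safety/danger estimate from world model + senses
    social_state: dict        # social model and hierarchy position (emotional input)
    current_goal: str         # the currently prioritised metric to push
```

The point of writing it out this way is just that, to the neocortex, all of these are uniform inputs; which one currently matters is decided elsewhere.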
I see emotions as our evolved mechanism of co-opetition, which I jokingly summarise as: cooperation is a game for many, procreation is a game for two… so we must always maintain a fine balance between helping each other and looking out for number one. Envy, shame, disgust, love, hate…
The goal, of course, is to decrease danger, hunger, thirst, and tiredness and to increase social standing - in a balanced way. Consciousness, it feels to me, is the mechanism whereby the currently prioritised goal is addressed.
So there is a feedback loop: did recent actions push these values in the right direction with respect to the currently prioritised goal?
All that being said, as far as the neocortex is concerned all these are just inputs to its learning and prediction machine. If you somehow connected the neocortex to every traffic light in the world and a feedback loop for improving traffic flow, it would happily do that.
Beyond input patterns, predictions, and feedback, the neocortex does not know what it is learning and predicting. Hmm… a big statement, I guess: "we are just complex biological machines that learn patterns."
In terms of AGI and our being able to create smarter machines, my layman's view is that it's the neocortex that is important. The rest of the brain is specific to the human condition and human needs - to survive, procreate, etc. What we need now, in terms of smarter machines, is a neocortex that can learn and predict what we want it to.
So in terms of agent-based games and a neocortex/HTM approach to playing them well, could one wire up something like what I describe above, with the system choosing the right current goal, predicting and executing the next action, and learning from whether things got better or not?
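The loop I have in mind could be sketched roughly as follows. This is a toy, not an HTM implementation: I stand in for the neocortex with a simple learned value table, and the drive names, actions, and "environment" effects are all invented for illustration.

```python
import random

# Toy sketch of the goal -> predict -> act -> learn loop described above.
# A real system would replace the value table with an HTM-style predictor.
def choose_goal(drives):
    # Prioritise whichever drive is currently most pressing.
    return max(drives, key=drives.get)

def run_episode(drives, actions, steps=10, seed=0):
    rng = random.Random(seed)
    # Learned estimate of how much each action reduces each drive.
    value = {(g, a): 0.0 for g in drives for a in actions}
    for _ in range(steps):
        goal = choose_goal(drives)
        # Predict: pick the action believed to best reduce the goal drive,
        # with a little random exploration.
        action = max(actions, key=lambda a: value[(goal, a)] + rng.random() * 0.1)
        # Execute: a toy environment in which "eat" relieves hunger and
        # "hide" relieves danger; everything else does nothing.
        effect = 0.3 if (goal, action) in {("hunger", "eat"), ("danger", "hide")} else 0.0
        drives[goal] = max(0.0, drives[goal] - effect)
        # Learn from feedback: did the prioritised value move the right way?
        value[(goal, action)] += 0.5 * (effect - value[(goal, action)])
    return drives, value

drives, value = run_episode({"hunger": 0.9, "danger": 0.4}, ["eat", "hide", "wander"])
```

The prioritisation, prediction, and feedback steps are deliberately separated, since that mirrors the division of labour in the post: goal setting happens outside the learner, and the learner only sees inputs, predictions, and feedback.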
Apologies if I am rambling or rehashing stuff obvious to the community.