ML and Deep Learning to automatically create AGI?

I wholeheartedly agree!

Right, but I think it’s a lot easier to find it through developmental neuroscience. Such a unit must be genetically determined, rather than learned. That means even lab-grown brain organoids should have it, if they are allowed to develop long enough. You can’t test them on any specific task, but we can do structural pattern discovery. In an adult brain, it’s very hard to distinguish innate from acquired.

2 Likes

I think you actually have to remove pressure to evolve GI.

Here’s my thinking. It’s a bit rough at this stage, so please be gentle.

TLDR: Give it a generalized reward mechanism for resolving a problem, and a “motivation” to treat some environmental states as problems, e.g., threats, rivals, lack of food, lack of a mate.

Longer version:
It seems to me that we achieved human GI by learning to solve lots of Specific Intelligence problems. Then a few individuals, with plenty of time in which to not flee, not fight, not pursue food or sex, etc., became habituated to the evolutionarily provided positive-reinforcement mechanism that gives the Aha! reward when we resolve a query, answer a question, or solve a problem.

I think this signal probably arises from the mechanism for recognizing when we have achieved those basic threat/rival/food/sex needs, but I suspect evolution has optimized the original signal in the presence of lots of non-primally motivated time. Keeping motivated and active when all basic needs are satisfied can be difficult, at least for some people, so we see people pursuing adrenaline rushes, food over-satiety, political power, sex addiction, substance addiction, and even abstract learning. All of these goal-seeking behaviors must be physiologically motivated.

Becoming junkies for that reward signal, which must exist for us to know when the goal is achieved, created the motivation and the evolutionary reinforcement (because solving and avoiding more problems is evolutionarily advantageous) to apply goal-seeking, problem-solving behavior from specific contexts to as many contexts as possible: so many that we have come to speak of it as General Intelligence.

Some might ask how we get an AGI to recognize new problems to which to apply itself. Well, our human GI recognizes new problems to try to solve because the evolutionarily enhanced reward signal has optimized for that behavior. Presumably, a reward signal for problem-solving in an AGI could perform a similar role.
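To make the TLDR concrete, here’s a minimal sketch (in Python, with invented names; nothing here is a real API) of a generalized reward mechanism that pays the same Aha! signal for resolving any state the agent has been “motivated” to treat as a problem:

```python
# Hypothetical sketch of the proposal above: one generalized reward signal
# fires whenever any state the agent has tagged as a "problem" (threat,
# rival, lack of food, or an abstract query) transitions to "resolved".

class ProblemSolvingReward:
    def __init__(self, aha_strength=1.0):
        self.aha_strength = aha_strength
        self.open_problems = set()

    def flag_problem(self, state):
        """Motivation: treat an environmental state as a problem."""
        self.open_problems.add(state)

    def observe(self, resolved_states):
        """Return the Aha! reward for problems resolved this step."""
        solved = self.open_problems & set(resolved_states)
        self.open_problems -= solved
        # The same scalar fires regardless of the problem's domain:
        # that is what makes the mechanism general, not task-specific.
        return self.aha_strength * len(solved)

agent_reward = ProblemSolvingReward()
agent_reward.flag_problem("predator_nearby")
agent_reward.flag_problem("low_food")
print(agent_reward.observe(["low_food"]))  # 1.0
```

The point of the sketch is only that the reward is domain-blind; what counts as a “problem” is supplied separately, by the motivation side.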

1 Like

I believe this sums up the thrust of your assertions:

Thanks, but you seem to have missed what I’m saying.

My suggestion is that a way to get AGI is by implementing a human-similar reward for problem-solving. I do base my suggestion more or less on many of the needs Maslow offers, but I don’t require his hierarchy or his discussion of how to resolve conflicts among them, only that they are sometimes all, together-at-once, non-urgent. I don’t recall Maslow discussing an evolutionarily developed need for problem-solving, nor a reward signal for it.

1 Like

The levels can be weighted based on how many are satisfied. The rewards don’t have to be static.

In this post I suggest that the lizard brain has a collection of states and as these states are asserted the cortex is essentially reconfigured. The success of each state should be evaluated independently to cut the evolution time.

I could see this general mechanism being applied to the organization implied by Maslow’s pyramid.
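As a toy illustration of the weighting idea, assuming satisfaction scores in [0, 1] and made-up state and mode names, the lizard-brain state selection and cortex “reconfiguration” could be sketched as:

```python
# Illustrative sketch (all names hypothetical): a small set of lizard-brain
# states, each weighted by how unsatisfied its need is; the winning state
# "reconfigures" the cortex by selecting a processing mode, and each
# state's success can then be evaluated independently.

NEEDS = {"safety": 0.9, "food": 0.4, "social": 0.7}  # satisfaction in [0, 1]

def active_state(needs):
    # Weight = urgency = how far each need is from fully satisfied.
    urgency = {name: 1.0 - sat for name, sat in needs.items()}
    return max(urgency, key=urgency.get)

def reconfigure_cortex(state):
    # Stand-in for swapping the cortical processing regime.
    modes = {"safety": "vigilance", "food": "foraging", "social": "affiliation"}
    return modes[state]

state = active_state(NEEDS)
print(state, reconfigure_cortex(state))  # food foraging
```

Because the weights come from current satisfaction levels, the rewards are dynamic rather than static, as suggested above.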

1 Like

Right. There must be a reward system. In particular there must be a reward signal, dynamic of course, for problem-solving activity, whether it’s at the level of a unicellular organism activating its cilia to move away from a threat or of object recognition by a fish brain. That reward signal for achieving a goal must exist. Without it, the goal is never achieved; the problem is never solved.

That signal was selected for by evolution because solving problems promoted survival; because it is a reward, it eventually began to serve as a driver for some humans’ behavior. Presumably, those super-solvers had kids at a favorable rate, too.

Thus, the ancient End-of-Seeking signal from goal-seeking behavioral mechanisms (originally based on primal drives) evolves to become the explanation for how humans apply Specific Intelligence in new contexts, which to me sounds like General Intelligence.

I propose that AGI might emulate that reward signal.

(Neurological levels? Maslow’s levels? Again, I’m not interested in Maslow’s levels except when none of them are above the threshold of requiring a response more “urgent” than the at-that-moment-ongoing pursuit of the problem-solving goal.)

Edit: Just saw your edit. I’ll read it now.

More edit:
Nice writeup.

Where we seem to have not communicated is whether the increase of other drives is the only terminator for the presently active drive. I claim there’s an additional signal needed for the Aha! reward, and that it is the basis for the development of GI.

2 Likes

Example: the hunger drive, triggered by moderately low blood sugar when not engaged in a higher-priority activity such as fleeing a predator.

Find food task
Eat food task
Terminator conditions: gut full (leptin/ghrelin), or another condition rising above the blood-sugar trigger threshold. I think every drive has a terminal condition.

Each trigger condition could have variable thresholds. For example - very low food could overcome social drives. See: Les Mis!

Even social drives are still modulated by the lizard brain.

Curiosity serves to populate the goal map, and play - to populate the skills maps.

Oh - and that “ah ha” thing? Making the connection between drive and solution.
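The drive lifecycle above (trigger, task sequence, terminator, priority override) could be sketched like this, with placeholder thresholds rather than real physiology:

```python
# Minimal sketch of the hunger-drive lifecycle described above. The
# threshold value and signal names are invented placeholders, not biology.

def hunger_drive(blood_sugar, gut_full, fleeing_predator,
                 trigger_threshold=0.5):
    """Return the task the hunger drive selects, or None if inactive."""
    if fleeing_predator:                 # a higher-priority drive overrides
        return None
    if gut_full:                         # terminator condition (leptin/ghrelin)
        return None
    if blood_sugar < trigger_threshold:  # trigger condition, variable threshold
        return "find_food"               # followed by "eat_food" once found
    return None

print(hunger_drive(blood_sugar=0.3, gut_full=False, fleeing_predator=False))
# find_food
```

Lowering `trigger_threshold` (or raising the urgency of hunger relative to other drives) is where the “very low food could overcome social drives” case would come in.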

1 Like

I absolutely agree that subcortical structures are critical to motivation and behavior. I’m encouraged that you agree that every drive must have a termination signal.

I agree that there can be different satisfaction triggers for lizard-level drives. I believe at some level of development there is a recognition of how the pleasurable sensation that a full belly creates is at least a little like other pleasurable sensations, for example, safe sleeping quarters. When the organism is “idle”, it can pursue one of the behaviors on its pleasurable list.

I suggest that in humans there is a more synthetic signal, one that gives rise to the “Aha!” sensation in mental problem solving. This signal promoted the behavior we call curiosity.

I didn’t discuss it, but I believe the emergence of the Aha! signal is related to the need to process “mental” objects that emerge in social relationships, the most complex of which seem to be human. It’s not surprising (to me) that only humans might evolve the signal so highly.

My proposition only requires that signal and enough “down-time” from primal drives for the emergence of curiosity.

Curiosity expands the goal map greatly, but the goal map was populated before there was curiosity; before there were brains, organisms had goals.

Edit:
I see you’ve edited again. Could you maybe please start a new post for new information while we’re typing back and forth?

I also edited for typos.

1 Like

Yes. Evolve that, and you get curiosity which is pretty close to GI, right?

Sorry about the multi-edit thing. When I am out walking it is difficult to check resources and I enter data that is all related in sprints. I happen to be out walking the trails at night at this moment.

Also - when I am driving around doing chores I have to set the phone down ( no distracted driving) and I close the edit window then.

Also - I spell check after posting and often see I missed a point I had intended to cover, so it gets added. Like this line.

1 Like

As far as curiosity - lizards explore and establish territory.

This drive may be highly developed in humans but it is still an old-brain thing.

Any critter that has maps to goals will act to populate those maps with basic need goals like food, water, and shelter.

We humans are such chauvinists and tend to ignore that our most prized abilities are found in some form or other throughout the animal kingdom.

1 Like

I usually agree very strongly that human chauvinism blinds us to the capabilities of animals with less cortex, but here you seem to be suggesting that lizards’ explorative behavior is equivalent to GI? I don’t agree.

Agreed, explorative behavior arises in lizards and in people when there are no more pressing drives. And lizards can apply skills from one activity to another in some limited ways. But only humans have generalized and developed curiosity to the extent that we have, at least for tool-making.

Something is responsible for the difference in scope that makes human-scale curiosity roughly equivalent to what we’re calling GI. If it’s not evolutionary refinement of the “Aha!” signal, what do you propose is responsible for the vast difference?

Edit: I hit send too soon. Sorry. I’d delete this post if I could. I’m hoping I can sneak in some edits before you see this. Sigh. Violating my own request in the first post after I requested it. Very sorry for the hypocrisy.

1 Like

No harm, no foul; I look forward to seeing your edit. When you do, delete the edit thing and I will delete this bit. As for this post - I’m done now.

I am in no way suggesting that the drive to explore and the resulting curiosity are equivalent in lizards and humans. There is a continuum, and this is clearly more developed in humans. Some of this is cultural and not in the hardware. I offer the example of speech and all it brings to the human mental machinery. Without speech and the mental tricks it brings, you lack the ability to form and manipulate certain mental constructs.

I do see AGI as far more than curiosity; I see that AGI will be a cluster of traits that must be taken together to seem intelligent.

  • Intentionality.
  • Built-in behaviors to act as a framework for development.
  • Memory (it seems that several kinds will be needed).
  • Task switching (with a minimum set of goals).
  • Self-awareness.
  • Emotions of some sort (necessary for judgement).
  • I’m sure if you think about it you will add a few more things to this list.

I have been posting on various aspects of these topics in this forum for a long time. I posted pointers to about half of them in this thread.

You may also want to read the thread “Two Types of Hierarchies”. There is some discussion relevant to this topic there.

As far as the “Aha!” experience, see Global Neuronal Workspace.
To see it in its proper place, see this post.

1 Like

I think that something is hierarchically recursive pattern representation, enabled by the neocortex.
Feedback of these hierarchical patterns is what guides exploration, physical or introspective.

2 Likes

@Bitking: Thanks for your encouragement to post the salient points from our recent emails. I think these are the points I made.

Solving the problem/query that curiosity generates produces an evolutionarily advantageous, behavior-reinforcing response. When curiosity’s reinforcement response is made more powerful, curiosity can motivate behavior in a way that emotion does.

It’s possible to see this Aha!-response addiction driving some aspects of brain structure. The neural infrastructure that supports the evolutionary development of curiosity must support vast imagination capabilities as the range of consideration grows, optimally engaging as much of the neural population as allowed by physical constraints. Thus layer II/III’s long-range connections. But I do not make that claim at this time.

GWT seems fairly successful at modeling consciousness’s apparent serial nature, but people can hold conversations while folding laundry, so behavior is not entirely serial. I do need to study more about GWT.

Last, note that individual corvids, elephants, and top-of-food-chain predators, which are other notably curious animal groups, also have significant “free” time during which evolution could have sharpened the Aha! response.

2 Likes

@yogaman

I see curiosity as a drive to populate maps. In “lower” animals that means simple spatial maps of the environment (food, water, shelter, cached food), predator/prey facts, communication calls, and perhaps some social environment (birds & pecking order?). These are learned extensions of the built-in genetic programming.

As the number of interconnected maps increases, you gain the ability to form higher-dimensional internal maps where the same types of objects have more abstract extents. These maps have the unique feature that you can explore them without having to move your body. These maps also need to be populated, hence the things we call play and curiosity - the need to populate features about goal objects. As our representations become more abstract, the definition of a goal object expands.

I interpret GWT as a general mechanism. I don’t see that it has to engage the entire cortex with every activation. There are in excess of 100 maps, with two almost identical halves of the brain available. I could see local pockets of a few maps coalescing on some cluster of features & goal, with a different pocket elsewhere processing some other cluster of features & goal.

I think that the AH-HA feeling happens when a large number of maps fall into enlightenment with some (internally) important goal. These features could well be feature and relation aspects distributed through higher dimensional internal representations.

In case I did not make the point clearly - play and curiosity are drives like food, water, shelter, and reproduction.
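A rough sketch of that multi-map picture, with an invented alignment measure and arbitrary pocket/AH-HA counts chosen purely for illustration:

```python
# Hedged sketch of the multi-map idea above: local "pockets" of maps
# coalesce around a goal, and the AH-HA signal fires when an unusually
# large number of maps line up behind one internally important goal.

def aha_signal(map_goal_alignment, goal, pocket_size=3, aha_count=5):
    """map_goal_alignment: {map_name: goal that map currently supports}."""
    aligned = [m for m, g in map_goal_alignment.items() if g == goal]
    if len(aligned) >= aha_count:
        return "AH-HA"            # many maps fall in behind one goal
    if len(aligned) >= pocket_size:
        return "local pocket"     # routine, partial coalition
    return "no coalition"

maps = {f"map{i}": "goal_A" for i in range(6)}
maps["map6"] = "goal_B"
print(aha_signal(maps, "goal_A"))  # AH-HA
```

The two thresholds capture the distinction drawn above: small local pockets operating routinely, versus the rarer large-scale coalition that would be felt as AH-HA.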

1 Like

I agree that play fills maps, but I see play as a behavior motivated by curiosity.

Curiosity seems in a different class from primal drives such as food/water/shelter (shelter includes fleeing threats, right?). If those drives are not satisfied, the organism dies, yielding no (more) progeny. If reproduction is not satisfied before the end of an individual’s life, likewise, no progeny. If curiosity is not satisfied, the probability of direct impact to the individual’s reproductive success is minimal.

Consider also: In the simplest animals, enough information about how to select among possible behaviors seems to be available from birth. One might say their “maps” are pre-filled. (Maybe someone should carefully observe hydra for signs of play to test this idea.)

Curiosity as a motivation for play requires unfilled neural maps (assuming that neural maps are the repository for drives in more cortical animals). Simpler brains apparently have fewer maps to fill than human brains. It seems logical that the curiosity “drive” would be correspondingly weaker in those animals, unlike primal food/water/shelter/reproduction which would seem to be just as important to simple and complex organisms.

So it seems to me that the unique characteristics of curiosity as a motivator for behavior require an explanation for those differences. This has motivated my proposal for its amplification via the Aha! signal in free-time species.

Your suggestion to allow for >100 maps seems to me to be an excellent enhancement to GWT, again given that I admit need for further study of GWT. If you have also considered how those maps interact, please share your thoughts.

I’m not sure you’ve significantly clarified the Aha!/AH-HA signal by saying it “happens when a large number of maps fall into enlightenment (perhaps you meant alignment here?) with some (internally) important goal”, more than just saying it signals the solving of the problem/query. What am I missing?

(I’m also not sure of the antecedent for “These” in “These features could well be feature and…”, or whether it’s more than tangential to your multi-map proposal.)

Finally, I’m making assumptions about what you mean by maps. Would you kindly point to or summarize a decent definition?

1 Like

Cortical maps 101:

Region is a different name for a map:

And more about how they are connected:

1 Like

The “standard” model of the cortex has a bidirectional flow of connections, one from the sensory areas towards the temporal lobe and frontal lobes, and the other roughly from the frontal lobes to the central sulcus with many projections back to the sensory areas. These projections from the frontal lobe are associated with goal states, and the sensory areas, perception of the environment.