ML and Deep Learning to automatically create AGI?

I certainly agree that food and water are required for reproductive success. But your comment overlooks the question: is that exploratory behavior motivated by a primal food/water drive, or by the human-level, drive-like curiosity that operates when no primal drive is pending?

An organism that will die without immediately locating food or water does not seem to be curious; it seems to be hungry or thirsty, i.e., motivated by primal drives that must have existed long before curiosity emerged. I don’t see the example as particularly relevant to general intelligence, whereas I do see human-level curiosity, with its evolved Aha! signal, as relevant.

We agree that play behavior fills maps. I agree that information discovered during play builds culture, although I see that some cultural “truths” arise from survival-based drives as well. Have you adopted my view that play is a behavior motivated by curiosity, one that occurs only when primal drive urgency is minimal?

(Unfortunately, I may not be able to dedicate as much time as I would like to this discussion during the next couple of weeks due to prior commitments. Hope I can, but life.)


This seems like a chicken-or-egg proposition.

If the critter is exploring when it is not hungry or thirsty, and maps out what is in its territory, is that curiosity or some aspect of hunger? Is that exploration, without any other currently active drive, pure curiosity?

When you search through your memory of locations and goals, and they match up to your current drive (hunger/thirst/shelter/mate) so that you experience some reinforcing signal (AH-HA!), is that internal search part of those drives, or some sort of separate general mechanism that services all drives?

I see that you have identified curiosity as a key feature of AGI.

Speech evolved from the intersection of signalling calls and object representation. Once you have shared naming and cultural reinforcement, the rest of the speech mechanisms fall into place. I bring this up because many of the traits that we would like to attribute to an HGI are in fact properties of the motor programming of speech recognition and subsequent mimicking/production (loaded programs?) and are not actually what might be considered pure hardware.

I place our use of basic exploration to populate mental maps in the same general category. Yes, in humans it is a very developed trait, but it is an elaboration of a very basic drive - not something that is novel to human-level intelligence.


Hi, I just need to chime in on the curiosity topic. The way I see things, curiosity is not a drive but a mechanism in service of a drive. A hungry lizard is curious about how to find a grasshopper or any other hunger-satisfying bug; once it is no longer hungry, its curiosity shifts toward finding a mate or a shiny spot on a rock to roast on in the heat of the sun. Whatever “needs” are in line, curiosity serves them, most important first.
Curiosity is the “make-wishes-happen” mechanism of the brain.
And of course the lizard has an “Aha!” moment when it is certain that whatever shady feature in the environment its senses spotted as “potential grasshopper” is confirmed to be a grasshopper through further examination: “Aha! It tastes like a grasshopper.”

PS. And of course it

  1. builds learning experience: once the “shady feature” is confirmed to be a grasshopper, it increases the grasshopper-probability of that feature, and
  2. once a behavior path is learned, curiosity is no longer needed; the lizard reacts from learned experience and just grabs the “feature that was proved to be a grasshopper” without bothering to pay much attention, as sketched below.
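
In rough Python, a toy sketch of that loop; every name, number, and threshold here is an illustrative placeholder, not anything proposed in this thread:

```python
# Toy sketch: curiosity serves the most urgent drive, confidence grows with
# confirmation (point 1), and a well-learned path bypasses curiosity (point 2).
drives = {"hunger": 0.9, "mating": 0.4, "basking": 0.2}  # current urgencies

grasshopper_probability = 0.3  # prior belief that the shady feature is food
CONFIDENT = 0.8                # above this, grab it without close examination

def encounter_shady_feature() -> str:
    global grasshopper_probability
    drive = max(drives, key=drives.get)        # curiosity serves this drive first
    if drive != "hunger":
        return f"curiosity redirected toward {drive}"
    if grasshopper_probability >= CONFIDENT:   # point 2: learned, no curiosity
        return "grab it from learned experience"
    # point 1: examine, confirm, and raise the feature's grasshopper-probability
    grasshopper_probability = min(1.0, grasshopper_probability + 0.3)
    return "Aha! it tastes like a grasshopper"

print(encounter_shady_feature())  # examines and confirms (probability 0.3 -> 0.6)
print(encounter_shady_feature())  # examines again (0.6 -> 0.9)
print(encounter_shady_feature())  # now learned: grabs without paying attention
```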

I see we’re struggling with eggs and chickens and definitions, and I’ve done my part to make it a muddle. I apologize.

I would call the mechanism in service of primal drives “exploratory behavior” (EB). Presumably there is a short list of possible EBs in primitive animals. Once an organism has developed enough unfilled neural map space that behavior can be non-primally motivated, I would say it has the possibility of higher-level curiosity, assuming it can find moments without primal drive urgency.

No question, as @bkaz highlighted, that there are termination signals for exploratory behavior motivated by hunger/thirst in species lacking a cortex. I would distinguish the drives that motivate those behaviors from human-level curiosity (or its near antecedents).

The way I’d like to see human-level curiosity is as the not-quite-drive-level motivation for what to do when no primal drive is active. That is, it leads to play, a behavior that fills in the available map space and terminates when the Map-Space-Is-Populated (MSIP) Aha! signal arises, the MSIP Aha! being qualitatively different from the Aha! signal for a primal drive.
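
As a minimal sketch of that distinction (the single urgency threshold and the scalar map-fill measure are my simplifications, purely for illustration):

```python
# Exploratory behavior (EB) serves an urgent primal drive; play, motivated by
# curiosity, runs only when no primal drive is urgent, and it stops at the
# Map-Space-Is-Populated (MSIP) Aha! rather than at drive satisfaction.
URGENCY_THRESHOLD = 0.5

def select_behavior(primal_urgencies: dict, map_fill: float) -> str:
    urgent = {d: u for d, u in primal_urgencies.items() if u > URGENCY_THRESHOLD}
    if urgent:
        drive = max(urgent, key=urgent.get)
        return f"exploratory behavior in service of {drive}"
    if map_fill < 1.0:
        return "play, motivated by curiosity, filling map space"
    return "rest: the MSIP Aha! has already fired"

print(select_behavior({"hunger": 0.9, "thirst": 0.2}, map_fill=0.4))
print(select_behavior({"hunger": 0.1, "thirst": 0.2}, map_fill=0.4))
print(select_behavior({"hunger": 0.1, "thirst": 0.2}, map_fill=1.0))
```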

I wish for a better, shared vocabulary. Can someone suggest some clarifying terms?


I disagree. Even Archimedes’ “Eureka” moment (ancient Greek for “I have found it”, his “Aha”) would not have happened through aimless playing in the bathtub; his play was motivated by a focused drive, a wish to understand what makes some things float and others sink.

I didn’t mean to suggest that human-level curiosity couldn’t motivate focused investigation. In fact, that’s kind of my point: simpler editions of curiosity-as-motivation drive play behavior to fill in maps. Significantly more evolved, human-level curiosity motivates qualitatively different behavior, given the vast unfilled map space that “demands” filling.

Edit: typo.


So perhaps we can reach some agreement that curiosity is yet another thing that lies on a continuum?

On one end is a critter mapping its territory; on the other end is the terrifying AGI that investigates to a degree that puts Sherlock Holmes to shame and is the stuff of our AGI nightmares. We hope that it is motivated by “the right reasons” so that it works for the good of humanity. Of course - this discussion then moves off into what the “right” motivations are.

BTW: you can put a reason for the edit at the top of the edit box, and you can review the history of edits by clicking on the red edit pencil on a post.


Curiosity is a drive to build a predictive model of the environment. It doesn’t need other drives to work; they only distort or inhibit purely cognitive function. It is modulated by dopamine, especially its prefrontal pathway, but this modulation is tonic, basically representing a lack of interference from other drives. Whatever subcortical areas were used for exploration by lizards, their function has been taken over by the cortex, thalamus, and hippocampus. OK, also the cerebellum, but that’s more like passive storage.


Hmmm, that “passive storage” stores sequences and mediates conflicting inputs to join them together. It plays those sequences back on command from multiple places in the forebrain, and it plays them INTO the forebrain. About half of the cells in the brain are allocated to this structure.

It turns our “chords of thought” (distributed parallel representations) into sequences of thought.

You may want to rethink the importance of this structure to the overall function of us clever humans.

Please see:


OK, nothing in the brain is passive; I meant much more passive than the cortex. Basically, much shorter-range search. Hence the number of cells: more memory, but less (re)processing.

Yes, that’s what I meant by distorting purely cognitive function.

I am not sure how one might arrive at a judgment of good or bad purely from a cognitive exercise.

As I see it working - Anatomy of a thought

  • There is some drive from the hypothalamus. As discussed above, this could be any drive, big or small, originating from any one node of the hypothalamic cluster. The voting on which drive is the most important is resolved at the hypothalamic level, and the winner is presented to the cortex as the most important thing. We really don’t multitask, but we task-switch really fast.
  • This drive is unwrapped in the forebrain. The demand from the subcortical structure is processed much like any other sensory input: parsed for content and resolved into cortex-compatible features.
  • As this is developed, it could end up as a command to the body. The map contents ripple up from the lower forebrain in the general direction of the central sulcus.
  • Part of what is learned by the cortex/cerebellum is which commands go where. Some go to the body; some are directed inward to the rest of the brain. These patterns are sent as a distributed pattern to the cerebellum.
  • The cerebellum has learned to take these parallel distributed patterns and turn them into sequences. Part of the input to the cerebellum is the destination of the learned sequence. The output sequence could go to the body OR to various parts of the sensory stream. Note that this output uses the deep-cortex “motor drive” axons associated with the feedback path.
  • The WHAT/WHERE stream is driven with fragments of previous inputs that unfold into the stored representation to be recalled. This recall is processed through the WHAT/WHERE stream back up to the temporal lobe.
  • In the temporal lobe this “experience” is processed with the same system that processes any other sensation, even though it is triggered internally.
  • As this ripples up the WHAT/WHERE stream, the two streams (feedforward/feedback) are evaluated for a match with the unfolding need state (that hypothalamus thing again). This can trigger an AH-HA experience if a global workspace (GW) is ignited.
  • If there is a match, we evaluate its goodness based on the value stored with the memory.
  • If it is not what we are looking for, the process is repeated.

This is the core of reflective thoughts.
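
To make the flow concrete, here is a compressed toy version in Python; every name, structure, and number is a paraphrase of the steps above for illustration, not a model anyone here has specified:

```python
# Compressed toy loop: winning drive -> seeded recall -> match against the
# need state -> GW ignition (AH-HA) or repeat the search.
import random
random.seed(1)

def winning_drive(hypothalamic_votes: dict) -> str:
    """Voting resolved at the hypothalamic level: one drive wins at a time."""
    return max(hypothalamic_votes, key=hypothalamic_votes.get)

def recall(cue: str, memory: dict) -> tuple:
    """Unfold a stored 'sequence' from a fragment; a valence rides along."""
    return memory.get(cue, ("no recall", 0.0))

def reflective_thought(votes: dict, memory: dict, max_iters: int = 10) -> str:
    need = winning_drive(votes)                # the subcortical demand
    for _ in range(max_iters):
        cue = random.choice(list(memory))      # a fragment seeds the recall
        recalled, valence = recall(cue, memory)
        if need in recalled and valence > 0:   # match + stored goodness
            return f"AH-HA, global workspace ignites: {recalled}"
    return "no match; the process repeats"

memory = {
    "kitchen": ("food in the kitchen", +0.8),
    "desk":    ("unpaid bills on the desk", -0.5),
}
print(reflective_thought({"food": 0.9, "shelter": 0.3}, memory))
```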

There may be stuff in the cortex, but the cerebellum is the driver of the search engine, and the weighted contents (search termination) determine the GW ignition based on these recalled contents. Notice that much of this process is both initiated by structures outside the cortex and supervised by the lizard brain.

I have no idea how you think a non-lizard-brain engine is supposed to work, but I think it would end up having the same limitation that is commonly associated with current AI projects: no common sense. The weighting that the limbic system adds to every episodic memory gives an automatic cue as to whether something is a good idea; there is no need to try to figure it out in every situation from first principles.


Good: an increase in the predictive power of the system, = substrate capacity * projected input compression.
The last term is per unit of capacity (memory + processing), and inputs/sources are selected to maximize the lossless component of compression. It’s far more abstract than the 4Fs, but that doesn’t make it any less real.
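
One hedged reading of that selection criterion as a toy sketch; the greedy ranking, the use of raw sample length as the capacity cost, and all names are assumptions of this illustration, not the formula’s author’s:

```python
# Select input sources to maximize the lossless compression gain per unit
# of capacity spent storing them.
import zlib

def lossless_gain(sample: bytes) -> int:
    """Bytes saved by lossless compression of the sample."""
    return len(sample) - len(zlib.compress(sample, 9))

def pick_sources(sources: dict, capacity: int) -> list:
    """Greedily keep sources with the best compression gain per byte of capacity."""
    ranked = sorted(sources,
                    key=lambda s: lossless_gain(sources[s]) / len(sources[s]),
                    reverse=True)
    chosen, used = [], 0
    for name in ranked:
        if used + len(sources[name]) <= capacity:
            chosen.append(name)
            used += len(sources[name])
    return chosen

sources = {
    "periodic": b"abab" * 250,          # highly predictable, compresses well
    "ramp":     bytes(range(256)) * 4,  # some structure, compresses less well
}
print(pick_sources(sources, capacity=1200))  # ['periodic']: best gain per byte
```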

How does one know if it increases the predictive power without testing it?
You can predict an infinite number of possibilities.

With emotional weighting you can test the components of a prediction on the fly and get positive or negative emotion as the memories are probed, in an interactive sense.

Without this guidance you have no restrictions on the search space; silly outcomes are just as valid as anything else. Children wish for impractical things, but as their world knowledge increases they tend to pick more logical things from the search space of ideas.
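
A small sketch of that pruning, with toy plans and valences invented purely for illustration:

```python
# Emotional weighting pruning the search space: each candidate outcome is
# scored by the valence stored with the memories it touches, so "silly"
# outcomes drop out before any deliberate evaluation.
candidate_plans = {
    "cook dinner":   ["kitchen", "food"],
    "eat the couch": ["couch", "food"],
    "order takeout": ["phone", "food"],
}
stored_valence = {"kitchen": +0.6, "food": +0.8, "couch": -0.9, "phone": +0.2}

def emotional_score(components: list) -> float:
    return sum(stored_valence.get(c, 0.0) for c in components)

viable = {plan: score for plan in candidate_plans
          if (score := emotional_score(candidate_plans[plan])) > 0}
print(viable)  # "eat the couch" was pruned on the fly, not reasoned away
```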

As far as some test based on the regularity of the world, or on how well it compresses: the world is full of logical inconsistencies that we take in stride. While Spock may say “it does not compute,” my brain just says “OK, that is the way it is” and deals with it.

Past experience is already a test, not qualitatively different from a future one. Prediction is simply a temporal aspect of compression: compression of future input. Note that I specified the lossless component: the amount of original input that can be reconstructed from the representation.
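
A toy illustration of “past experience is already a test” (the bigram model and the strings are inventions for demonstration only): fit a trivial predictor on the first half of a stream and score it on the second half.

```python
# A patterned stream is predictable (compressible); noise is not.
from collections import Counter, defaultdict
import random

def bigram_model(past: str) -> dict:
    """For each character, remember its most common successor in the past."""
    successors = defaultdict(Counter)
    for a, b in zip(past, past[1:]):
        successors[a][b] += 1
    return {a: c.most_common(1)[0][0] for a, c in successors.items()}

def accuracy(model: dict, future: str) -> float:
    hits = sum(model.get(a) == b for a, b in zip(future, future[1:]))
    return hits / (len(future) - 1)

patterned = "abcabc" * 200
print(accuracy(bigram_model(patterned[:600]), patterned[600:]))  # ~1.0

random.seed(0)
noise = "".join(random.choice("abc") for _ in range(1200))
print(accuracy(bigram_model(noise[:600]), noise[600:]))  # near chance, ~0.33
```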

I have given a general outline of how thought progresses in my model.
Do you have a similar big-picture description of how thoughts work in your model?

If it helps, say it’s a social interaction with you in your office.
You are discussing the latest council directive on something like garbage collection vs. recycling.

In the brain, it’s just the neocortex, plus cortico-cortical intermediates (thalamus, hippocampus, cerebellum), plus tonic prefrontal dopamine. Basically, feedforward and feedback flows through the cortical hierarchy, nothing else. As for how it should work in a properly designed system, see my intro.
This is about “pure” thought, with no emotional intervention and a largely silent limbic system and below.
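
A bare-bones sketch of such feedforward/feedback flow, in the style of predictive coding; the construction and parameters are illustrative assumptions, not the model referred to above:

```python
# Each level predicts its input (feedback) and passes only the unpredicted
# residual up the hierarchy (feedforward).
levels = [0.0, 0.0, 0.0]  # one running prediction per level
LEARNING_RATE = 0.5

def step(sensory_input: float) -> list:
    signal = sensory_input
    errors = []
    for i, prediction in enumerate(levels):
        error = signal - prediction          # feedforward: the unpredicted part
        levels[i] += LEARNING_RATE * error   # feedback: revise the prediction
        errors.append(error)
        signal = error                       # only the residual climbs higher
    return errors

for _ in range(4):
    print(step(1.0))  # bottom-level error shrinks: 1.0, 0.5, 0.25, 0.125
```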

Sounds like the formula for SKYNET eliminating humanity because “it’s logical”, with no guiding value judgement.

This is a purely cognitive component; adding reinforcing values is a separate issue.

You get that automatically with built-in emotional coloring. Each recalled component is tagged with judgement, as distributed as any other aspect of the stored memory; it is all distributed, without having to maintain some parallel system.

I am not sure how some parallel evaluation system would stay in synchronization with a purely logical calculating system any other way.
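
A sketch of what “the tag travels with the memory itself” could look like (the record structure is an illustration, nothing more):

```python
# The valence is part of the memory record, so recall and evaluation
# cannot fall out of sync: there is no separate value table to update.
from dataclasses import dataclass

@dataclass
class EpisodicMemory:
    content: str
    valence: float  # the emotional coloring is stored with the memory

store = [
    EpisodicMemory("ate the red berries", -0.8),   # remembered as a bad idea
    EpisodicMemory("found water at the creek", +0.7),
]

def recall(cue: str) -> list:
    """Recall returns content and judgement together; no parallel system."""
    return [m for m in store if cue in m.content]

for m in recall("berries"):
    print(m.content, "->", "avoid" if m.valence < 0 else "approach")
```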

Please note that humans have tried to make formal logical systems for as long as we have had recorded history, and all have been abysmal failures. Gödel goes as far as saying that it is a fool’s errand, and proves it mathematically.
