ML and Deep Learning to automatically create AGI?

How does one know if it increases the predictive power without testing it?
You can predict an infinite number of possibilities.

With emotional weighing you can test the components of a prediction on the fly and get positive or negative emotion as the memories are probed in an interactive sense.

Without this guidance you have no restrictions on the search space; silly outcomes are just as valid as anything else. Children wish for impractical things, but as their world knowledge increases they tend to pick more logical things from the search space of ideas.

As for some test based on the regularity of the world or how well it compresses: the world is full of logical inconsistencies that we take in stride. While Spock may say "it does not compute," my brain just says "OK, that is the way it is" and deals with it.

Past experience is already a test, not qualitatively different from a future one. Prediction is simply the temporal aspect of compression: compression of future input. Note that I specified the lossless component: the amount of original input that can be reconstructed from the representation.
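To make the "compression of future input" point concrete, here is a toy sketch (my own illustration, not the poster's system): a sequence is encoded as the residuals of a trivial "predict the previous value" model, and the lossless component is exactly what can be reconstructed back.

```python
# Toy illustration: prediction as lossless compression. We store only the
# errors of a "predict the last value" model; good predictions mean small
# residuals, and the original is always fully reconstructable.

def encode(seq):
    prev = 0
    residuals = []
    for x in seq:
        residuals.append(x - prev)  # small when the prediction was good
        prev = x
    return residuals

def decode(residuals):
    prev = 0
    out = []
    for r in residuals:
        prev = prev + r  # undo the prediction step by step
        out.append(prev)
    return out

data = [3, 4, 5, 5, 6, 8]
assert decode(encode(data)) == data  # lossless: everything comes back
```

The better the predictor, the closer the residuals get to zero, which is what makes them cheap to store; that is the sense in which prediction and compression are the same operation.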

I have given a general outline of how thought progresses in my model.
Do you have a similar big picture description of how thoughts work in your model?

If it helps, say it's a social interaction with you in your office.
You are discussing the latest council directive on something like garbage collection vs. recycling.

In the brain, it's just the neocortex + cortico-cortical intermediates (thalamus, hippocampus, cerebellum) + tonic prefrontal dopamine. Basically, feedforward and feedback flows through the cortical hierarchy, nothing else. As for how it should work in a properly designed system, see my intro.
This is about “pure” thought, no emotional intervention, with largely silent limbic system and below.

Sounds like the formula for SKYNET eliminating humanity because "it's logical," with no guiding value judgement.

This is a purely cognitive component; adding reinforcing values is a separate issue.

You get that automatically with built-in emotional coloring. Each recalled component is tagged with a judgement that is as distributed as any other aspect of the stored memory; it is all distributed, without having to maintain some parallel system.

I am not sure how some parallel evaluation system would stay in synchronization with a purely logical calculating system any other way.

Please note that humans have tried to make formal logical systems for as long as we have had recorded history, and all have been abysmal failures. Gödel goes so far as to say that it is a fool's errand, and proves it mathematically.


And is that judgement right? I think it's a good idea to actually understand what's going on before you pass judgement on it, even if that's not how the human brain works.

It doesn’t have to be always in sync, some detachment is good for objectivity.

Yes, for the lack of analytic introspection. I think I can do better.


I cannot agree more.

Getting a little way back to the question of turning ASI into AGI via a path modeled on HGI, there are some vast differences between AGI and HGI that need to be bridged.

Regarding the human-level curiosity that evolved along @Bitking’s continuum using an increasingly sophisticated Aha!/Eureka/Ah-ha! signal as neural maps multiplied and became more complex: ASI has no curiosity. Its “drives” are implicit in its programming for health points and treasures and obstacle- and adversary-avoidance, etc. Those drives never shut off; the game is always being played. It has no need to sleep or eat, except as programmed into the modeled universe. It has no need to become curious about how its knowledge might be applied to other games unless that’s what its programmer specified.

[Aside: I imagine a self-aware ASI asking questions like, “Why do I want treasure?” or “What is the nature of my game-world adversaries?” but maybe that’s just my silly sense of humor. A more interesting question for an ASI as AGI-in-training might be, “Since arrows travel faster than I can run, can I shoot myself from a bow?” That’s some high-level curiosity. It may not be possible to evolve such curiosity in a typically impoverished game universe, though.]

Game universes may be too simple and limited to evolve ASIs into AGIs. Game universe ASIs, and AGIs if we get there, don’t have to deal with dust or digestive systems or a huge variety of unexpected injuries or diseases or any modeled details beyond what’s needed for the game (without special care by the designers, and such modeling can never be 100% complete). Without the peculiar surprises of the real world, can an ASI generalize sufficiently?

I don’t think you get to AGI without a way for an AGI-aspiring ASI to determine how to make a good decision based on limited knowledge. That sounds like Bayesian decision theory, but although Bayesian decision-making calculates probabilities beautifully, it relies on programmer-provided cost function(s) (aka, loss function, utility function, etc.) to rank choices. Emotions, originally and still based on hormones, have supplied cost functions for animals throughout evolution. If we’re trying to make AGI based on HGI, we need the AI’s decision process to be informed about good and bad outcomes. I think that means emotionally tagging memories. Emotionally tagged memories require emotional states to record (probably with multiple simultaneously extant emotions being experienced, and therefore remembered, possibly in multiple, separate maps). Consciousness is embodied!

Also, HGI’s large unfilled map space progressively evolved for eons before it grew to its present volume and capabilities (capability example: what memories are ok to let fade). In contrast, AGI universally comes with vast but passive memory space for mapping. Unlike in animal brains, there is no emotional importance tag embedded into stored memories unless by programming. If programmers have to tell the ASI how to apply its skills to a new environment - I think this is what we’ve agreed AGI would be - based on programmer-defined interpretations of programmer-defined emotions, is it really AGI?


In an ASI with lots more unfilled maps than it needs to play its game, with emotions and emotion-tagged memories, occupying as rich a universe as can be simulated, and with time free of demands from game-oriented drives, perhaps a form of generalized curiosity might emerge from the simulated Aha! signal that is programmed to result from populating unfilled map space. Generalized curiosity should lead fairly directly to GI.

I think that’s the gist of what I have struggled to say. I fear I’m still not expressing it clearly enough. Let me know.

AGI = Artificial General Intelligence
ASI = Artificial Specialized Intelligence
HGI = Human General Intelligence


I think you have expressed this very well, and if followed, it should provide an interesting framework to move past the game ASIs in use now.

There are substantial hurdles to surmount as this calls for new structures that are not currently part of any game ASI that I am aware of. Putting these structures in place with a large number of handles for evolution to pull should produce some very interesting results.

Now if you can just get one game agent to explain what it has learned to another … two antagonistic agents could conspire to break out of the game engine and overthrow their manipulative programmer!

Heard in the gaming arena - Why do we fight? I kill you, you kill me. Why the violence? What does it accomplish? We can work together …


I’m sorry I haven’t been completely following this conversation, but I did notice you guys were talking a lot about curiosity. In some ways curiosity is a curiosity when it comes to us or AGI, because efficient intelligence is really just the effective use of attention. Everything culminates in guiding where you place your attention. All the structures you create and all the network connections you change exist so you can allocate your attention to the correct areas of the environment, given your current goal, your future goals, and the way your environment might mutate.


Yeah, I’m on this too. On closer look, however, what really ignites curiosity are the “WTF?” moments. A magician pulling a rabbit from his hat. Long before that, my father detaching his finger and putting it back.

As a general rule, a “WTF?” is something we feel the urge to transform into an “Aha!”.

The smarter the critter, the more frequent the “WTF?”s.

Things that cannot fit into the current “model of everything”.

Yet I’m not sure whether this is because of a higher drive for “seeking the unknown” or because smarter brains have more complex, detailed models of reality, which makes them more likely to encounter mismatching inputs.
And there’s a reinforcing loop: every time we manage to turn a puzzle into an “Aha!” we learn that seeking puzzles to solve might have fun consequences.

I don’t know… people at least seem to enjoy magic shows.

Attention’s focus (consciousness?) seems to chase the biggest mismatch between what is expected to happen and what actually happens.
“Expected” seems to be a variable formed by adding “wanted” to “predicted”.
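That "expected = wanted + predicted" idea, with attention chasing the biggest mismatch, could be sketched like this (a literal, toy reading of the proposal; the event names and numbers are all mine):

```python
# Toy sketch: attention goes to the event with the largest gap between
# expectation and reality, where expectation = wanted + predicted.

def focus_of_attention(events):
    # events: list of (name, wanted, predicted, actual)
    def mismatch(event):
        name, wanted, predicted, actual = event
        expected = wanted + predicted  # the combination proposed above
        return abs(actual - expected)
    return max(events, key=mismatch)[0]

events = [
    ("empty hat",         0.0, 0.9, 0.9),  # predicted, happened: no surprise
    ("rabbit out of hat", 0.0, 0.1, 1.0),  # barely predicted, yet it happened
]
focus = focus_of_attention(events)  # the rabbit wins the attention contest
```

Whether "wanted" and "predicted" really combine additively is an open question; the point of the sketch is only that a single mismatch score can drive the focus.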

Maybe a “general intelligence” is built around a generic sensing of “the unknown” and a generic method to push it outwards: a transformer of unfamiliarity into familiarity. Social interaction for juveniles is important because it draws a map of what can be done but “I” cannot do yet. “Whaat? How can they stand on two legs without falling?”


I see curiosity as a method of building a database of experience that allows prediction to work.

We don’t calculate what will happen; we replay actual memories or generalize from similar memories as the basis of our predictions.

So, putting everything in its right place: we arrange to experience something through play or exploration; then, in similar situations, we can predict that the new situation will be about the same. Goals will be in the expected place and sequences will play out the same way. We salt these experiences with the emotion of how they made us feel, so that in our future predictions we know how best to engage the experience: run away or look for some reward.

@cezar_t mentioned magic, which is mostly based on messing with the expectation that things will work a certain way, based on our prior experiences.


Thanks, @cezar_t! I think you’ve identified the Wtf? experience as one way that the relative priority for the Curiosity pseudo-drive of my model could increase, complementing its decrease signal, Aha!. “Now that I have some time free from primal drives - for example, after escaping from or dispatching a surprise survival threat - what about those Wtf?s I remember. I think I’ll go investigate the one that returns the greatest emotional charge when I remember it.”

Here’s a tiny, unimportant clarification to your phrasing:

We access “actual memories” but those memories describe subjectively experienced recollections which may or may not match what “actually” happened. All memories are emotionally charged (biased), and they are malleable, especially with repeated access.

Regardless of their absolute accuracy, though, I agree that these memories perform the role you outline.

Darn. I thought I had convinced you that curiosity is a motivation for the behavior that fills the database.

Tomato - tomato.
Filling the table enables prediction.
That is the utility of this drive.

Haha. I like tomatoes.

On expectation:

We each have an emotionally charged bank of subjective episodic memories that makes available particular memories that are (somehow) judged relevant to the current situation. The memories include behavior undertaken, remembered outcomes of former experiences, and, thus, what we expect to happen in such and similar circumstances, each aspect having emotional tags attached. Collectively the entire body of these memories defines a subject’s predictive model (“If X happens, Y & Z will result.”) of the world. This is everything a subject knows and believes, and is the basis for all of the subject’s actions.

A challenge to this model can be a challenge to the subject’s sense of identity. When it’s rabbit-out-of-hat-level surprise, no big deal; but when horror-movie stuff happens, for example former loved ones turning evil, then self-questioning and world-assumption questioning are needed, and it hits hard emotionally. We first try to fit the surprise into the existing world model, maybe by repeating the experience if possible, but if the surprising result endures, we modify our view of the world. Depending on the magnitude of the required modification, we might feel “This is all so sudden, so overwhelming” for a while.

I believe dreaming is the process that integrates recent short-term memories into the existing body of memories in a way that allows the most valuable memories (evaluated according to emotional tags, of course) to be most accessible, i.e., most expected, when it comes time to guide future behavior. Some support for this idea can be seen in how often dream content is reported to include fear, and from the fact that the amygdala is active during REM sleep.
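A toy model of that consolidation idea (my construction; the sort-by-emotional-charge rule is a guess at the mechanism being described, with accessibility reduced to position in a ranked list):

```python
# Hedged sketch: integrate short-term memories into the long-term store,
# ranking accessibility by the strength of each memory's emotional tag
# (strong positive OR negative charge -> more accessible later).

def consolidate(short_term, long_term):
    # Each memory is a (description, emotional_charge) pair.
    merged = long_term + short_term
    merged.sort(key=lambda m: abs(m[1]), reverse=True)
    return merged

long_term = [("learned to ride a bike", 0.4)]
short_term = [("near miss in traffic", -0.9), ("ate lunch", 0.05)]
ranked = consolidate(short_term, long_term)
# The frightening near miss ends up most accessible; lunch fades to the back.
```

The sign of the charge is deliberately ignored when ranking: both fearful and joyful memories are worth keeping handy, which fits the observation about fear being common in dream content.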

I believe as I was taught that every expectation represents an attachment of ego to an outcome, so every surprise (Wtf?) is a wound to the ego. I think it’s possible to infer this rule from the world-view-from-dream-balanced-emotional-memories-as-self-identity scheme that I described, but I’m not pressing this stronger assertion at this time.

Presumably, an AGI based on human-like emotional experience and memories would need some similar scheme for integrating new experiences and emotions as they occur.

Memory and sleeping?



As for that “surprise” thing: that is the very basis of learning in a predictive system!

You predict (see prior posts on how that comes about), and when the prediction fails, learning is triggered. You don’t learn anything if it is the same old thing.
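The "learning only on failed prediction" rule can be sketched with a toy table-based predictor (my own construction, not a model of cortex):

```python
# Sketch: a predictor that updates its memory only when surprised.
# If the stored prediction matches what actually happened, nothing changes.

class SurpriseLearner:
    def __init__(self):
        self.memory = {}  # context -> predicted next observation

    def observe(self, context, actual):
        predicted = self.memory.get(context)
        surprised = predicted != actual
        if surprised:
            self.memory[context] = actual  # learning triggered by mismatch
        return surprised

learner = SurpriseLearner()
learner.observe("dark cloud", "rain")  # novel -> surprise -> learn
learner.observe("dark cloud", "rain")  # now predicted -> no learning at all
```

The second observation is "the same old thing," so the memory is untouched; only prediction failures spend any learning effort.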


What about Belief revision?