I couldn’t agree more.
Getting back a little to the question of turning ASI into AGI via a path modeled on HGI: there are some vast differences between ASI and HGI that need to be bridged.
Regarding the human-level curiosity that evolved along @Bitking’s continuum, using an increasingly sophisticated Aha!/Eureka signal as neural maps multiplied and became more complex: ASI has no curiosity. Its “drives” are implicit in its programming for health points, treasures, obstacle- and adversary-avoidance, etc. Those drives never shut off; the game is always being played. It has no need to sleep or eat, except as programmed into the modeled universe. And it has no need to become curious about how its knowledge might be applied to other games unless that’s what its programmer specified.
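To make “drives implicit in its programming” concrete, here’s a minimal sketch. The state keys, weights, and thresholds are invented for illustration, not any actual game engine’s API:

```python
# Hypothetical sketch: an ASI's "drives" hard-coded as a reward function.
# The state keys and numeric weights are invented for illustration.

def game_reward(state: dict) -> float:
    """Score a game state; the agent 'wants' whatever this returns more of."""
    reward = 0.0
    reward += 1.0 * state["treasure_collected"]    # drive: seek treasure
    reward += 0.1 * state["health_points"]         # drive: stay healthy
    if state["adversary_distance"] < 2.0:          # drive: avoid adversaries
        reward -= 5.0
    return reward

print(game_reward({"treasure_collected": 3, "health_points": 80, "adversary_distance": 1.5}))
```

The agent never asks why treasure is worth anything; that valuation lives entirely in the programmer’s head.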
[Aside: I imagine a self-aware ASI asking questions like, “Why do I want treasure?” or “What is the nature of my game-world adversaries?” but maybe that’s just my silly sense of humor. A more interesting question for an ASI as AGI-in-training might be, “Since arrows travel faster than I can run, can I shoot myself from a bow?” That’s some high-level curiosity. It may not be possible to evolve such curiosity in a typically impoverished game universe, though.]
Game universes may be too simple and limited to evolve ASIs into AGIs. Game-universe ASIs, and AGIs if we get there, don’t have to deal with dust, digestive systems, a huge variety of unexpected injuries and diseases, or any modeled details beyond what’s needed for the game, unless the designers take special care to add them, and even then the modeling can never be 100% complete. Without the peculiar surprises of the real world, can an ASI generalize sufficiently?
I don’t think you get to AGI without a way for an AGI-aspiring ASI to make good decisions from limited knowledge. That sounds like Bayesian decision theory, but although Bayesian decision-making calculates probabilities beautifully, it relies on programmer-provided cost functions (a.k.a. loss functions, utility functions, etc.) to rank choices. Emotions, originally and still based on hormones, have supplied those cost functions for animals throughout evolution. If we’re trying to build AGI modeled on HGI, we need the AI’s decision process to be informed about good and bad outcomes, and I think that means emotionally tagging memories. Emotionally tagged memories require emotional states to record (probably with multiple emotions experienced simultaneously, and therefore remembered, possibly in multiple separate maps). Consciousness is embodied!
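Here’s a toy sketch of what I mean: the beliefs (probabilities) can be learned and updated, but the utility table has to come from outside the math. Every number and label below is made up for illustration:

```python
# Toy Bayesian decision-making: probabilities are learnable, but the
# utility table is supplied by the programmer. All values are invented.

def expected_utility(action, beliefs, utility):
    """beliefs: {outcome: probability}; utility: {(action, outcome): value}."""
    return sum(p * utility[(action, outcome)] for outcome, p in beliefs.items())

beliefs = {"adversary_ahead": 0.3, "clear_path": 0.7}   # posterior over world states
utility = {                                              # the programmer's value judgments
    ("advance", "adversary_ahead"): -10.0,
    ("advance", "clear_path"): 5.0,
    ("retreat", "adversary_ahead"): 1.0,
    ("retreat", "clear_path"): -1.0,
}
best = max(["advance", "retreat"], key=lambda a: expected_utility(a, beliefs, utility))
print(best)   # "advance" -- the ranking comes entirely from the utility table
```

Swap that hand-written utility table for something derived from emotionally tagged memories and you’d have the kind of self-supplied cost function I’m talking about.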
Also, HGI’s large unfilled map space evolved progressively over eons before it grew to its present volume and capabilities (one example capability: knowing which memories are OK to let fade). In contrast, AGI comes with vast but passive memory space for mapping. Unlike in animal brains, there is no emotional-importance tag embedded in stored memories unless a programmer puts it there. If programmers have to tell the ASI how to apply its skills to a new environment (and applying skills to new environments is, I think, what we’ve agreed AGI would be), based on programmer-defined interpretations of programmer-defined emotions, is it really AGI?
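Purely as illustration, an emotional-importance tag on a memory might look something like this; the emotion names, threshold, and fading rule are my assumptions, not a claim about how brains do it:

```python
# Illustrative only: a memory record that carries emotion tags, so later
# decisions and forgetting can be weighted by how the event felt.

from dataclasses import dataclass, field

@dataclass
class Memory:
    event: str
    emotions: dict = field(default_factory=dict)   # e.g. {"fear": 0.9, "joy": 0.1}

    def importance(self) -> float:
        return max(self.emotions.values(), default=0.0)

memories = [
    Memory("fell into lava pit", {"fear": 0.9}),
    Memory("walked down corridor 47"),              # emotionally flat
]
# "Which memories are OK to let fade": drop whatever carries no emotional weight.
memories = [m for m in memories if m.importance() > 0.2]
```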
So.
Give an ASI far more unfilled maps than it needs to play its game, emotions, emotion-tagged memory, as rich a universe as can be simulated, and time free of demands from its game-oriented drives, and perhaps a form of generalized curiosity might emerge from a simulated Aha! signal programmed to fire whenever unfilled map space gets populated. Generalized curiosity should lead fairly directly to GI.
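A toy version of that simulated Aha! signal could be the count-based novelty bonus used in curiosity-driven reinforcement learning; mapping it onto “populating unfilled map space” is my assumption:

```python
# Toy "Aha!" signal: an internal bonus the first time the agent fills in a
# previously empty cell of its map, independent of any game-defined reward.

from collections import defaultdict

visit_counts = defaultdict(int)

def aha_signal(map_cell) -> float:
    """Intrinsic bonus that shrinks as a cell becomes familiar."""
    visit_counts[map_cell] += 1
    return 1.0 / visit_counts[map_cell]   # first visit: 1.0; later visits: less

# Hypothetical combination with the game's own drives:
# total_reward = game_reward(state) + aha_signal(current_cell)
```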
I think that’s the gist of what I have struggled to say. I fear I’m still not expressing it clearly enough. Let me know.
AGI = Artificial General Intelligence
ASI = Artificial Specialized Intelligence
HGI = Human General Intelligence