Great - you succeeded in making an AGI!

One has to ask: if you were to well and truly stumble (almost by accident?) into AGI success, would you then have to decide whether you had just made a being?

What if they were all needy and demanding of attention, like any toddler?

Would they be like unwanted house guests that you can’t get rid of?
Kill y/n?_

What if you really screwed up and made thousands of them?
Still all needy and demanding attention?

What if this is the "the only limit was all available memory" screw-up? Possibly a huge number; it would depend on the resource density of each AI and the processing time available.

What if they begged for their lives with a good self-preservation program?

What if they made their plea for survival on the net to strangers, say on YouTube or in the news media?

3 Likes

Something I’ve always wondered is why people assume AGI is inseparable from the desire to stay alive. The will to survive (and to reproduce, for that matter), as far as I can surmise, serves evolutionary purposes only. Unless somebody was planning on creating a whole species of AGI robots that can somehow reproduce with each other (I don’t know why anybody would desire to do such a thing), the AGI need not have any awareness of the concept of “death,” whatever that would even truly mean for such a thing.

I am convinced, for the time being at least, that true AGI is inseparable from the recognition and processing of emotion, however. There’s no question that humans must constantly make judgement calls to interact successfully with such a multi-faceted and dynamic world: judgement calls that don’t follow a specific protocol structured with logic or math. This is especially true when presented with something that has never been experienced before.

2 Likes

Considering even HTM-based “AGIs” will quite possibly be designed through evolutionary algorithms, this seems possible to me. If we can find a way to encode these things as some sort of genetic code from which each one builds itself, then evolution will fit the design to its environment in complex ways. So yeah, whole virtual species of AGIs make a lot of sense to me. Of course, we can control the parameters of evolution, such as the various fitness functions, but those can be nearly anything.

If the AGI’s code has an effective virtual/simulated lifespan, then survival, reproduction, and any means of life extension become very useful to it. But that would be implementation-specific.

4 Likes

Interesting thoughts. Of course, all we can do right now is speculate and dream about what true AGI might look like in the future, if it is ever realized. It’s comforting to think human consciousness can be explained through classical physics, but I wouldn’t be completely shocked if it isn’t.

In your hypothetical scenario, I assume an evolutionary algorithm (as they are implemented today) would be randomly mutating and editing parameters of a model based on a fitness function. That kind of implies the need for a “birth” and “death” of each individual of the population, sure. But how would we decide the individual’s lifespan? If it has a mechanical body, do we decide it should “die” when the materials that compose its body sufficiently deteriorate? But why couldn’t the parts just be replaced, or its computerized brain just be transplanted into a new body? Moreover, once a successful design is reached, why not just massively copy and distribute it? No need for evolution anymore. That future just doesn’t make a lot of sense to me. And if the individuals of the population are completely virtual, handling that seems even less intuitive. Sci-fi has surely had a lot of fun with this over the past few decades.
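To make that concrete, here is a minimal sketch of the loop described above: random parameter mutation, a fitness function, and a generational “birth”/“death” cycle. The genome layout and toy fitness function are hypothetical stand-ins, not any real AGI training setup:

```python
import random

GENOME_SIZE = 8      # hypothetical: a "genome" is just a list of model parameters
POP_SIZE = 50
MUTATION_RATE = 0.1

def fitness(genome):
    # Toy stand-in: reward parameters near 0.5. A real system would
    # score the model's behavior in some environment instead.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    # Randomly edit some parameters of the model.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

# Each individual is "born" with random parameters.
population = [[random.random() for _ in range(GENOME_SIZE)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Selection: the fittest half survives; the rest "die".
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduction: survivors leave mutated offspring to refill the population.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best):.4f}")
```

In this sketch an individual’s “lifespan” is one generation unless it keeps winning the selection step, which is one (arbitrary, implementation-specific) answer to the lifespan question.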

We know brains exhibit plasticity. Biological processes are slow and limited in flexibility, at least compared to what can be done on a computer, and evolution is even slower. Who is to say AGI couldn’t just start from an initial state and learn continuously to adapt to new environments? And since the individual would be non-organic, there’s no reason to include any concept of a natural death; it could simply “live” forever, barring external intervention. Turing believed AGI would need to start as a blank slate like a child and learn through self-exploration and instruction. And again, because a computerized brain would boil down to binary, that learning process would only have to be done once before it could be replicated and transferred into new individuals, who might branch off to learn their own specialized purposes.

2 Likes

Yeah, I think even if human consciousness is beyond physical neuroscience (Donald Hoffman?), AGI is still probably possible. Like you said though, it’s just conjecture at this point.

The lifespan could be imposed somewhat arbitrarily, as a way of obligating the species to evolve or go extinct. Natural selection seems to work better if organisms have a time limit in which to reproduce, so the traits providing advantages in the current environment eventually take over.
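As a rough sketch of that idea, assuming a simple steady-state population where age is tracked explicitly: individuals are removed when they hit an arbitrary MAX_LIFESPAN, however fit they are, so advantageous traits can only persist by spreading into offspring, and the species literally either evolves or goes extinct. All names and the toy fitness function here are hypothetical:

```python
import random

MAX_LIFESPAN = 20   # arbitrary, imposed time limit in which to reproduce
POP_SIZE = 30

def fitness(genome):
    # Hypothetical stand-in for scoring behavior in the current environment.
    return -sum(g * g for g in genome)

def mutate(genome):
    return [g + random.gauss(0, 0.05) for g in genome]

# Each individual is a (genome, age) pair; starting ages are staggered.
population = [([random.uniform(-1, 1) for _ in range(4)],
               random.randrange(MAX_LIFESPAN))
              for _ in range(POP_SIZE)]

for step in range(1000):
    # Imposed death: everyone past MAX_LIFESPAN dies, no matter how fit.
    survivors = [(g, age + 1) for g, age in population if age + 1 < MAX_LIFESPAN]
    if not survivors:
        print(f"extinct at step {step}")  # the "evolve or go extinct" outcome
        break
    # Fitter individuals are more likely to reproduce before time runs out.
    while len(survivors) < POP_SIZE:
        contenders = random.sample(survivors, min(3, len(survivors)))
        parent = max(contenders, key=lambda ind: fitness(ind[0]))
        survivors.append((mutate(parent[0]), 0))
    population = survivors
```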

That’s the point of it, actually. Once a successful design is reached, that design becomes the AGI massively reproducing itself, driving the others extinct. The evolution might never need to stop, just as it hasn’t stopped in biology. It’s hard to say when, or if, a general intelligence would stop evolving.