Why do we need AGI?

New to the forum, but I’ve been reading through the threads; a lot of interesting conversations. Perhaps this question has been asked before.

I see the purpose of solving problems that are tedious or difficult for a human to do. I could build a machine to do manual labor for me, or use an ML algorithm to learn a complicated mapping from input to output for a specific application, which might help me build better models or drive a car for me so I can do something else.
Why do we need general AI? Are there problems out there that only AGI can solve that would not be possible with multiple specific AIs? What motivates you to dedicate time to this challenging problem?

Thanks for any input; I’m looking to broaden my understanding.

I personally do not care for the acronym or the expression AGI. I am working towards machine sentience: consciousness in the sense of self, volition, whatever orchestrates what humans do in terms of intellect. Some don’t think this will ever happen, but if it does, stand back. There were some posts recently on Google’s DeepMind, and it was shown that this program does not have at least what I am looking for; I cannot see how it could happen under that architecture anyway.

3 Likes

My personal opinion is that once we figure out how to make a smart machine we will be able to turn it up to 11 and make an even smarter machine.

Even a casual observer should note that human smarts do not seem to be enough to solve some of our most vexing problems; perhaps a smarter machine can do the thing.

Basic economics drives the general theme as a whole for commercial entities.

This is just like a puzzle, something incredibly difficult to solve, although at the end of this puzzle is a different type of rainbow. Money is the be all and end all for some, but it may well become irrelevant after the puzzle is solved.

Once you have a general-purpose AI then you never need to make a specific-purpose AI again; you can simply reuse the general-purpose designs. In this way, making a general-purpose AI would solve a great many problems all at once.

Whereas I regard sentience as an illusion. We have no real definition and no way to test for it. A sufficiently smart AGI directed by a human owner might well convince some of us it was sentient; there would be no way to prove otherwise, and this could be very harmful.

But AGI, the ability to do what animal brains do, would be immensely useful. An AGI could learn to drive a car in heavy traffic, on bad surfaces, or an F1 car on a racetrack better and faster than a human. It could learn to play many sports better than humans: golf, bobsled, ice skating, for example. It could write and deliver speeches, read and critique the work of others, find loopholes in contracts, write better legislation.

And it could even convince you it is sentient. :blush:

@EEProf

Yes, a sentient machine would be pretty incredible. Is the purpose to solve problems we currently can’t solve? To prove you have a deep understanding of intelligence by actually creating intelligent machines? To create an army of superhuman robots to defend or destroy humanity?

Do you think your vision of machine intelligence is possible with computers or would we need to bioengineer something that more closely resembles an actual human brain?

To the best of my knowledge, brains are Turing complete.
It follows that a different computer should be able to emulate them.

1 Like

@Bitking

I can definitely see the potential for a super smart machine to help us solve difficult problems.
Are there some parts of the human brain that might be standing in the way of solving certain problems? I think on some other threads you’ve argued that emotions are needed to guide behavior, but emotions can also cause people to do harmful things: jealousy, revenge, etc. I just don’t know if I want a super intelligent machine to get mad at me!

@BrainVx

Definitely a lot of money being directed at AGI from a lot of big players. I wonder if the intelligent machines will respect patent law. Could be tough to argue against them in court.

I certainly don’t think it would be a good idea to slavishly copy all of the features of the limbic system.
For example, the amygdala is critical for the evaluation of the “goodness” of a perception and the outcome of a given experience. This is where much of “common sense” comes from. This evaluation is heavily weighted to ensure survival and could be considered to override the desirable features of Asimov’s three laws of robotics.
Likewise, the Alpha/Beta social programming that humans exhibit is heavily driven by the limbic system and leads to many of the undesirable human behaviors.
It will be necessary to create many of the functions of the lower brain structures, but to make a tame AGI it will also be necessary to work out how much of what humans do is helpful and what parts are destructive.
This may end up being the hardest part of creating an AGI.

On the plus side, as we get closer to modeling the human brain, and as the functional shape and nature of the control system finally emerges, we may end up with a tool for understanding how our own brains work, where our undesirable traits originate, and how they work. A sort of machine psychoanalysis?

I think it is possible w/o a brain model. The current state of machine consciousness in the mainstream is to methodically model the brain, literally neuron by neuron. Stepwise, we have C. elegans, a mouse brain project or two and, of course, the Human Connectome Project. Important areas of research and certainly worth working on. Then you have the work here, which has jumped right to the cortex and columnar structures; extremely important work, which is why I am here.

My theory requires a robot, although that could be simulated, and there is some fascinating work happening in computer animation that requires AI, mainly in games, but at the end of the day it’s all a big game, no? This robot must have functionality in the sense of being able to explore, move and grab things, recognize things, etc. This robot should also be able to speak and to interpret what is spoken to it. I have such a robot now. The next step is to map a conceptual metaphor semantic structure onto its dynamic somatosensory and motor control fields. That’s a very tricky thing to do.

Keep in mind that the beginnings of human consciousness start to emerge around age 3. By age 7 the human has developed what the Catholic Church calls the Age of Reason: the child can now understand his or her own behavior in terms of intentions, desires, and correct and incorrect beliefs. After this, the next 15 years or so is spent honing consciousness through supervised learning, mainly the study of endless narratives. These narratives include both mathematics and music. Full conscious maturity occurs around 21, but none of these age thresholds are hard set.

So, the robot might get to a point where it displayed the consciousness of a seven-year-old. That would be significant. This is in stark contrast to projects like DeepMind, which seem to focus on an adult intelligence and the expectations one has of an adult intellect.

1 Like

Actually no, on 2 fronts.

One: programming one Turing complete machine to emulate the observed behaviour of another is equivalent to the halting problem. There is no algorithm to do that.

Two: all biological systems have behaviours that are random and/or chaotic. You can’t emulate those either.

I think you’re completely misunderstanding the level of change that will occur with a general capability.

Patent law would become nearly impossible to implement if a human can’t understand what has been created and submitted for registration as a patent (a human rules on the validity of a patent application in the first place). Neither could a patent be defended if an enhancement is created that can’t be understood. Patents are also invalid if registered to a machine, because legally only a human can hold a patent.

The machine may well be the only entity capable of defending the validity and origin of the patent if humans can’t understand what has been created, because they can’t interact in a way that the machine understands given what the patent actually represents (think of an abstract aspect of particle physics mixed with a dash of quantum theory and scaled up to a size we can’t comprehend, which may still be smaller than 1 mm³). This implies that machines then have a legal “owner”, which creates an interesting issue if you believe in sentience / consciousness. The law is created to apply to humans or invoke a human responsibility; sending a robot to prison only works in sci-fi movies.

How would humans rule in a situation of two machines litigating against each other at a rate that creates legal text faster than a human can read or understand (due to the sheer volume of text)? Do humans step aside for the machine-vs-machine legal system? The current legal system already falls foul of cases that are purposefully inundated with evidence as attempts to flood the other side with distractions: the bury-the-evidence defence.

GPT-x type systems are already capable of flooding the likes of Twitter with responses and interactions / distractions that the vast majority of people may not recognise are from a machine. Case law timing is based on humans, but a machine can produce thousands of times more text than a human, 24 hours a day. The legal system is currently only set up for human-type intelligence, so machine-generated patents may well be the last issue on the list for the legal system.

Twitter may well be the first platform to fall to, or be defended by, a machine (watch what happens with Dojo).

The type of developments that a general system may well create could include global-scale game changers like pinch fusion (3rd-generation “compact” fusion devices), nano/atomic manufacturing (e.g. superconductivity via atomically structured Cooper pair/vortex channels, enabling a global power grid), atomic-construction supercapacitors replacing all batteries, and semiconductor design changing from clock-synchronous designs to completely dynamic asynchronous systems more akin to the way brain waves propagate, only a million times faster. The list goes on and gets very strange, well into the realm of current sci-fi.

For a general intelligence, it does not need to be and should not be sentient, although it may well be a very difficult challenge to prevent “a type of sentience” evolving, due to the way I think the type of recursive “thought” process has to work. The system has to re-evaluate what it does as it learns, and to me that always ends up with a type of sentience evolving. Richard Feynman’s “why” for a machine would need boundaries, but information always leaks, and the way the memory works there may well be no easy “memory wipe” sci-fi type capability.

I don’t subscribe to modeling and replicating the biology of the brain, as per the points Bitking made about human/animalistic failings that would pose certain problems. Besides, I believe that following the biological route may well be hugely inefficient in relation to how biology works relative to how a digital alternative can work. Biology explains the process within the inefficiencies and legacy hangover of an evolutionary environment. Computers can implement a process that biology can’t (i.e. beyond the dimensional constraints of biology, e.g. the thin-layer dynamics of the cortex). Today’s systems can already far exceed a human memory just because we have such an incredibly slow interface for certain types of information. A system can read a book in less than a second, whilst a human may take at least half an hour (Kim Peek) and then sleeps for 8 hours a day.

There are some very strange and interesting times ahead.

2 Likes

I really appreciate all the detailed responses provided; an interesting group of people here. I’m also wondering if anyone uses components of HTM.core in their applied work, like SDRs, the SP, or the encoders. Have any of them proven useful in your applications?
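To be clear about which components I mean, here is a minimal sketch (my own toy illustration, not code from anyone’s project) of the usual encoder to Spatial Pooler pipeline. The class and parameter names follow the htm.core Python bindings as I understand them, so treat the exact names and values as assumptions to double-check against the package’s own examples:

```python
# Hedged sketch: illustrative parameter values; htm.core API names as I understand them.
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler
from htm.encoders.rdse import RDSE, RDSE_Parameters

# Encoder: turn a scalar reading into a Sparse Distributed Representation (SDR).
enc_params = RDSE_Parameters()
enc_params.size = 1000       # total bits in the encoding
enc_params.sparsity = 0.04   # roughly 4% of bits active
enc_params.resolution = 0.5  # values closer than this share most of their active bits
encoder = RDSE(enc_params)

# Spatial Pooler (SP): learn a sparse set of active columns for each input SDR.
sp = SpatialPooler(
    inputDimensions=(enc_params.size,),
    columnDimensions=(2048,),
    potentialRadius=enc_params.size,
    globalInhibition=True,
    localAreaDensity=0.02,
)

active_columns = SDR(sp.getColumnDimensions())
for value in (10.0, 10.5, 50.0):
    encoding = encoder.encode(value)            # SDR for the raw value
    sp.compute(encoding, True, active_columns)  # learn=True while running
    print(value, active_columns.sparse[:10])    # a few of the active column indices
```

Nearby values (10.0 and 10.5 above) should end up with heavily overlapping representations, which is the property I’d expect to matter in applied work.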

Very cool! Do you have any video of the robot interacting with people and its environment? It would be interesting to see. What is the initial state of the robot? For example, to understand language I assume there is an NLP model somewhere; is it trained as a system or as individual components? Is there a training phase first, after which it goes “live” and continues to learn, or does it start from nothing like a baby being born? Although maybe a baby doesn’t start from nothing, I’m actually not sure.

I will check out the above projects; it’s helpful to understand the different research being pursued.

You’re probably right! You raise some good points on the legal system and the potential disruption from intelligent machines that I had not considered.

Is there anything common to the breakthroughs you list that makes you optimistic that a general system would be the path to a solution? Or is it just that humans have failed to solve them, and perhaps by removing some biological constraints to create a better brain, a general system could solve problems humans can’t?

Humans need to be specialists in more and more narrow areas as complexity (and depth of understanding) increases in order to make significant progress, and the degree of specialisation has limits. Those limits are the complexity of a concept that the human mind can hold and manipulate, which forces us to break up complex problems or tasks into smaller and smaller parts. Memory capabilities are one constraint, whilst another is the limit of our slow and coarse style of communication, for example.

The brain is asynchronous, with no central clock (many clocks may be embedded within, i.e. time cells), and we struggle to conceptually imagine and model within our minds even a handful of neurons and synapses firing with different pulse counts, pulse frequencies, burst repetition frequencies, and dendritic distance effects. This is the type of problem humans just can’t scale within our minds, but it is key to the types of discoveries ahead. Even if we could imagine modeling just three neurons for about 100 ms, it might take us hours to communicate that internal representation. Stephen Hawking is an example of how a mind can be stalled by an incredibly slow external interface for communication.

We frequently end up having to externalise with tools that we use to evaluate our thoughts and ideas, e.g. very slowly programming a computer to model something, using a spreadsheet, creating a 3D model in a CAD system, etc.

Computer based systems would not have the same low limits of combined complex conceptual understanding and would be able to join together far more conceptual fragments to resolve issues we have to break down into smaller parts or externalise because of our biological limits.

The key reason a general system is needed is the ability to deal with the unknowns of the discovery process.

I mostly agree with your post, @BrainVx, but I think there is more.

Companies can hold patents, can’t they? And these can be traded through stocks. I think there is a reasonable case to be made that even non-conscious AI systems at some point could be incorporated, as explored in the Richard K. Morgan novel Altered Carbon. (Which, by the way, I think is a terrible idea, since corporations are considered legal persons but in essence behave like sociopaths).

But the bigger point is that patent law (like any other form of control) only works when there is a military structure to enforce it and when there is a need for another party to trade with.

1 Like

Lots of vids on the web and in FB showing interactions. I have one where mine is looking at itself in the mirror. No, not quite self-aware yet.

Huh? I think it’s well-known that one Turing machine can simulate another. Either both halt, or neither halts.

If you want to simulate “a human brain operating for 100 years”, that calculation is guaranteed to terminate. Then maybe you’ll decide to simulate it for another 10 years, and that calculation will terminate too.

You can’t ask questions about what the human brain will or won’t do given infinite time, and expect to get an answer in finite time. But that wasn’t the question at issue, I think.
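To make the distinction concrete, here is a toy sketch (nothing brain-specific, and the function names are just illustrative): a simulation run for a fixed, pre-chosen number of steps terminates by construction, no matter what the step function computes; the undecidable halting question only arises when you ask an unbounded question about the system.

```python
def simulate(step, state, n_steps):
    """Run an arbitrary state-transition function for a fixed number of steps.

    Because the loop bound is fixed up front, this always terminates,
    regardless of what `step` does. The halting problem only bites on
    unbounded questions like "will this system ever reach state X?".
    """
    for _ in range(n_steps):
        state = step(state)
    return state

# "Simulate a brain for 100 years" at some fixed timestep is just a very large,
# but finite, n_steps, so the computation is guaranteed to finish.
```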

I thought this particular sub-thread was about the question “Do you think your vision of machine intelligence is possible with computers or would we need to bioengineer something that more closely resembles an actual human brain?”, not the question “Can we reproduce on a computer the exact sequence of thoughts that a particular person is thinking?”

For example, my intelligent brain is able to read a paper and learn something from it. If a butterfly had flapped its wings differently 3 years ago, all the randomness and chaos around me would be slightly different, and maybe I would be doing and thinking something else right now than writing this response. But my brain would still be able to read a paper and learn something from it.

Anything that humans can do reliably—better than chance—could also be done reliably by a hypothetical brain simulation, even if that simulation did not have access to the unpredictable chaos and randomness of my actual real-world brain. Right?

1 Like

I’d like to point out that computer simulation comes down to fidelity, which is the exactness of the degree to which something is copied or reproduced. So when fidelity reaches a maximum value, you have copied the simulated system exactly. For example, if you are wondering what the current SOTA in computer animation fidelity is, just go watch the latest episodes of Love♡Death💀Robots🤖.

The heartburn Jaynes had about computers becoming sentient spoke to implementation, not architecture, where he said: “Present computer programs do not work on the basis of metaphors. Even if computers could simulate metaphoric processes, they still would not have the complex repertoire of physical behavior activities over time to utilize as metaphiers to bring consciousness into being. Computers, therefore, are not — and cannot be — conscious.”

Likewise Dreyfus, whose target was the von Neumann architecture; but he implied that a machine using a neural-like structure might achieve sentience.

1 Like