Google DeepMind claims they're close to achieving human-level AI

Dr. Nando de Freitas, a lead researcher at Google’s DeepMind, has declared “the game is over” when it comes to achieving artificial general intelligence (AGI), or human-level intelligence.

Do we believe this? I don’t.

1 Like

I don't believe it either. The claim was made, but nothing was offered to substantiate it. Existing accomplishments by DeepMind are amazing, but none of them - zero - involve understanding or any deeper manipulation of the data of the kind that could be expected to indicate AGI.

My favorite example of how very far we are from AGI is how easy it is to "break" AI interpretation of pictures. Here's an example from medical imaging: AI techniques in medical imaging may lead to incorrect diagnoses -- ScienceDaily
but this is an across-the-board phenomenon. And why? Because AI can categorize, but it does not have any deeper understanding, and it is not poised to.

Can it? I think yes, but not using current methods and, IMHO, definitely not with neural networks.

Once again I wish we had the ‘ha ha’ button here.

2 Likes

From the paper…
[images: captioning examples from the paper]

That's the sort of AGI I would really trust… and if that's the definition of "all but here", then I also have a fully capable AGI die with the same sort of success rate.

Source: [2205.06175] A Generalist Agent (https://arxiv.org/pdf/2205.06175)

I find it amusing that people accuse scientists of cherry-picking while doing the exact same thing themselves @BrainVx. What you're showing are the top-k results, not the single most likely answer the model gives with greedy sampling. These are the true annotations:

I definitely agree with Dr. de Freitas that AGI is coming. The catch is simply: not yet :wink: Other experts in the field have estimated 10-15 years, which is a much more reasonable timeline IMO.

On the topic of results, I feel that people outside the field often only see a small sliver of the whole field, judging all advances by comparison to vanilla DNNs despite a huge lack of scientific evidence for that comparison. For instance, multiple forum members here have stated that models can't even logically reason or deduce complex concepts.

Large models are consistently able to do that and more. Let's not forget that the same model can actually do hundreds of tasks, which IMO is closer to actual intelligence than pseudo-GOFAI research and the lack of results presented by other fields already claiming to be near-AGI :man_shrugging:

2 Likes

It's the type of error that I see as a problem, because it shows a lack of self-verification that the answer being given is actually consistent with the input. The model should take the answer, generate an image from that answer, and then compare the two images to see whether they make sense together. After all, that seems to be the process we run in our own heads.
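A minimal sketch of that kind of round-trip check, assuming hypothetical caption_image, generate_image_from_text and image_similarity functions (none of these are real APIs; any captioner, text-to-image model and perceptual metric could stand in):

```python
# Hypothetical round-trip self-verification: caption -> regenerate -> compare.
# caption_image, generate_image_from_text and image_similarity are placeholders
# for whatever captioner, text-to-image model and similarity metric you have.

SIMILARITY_THRESHOLD = 0.8  # arbitrary cut-off for "the two images agree"

def self_verified_caption(image, caption_image, generate_image_from_text, image_similarity):
    """Caption an image, re-render the caption, and check the round trip."""
    caption = caption_image(image)                      # the model's first answer
    reconstruction = generate_image_from_text(caption)  # draw what the answer describes
    score = image_similarity(image, reconstruction)     # do the two pictures make sense together?
    return caption, score >= SIMILARITY_THRESHOLD
```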

I would argue that, say, "cherry-picking" a self-driving car accident caused very clearly by the model, one which may have killed someone, is not really cherry-picking. It either works or it's critically flawed, and some of this approach seems like flogging a dead horse, no offence to horses.

[image: example of such an error]

Are these types of errors not concerning if this model is thought of as close to AGI?

AGI is definitely on the horizon, but not with the current approach, because it's superficially adding depth to a model that lacks the shallower recursive process which is akin to our perception of thought. Biologically we are a recursive model with an equivalent "deep learning" depth of about 30-60, a figure suggested by the synaptic timing of thought. Deep learning I think of as the cerebellum, just not as deep.

Humans also present multiple answers to the same problem; you can take multiple trajectories when driving - things are never black or white. The paper shows top-k because that's standard, not because the model is confused - it has a single most likely answer, but the paper interrogates it for several, if that simplification makes sense.
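For what it's worth, here is a toy sketch (not the paper's code) of the difference between greedy decoding and reporting the top-k candidates, using made-up logits over a tiny vocabulary:

```python
import numpy as np

def softmax(logits):
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

def greedy_pick(logits):
    """Greedy decoding: always take the single highest-probability token."""
    return int(np.argmax(logits))

def top_k_candidates(logits, k=3):
    """Top-k: list the k most likely tokens, as in the paper's figures."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in best]

logits = [2.0, 0.5, 1.5, -1.0]          # made-up scores for four tokens
print(greedy_pick(logits))              # 0 -- the single "most correct" answer
print(top_k_candidates(logits))         # several plausible answers with probabilities
```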

Not really - the model was trained on video games. The authors were showing off that, despite that, it can still learn English and converse, as well as understand things outside of its training to a vague degree. Currently, multi-modal models like Flamingo are where you should be looking when comparing tasks such as captioning and conversation.


1 Like

Do the mechanisms actually do reasoning, or does it just look like that to an outside observer? From my perspective and limited knowledge, it's just classification (or something similar) with some superhuman characteristics. That is a very powerful thing, but it's something other than intelligence.

That's a highly subjective question, and it's hard to interpret. Usually, researchers either quantify reasoning abilities with various benchmarks (on which PaLM outperforms) or they study capabilities, like the ability to explain jokes or deduce logically, as I linked above.

You'll never be able to see whether models truly "reason" until we crack interpretability (which I personally doubt we can do in a reasonable time frame). The best we can do in that case is study a model until its capabilities are equal to a human's, taking human-level intelligence as the reference standard for reasoning.

Shades of the “Chinese room” debate!
Do you know what this "reasoning" looks like - can you describe it in the detail necessary to implement it?
How do you think this differs from what is being described in this thread?

Just a little sorbet to clear the palate of this thread…

Artificial intelligence is breaking patent law

It is astoundingly easy to get most overly broad patents set aside.
Any claim of patenting "artificial intelligence" certainly fits in this category.

1 Like

@david.pfx You might find this article more helpful:

3 Likes

In terms of chasing AGI, some useful articles worth a look IMHO:

1 Like

That is an interesting article. The statement

This is what Gato does: it learns multiple different tasks at the same time, which means it can switch between them without having to forget one skill before learning another. It’s a small advance but a significant one.

That is hardly a small advance. This might enable Dennett's Multiple Drafts or even the Thousand Brains theory. Note that it was de Freitas's tweet that triggered the breathless press coverage. Just stupid and irresponsible.

1 Like

It's a small advance in the sense that Google has produced far more capable models; Gato isn't really a new architecture, just the standard transformer applied to RL trajectories.

The advance here is that you can have one model that handles everything from understanding images to comprehending language, as well as playing 150+ games and controlling a real-world robot arm. They demonstrated that no matter how many tasks, and no matter how diverse, these large models still learn and are able to transfer concepts to and from almost any domain, at the end of the day.
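Very roughly, the trick is just to flatten every modality into one token sequence and train a decoder-only transformer on it. A toy sketch of that idea (the tokenizers below are stand-ins, not Gato's actual scheme):

```python
# Toy illustration of serialising mixed-modality episodes into one token stream,
# in the spirit of Gato. Real systems use learned image patches and discretised
# actions; the string tags here are only placeholders.

def tokenize_text(text):
    return [f"txt:{word}" for word in text.split()]

def tokenize_image(patches):
    return [f"img:{patch}" for patch in patches]

def tokenize_action(action):
    return [f"act:{action}"]

def serialize_episode(steps):
    """Flatten (observation, action) steps into a single token sequence
    that a standard decoder-only transformer can be trained on."""
    tokens = []
    for observation, action in steps:
        if isinstance(observation, str):
            tokens += tokenize_text(observation)
        else:
            tokens += tokenize_image(observation)
        tokens += tokenize_action(action)
    return tokens

episode = [(["patch1", "patch2"], "LEFT"), ("you see a door", "OPEN")]
print(serialize_episode(episode))
# ['img:patch1', 'img:patch2', 'act:LEFT', 'txt:you', 'txt:see', 'txt:a', 'txt:door', 'act:OPEN']
```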

It’s basically a testament to their learning prowess outside of language, vision and speech too :wink:

But what really strikes me is that this is all imitation learning. It's simply trying to reproduce data created by humans, and in a weird manner it's converging (somehow) on human-like intelligent behaviour in order to solve those tasks to the best of its ability. It's really that weird, counterintuitive result whereby learning to parrot actually ends up teaching it concepts and reasoning too… :thinking:
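To make that concrete, here is a minimal sketch of the imitation objective (generic next-token cross-entropy over made-up logits, not the paper's training code): the model is only ever rewarded for reproducing the next human-generated token.

```python
import numpy as np

def next_token_imitation_loss(logits, target_tokens):
    """Average cross-entropy for predicting the human's next token.
    Nothing in this objective asks for 'reasoning' -- only copying the data."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    picked = probs[np.arange(len(target_tokens)), target_tokens]
    return float(-np.log(picked).mean())

# Two positions over a made-up vocabulary of 3 tokens; targets are what the human did.
print(next_token_imitation_loss([[2.0, 0.1, -1.0], [0.3, 0.2, 1.5]], [0, 2]))
```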

Any comments from the Chinese room full of parrots?

Thanks, I’m reassured.

Just brings us back to the question: what would an AGI have to do to convince us it is one? How do we define intelligence in a way we can scientifically test for it?

I read Penrose but I really never accepted the Chinese Room argument.

What test can you apply to a Chinese Room (other than opening it up and looking inside) to determine whether it is intelligent or not?

The Chinese Room argument fails at the assumption that “out there” is some objective entity (e.g. Chinese language) that should be comprehended by “the room”.

There is nothing in the room but its self-built model. Whether it really understands or misunderstands whatever exists "out there" is subject to judgement (i.e., interpretation/processing) within other rooms (the "correct" ones) according to their respective self-built inner models.