The principles of intelligence and cognition: John Vervaeke lays down the philosophical compass for defining AGI

If identifying the innate properties of universal forms of intelligence is a prerequisite to attaining it, then this talk is one we should pay attention to. John Vervaeke has a phenomenal ability to explain the principal criteria we should be aware of in this field. One thing becomes very clear: prediction alone is a very far cry from the AGI goal. But John nevertheless agrees that the goal is attainable.

2 Likes

In my opinion, Vervaeke points out very well some of the properties we should expect to find in sentient AGI. He offers a very enriching perspective on and interpretation of current developments surrounding LLMs as well as several other AI experiments, providing both a well-founded scientific analysis and a philosophical one.

After around 40 years of reading every seminal publication I can get my hands on about intelligence, consciousness, awareness and neuroscience, as well as computational modelling of these emergent phenomena, and 11 years in this forum, I find it increasingly difficult to listen to newbies who jump onto the over-hyped AI / ML train and purport to be experts in the subject. I have now discovered a new class of “uncanny valley”.

This is the new “uncanny valley” felt by more experienced AI thinkers (including neuroscientists and cognitive scientists) who have invested plenty of time pondering this complex subject, when they listen to its superficial treatment by presenters with very little experience. I seem to be encountering these “uncanny moments” with increasing frequency at work and among friends. This leads me to conclude that our society is really not ready to understand these developments, and that worries me. Many people are inclined to overlook important aspects of this emergent technology and its implications, which leads to exaggerated fears and unfounded doomsday predictions. (I would never criticize a well-founded analysis with negative conclusions.) However, we are seeing the classic fear-driven, uninformed reactions of a society that has not prepared itself to comprehend the capabilities, limitations and potential of these technologies.

2 Likes

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” - Max Planck

“The greatest obstacle to discovery is not ignorance; it is the illusion of knowledge.” - Daniel J. Boorstin

If you think everyone around you is misled, then perhaps you should stop to consider that you might be the ignorant one.

Studying a subject for longer doesn’t imply more wisdom; it implies only more knowledge. You can memorize every mathematical theorem out there, but it’s all useless if you can’t appreciate the subtle, sublime intricacies of logic.

I would recommend taking a more rationalist approach in determining your beliefs and adjusting them to new evidence, rather than going on a tangent about “kids these days…”

Could you argue that the vast majority of Medium shitposts and frenzied, crazy, mind-blowing AI insights :exploding_head: arise from pure hype, speculation and straight-up misinformation? Sure.

But I’m more worried by an attitude of blanket dismissal than by AI research becoming the next crypto craze. Research is still bound to academic institutions, so I feel we’re safe - for now at least - in that academics would be relatively unaffected as long as their work continues to bear fruit. The hype could even bring fresh eyes and funding to the field.

I think this knee-jerk reaction is warranted to a degree, but I would remind you that Western media operates on hype cycles, and AI is just the newest one. Eventually, they’ll find something else to talk about and gather more eyeballs. But I doubt AI as a topic of discussion will be fading anytime soon…

2 Likes

There are plenty of people who feel the same. In the media hype you mention, half of it is “wow, AI is here”, but for such strong hype you need arguments, so the other half is “wow, AI sucks”. Some of the latter is just for fun, because AI can be stupid in unexpectedly funny ways. Some is about the actual technical stuff.

I think the dismissal is more towards the idea of AI being intelligent. If you think of it as general intelligence, then on the AI safety side of things, some of the things it can do are very scary. On the make-a-functioning-AI side, some of the things it can do are very exciting.

I haven’t watched that video. I refuse to, and YouTube constantly recommends it. I think videos like that are making people afraid of the wrong thing. Paperclip maximizers won’t happen in the near future. (Paperclip maximizers are a basic example of the challenge of aligning AI with our goals / values: a godlike AI whose goal is to make paperclips turns everything into paperclips.) Language models are sort of about mimicry (not word-for-word), and they’re the most successful AI right now.

Some AIs are far better than humans at the game Go. But AI researchers found weaknesses, and a novice Go player was able to beat a superhuman Go AI. In Go, you capture by surrounding your opponent’s stones. If you use a double-sandwich technique, where the AI surrounds your stones but you in turn surround the group doing the surrounding, the AI doesn’t understand what’s going wrong.
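
To make the “surrounding” mechanic concrete, here’s a minimal sketch of the Go capture rule in Python: a group of connected stones is captured when it has no liberties (adjacent empty points) left. The board representation and helper names are purely illustrative, not from any real Go engine.

```python
# A minimal sketch of the Go capture rule described above: a group of connected
# stones is captured when it has no liberties (adjacent empty points) left.

def group_and_liberties(board, start):
    """Flood-fill the group containing `start` and collect its liberties.

    `board` maps (row, col) -> 'B', 'W', or '.' for empty.
    Returns (set of points in the group, set of its liberty points).
    """
    color = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for neighbor in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if neighbor not in board:
                continue  # off the edge of the board
            if board[neighbor] == '.':
                liberties.add(neighbor)
            elif board[neighbor] == color:
                frontier.append(neighbor)
    return group, liberties

def is_captured(board, point):
    """A group with zero liberties is captured (surrounded)."""
    _, liberties = group_and_liberties(board, point)
    return len(liberties) == 0

# Tiny demo: a white stone at (1, 1) fully surrounded by black stones.
board = {(r, c): '.' for r in range(3) for c in range(3)}
board[(1, 1)] = 'W'
for pt in ((0, 1), (2, 1), (1, 0), (1, 2)):
    board[pt] = 'B'
print(is_captured(board, (1, 1)))  # True
# The double-sandwich exploit plays with exactly this: the group doing the
# surrounding can itself end up with no liberties once it is surrounded in turn.
```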

So the best Go players are beaten by Go AIs, which in turn are beaten by novice Go players. The conclusion the video draws is that we really don’t understand how AI works (because it’s a black box; I’m sure we understand the principles of how AI learns, just not what it learns). It’s alien to us, and deeply integrating it into society could be dangerous, because failures wouldn’t have a known source.

One thing he talks about in the video is how large language models still have trouble with truthfulness. They say things a person might say, but they don’t understand concepts, so there’s a fundamental problem with truthfulness that larger models won’t solve. That makes them unsuited for advancing science and technology, but they’re still really potent for misinformation campaigns.

Of course they still have positive uses, just not for the goal of advancing science and technology. In his opinion, and mine too, that’s the most important thing AI could do.

3 Likes

What I’m saying is that it’s every hype cycle - crypto, AI - it doesn’t make a difference. That’s just how Western media operates, because it effectively maximizes eyeballs.

It’s true to an extent, and I side with you here - but you also have to recognize that Bostrom’s (and Yudkowsky’s) entire position is that you don’t necessarily need a super-smart misaligned AI to wreak havoc, just a sufficiently powerful goal-directed optimizer.

LMs really excited the community because, despite their failings, they’re excellent optimizers. Think about it from a meta perspective for a second - you’re asking a model to consume numbers and predict what comes next with absolutely no grounding, yet it’s able to understand, analyze and perform complicated tasks through that proxy.
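
For anyone who hasn’t internalized just how bare that training signal is, here’s a toy sketch of “consume numbers and predict what comes next”. Real LMs are transformers trained with cross-entropy over subword ids; this bigram counter and made-up corpus just make the framing concrete.

```python
# A toy version of next-token prediction over integer ids.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# The model never sees words, only integer ids.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[word] for word in corpus]

# "Training": count which id tends to follow which id.
next_counts = defaultdict(Counter)
for current, nxt in zip(ids, ids[1:]):
    next_counts[current][nxt] += 1

def predict_next(token_id):
    """Return the most frequent next id seen after `token_id`."""
    return next_counts[token_id].most_common(1)[0][0]

inverse_vocab = {i: word for word, i in vocab.items()}
print(inverse_vocab[predict_next(vocab["the"])])  # -> 'cat'
```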

Not every model or system given that task would have those capabilities. It’s only because NNs are such powerful optimizers that they make everything else look like a toy. If you watch a lot of YouTube, you’ll notice how effectively the algorithm works - how it serves you ever-better content to keep you hooked. Its effectiveness is insane - you could spend hours every day and not get bored.

So back to paperclips: if LMs really are such good optimizers, why not have one take those numbers and, instead of predicting them, tweak them so that some reward function somewhere measuring paperclips is maximized? How well do you think that would work? :slight_smile: Given enough iterations, and maybe bootstrapped from text, don’t you think the LM would end up fundamentally deceptive?
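
As a toy illustration of that switch from prediction to reward maximization, here’s a hill-climbing sketch with an invented action space and a hypothetical “paperclip” reward. It’s obviously not how a real agent would be built; it just shows how an optimizer with a narrow objective lands on a degenerate plan.

```python
# Toy sketch: pick numbers (actions) that maximize a reward instead of predicting them.
import random

ACTIONS = ["make_paperclip", "ask_human", "do_nothing", "acquire_more_wire"]

def paperclip_reward(plan):
    """Hypothetical proxy reward: how many steps produced a paperclip."""
    return sum(1 for action in plan if action == "make_paperclip")

def hill_climb(steps=12, iterations=1000):
    plan = [random.choice(ACTIONS) for _ in range(steps)]
    for _ in range(iterations):
        candidate = list(plan)
        candidate[random.randrange(steps)] = random.choice(ACTIONS)
        if paperclip_reward(candidate) >= paperclip_reward(plan):
            plan = candidate
    return plan

# With nothing else in the objective, the optimizer quickly converges on a plan
# that is essentially all "make_paperclip" -- the reward says nothing about
# anything else we actually value.
print(hill_climb())
```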

Yep, the system has some failings, but I would remind you that a lot of them are adversarially designed. Another optimizer NN had the sole task of breaking the Go model. The flying dagger position, for instance, was inspired by adversarial attacks on KataGo.

If you had access to the human brain, it would be even easier to trick - as papers have found, NNs are already more robust to vision adversarial attacks than humans; a few minutes of Photoshop is enough to fool a person. So comparing apples to oranges here could be very misleading.

Gary Marcus has historically had some, uh… very ‘strong’ opinions on deep learning, and is a bit of a joke and, to an extent, a charlatan.

I don’t mind a bit of debate, but Marcus’ claims are so absurd that almost everyone dismisses his outlandish (and somewhat polarizing) statements.
I think some of them have merit, but the problem is that he doesn’t really present the arguments and scientific evidence to back them up. Tweets are hardly scientifically rigorous.

Take truthfulness as an example: the recent TruthfulQA benchmark turned out to be adversarially designed against GPT-3 and other LLMs, and a model pretrained on 4chan outperformed everything by a huge margin.
Testing such abstract, subjective ideas will always lead to leaky, bad benchmarks - ones where a bit of prompting can outperform SOTA by miles, simply because they rely on deceiving the model more than anything else.

Time and time again, it’s been proved that LMs do learn abstract concepts. If you’re talking about hallucinations rather than truthfulness, that’s an alignment problem - the model has literally zero incentive to be grounded and not hallucinate. That’s why RLHF was such a big deal in alignment. It helped quite a bit - not perfectly by any means - but it goes to show that larger models and alignment, maybe even multimodal models, are needed for actual, true grounding.
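
For reference, the core of the RLHF recipe is a reward model trained on human preference pairs with a Bradley-Terry style loss, after which the LM is fine-tuned to maximize that learned reward. A minimal sketch of the loss, with made-up scores standing in for a reward model’s outputs:

```python
# Bradley-Terry style preference loss: -log sigmoid(r(chosen) - r(rejected)).
import math

def preference_loss(reward_chosen, reward_rejected):
    """Small when the reward model ranks the human-preferred answer higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(reward_chosen=2.0, reward_rejected=-1.0))   # ~0.049 (agrees with the label)
print(preference_loss(reward_chosen=-1.0, reward_rejected=2.0))   # ~3.05  (disagrees, big gradient)

# The policy LM is then fine-tuned (e.g. with PPO) to maximize this learned
# reward, usually with a KL penalty keeping it close to the pretrained model.
```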

Right now, yes. AI can advance science to some degree - specialized models like AlphaFold are hailed as breakthroughs, and they work: AlphaFold predicted the structure of Covid-19’s spike protein more than half a year before conventional labs verified it.

But LMs alone can’t do that. So what’s the solution? Recursive self-improvement. If we do reach the AGI stage, the model would need to self-improve and be aligned towards the goal of solving scientific problems. Only then would you advance science. And because it’s an LM, abstract goals like “exploring new concepts” would work wonderfully well here.

(If it were embodied, I suppose that would help further. But I see no reason why it can’t solve problems and propose experiments for someone IRL to verify.)

4 Likes

Basically, in the sense that it’d act in ways we didn’t expect or want. I don’t think it’d understand what people are thinking though.

How would you disprove that? If a language model makes a mistake, humans make mistakes too, and it might just not be powerful enough.

With language models, it also seems impossible to prove they learn abstract concepts, because they might just be recognizing patterns in words. Perhaps it’s a matter of semantics. Would you consider a sufficiently complex / general pattern in words to be an abstract concept? What if that pattern were associated with, e.g., visual information, so it’s like when people see a bird and think “bird”?

In many cases, that’s as good as abstract concepts, but it seems to me like it works fundamentally differently.
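
The “pattern in words associated with visual information” idea is roughly what contrastive text-image models (CLIP-style) operationalize: captions and images get mapped into one embedding space and associated by similarity. A toy sketch, with random placeholder vectors standing in for real encoder outputs:

```python
# Toy version of associating a word pattern with visual information.
import numpy as np

DIM = 64

def embed_text(caption):
    """Stand-in for a text encoder: a deterministic pseudo-random vector."""
    seed = abs(hash(caption)) % (2**32)
    return np.random.default_rng(seed).standard_normal(DIM)

# Pretend the image encoder put this bird photo near the matching caption.
image_of_bird = embed_text("a photo of a bird") + \
    0.1 * np.random.default_rng(0).standard_normal(DIM)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

captions = ["a photo of a bird", "a photo of a car", "a photo of a teapot"]
scores = {c: cosine(embed_text(c), image_of_bird) for c in captions}
print(max(scores, key=scores.get))  # -> "a photo of a bird"
```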

The types of mistakes language models make seem quite different from the types of mistakes humans make. That seems to be because of how they work, not their inputs / experiences, scale, etc.

3 Likes

Isn’t that what some research papers are?

3 Likes

If it’s a very good optimizer, and understanding (anticipating) how people think and behave towards it is crucial to the goal, then there is no reason why it wouldn’t optimize towards specific behaviors precisely to understand and exploit people better.

Mechanistic interpretability provides the bulk of the answers here. It’s been demonstrated that even next-word predictors can learn the complex game state of Othello (a board game) despite being supplied only with moves played by Othello players, encoded as numbers.
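
For anyone curious, the methodology behind that Othello result is probing: train a small classifier to read each board square’s state out of the model’s hidden activations. A sketch of the experiment’s shape, with random placeholder activations (the real runs use the trained next-move predictor’s residual stream):

```python
# Shape of the probing experiment. With random placeholder activations the
# probe's accuracy sits at chance; with real Othello-GPT activations it decodes
# the board far above chance, which is the evidence for a learned world model.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_positions, hidden_dim = 2000, 512
activations = np.random.randn(n_positions, hidden_dim)     # placeholder residual-stream states
square_state = np.random.randint(0, 3, size=n_positions)   # 0 = empty, 1 = mine, 2 = theirs

probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:1500], square_state[:1500])
print("held-out probe accuracy:", probe.score(activations[1500:], square_state[1500:]))
```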

Algorithmically too, they learn novel algorithms which are as yet undiscovered and confusing - such as the Neural GPU learning a linear-time algorithm for binary multiplication (for which no such algorithm is otherwise known), or vanilla NNs learning to use the discrete Fourier transform and trig identities to do arithmetic, which is quite insane.
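
The Fourier / trig-identity claim refers to mechanistic-interpretability work on grokking, where small networks trained on modular addition were found to encode residues as rotations and combine them with the angle-addition identities. This sketch reimplements that discovered algorithm directly; the particular frequencies a trained network picks vary, so the ones here are arbitrary.

```python
# The "clock" algorithm: (a + b) mod P recovered via rotations and trig identities.
import numpy as np

P = 113  # modulus commonly used in these experiments

def modular_add_via_fourier(a, b, key_freqs=(1, 2, 5)):
    angles = 2 * np.pi * np.arange(P) / P
    logits = np.zeros(P)
    for k in key_freqs:
        # cos(k(a+b)) and sin(k(a+b)) from the angle-addition identities...
        cos_ab = np.cos(k * angles[a]) * np.cos(k * angles[b]) - np.sin(k * angles[a]) * np.sin(k * angles[b])
        sin_ab = np.sin(k * angles[a]) * np.cos(k * angles[b]) + np.cos(k * angles[a]) * np.sin(k * angles[b])
        # ...then cos(k(a+b) - kc) peaks exactly at c == (a + b) mod P.
        logits += cos_ab * np.cos(k * angles) + sin_ab * np.sin(k * angles)
    return int(np.argmax(logits))

assert modular_add_via_fourier(97, 31) == (97 + 31) % P
print(modular_add_via_fourier(97, 31))  # 15
```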

For us, we can’t reliably prove LMs learn abstract concepts except by testing them on abstract tasks. Really, the moment you have a way to judge whether humans have abstract concepts, you have a way to settle the question for LMs too.

I feel the evidence that they can learn complex structured representations and algorithms outweighs the evidence against LMs learning abstract concepts. So for now, my beliefs are updated in that direction.

Not really; it’s surprising that it took people so long to figure out how LMs answer crucial tasks incorrectly. The common assumption was that they lack common sense - which is true to an extent; it comes with scale - but apparently the problem was that LLMs made assumptions humans didn’t expect, due to the ambiguity of language.

You can often ask it for its assumptions and clarify what it thinks. When it does make errors, they tend to be reasoning errors in a long CoT, or arithmetic slips - which are very human errors too, if you think about it.

Perhaps we’re on different pages and thinking of different mistakes they make, but the ones that immediately pop into my mind are very human. That’s not a gold standard, but at least it’s a nice reference point.

3 Likes