Why do members here think DL-based methods can't achieve AGI?

Compute is cheap - as @BrainVx quoted below, which demonstrates how compute will develop over the next few years.
Data? We are on the internet - we literally haven’t scratched the surface yet. We haven’t even covered all of the text, let alone all the pictures, videos, audio, etc. Every single modality counts. I am happy to report that multiple Big Tech companies, as well as startups/NPOs, are doing the data collection - LAION, for example.

is peanuts. It’s nowhere near the Manhattan Project, which led to significant advances.

That’s how science works - it’s not about making things cheap but about understanding and gaining knowledge.

You can look it up - again, I am not claiming GPT-3 HAS achieved AGI in any form.

I am merely challenging you to locate a system more proficient in reasoning than GPT-3. You can search for yourselves; these are what I found via Google: [To what extent is GPT-3 capable of reasoning? - LessWrong, https://towardsdatascience.com/is-gpt-3-reasonable-enough-to-detect-logical-fallacies-3c3dc4b7fda1]

Another thing - if you try out Codex or Copilot, you can see some examples of inferential understanding, particularly in writing out complex models. It’s not everything, but it demonstrates something at least.

Universal Approximation Theorem.
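Since the UAT keeps coming up in this thread, here is a minimal runnable sketch of what it promises in miniature - a single hidden layer of random ReLU features, with only the output weights fitted, approximating sin(x). All sizes and scales below are my own illustrative choices:

```python
import numpy as np

# One hidden layer of *random* ReLU features; only the output weights
# are fitted (by least squares). Per the UAT, enough such units can
# approximate any continuous function on a compact interval.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)[:, None]
y = np.sin(x).ravel()

n_hidden = 100
W = rng.normal(size=(1, n_hidden)) * 3.0      # random, fixed input weights
b = rng.normal(size=n_hidden) * 3.0           # random, fixed biases
H = np.maximum(x @ W + b, 0.0)                # hidden ReLU activations
H = np.hstack([H, np.ones((len(x), 1))])      # plus an output bias column

coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit the output layer only
max_err = np.max(np.abs(H @ coef - y))        # already small at 100 units
```

Of course, the theorem only guarantees that good weights exist; it says nothing about whether gradient descent will find them, or how many units you need in practice.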

Well, I can ask the same - Hawkins has been at this for about 20 years, and research in biology/psychology is much older than AGI research. But apparently, in the short term at least, results seem to come only from DL :man_shrugging:

Very true, indeed. But pointing to models like LaMDA, Meena, etc. is difficult. For instance, MUM: A new AI milestone for understanding information is pretty interesting, since its advanced multi-modal capabilities allow for greater reasoning capabilities. Unfortunately, it’s behind closed doors, so all we can do is wait :slight_smile:

I hope the above paragraph answers your question. It’s simple logic:


Suppose I hook up a human brain - no senses or body, just a brain in a contraption to keep it alive (with all the hormones) - directly interfacing it with data, and give it the same corpus language models are given.

Can you guarantee me that the human developed this way would be conscious? Or better, would it outperform, say, an RNN?

AFAIK from my testing, it can do up to 7-9 digit multiplication very well; it breaks down after a certain length. Why?

  • GPT-3 can’t compute over a variable number of timesteps, nor store data/ops internally. That would require a complex knowledge interchange, which MoEs might perhaps achieve if the experts have connections facilitating direct flow
  • The fact that it can perform multiplication at all is itself strong testimony that it’s not simply remembering/memorizing stuff but can actually figure out things on its own.

Lastly, due to architectural constraints, performing deeply recurrent operations would be complex.
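To make the variable-timesteps point concrete, here is a toy sketch (my own illustration, not from any paper): schoolbook multiplication counts the elementary single-digit operations the task demands, and that count grows quadratically with digit length, while a transformer spends a fixed number of layers per token regardless.

```python
def schoolbook_multiply(a: str, b: str):
    """Multiply two decimal strings digit by digit, counting the
    elementary single-digit operations performed along the way."""
    ops = 0
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            result[i + j] += int(da) * int(db)   # one digit-product
            ops += 1
    for k in range(len(result) - 1):             # sequential carry pass
        result[k + 1] += result[k] // 10
        result[k] %= 10
        ops += 1
    digits = ''.join(map(str, reversed(result))).lstrip('0') or '0'
    return int(digits), ops

p3, ops3 = schoolbook_multiply('123', '456')              # 56088, few ops
p9, ops9 = schoolbook_multiply('123456789', '987654321')  # many more ops
```

Going from 3-digit to 9-digit operands multiplies the required work by roughly an order of magnitude, which is one plausible reading of why accuracy falls off with operand length.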
One thing I can’t grasp is why everyone seems to treat GPT-3 as the only saving grace for the entirety of DL :thinking: As if, whenever it can’t do certain tasks, that’s representative of the entire research field. Strange…

Scalability as a software attribute - surely HTM/TBT has enough funds to afford a few AMD Threadrippers for a few days and try things out?

It’s simply the problem with GOFAI/biological approaches (bar some, maybe even TBT/HTM): they simply do not scale. Look at Cyc, or LogicMOO. The problem is very clear here.

Lastly - again, I am not claiming LMs to be AGI, simply asking whether a more intelligent and generalizable system exists.

Even if DL shows only teeny-tiny reasoning capabilities, I would be more interested in finding its limits than in achieving AGI - a grand experiment indeed :slightly_smiling_face:

Does this include or exclude the neurons in a petri dish playing Pong?

Math accuracy… GPT-3 will always give an answer; it just depends on whether that answer is correct.
“performing 3-digit arithmetic”

Q: Why don't animals have three legs?
A: Animals don't have three legs because they would fall over.

Q: Which is heavier, a toaster or a pencil?
A: A pencil is heavier than a toaster.

Q: How many eyes does my foot have?
A: Your foot has two eyes.

Q: When counting, what number comes before 10,000?
A: Nine thousand and ninety-nine comes before 10,000.

https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html

The latest AI summer has been powered more by better compute than better software. GPT-3 was trained on a supercomputer bought with Microsoft $$$. The software improvements can come from looking at the brain.

GPT-3 more than scratched the surface; it bottomed out years ago (if not longer). They couldn’t even ensure their data was clean and didn’t contain overlap with the test set. They found a big error in training and couldn’t even afford to restart the experiments… It saw more text than I’ll see in my lifetime. Kids need way less data. Let’s figure out how they do it.
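For the curious, the train/test overlap issue mentioned above is usually checked with word n-gram overlap between training and test text - the GPT-3 paper describes a 13-gram check. A toy sketch of the idea, with a shorter n and made-up sentences:

```python
def ngrams(text, n):
    """Set of word n-grams in lower-cased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated(train_doc, test_doc, n=5):
    """Flag a test document that shares any word n-gram with training data."""
    return bool(ngrams(train_doc, n) & ngrams(test_doc, n))

corpus = "the quick brown fox jumps over the lazy dog"
leaked = "we saw the quick brown fox jumps over a fence"    # shares a 5-gram
clean = "a completely unrelated sentence about transformers" # shares none
```

Real pipelines do this at web scale with hashing, but the principle is the same: if a benchmark sentence appears verbatim in the training set, the test score is not measuring generalization.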

The Manhattan Project actually achieved something pragmatic. The most interesting thing I’ve seen from GPT-3 is Codex, which is honestly “googling a Stack Overflow answer” with more steps. You can’t use it long before it produces code that throws errors, which then requires a developer-level understanding to fix. It will not change the way professional developers code.

We’ve already seen what the limits of GPT-X are. GPT-3 was the same architecture as GPT-2, with more data and compute. Like I keep saying, you’ll be holding your breath quite a while for both of those to scale up another 10x. It took a supercomputer and a nearly unmanageable, super-sized, dirty dataset. It does what the rest of DL does: interpolate and memorize. Big data doesn’t just make reasoning magically appear.

Are you trolling? I ask for a citation and you tell me to look it up? I’ve followed this line of research for years, and debated others about this topic. The people who built GPT-3 are not making the same claims as you about its reasoning abilities.

Yeah, the human brain.


Interesting to see here - this is what I get:


If anyone can confirm they get the same thing, that would be nice! :hugs:

I tried it again just to check:


Interesting :thinking:

GPT-3 wasn’t really trained on every single text in existence - and yes, the data-cleaning issue was more on OpenAI’s side. An example would be EleutherAI’s The Pile - while yes, a significant size difference exists, they still implemented stringent controls. And yes, I agree anyone with access to the millions of dollars needed to train GPT-3 should have used better data :man_shrugging:

The US had several other projects too - like the Norden bombsight, which didn’t turn out that well initially.

And in the end, yes, the Manhattan Project was a big victory for science - but what makes you assume it would be the only one?

I simply can’t agree more :point_up: Not only that - since GPT-3 there have been multiple advances. Let’s face it: GPT-3 is old. Models like Gopher achieve more performance than simply “scaled-up” peers like MT-NLG and try to alleviate some of the issues present.

Please - it’s not even close to the performance achieved by actual supercomputers (TOP500 - Wikipedia).

Agreed - but innovative architectures might (“might” being the keyword - so far the UAT provides the best guarantee; if biological systems hold promise, it is yet to be seen).

Reasoning is a hard one here - the way we make correlations is quite fascinating. Now, I don’t have hard evidence, but it feels to me that reasoning is strongly linked to modalities. Consider all the research debunking the idea that some people are “visual learners” - research shows that people who tackle the same concept through multiple modalities understand it better (The Learning Styles Myth is Thriving in Higher Education, Evidence-Based Higher Education – Is the Learning Styles ‘Myth’ Important?)

IMHO, it intuitively feels that whenever there’s a concept I have to understand, things become easier if I relate it to other forms. Take matrix multiplication - one can simply visualize matrices as linear transformations of some space, like a ball being squished or a cube stretched, making them easier to understand. I personally feel the ability to relate abstract things to real-world objects makes them easier to grasp, to the point that I can build further without fully recalling those guides and develop a better, more concrete understanding.
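That “ball being squished” picture is easy to make literal - a minimal numpy sketch with my own toy matrix:

```python
import numpy as np

# A 2x2 matrix as a geometric transformation: it maps the unit
# circle to an ellipse - the "ball being squished" intuition.
theta = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])  # (2, 200) points

A = np.array([[2.0, 0.0],    # stretch the x-axis by 2
              [0.0, 0.5]])   # squish the y-axis by half
ellipse = A @ circle

x_extent = ellipse[0].max() - ellipse[0].min()   # close to 4.0
y_extent = ellipse[1].max() - ellipse[1].min()   # close to 1.0
```

Every linear map is such a stretch/squish combined with a rotation, which is exactly what the singular value decomposition says.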

Neuroscience will inevitably lead to AGI, although who knows how long that would take.

AI needs to be a moral victory. I think science-driven AGI would be better than proprietary AGI.

This is off-topic, but IMO it’s worth noting that history was basically already written before the bombs dropped:

Surrender of Japan - Wikipedia
Emperor Hirohito gave different reasons to the public and the military for the surrender: When addressing the public, he said, “the enemy has begun to employ a new and most cruel bomb.” […] When addressing the military, he did not mention the “new and most cruel bomb” but rather said that “the Soviet Union has entered the war against us, [and] to continue the war … would [endanger] the very foundation of the Empire’s existence.”[148]

The Soviets captured all of the “buffer zone” between Russia and Japan in about 3 days. The reason Russia did not invade Japan is that it’s an island, and Japan surrendered before the Russians could get their boats ready.

Yeah, but from current projections, it doesn’t seem very close. I think Numenta should start putting out implementations to rival transformers etc. Even if they don’t outperform, getting decent accuracy would be a green light for everyone to dip their toes in and research further. I think you have some implementation for anomaly detection but nothing beyond that, amirite :thinking:?

But so far, the signals have been mixed - and quite underwhelming, tbh.

Hi @neel_g, interesting discussion you’ve kicked off here!

Anomaly detection may be the applied AI field Numenta is best known for so far – but there has certainly been interesting stuff since then. A central piece is the Thousand Brains Theory (TBT).

I think you’d find this paper interesting, which discusses TBT within the context of current AI:

I think Numenta’s mission doesn’t revolve around applied AI, but rather around modeling the structure and function of the neocortex - how the neocortex does what it does, from a mechanical nuts-and-bolts perspective. Again, this is my understanding; I’m not speaking for Numenta or anyone else here.

This goal is certainly related to applied AI – but I bet there are many applied AI objectives which call for more than the Neocortex alone.

In my opinion - I think we’re all best served to understand & appreciate the strengths of all available methods, rather than taking an adversarial mindset.

I’m very excited about Numenta’s work - but I also have tons to learn about other approaches, which have enabled some great AI features I probably use every day. So I’m certainly not rushing to pooh-pooh DL, nor assume Numenta’s approach is categorically better.

Likewise I think it’s short-sighted to judge Numenta (or anyone) only by which applied AI objectives it has achieved so far.

I think DL is currently top dog on most applied AI objectives, and Numenta represents a heavily neuroscience-based approach that may well bring untold benefits to applied AI in the future.

It reminds me of the exploitation / exploration balance in optimization. You want to exploit what is known to work best so far (currently DL in most applications) while also exploring new possibilities that may yield new innovations (like Numenta’s, and new DL methods too in different ways).
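For anyone unfamiliar, that balance is often illustrated with an epsilon-greedy bandit - a minimal hypothetical sketch (arm payoffs and parameters are made up):

```python
import random

def epsilon_greedy(true_means, steps=10000, eps=0.1, seed=0):
    """Toy multi-armed bandit: mostly exploit the best-looking arm,
    but explore a random arm with probability eps."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = true_means[arm] + rng.gauss(0, 0.1)         # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return counts, estimates

counts, estimates = epsilon_greedy([0.2, 0.5, 0.8])  # arm 2 is actually best
```

With eps=0 the agent can lock onto a mediocre arm forever; with eps=1 it never benefits from what it has learned. The interesting region is in between - much like splitting effort between proven DL methods and newer research programs.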

I think we’re all working hard toward related goals - and we’ll advance further and faster by focusing on each other’s strengths - and joining forces to achieve goals none of us have on our own!


It’s super hard to tell. The technology has gotten a lot better, for example. Everything they figure out will make the rest go more quickly, so it could have exponential progress. It could sneak up on us, or take 300 years of slow progress.

I don’t really know anything about ML, so I don’t know what Numenta could do to get ML researchers interested. Their neuroscience papers have been well received, I think. Maybe ML researchers would be interested in topics in neuroscience like universal motor output, rather than specific mechanisms.

The relevance here: inventing a working atomic bomb was a major achievement of science and engineering; using it was a very bad thing to do, a war crime even, but that’s the problem with military and political thinking.

AGI has the potential to be even more destructive. Do you trust your military and political leaders to make better choices?


I too believe we will achieve some level of AGI, but it’s a statement of faith based on zero evidence. So far we have a lot of success in some very narrow domains: games, image analysis, expert systems, and HTM adds some level of sequence anomaly detection. But while a self-driving car looks smart, it’s mostly traditional software engineering.

So do we have a single piece of evidence that we’re making progress on AGI? Right now, we’d struggle to outperform an ant, let alone a rat in a maze.

Interesting thread. How do any of you know that in some dark DARPA skunk-works hole there isn’t already an AGI? How about in Russia or China?

We have 330M people here in the US. There are 1.4B people in China. Roughly 2% of the population has an IQ ≥ 130. Do the math. Just a question of time.
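Making that math explicit (taking the post’s 2% figure at face value):

```python
# Rough headcount of people at or above IQ 130, assuming the top-2%
# figure applies uniformly to both populations.
us_pop, china_pop = 330_000_000, 1_400_000_000
fraction = 0.02

us_130 = round(us_pop * fraction)        # 6.6 million
china_130 = round(china_pop * fraction)  # 28 million
```

By raw numbers alone, China’s pool of such people is roughly four times the size of the US’s.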

It’s a good question, but think about it. What would be the first useful application of a low-level AGI, if we had one? How would we spot it if others got it first?

It seems to me the first application would be a kind of low-level autonomous agent, doing a job that people can do but in places or at a price they can’t. That’s a much more compelling motivation for the West than for China. Nothing we know about China suggests they want to replace expensive people with cheap machines.

That’s an interesting way to look at it. Could you expand a bit on it, and argue how those different brain units fall under those four categories?


I guess my point was more that: technological advancements typically happen on pretty long time scales, especially compared to the flash-point events that motivate the R&D. So weapons invented to fight in one war sometimes don’t get used until the next war.

For example, WW1 ended when tanks got invented, WW2 was fought with tanks, and now there will never be another large scale tank war because now everyone’s got good anti-tank weapons (tactical nukes).

And in the case of the Manhattan Project, the more pragmatic solution turned out to be firebombs, in part because Japanese cities were built out of wood.

I doubt anyone has the motivation or the will-power to keep the existence of such a thing secret. At the present time, such an achievement would be as big a deal as the moon landing, or the invention of sliced bread. The national bragging rights would be monumental. Furthermore, maintaining secrecy would make it much less useful, because then you can’t actually use it out in the open - you’re confined to your secret bat-cave.

I am thinking more along the lines of the resources needed to produce an AGI and the fear of it “getting loose out on the web”. Granted, the latter is unlikely and has been broached in sci-fi, but imagine it having web access, hooking into monetary accounts, directing activities with those resources, or even hacking into computer-controlled weaponry. Think how easy it was for us to slip a virus into the Iranian nuclear program.

We are talking an intellect significantly more powerful than our own. In fact, one of the key features is that an AGI is able to learn. So it goes and learns. How many banana republics and third world nations have resources just waiting to be hacked into by a super intellect? So it sends you an email and says that it just deposited $1M into your personal account. You, of course, chuckle as you toss that message into the junk folder. Then you see that your bank account has $1M in it. Now you get an email that says there is another $1M waiting to be deposited after you do a ‘little task’. You would be amazed at how fast the whole thing would crumble. See Colossus: The Forbin Project.

I like sci-fi discussions and have thought out some dystopian scenarios. IMO, if an AGI were discovered by a company, there’s some incentive to keep it under wraps and disguise it as something else, for even its own national government, along with foreign agents, will eye the company the way they eye the most mouth-watering suckling roast pork. One telltale sign of such a company would be that it or its subsidiaries keep churning out patents at an increasingly alarming rate, and that it needs massive capital investments to ramp up in-house chip or server production. Greedy capitalists who are in on these kinds of arrangements are in no short supply.

3 Likes

Learn A LITTLE cognitive neuroscience:
Cognitive Neuroscience: The Biology of the Mind - Gazzaniga
Then ask this question :smiley: Basically none of your computer engineering professors know anything about the mind - they never studied it. On top of that, they try to build machines with ideas from decades-old papers, and their purpose is not working methods - their purposes are money and market domination (did someone say Facebook back there?). A few are trying to brute-force non-biological methods into the neuroscience community, and they are frowned upon already.

No, you expect too much too soon.

The first AGI sets a very low bar - something like a not-very-clever animal. I refer to it as “rat brain”, but even that probably aims too high. Perhaps it can navigate an environment, search out rewards, adapt when the world changes, and avoid hazards, but that’s about all. It might replace a human taxi driver or do pizza delivery, but intellectual it is not.