Will we ever see AGI?

Heyo everyone!

My name is Alec, and I’m currently writing a dissertation for my A Levels. I’ve gone with the title “Will AI ever be considered intelligent” (I might change it to focus more on AGI), and Numenta has become a large part of it. I was wondering if I could get your opinions on whether we will ever see AGI in the future, through Numenta’s work or otherwise.

Thanks for your time!


Short answer - yes!

Slightly longer answer - at this point various efforts are modeling small parts of the brain or core features of the way that the brain works.
Even with these fragmented efforts “we” are seeing impressive results. The current superstar is “deep learning.”

These are early days. Deep learning is a finicky, narrowly focused tool that requires vast expertise to get it to do anything useful. Other current technologies have similar limitations.

When more of the brain is emulated (especially the sub-cortical structures), I expect to see huge strides, to the point where even the biggest detractors are forced to admit that the machines are acting in an intelligent fashion.


From what I have learned, you could say that artificial intelligence is like artificial flowers: they may look real, but a biologist would not put them under a microscope.

Much of AI does not work at all like a real brain does; consider keyword-based systems such as ELIZA.

This forum is for computational neuroscience, the modeling of real biology, which is inherently something else entirely.

I wrote this to explain how our trial and error learning intelligence works, and expresses itself:


More on the model here:

How that further relates to HTM “prediction” and temporal memory starts here:

There are already models that can be considered intelligent, but they’re not AI, and searching for buzzwords like AGI will not help you find them.


My personal opinion is… yes.

Even if all ML attempts at AGI failed due to some inherent property of the biological neuron that is not captured by algorithms (ML algorithms normally use a very simplified model of the neuron), projects like SpiNNaker and TrueNorth can already simulate millions of more realistic neurons at their current stage, and are highly likely to simulate an entire human brain in the future.


Yes, and the promotional hype for AI data-mining applications has left the general public wondering what went wrong with their predictions.

For me the statement “Will AI ever be considered intelligent” is an oxymoron and is the same as asking “Will artificial flowers ever be considered real flowers?”

The truth is that IBM Watson proved itself years ago by winning at Jeopardy!, and it easily met all of the requirements for intelligence that I have. But it’s as if an intelligent system has to heroically save the world from Donald Trump, or ruthlessly destroy the planet, or else it’s not intelligent.


Gary, I have to point out that the A of AI is artificial. If I ask whether plastic flowers are artificial flowers, then yes, they meet the requirements; nobody expects them to be real flowers. They can look enough like real flowers to fool the casual observer. That is often sufficient, has useful purposes, and occasionally offers advantages over the real thing. I suspect that the same will be true for AI.

The AI of the future will add the command-and-control features of the sub-cortical structures and, in doing so, add the flexibility, intentionality, and judgement of the old lizard brain. This is sorely lacking in today’s AI efforts.

Mix that in with the huge databases that are available and the scripts of human experience, and you should have something with a reasonable facsimile of common sense.

Perhaps even to be thought intelligent.


Does it need to be pointed out that HI (human intelligence) has yet to be proven to be HGI (human general intelligence), and is likely to be proven less general than people like to believe?

When AGI comes on the scene, we are very likely to find out just how NOT general our own intelligence is.


I think of “intelligence” as a systematic behavior, analogous to a “sine wave” oscillation. It’s possible to produce that behavior using a tuning fork, a guitar, a piano, an electronic circuit, or a computer, where the sine wave can also be drawn on the screen to show what it looks like. All are still “real” sine waves; in fact, a computer can remove the harmonics that the other systems add, which distort the wave.
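As a toy illustration of that point, a computer can synthesize a mathematically pure sine wave in a few lines of code (a sketch of my own; the 1 Hz frequency and 8 Hz sample rate are arbitrary choices):

```python
import math

def sine_wave(freq_hz, sample_rate_hz, n_samples):
    """Generate n_samples of a pure sine wave at freq_hz.

    Unlike a tuning fork or a guitar string, the result contains
    no harmonics: every sample is exactly sin(2*pi*f*t).
    """
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# One full cycle of a 1 Hz wave sampled at 8 Hz.
samples = sine_wave(1, 8, 8)
```

The same behavior produced by any of the physical systems above would be recognized as the same sine wave; the substrate doesn’t change what the behavior is.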

Intelligence is a behavior where even a tiny bit of it, in microscopic self-replicating molecular systems (such as self-replicating RNA), can over time produce all the biodiversity that now exists on this planet. Simple trial-and-error learning is a very powerful thing.
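As a minimal sketch of how powerful blind trial and error can be (my own toy example, not anyone’s published model): mutate at random, keep only changes that reduce error, and a target pattern emerges with no foresight anywhere in the loop:

```python
import random

def trial_and_error(target, alphabet="abcdefghijklmnopqrstuvwxyz",
                    seed=0):
    """Evolve a random string toward `target` by keeping only the
    mutations that reduce the error count -- blind trial and error,
    with no knowledge of *why* a guess is better."""
    rng = random.Random(seed)
    guess = [rng.choice(alphabet) for _ in target]
    errors = sum(g != t for g, t in zip(guess, target))
    trials = 0
    while errors:
        i = rng.randrange(len(target))
        old = guess[i]
        guess[i] = rng.choice(alphabet)   # random variation
        new_errors = sum(g != t for g, t in zip(guess, target))
        if new_errors < errors:           # selection: keep improvements
            errors = new_errors
        else:                             # discard failures
            guess[i] = old
        trials += 1
    return "".join(guess), trials

result, trials = trial_and_error("intelligence")
```

The loop never “understands” the target; variation plus selection alone is enough, which is the essence of the claim above.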

AI, AGI, and the like now have so many different meanings that they have become meaningless. I stay well away from them.


It might be worth mentioning that whenever we develop a sufficiently useful or applicable algorithm that mimics some part of human intelligence (object recognition, for example), there are always folks who will say, “Well, obviously THAT isn’t AI. That’s just fancy statistics.” This creates a moving goal-line, where each development, as it becomes commonplace, is seen as not being AI.

For the sake of a fruitful discussion at any length, you’d need to carefully work out, and then commit to, the definition of AI you will use, to say nothing of AGI. In the end, I believe we’ll get something like AGI, and there will still be people saying it’s just clever programming; the next goal-line will be “It doesn’t have emotions or produce works of art like us,” and any such productions by an AGI will again be labeled random noise from a complicated network (akin to how some criticize Generative Adversarial Networks).

So like anything, define your semantics, lest they be used against you :slight_smile: .


Yes, my experience in Ray Kurzweil’s AI forum (no longer online) was mostly product-versus-product marketing, especially for Deep Learning. Startups would often promise to finally bring AI/AGI/etc. to the masses. The only person I can recall discussing how IBM Watson works, or David Heiserman’s earlier simple models of “intelligence,” was myself. Everyone had unique expectations. For those who get a thrill from online shopping and wanted something to talk to that loves the same things, Amazon later saved the day with Alexa, which is purposely kept limited so that it does not demand a paycheck, or buy and ship itself parts to a robotic warehouse to self-replicate an army of robots and ultimately control a world where only one place to shop is left undestroyed, and customers are expected to pay a share of the bill or lose online shopping for a week!

The Kurzweil-AI mission of course included superhuman-level intelligence to save the day, after which we would no longer need to work a day job yet would collect a universal income; it did not include the insect-like critters I program, or anything already achieved. The age of Deep Learning did not seem to help Ray much; during that period he wrote a book with his thoughts on hierarchically structured cortical memory for words and sentences, which applies to search-engine-type input, and he ended up a head engineer at Google. But his thoughts are not exclusively a product of Deep Learning. With what he has, he could be here helping us model a human brain, even if, in time, that turns out not to be exactly what HTM theory predicts. To be taken seriously in neuroscience these days, we need to do more than model AI, which never had a requirement to work like the real thing, only to appear intelligent.

Instead of creating a new buzzword for something old, Jeff Hawkins hypothesized a (at least to me) novel theory in which each cortical element has something like a looking-through-a-straw view of the world; by moving the straw around, it can make out everything in front of it. The task is to computationally model such a multiple-brains system. The title of a theory like “HTM” can be anything; even though the “H” part was itself somewhat of a buzzword in its day, that does not matter. Theories are tentative, subject to change depending on where the evidence ultimately leads. In this way the “power of science” is at work for Numenta, not a big-budget public-relations department. The downside is the lack of immediate rewards, and one can in time go broke funding one’s own research. But at least we will all be well remembered in the “mind of science” for having led to something scientifically real, at work through this forum, which is such a wonderful incubator for new ideas. This includes the evolutionary sciences, where the same thing that works for the most vocal in that field needed to be repeated here too:

Modeling using “behavior” makes it possible to avoid molecule-by-molecule detail; the model only has to behave the same way, using the same basic parts as in biology. The hard part right now is the lack of a language that makes it easy to code parallel autonomous entities:
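To make the “parallel autonomous entities” point concrete, here is a minimal sketch (my own illustration, not Numenta code) in which each agent ticks away on its own thread; a general-purpose language can express this, just not as naturally as the biology suggests:

```python
import threading
import time

class Agent(threading.Thread):
    """A tiny autonomous entity: each instance runs its own loop,
    updating private state independently of every other agent."""

    def __init__(self, name, ticks):
        super().__init__()
        self.name = name
        self.ticks = ticks
        self.state = 0

    def run(self):
        for _ in range(self.ticks):
            self.state += 1          # stand-in for sensing/acting
            time.sleep(0.001)        # yield to the other agents

# Four agents running concurrently, each with its own life cycle.
agents = [Agent(f"agent-{i}", ticks=5) for i in range(4)]
for a in agents:
    a.start()
for a in agents:
    a.join()

states = [a.state for a in agents]
```

Even this toy version shows the awkwardness: the start/join bookkeeping and explicit threads are scaffolding the programmer must manage, whereas in biology every cell simply runs.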

Alec, you at least found your way to the right place, at the right time, for useful information about what is “considered intelligent” according to a neuroscientific model/theory with thousands or billions of brains (inside cell bodies with all sorts of appendages, as animals have), not one brain. You have the right idea for a dissertation, so I did my best to sum all this up in your thread for you to work on. The title was close enough for me to know what you needed, and it’s easy enough (in my opinion, at least) to make it far more attention-getting, which is another good thing to have working for you.

At this time we are able to change preconceptions of “intelligence,” and if you see the scientific fun in that and follow through, the reward is that you become a real pioneer of neuroevolutionary biology. I found it best to be specific about the level of intelligence you are talking about: “genetic/molecular-level intelligence,” mostly inside the nucleus of a cell; “cellular-level intelligence,” around the nucleus, which powers the cell from place to place; or your “human multicellular-level intelligence,” the brain that moves the entire cell colony around. This way the power of science is on your side: you are more precise than others, while accounting for all possible levels of intelligence that can exist. Otherwise, most readers who know little or nothing about the now-bygone “AGI” days will not even know what it is, and you would be missing what I believe your dissertation most needs to earn an A++.


Wow thanks for the replies everyone! This is some really interesting stuff and I’ll be sure to factor it into my discussion.