One of many roads to AGI (article on Medium)




I bumped into some Jeff Hawkins quotations in this Medium article: scientific method AI.

In this and his other articles he tries to convey an interesting philosophy on AGI.


Wow, the author of that article believes that Numenta’s approach to understanding intelligence is not only the wrong one, but ultimately one that future generations will look back on and laugh at. I think he is missing an important point: sure, we can optimize on nature’s solution to a problem, but you kind of need to understand why a bird has wings in the first place before you can design more efficient winged machines.


I don’t think turning the birds-vs-planes analogy around yet again could drive the point home if I were to discuss it with that guy… obviously he’s aware that JH uses the same story, and reaches a different conclusion.

Now, he’s proposing that we settle on a theory of knowledge creation instead, or at least ask questions in that direction… Okay, why not, but… well, nobody has the answer to that either. Hence the loop back to brains, our only example of an implementation achieving that.
Because other than this… well, I’m quite fond of anthills myself and their distributed approach to solving problems… but anthills aren’t representative of what I would call knowledge creation.


I kind of took away that we do not yet have the equivalent of aerodynamics in AI, and that current initiatives are a way of discovering the principles behind intelligence; so necessary, but not the ‘end state’. AGI may look as different from current DNNs as supersonic flight does from bird flight. HTM is already quite supersonic to me, but who am I.

Furthermore, I wonder if ‘intelligence principles’ + ‘scientific method’ will suffice. The whole idea of automating and superseding Stephen Hawking or Sherlock Holmes seems a reasonable test to qualify as a superhuman AGI.


Why the zero-sum approach again and again? As a computer scientist doing a cognitive science PhD with a neuroscience focus, I find this attitude saddening. I encounter this rigid stance in the Computer Science department as well as the Cognitive Science department.

And that bird-vs-plane analogy that I always hear… The ironic part is, this gentleman says cognitive science is not necessary because of this analogy. On the other hand, a cognitive science professor here uses the exact same analogy to imply that cognitive science, rather than neuroscience or even machine learning, is what is necessary to derive those “laws of aerodynamics”. A math professor giving a talk about chaotic and dynamical systems argued the same thing: that he, unlike cognitive science or machine learning, is after laws.

The goal isn’t human intelligence, it’s knowledge creation. It may be the case that we’re unable to traverse the distance to artificial general intelligence without a deep and thorough understanding of human intelligence. But the destination isn’t learning or prediction or consciousness or any other component of natural intelligence, whole or in part. Our goal is a machine capable of creating revolutionary scientific knowledge.

We cannot even agree on an AGI definition let alone what the goal of any attempt should be. I want to understand us, not some other machine or network capable of knowledge creation. What is wrong with that? Even if my goal is what you set it to be, who is to say brains aren’t the fastest way to get there?

The science world feels like cryptocurrencies a lot of the time. If you bought a coin, you’ve got to shill it and talk down the rest. The only difference is the intellectual level at which this is done.

Rant ended :slight_smile:


I believe the first step to understanding the “distribution” force multiplier when it comes to human intelligence, is to understand that our ideas are themselves creatures competing for resources in an evolutionary process. This process accelerates along with better forms of communication and information storage and retrieval. An excellent introduction to the concept is Dan Dennett’s Dangerous Memes on TED Talks.
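As a toy illustration of that idea (my own sketch, not a model from Dennett’s talk; the idea names and `fitness` numbers are made up): if ideas replicate in proportion to how “catchy” they are, even a modestly fitter idea quickly takes over the population. Better communication just means more replication steps per unit of real time.

```python
# Toy sketch: ideas as replicators competing for mindshare.
# Discrete-time replicator dynamics: share_i <- share_i * f_i / avg_fitness.

def replicator_step(shares, fitness):
    """One update: each idea's share grows in proportion to its
    fitness relative to the population-average fitness."""
    avg = sum(shares[i] * fitness[i] for i in shares)
    return {i: shares[i] * fitness[i] / avg for i in shares}

# idea_a starts with 90% of minds, but idea_b replicates 1.5x as well.
shares = {"idea_a": 0.9, "idea_b": 0.1}
fitness = {"idea_a": 1.0, "idea_b": 1.5}
for _ in range(20):
    shares = replicator_step(shares, fitness)
print(round(shares["idea_b"], 3))  # → 0.997: the fitter idea has taken over
```

The per-step odds ratio between the two ideas multiplies by exactly 1.5 each round, which is why the takeover is so fast; that acceleration-with-copy-rate is the “force multiplier” in question.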


true, dat.

Paul: Point taken, but can you work this philosophy out and implement an AGI from it?

Take care! Down this road is W. Calvin :dark_sunglasses:
You’ll turn out as a prophet for the ‘grid’ meme
You’ll need to change that avatar to a Matt-like “it’s hexagons”
You’ll need to hack a “give-two-hearts-in-a-row” button below each of @Bitking’s posts


I guess my point was that there isn’t something special about networked humans when it comes to intelligence, other than we are enhancing a mechanism that natively exists in our brains. So studying the network effect alone IMO is not any more likely to lead to AGI than studying a brain.



Why does the goal of an AGI have to be creating scientific knowledge?

How about a skillful interpretive dancer?

A general farm worker? (does not know any science but it sure can pick lettuce!)

For that matter - song writer or poet?


One of those Maslow’s Hierarchy of Needs-type things, I imagine? It seems so intuitive that I didn’t even think to question it until reading your comment above!

I guess to the extent that we can control humanity’s environment and future, we’ll be freed up to expand our artistic expression as well?

But first things, first! (Just kiddin’)


Thanks for the feedback, all. Just a few quick clarifications:

It wasn’t my intent to suggest that people will look back and laugh at human-imitative AI generally, or Numenta’s approach specifically. The point was only that the eventual embodiment of general-purpose knowledge creating machines is likely to differ substantially in its workings from natural forms of intelligence. We snap to human-imitative conceptualizations because they are proximate and intuitive. There are, however, other systems of knowledge creation from which we can draw inspiration, such as networks, swarms, markets, social media, and our institutions.

The exemplar that I draw on for inspiration is science, as the only example we have of revolutionary and consistently progressive knowledge. In this, I’m highlighting the quality of scientific knowledge as the goal for our knowledge-creating machines. In this frame, the methods and approaches to AI, such as Numenta’s, represent a means to an end, the end being the production of this quality of problem-solving knowledge. This isn’t a unique opinion. Gary Marcus, a proponent of human-imitative AI, said that what science needs most is automated scientific discovery. Similarly, Demis Hassabis said that once we have AGI, we should “use it to solve everything else.” And what does he maintain we need most? “I’ve always hoped that A.I. could help us discover completely new ideas in complex scientific domains.”

It may certainly turn out that human-imitative AI charts the fastest path to that goal. Given the diversity of AI research, breakthroughs will emerge from other areas as well. But what I find most inspiring is the end, not the means: How everything changes once we have machines capable of creating revolutionary scientific knowledge.

Gary Marcus:
Demis Hassabis:


Here is my path to AGI:!topic/artificial-general-intelligence/0rHVcqNoFG8


Thanks for the article. The question I now ask myself is - would we have figured out the science of flight if birds did not exist? Probably, because we would have eventually been studying aerodynamics anyway, and through enough experimentation we would have discovered lift and flight. If we didn’t ever know about brains, would we ever figure out the science of intelligence? Probably.

If we had the scientific study of aerodynamics to indirectly discover flight, what study or field do we have to indirectly discover intelligence? Are there general principles of intelligence that are outside the domain of neuroscience, cognitive science, etc.? Indeed there are, but they do not drive us faster towards a theory of intelligence without the overlapping principles found in neuro/cognitive science. In other words, the search space for discovering the principles of intelligence is reduced within the context of studying the brain. Again, if birds didn’t exist, we would have eventually discovered flight through the indirect study of aerodynamics, but we would have discovered it a lot faster if we studied both. I feel this is happening in our journey towards discovering intelligence anyway. We have a lot of ideas converging from different places, but the overlap with studying the brain really pulls out the underlying principles.


That’s a thoughtful question. What haunts me is that this quality of scientific discovery is such a recent invention, not a naturally occurring phenomenon. And only more recently still have we discovered the basic mechanisms for how that happens. It’s for that reason that people such as David Deutsch believe the breakthrough needed is more of a philosophical quality (that is, a deeper understanding of the problem of creativity).

These creative aspects are emerging as a key pinch point within other areas of AI. For example, consider statistics and probabilistic reasoning. In this frame, the major barriers to progress are system interventions and counterfactual reasoning. Judea Pearl recently popularized this topic in The Book of Why.

All of this to say that solutions and roadmaps are illuminated by the very specific functional gaps in existing systems, gaps that are much more specific than a complex system such as a brain. And the convergence on these functionalities from radically different spheres (such as philosophy, cognitive science, statistics, etc.) doesn’t serve to elevate one specific field as privileged. Nor does it suggest that even the questions, let alone the answers, will emerge exclusively or even predominantly from the study of human intelligence.


All watched over by machines of loving grace.


I agree, if we are searching for the holy-grail AI that can solve all our scientific problems, then it does not have to come out of neuro/cognitive research. The principles of intelligence could be highly abstract and could possibly be expressed one day by pure mathematics (without any analogy to the brain). The routes we could take to get there are many, although studying the brain seems the most direct.

If we were to take various mathematical ideas (statistics, probability, calculus, algebra, etc.), they are useful for many things in many different contexts because they are very general. But when we apply them in the context of neuroscience, we get ANNs (the first ANN was invented by neuroscientists). The core of machine learning theory is based on statistics and probability (which date all the way back to Bayes). So I believe the principles are emerging in time, but are being magnified as we look to the brain for guidance.

A thought on scientific discovery: although it is not a directly occurring natural phenomenon, it is the product of the brain. Even without collaboration, we are all implicit scientists. We all explore, experiment, observe, analyze, conclude and question the world around us. So do other animals (to some extent); they just don’t write it down and publish it. The brain’s ability to learn, recollect, inter-compare, synthesize, analyze and abstract data from the world is exactly what we want from an intelligent agent to solve our problems. From this perspective it seems intelligence has more to do with computation than statistics/probability. Ironically, computer science was born from a man who was trying to discover intelligence.


I think that it is fair to say that most scientific discovery starts with someone looking at something and saying: “Hmmm - that’s odd”

Scientific discovery is learning followed by anomaly detection. Rinse and repeat.
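That loop is simple enough to caricature in a few lines. A hypothetical sketch (a running mean stands in for “learning”; nothing HTM-specific, and the threshold and data are invented for illustration): observations the model predicts well are folded in silently, while a large prediction error is the “Hmmm - that’s odd” moment.

```python
# Toy "learn, detect anomaly, rinse and repeat" loop.
# The model is just a running mean of what has been seen so far.

def discover(observations, threshold=3.0):
    """Flag observations that deviate from the learned model,
    then learn from them anyway (rinse and repeat)."""
    anomalies = []
    mean, n = 0.0, 0
    for x in observations:
        if n > 0 and abs(x - mean) > threshold:
            anomalies.append(x)  # surprise: the model failed to predict this
        # Learn: fold the observation into the running mean regardless.
        n += 1
        mean += (x - mean) / n
    return anomalies

# A steady world with one black-body-radiation-style surprise:
data = [1.0, 1.1, 0.9, 1.0, 9.0, 1.05, 0.95]
print(discover(data))  # → [9.0]
```

The point of the caricature is the order of operations: no anomaly can be flagged until a model has been learned, and each anomaly then reshapes the model that judges the next observation.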

This is distinctly different from “applied science”, which is just engineering. Looking at a bird and saying that I will build a flying machine - that’s NOT science - that’s engineering.


It’s true that scientific discovery may start with an observation and anomaly detection. Quantum mechanics and observations like black-body radiation, for example.

It’s also true that discovery arises from the integration of good explanations. Relativity, for example, where theory preceded observation and anomaly detection.

And to your point, first there’s a theory of how we fly, and then we fly. This is starkly different than the idea of reverse engineering some natural analogue.


Hmmm, that’s odd. What could explain that?

This is widely believed to be where Einstein started from.
Even if you doubt this connection - in his own recollections he stated that he had trouble reconciling electrodynamics with the then-current understanding of absolute and relative motion.

Perhaps a different example?


The (already experimentally established) constancy of the speed of light was standing there as a preexisting anomaly.

A theory of how we fly and then we fly… true. Maybe the first successful plane engineers started from theory alone, but the theory itself likely stood on the shoulders of some bird watchers, don’t you think?

Now, to your point that there is more than the brain’s example for establishing an underlying theory for AGI… well, you could be right. But as far as I’m concerned, I don’t see any better one. And if a theory of knowledge had already been successful, we’d have LISP-implemented AGIs all over the place.