If I understand what Numenta has been saying, the goal is not to make an AGI (Artificial General Intelligence) but to work out the mechanism of the cortex, with the understanding that this will provide a powerful tool for understanding how the brain works.
Many of us on this forum are working on making AGI and intelligent machines in general.
I think that intelligence is one of those slippery concepts that means whatever the person using it wants it to mean. It is hard to define, and attempts to provide a definition usually end up falling short of being useful.
One method of definition is a laundry list of properties that will be present in an intelligence. Once you ask how many of these properties are necessary to be intelligent, these lists start to fall apart.
Is an ant intelligent? An ant colony? A lizard? A dog? A pack of dogs? A monkey? A tree?
Is the g factor of intelligence that Charles Murray writes about a real, measurable thing?
Are there degrees of intelligence? A basket of properties that vary, giving different flavors of intelligence depending on their strengths?
Does talking and listening make a machine intelligent?
Self-awareness?
Consciousness?
Is a chess program intelligent? My Alexa smart speaker? My Tesla car? Google? Wiki?
When I try to understand what a word means, I often start by checking the etymology: what was the original meaning of the word? Then I sort out whether there was enough reason for the word to evolve into its current usage.
Intelligence comes from the Latin verb intelligere, which means “to understand”.
So, I guess now I have to figure out what it means to understand.
OK - what does it mean to “understand” something?
Does a compiler “understand” the source code?
Does a chess program/Alexa smart speaker/Tesla car/ant/ant colony/lizard/dog/pack of dogs/monkey/tree understand anything? If so - are they then intelligent?
An intelligent thing is able to construct some statistical model based on inputs… it has desired states for that model based on some external driver and is able to construct internal simulations based on past experiments to change the model to its desired state. It is then able to work new information into modeling future simulations based on the outcome of each attempt.
An even deeper form of intelligence is able to simulate versions of itself such that it is able to abstract entire states and formulate plans that are composed of other plans.
I would presume that there is no limitation on the number of layers such that an even higher intelligence could simulate entire societies of mind that are able to contemplate a problem from many different starting conditions.
You start off with a thing that learns to change its feeding behavior based on 4 pixels of sugar concentration and end up with a thing that is able to conceptualize entire universes of civilizations full of intelligences.
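As a minimal sketch of that loop (all names and numbers here are hypothetical - a toy illustration, not anyone's actual design): an agent keeps a running statistical model of what each action does to a sensed value, simulates the candidate actions against that model, picks the one predicted to land closest to its desired state, and folds the real outcome of each attempt back into the model.

```python
# Toy sketch of the "model + desired state + internal simulation" loop described above.
# Everything here is hypothetical and illustrative.
import random

class TinyAgent:
    def __init__(self, n_actions=4, desired=1.0):
        self.desired = desired              # desired state, set by an external driver
        self.effect = [0.0] * n_actions     # statistical model: estimated effect of each action
        self.counts = [0] * n_actions

    def plan(self, sensed):
        # internal simulation: predict each action's outcome from the model,
        # pick the one whose prediction lands closest to the desired state
        predicted = [sensed + e for e in self.effect]
        return min(range(len(predicted)), key=lambda a: abs(predicted[a] - self.desired))

    def learn(self, action, before, after):
        # fold the outcome of this attempt back into the model (running average)
        self.counts[action] += 1
        self.effect[action] += (after - before - self.effect[action]) / self.counts[action]

# Hypothetical environment: a sensed "sugar concentration" that each action
# nudges by an unknown amount plus a little noise.
true_effect = [0.3, -0.2, 0.05, 0.15]
agent, state = TinyAgent(), 0.0
for _ in range(200):
    a = agent.plan(state)
    new_state = state + true_effect[a] + random.gauss(0, 0.01)
    agent.learn(a, state, new_state)
    state = new_state
print(round(state, 2))                      # settles near the desired value of 1.0
```

The “deeper” layers described above would, presumably, be this same loop applied to plans made of plans, with simulated copies of the agent doing the planning - well beyond this sketch.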
Are all of these things intelligent?
Is there a continuum?
I suspect that if you asked most people they would not class your average amoeba as intelligent.
Is there a line where it goes from non-intelligent to intelligent?
Is this rather amorphous definition, “able to construct some statistical model based on inputs,” intended to include Matlab or most stat programs? They would fit the definition as offered; perhaps there is more to this.
Making a technically correct but useless definition does not move the AGI ball forward. Yes, a plant is able to sense light and react to it, but that is mostly a useless definition as it does not tell me what makes my AGI intelligent in a useful way.
You dropped off the part about intentions to change the model, which is critical to AGI.
An intellect (as opposed to an intelligence) without needs is just a utility that sits in the corner and does nothing until a thing with needs pushes it into doing a thing. It’s the reason that people drugged out on bliss serum (heroin comes to mind) will just sit in the corner not doing anything.
The need to change the model of reality is a core component of intelligence. Hunger and desperation are a feature, not a bug.
So for you a self-driving car is not intelligent? The FTP/IP protocol is not intelligent? A DNA molecule is not intelligent? I’m not testing you, just asking for your opinion.
Quick answer: having an elaborate function does not make something intelligent - so no on the cases offered.
If you wish to define intelligence we should open a new thread on the topic.
I would consider a self-driving car intelligent only if it continues to learn as it goes along, as opposed to having all new learning downloaded from a simulator. Obviously no car manufacturer would want to put an actually intelligent car out there lest it learn some bad habits out in the wilds… at the same time, I think that the learning version of the car that is running in the simulator is somewhat intelligent. If that car were able to conceive of giving up driving and building a rocket ship instead (inside the simulator), then I would say it is approaching, or has already crossed into, general intelligence.
I actually have a theory that humans might be living under similar constraints where the truly intelligent mind is held in a reality simulator with real world actions only being performed after the mind simulates decisions made by multiple different minds all of whom think they are the “real” you… building human like intelligence would be so much easier if our own minds were not hiding so much of the machinery from us…
As for representations of reality, it is possible to make a fixed copy of the same post-intelligent system and know that a given real world object or concept would present the same for both systems.
A system that is intelligent, though, is constantly changing its representations of things (even simply by accessing the things that it is contemplating)… so, any two actively learning systems would need a learning translation layer between them that is able to keep up with the shifting nature of the representations.
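As a rough illustration of that “translation layer” idea (purely a made-up sketch, not a claim about how any existing system does this): imagine two learners whose internal codes for the same shared objects drift as they keep learning; a mapping fit once between them goes stale, while one refit on the shared objects keeps up.

```python
# Hypothetical sketch: two systems whose internal codes for the same objects drift,
# plus a translation layer that must be refit to keep them aligned.
import numpy as np

rng = np.random.default_rng(0)
objects = rng.normal(size=(20, 5))          # 20 shared real-world objects

def drift(basis, amount=0.1):
    # each system's internal code is a view of the object that slowly shifts
    return basis + amount * rng.normal(size=basis.shape)

basis_a = rng.normal(size=(5, 5))           # system A's current way of encoding objects
basis_b = rng.normal(size=(5, 5))           # system B's current way of encoding objects
stale_translator = None

for step in range(6):
    codes_a = objects @ basis_a             # A's representations right now
    codes_b = objects @ basis_b             # B's representations right now
    # translation layer: least-squares map from A's codes to B's codes
    fresh, *_ = np.linalg.lstsq(codes_a, codes_b, rcond=None)
    if stale_translator is None:
        stale_translator = fresh            # fit once at the start, never updated
    stale_err = np.abs(codes_a @ stale_translator - codes_b).mean()
    fresh_err = np.abs(codes_a @ fresh - codes_b).mean()
    print(f"step {step}: stale translator error {stale_err:.3f}, refit error {fresh_err:.3f}")
    basis_a, basis_b = drift(basis_a), drift(basis_b)   # both systems keep learning/shifting
```

The once-fit translator’s error grows as both sides shift, which is the point being made: the translation has to learn along with the systems it connects.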
Would you consider it intelligent if it was trained in Los Angeles and, dropped in New York without further training, would still be able to drive without problems?
I can pull my hand away from a fire, brush my teeth in the morning and drive most of the way to work without invoking intelligence for those tasks. To me evolution and plasticity in the face of change is a core function of intelligence. Once the car’s “brain” has been frozen in time, it becomes nothing more than a very complicated reflex.
In your example, in LA there is an intelligent car mind, and in NY there is a brain-dead copy of that intelligence.
As for consciousness and all that silliness, until someone tells me otherwise, I’m just going with the idea that EM fields have a first-person perspective of the universe… water has a particular form of that because of the way it interacts with itself, and we are essentially bags of water walking around isolated from the rest of the environment.
Your conscious self is a water passenger on a massive robot hive mind built by bacteria who were tired of being stomped on by other more advanced bacteria with fancier chemical weapons.
I don’t think that your watery “hive mind” proposal gets me any closer to writing a functioning AGI.
I do have a fairly mechanistic take on how intelligence and consciousness work, outlined below.
And yes - it is also another “basket of attributes” answer without actual detailed mechanisms.
Since this is the thread for this sort of thing - what have I gotten wrong in these posts?
I must admit, reluctantly but honestly, that understanding something conveys a feeling in me. Some sort of chemical reward. Perhaps a slight dopamine boost, or a minute serotonin high. I feel happy that I found a solution.
I also have to admit that a machine or system without dopamine or other neurotransmitters probably won’t feel this rewarding feeling.
And I have to admit, that it bothers me. I’ve been thinking about this for the past four hours. And my answer is not very scientific. Perhaps not even rational at all. I am not very comfortable talking in emotional terms.
But here’s the thing: is this rewarding feeling a necessary element of the understanding? Or is it a complementary effect of the mechanistic process that happened in my head while I was understanding the something?
And can a system without chemistry not come to the same situation as me, and start behaving with the newfound logic that we call understanding, just like me and my chemistry do?
As much as most newbie AGI researchers would like to pull emotions out of consideration as an unnecessary complication, I see them as a feature and not a bug.
In the Rita Carter book “Mapping the Mind”, chapter four starts out with Elliot, a man who was unable to feel emotion because the corresponding emotional-response areas were inactivated by a tumor removal. Without this emotional coloring he was unable to judge anything as good or bad and was unable to select the actions appropriate to the situation. He was otherwise of normal intelligence.
Episode 4 of The Brain with David Eagleman included an interview with a woman named Tammy Myers who had an accident and her emotion systems (though still working) became disconnected from her logical systems. Her case hints that even in the most basic situations, emotion may be a necessary component of decision making.
At a grocery store, Tammy could explore various options, talk about them, and make logical comparisons. But ultimately she could not choose what to buy. Emotional flavoring allows us to place a value on each option when making a choice, so that a decision can be made.