10-15 Year Plan to Artificial Human Intelligence? Do you have any system proposals?

Sorry, but evolution is another idea that won’t work. Genetic algorithms do well when they mimic adaptation (adjusting parameters to choose from known features) and are hopelessly slow when they try to use mutation to generate new features.
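
As a rough, self-contained sketch of that distinction (my own illustration; the fitness function, population size and mutation scale are all invented for the example), here is a GA doing the kind of parameter adaptation that works well. Every individual is just a weight vector over the same fixed set of known features; nothing in the loop can invent a structurally new feature, which is why relying on mutation for genuine novelty is so slow.

```python
import random

# Hypothetical objective: how well a fixed set of known features, weighted by
# the candidate's parameters, matches some target behaviour.
TARGET = [0.3, -1.2, 2.5]

def fitness(params):
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=50, generations=200, sigma=0.1):
    # Each individual is a parameter vector over the same known feature set.
    population = [[random.uniform(-3.0, 3.0) for _ in range(len(TARGET))]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # "Adaptation": children are small perturbations of existing parameters.
        # There is no operator here that could add a new feature dimension.
        children = [[p + random.gauss(0.0, sigma) for p in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # converges near TARGET within a few hundred generations
```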

The feasible path is that we study biological systems and use science and engineering to create an AI with similar capabilities (say a lab rat). This AI is not at all human, but it’s rat-smart and has access to the entire Internet, vast quantities of sensory data, and runs a million times faster with vast quantities of working memory… Set that AI the problem of how humans work and it just might find the answer we cannot.

As it happens, that’s the path we’re on, whether we know it or not.

1 Like

I fully agree with that perspective, as there are no current approaches that create adaptations of “significant” architectural difference (i.e. adding on a whole new block of architecture, akin to adding a hippocampus). Given how the brain’s architecture works and is integrated, I don’t think any GA approach can achieve success any more than the proverbial monkey at a typewriter can produce a literary work of art before the universe cools close to absolute zero.

My perspective is maybe a little too out there, because I believe it will be the first “basic” AGI that will (help) create the next one, not any simplified GA-type approach. Humans will then be sitting at the proverbial typewriter, alongside the monkey, trying to create the next work of art. Again, a bit too out there; just my perspective.

HTM/TBT is the best theory of how “part” of the architecture needs to work, but I’m not in the camp of fully replicating spiking (full-biology) behaviour, because of the level of complexity that route creates later on in the process. HPC architectures will be able to provide the raw compute and interconnect, but I think that will be a less optimal route at significant scale. Reducing bit widths for the sake of data compression, to fit a model in memory and gain an extra bit of “simple” scale, I think destroys a critical part of the temporal nature of what needs to be implemented, primarily in memory formation. Again, just my perspective, and conjecture I’m trying to program.
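
As a toy illustration of the bit-width concern (my own example, not taken from any actual HTM implementation), an HTM-style synaptic permanence stored at reduced precision can silently drop the small per-timestep increments that carry recent temporal history:

```python
import numpy as np

def quantize(x, bits):
    """Round a value in [0, 1] to the nearest level representable at the given bit width."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

perm = 0.40          # synaptic permanence in [0, 1]
increment = 0.003    # small learning step from a single temporal update

full_precision = perm + increment                      # 0.403: the update survives
four_bit = quantize(quantize(perm, 4) + increment, 4)  # 0.400: the update is rounded away

print(full_precision, four_bit)
```

Whether this matters in practice depends on how the increments accumulate between writes, but it shows how naive precision reduction interacts badly with slow, incremental temporal learning.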

The research is needed, and awesome in what it achieves; as you say, it is the work needed to make real progress that no amount of pure thinking can achieve. I’m 100% in the camp of the think->study->engineer->repeat loop. Studying the biology is the only way to get there.

1 Like

I think it’ll at least be a lot like how computers became so advanced. Like dmac basically said, an entire industry will form around brain-like AI once it gets going. There’s no comparison between a small team like Numenta and thousands of people (and lots of money being put into it). Also, AI will be a tool to design AI. I think it’ll be pretty varied, but trial and error seems like a reasonable way to use it. I think it’ll also help us understand neuroscience and do that research.

Just so. We can pick up everything we need by studying biological systems in the lab, and an AGI at the level of ‘rat brain’ would be awesome: massive access to data (the Web), sensory input (pictures, video, sound, etc), gigantic working memory, massive speed. A rat that knows everything you do but thinks a million times faster is a fearsome thing.

I think a rat can’t help you make a dog or a monkey. If it could, the real ones would be helping right now (one might argue that the experiments on them are them helping, but whatever). Rats don’t even see the world as we do. Their motion-detection capabilities are just “weird”.
And with a rat AI, I believe we can’t even aim for humanity’s innate tendency to rediscover the common symbol-manipulation schemes (language, number systems, analogies, search, etc.) with a system that cannot generalize to that level.
Those kinds of abilities are “gained” through social interaction over the course of evolution. Going down that path, we would have to create an entire simulation ecosystem just to select the best brain architecture, the best regulation methods and the best learning rules, WHICH would take 100 YEARS!
I believe we can “cheat” a lot off the system that is proven to work: the human brain. I accept that for the first 15 years we are going to have to deal with (excuse my language, but) literally “spastic”, “autistic” and/or “schizophrenic” baby AIs in virtual environments (maybe in VR), but that’s just Tuesday if you are working with deep-learning architectures. There is literally a field called “Domain Adaptation” that tries to fix algorithms that don’t work because the original authors missed just one tiny error, for god’s sake.

If a rat lives a million years, it’s gonna be pretty weird. Also, AI wouldn’t have the same problems with thinking we do. We have small working memory, forget our thoughts, and have poor imagination. Those make us terrible at breaking things into pieces. Rat-sized pieces are very small, but it has all the time in the world. Hopefully it can reach conclusions with such small steps. If so, it’d probably be superintelligent.

MAYBE it is necessary for a smart animal to have a small working memory; in the end your attention is limited and almost always selects “one thing” (depending on how you define it), while other things leave attentional residues so you can get back to them later. Maybe one object and its features need to be propagated one at a time, because otherwise there is no way we can clear our thoughts, focus on the job, model it and imagine it. Maybe two simultaneous foci of attention would make people (a big hyperbole, but) a little schizophrenic, or make them lose hand-eye coordination.
In the end, top-down information needs to confine something in order to be functional. How do you confine two independent things with top-down information? It may be an unsolvable problem (aside from going back and forth between the two things very fast, which introduces inevitable delays).
Maybe investing in hand-eye abilities is far better, because if you express your thoughts on a surface they stay there and you increase your memory massively; maybe your consciousness is distributed around you: your notes, your hard drive, your room.
And I will strongly disagree about human imagination. We can VERY VERY effectively select the end goal and the way-points in between, and roughly calculate what we should do now to get to the next way-point. Again, maybe you shouldn’t try to imagine that much, because your modelling capabilities are very limited (because of memory) while your perceptions are excellent; modelling as you go gives you a fresh update on the thing you are after (everything is time-dependent, after all), and your perceptions can handle it anyway. Maybe if you get too detached from the action-perception loop and stay in the cognition loop, you are of no use.
I believe biological constraints like these hurt organisms at first, but the organisms that learn to live with them and thrive find a way to turn each constraint into an advantage: time, geometry, sugar consumption, death, reproduction, specialization, dependency on other cells.
This is literally the thing HTM advocates: geometry and time delays are part of cognition; they are not orthogonal to your processes, THEY ARE the processes.

2 Likes

I disagree with some of your points because AI will be a million times faster than us.

I meant perceptual imagination, but that makes sense as a bottleneck. Maybe we could use machine learning to get around that by mimicking humans.

1 Like

I beg to differ. If actions, neuron count and time delays are the crucial parts of intelligence, then a server won’t give you the timing specs you want. We have no idea how complex non-synaptic plasticity is anyway; it may not be compatible with distributed computer architectures. Distributing information to neurons isn’t a solved problem yet either.
And in real life, even a future 4 nm chip process won’t give you half the neurons you want in a body (even a 1.8 nm process may not give you what you want). We are currently at 5 nm. Humans literally use atoms to propagate information, and their neurons are waaayy crammed together. The human body is also light, compact and energy efficient, and its cooling method is magnificent! Meanwhile, current clock speeds top out around 4 GHz and the chips heat up FAST! If the AI has to learn in real life, it’s going to be one hell of a stress on the body.
Moreover, fast doesn’t mean better. If your next move is constrained, thinking about it for 1 second or 1 week may not change the outcome. A mediocre chess player cannot beat a master even if he can think 100 times faster, because the master has fine-tuned expected utility and memory when it comes to chess.
And even if those problems are somehow solved, the memory requirements of being that fast may literally make you crazy. People with photographic memories want that curse to be over, because remembering that well and that much literally cripples their work; they day-dream sooo much.
In the end, I believe that as of 2021, across the whole history of Earth, we humans are the best possible entities when it comes to solving problems of any kind. We form societies; people work, document and produce products, and we solve problems with those. That’s it. AI may break every human record and gather every kind of greatness imaginable in one body, but it cannot be orders of magnitude better than us. It may be something like 1% better than the best people on Earth.

2 Likes

I think introspection is meta-thinking: thinking about what it is to be thinking. (But I agree that most people rarely do that).

If you mean you as an identity, I agree. But in my opinion that’s not the real you, because it is very difficult to define where you start and end. Your identity is in perpetual change. It is a construct. It is a tool to help the real you make sense of what you experience.

The real you (in my opinion, and I suppose you’ll disagree) is the conscious you: the passenger in your brain, the observer. But also the one who gets to enjoy the ride.

The conscious you has no impact on your behavior, so in that sense I agree. But the value-charged knowledge does. And if we want to study intelligence, I guess we have to understand how to encode values too.

I’m not sure what to make of this, but it sounds scary. Could you expand a bit on it? Especially why you think that. Is it only in the academic sense of understanding intelligence, or do you feel like that in general?

1 Like

Exactly (on both counts).

Exactly. The brain in all animals has evolved, over hundreds of millions of years, to derive meaning from sensory input and perform coordinated movements in response. The cortex is a recent addition that evolved much faster, much of it by simply replicating the hardware (columns) and adding software (algorithms). Cortex gets the keys to the car. If we focus purely on the cortex and how it connects to the old brain, we can crack AI.

[This topic is old and tired – anyone for a new one?]

That’s all thinking: VC is meta to LGN, which is meta to retina, which is meta to rods and cones, which are meta to photons. So, that meta is meaningless.

So, you have no alternative definition.

It’s just a bunch of fuzzy analogies. Your consciousness is working memory, which changes by the second. You can say that identity is layered, with inner layers more stable and thus more “you”. But that’s a matter of degree, not a definition.

Because I actually define GI in functional terms, as unsupervised learning / pattern discovery. This is not a value judgement; values have other functions, unrelated to GI.
Most people here seem to think that GI is whatever the brain does (the A in AGI is ridiculous in this context). That’s defining it in terms of substrate rather than functionality. It’s lazy, dirty, holistic thinking that will never get constructive.

2 Likes

There is a reason why certain words are created: to mark differences between concepts. The word “introspection” (from the Latin intro specere: to look within) has been re-appropriated in English specifically for relating to one’s own process of thinking. Other uses of the word “thinking” do not relate to introspection unless additional specific precision is given.

And while I agree that thinking about thinking uses the same cerebral mechanisms as thinking about what you should have for breakfast or how to calculate an escape trajectory, there is a perfectly good reason why someone has come up with that word: it is to be clear about something without having to write four forum posts about it.

And sure, it’s turtles all the way down. But how do you communicate if the only concept in your vocabulary is turtles? Not all thinking is introspection.

I don’t have a formal definition. I’m neither philosopher nor academic. But to me, there are clear differences between identity, consciousness and intelligence.

No, that’s the content of consciousness. That’s identity. It becomes clear when you consider these situations:

  • you remember a lullaby you heard when you were a child.
  • you observe the shape and brilliance of a raindrop on a window pane.
  • you plan how to take your kids to school, return the tools to your dad, take your car to the car wash and pick up groceries for tonight’s dinner, knowing that all the locations have different time constraints.
  • you realize you forgot to phone your friend yesterday to wish her a happy birthday and feel upset about it.
  • you imagine how you would strike the dragon with your eldritch sword, mounted on a flying horse and clad in shining full-plate mithril armor.

Those instances use the mechanism of your consciousness, but a different part of your brain directs a stream of information (i.e. content) towards that mechanism. For some you need your eyes, for others you need long-term memory, while others require predictive intelligence.

Lesions in certain parts of the brain can limit or even totally prevent some of those instances while still allowing others. Consciousness remains intact, but its different contents change. Your identity changes, but you still feel like you.

Well, I certainly agree that intelligence can (and should) be studied separately from consciousness. But I don’t think it’s possible (or at least I don’t understand how to do it) without considering values. Is a delta in efficiency not a value? And are all values not, in essence, deltas in some metric?

Of course, if you go down that rabbit hole, one would wonder why consciousness exists at all.

1 Like

Good examples. Two features of C are the internal narrative, which is why C needs language, and a navigable mental timeline. The narrative gets started the moment your external voice (see video) begins. If things go well, at a certain point the external voice goes internal and Voilà! Consciousness. You can introspect back to your earliest memories and watch the movie unfold as you replay your life. How well you can do this, and this is just my opinion, is a measure of how conscious you are.

This has pretty much nothing to do with intelligence (up to a point) and it also does not require vision or hearing. Using your examples, a machine can do everything you listed, but it has no self-awareness of it and no context…yet. Frankly, I find the beauty of the HTM model is that it can construct this.

1 Like

Do you think that language is necessary for consciousness? When I think of a dog (for example), I see a brain that has a mental timeline but no language. Mine just used her memory of stealing a biscuit 15 minutes ago - and the context of me realising it - to extrapolate ahead in time to being punished. She has some problem-solving ability (it wasn’t a particularly easy biscuit to steal). She recognises some human words, but I’m pretty sure she doesn’t have language or an internal voice as I’d define it. To me it seems that she ticks enough boxes to be called conscious, but not the one for language.

2 Likes

C is sort of tricky. In my work, I define C as the ability to narratize in a mindspace, hence the language requirement, and to be able to do mental time travel, which is related to narratization. These both require that a complex internal world model be created and overlaid with a syntactic/grammatical language system.

Your dog is self-aware; it knows itself. It also has the ability to communicate with symbols, but that is not at all language. When you say ‘ball’ it looks at the object that you have identified in the past as a ‘ball’, but it is not able to extend the concept with metaphor. It is also locked in a fundamentally timeless space and can only deal with time in an immediate sense, as strict stimulus-response.

Dogs also have emotions, but even though they have fear, they do not have anxiety. You need C to have anxiety.

If you are really interested in this, forget the dog and study Pan troglodytes. If we were unable to develop C or the language necessary for it, we would be pretty much exactly as they are.

1 Like

Sorry, I don’t feel like continuing this navel-gazing about introspection, identity and consciousness. These things are a distraction: both obvious in themselves and irrelevant to GI.

I meant representation-specific values, such as instincts and conditioning in humans. Specific = non-general, thus unrelated to GI per se.
GI is a mechanism for maximizing general cognitive value: the predictive power of the system. That power is a projected match of representations to future inputs, basically recognition, which should be measured as the lossless component of compression in representations.
Efficiency is not an independent value; it’s value / opportunity cost.
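
For what it’s worth, here is one crude way that measurement could be operationalized (this is my reading of “lossless component of compression”, not the author’s actual algorithm): compress the raw input and the residuals left over by a predictive model, and credit the model with the bits it saves.

```python
import random
import zlib

random.seed(0)

# Toy input: a slow random walk, so successive values are highly predictable.
walk = [128]
for _ in range(9_999):
    walk.append((walk[-1] + random.choice([-2, -1, 0, 1, 2])) % 256)
signal = bytes(walk)

# A trivial predictive model: each value is predicted by the previous one.
# The residuals are whatever the model failed to predict.
residuals = bytes((signal[i] - signal[i - 1]) % 256 for i in range(1, len(signal)))

raw_bits = len(zlib.compress(signal)) * 8
residual_bits = len(zlib.compress(residuals)) * 8

# Bits saved by the predictor: a crude, lossless measure of its predictive power.
print(raw_bits - residual_bits)
```

A better predictor leaves more compressible residuals, so the saved bits grow with predictive power; a hierarchical system would apply the same accounting at every level of representation.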

I would like to test your concept of GI by seeing how it fits in with some questions about it.

1. Assuming that a brain is a brain across most of the higher-level critters, does the rest of the animal kingdom have this GI thing?
2. Are there measurable degrees of GI?
3. How would I go about determining this GI thing?
4. What are the dimensions of this measurement space?
5. Are alligators maximizing predictive power? How does this compare to the behaviors built in by evolution?
6. When a squirrel learns and runs one of these intricate mazes I see on YouTube, is it the same thing as a human solving a puzzle room? Where does GI come into play as I compare these two scenarios?

1: of course
2: I just defined it
3: depending on the design, you have to look into the internals. The match is projected, so you have to sum it across all levels of the cognitive hierarchy, not just look at external performance. I quantify it at each step of my design, but such evaluation is far coarser in the brain. Bottom line: you can only measure, however crudely, intelligence that is inferior to your own.
5: it is built in by evolution, as an ultimately general instrument. All other instruments (including what passes for “values”) are biologically specific.
6: again you have to look at mechanics. Squirrels have GI too, just far less developed.

2 Likes

In practice, how do I do this?
Is this just as unmeasurable as introspection?
If that is true, do all of your critiques of introspection apply?