10-15 Year Plan to Artificial Human Intelligence? Do you have any system proposals?

Hi, I know the forum exists for a reason: comfort, documentation, reachability, searchability, diversity of questions, etc. But my question is:
Can we gather here, or in ANOTHER forum, to give and listen to proposals regarding the full path to Human-Level Intelligence, from start to end, stating every hole, everything that doesn’t work, and how things must work, in a Systems Engineering type of approach? What should the titles even be: the classic cognitive neuroscience book chapter titles, more, or none?
I am a great fan of Matthieu Thiboust’s work, “Insights from the brain”, and I wonder: can there be “Quality Control” and “Acceptance Test” steps combined with the Thousand Brains Theory and Thiboust’s work, even if the readiness is more of a “necessary and sufficient conditions SO FAR” kind, covering ALL topics regarding what it is to be human, across Marr’s 3 levels or systems engineering’s 5 levels, from synapses to social inference to metacognition?
I have a “proposal” for it (define “proposal”, am I right?), ready within 2 months. It is like Thiboust’s work (thank you soooo much, Thiboust!) but with a limited Systems Engineering twist injected into it, you know, “how can you assess that one of your sub-systems isn’t working?” or “how do you combine them at the system level?” type of questions and answers.
To quote popular culture wisdom: “You are but one man” & “It is dangerous to go alone, take this!” So I wanted to ask you guys! Thanks.


What would give you reason to believe this is a feasible goal?

If it were, would you not expect by now to have a clear foundation and set of fundamental principles, broadly agreed?

If you did produce a generalised AI, even if at a lower level than human, how would you even know, if you hadn’t at the same time solved the problems of sensation, motor function, vision, language and/or memory? Or did you think those were solved already?

Well, let’s not get into the argument of whether the human brain is a computer, a laptop, or a headset. I think it’s feasible because I don’t think there are too many tricks the human brain pulls: memory, encoding, path integration, navigation, attention, winner-takes-all, etc. can all be separately investigated, in my opinion.
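To illustrate how small some of those pieces are once isolated: winner-takes-all reduces to a few lines if you strip away the biology. Here is a minimal, rate-coded sketch; the function name and the flat NumPy layer are my own illustrative assumptions, not anyone’s model:

```python
import numpy as np

def k_winners_take_all(activations, k):
    """Keep the k most active units, zero out the rest.

    A toy, rate-coded reduction of winner-takes-all: lateral
    inhibition collapsed into a single top-k selection.
    """
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]  # indices of the k largest values
    out[winners] = activations[winners]
    return out

layer = np.random.rand(100)              # 100 units with random activations
sparse = k_winners_take_all(layer, k=5)
print(np.count_nonzero(sparse))          # -> 5
```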

Well, if I weren’t using my real name here, I could give a piece of my mind to famous AI researchers :smiley: They are wasting students’ lives over their … Anyway. But I think anyone who crawled their way into this forum from the DeEp LeArNiNg community somehow realized something is fundamentally wrong. It would be fruitful if someone said, “you know what, counterfactuals are always left behind; a true model must be holding a type of counterfactual somewhere!” I would be happy to add it to “the list”.

Exactly, I asked “what should the titles of this kind of document even be?” Which research topics need to be parallelized? What you mentioned are the classical titles of cognitive science, a valid point. And which topics can be considered a (non)linear combination of all the others is an important analysis.
I can’t, for example, ask this question to Bengio; he would probably say “vectors” and I would say “bye”. I think gathering information on this vast range of subjects is hard. But trying to explain some phenomena by others (a little reductionism) is fun.
Also, I am okay with a Koko-the-gorilla-level system. One may say, “development through the womb is necessary; humans learn the first causal relation in the womb by kicking the mother and hearing the response”. A valid argument. These are all “what makes a human human” type of conditions (necessary or sufficient).
After asking all the questions at all levels, you can prepare yourself a chart of “what must be done”. The full connectome of the brain? Maybe.
But in the end a roadmap is better than nothing. I am sure Numenta has a roadmap. I watch their YT channel. They talk about the implementational level. Maybe they have opinions about the other levels as well. Maybe this helps them (I am open to working pro bono, Numenta :smiley: ).


Where did you re-post this entry, @SeanOConnor? I was about to :white_heart: it.

Interesting idea. I always liked the concept of hashing, and I thought that in a high-dimensional space (of 10,000 synapses) something like hashing must be implemented SOMEWHERE (I don’t know where). It sounds like what a neuron would do in terms of a nonsynaptic plasticity mechanism. Thank you, sir!
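One concrete way to read “hashing in a high-dimensional synaptic space” is locality-sensitive hashing by random projection, where each hash bit is the output of a threshold unit, roughly what a point neuron computes. A toy sketch, assuming fixed random weights standing in for the synapses; the 64-bit width and all names are illustrative choices, not anyone’s model of a neuron:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

N_SYNAPSES = 10_000   # matching the "10,000 synapses" above
N_BITS = 64           # hash width, an arbitrary choice

# Fixed random weights stand in for the synapses of N_BITS neurons.
projection = rng.standard_normal((N_SYNAPSES, N_BITS))

def neuron_like_hash(x):
    """Random-projection (SimHash-style) binary code: each output bit
    is the sign of a weighted sum over all inputs."""
    return (x @ projection > 0).astype(np.uint8)

a = rng.standard_normal(N_SYNAPSES)
b = a + 0.1 * rng.standard_normal(N_SYNAPSES)  # slightly perturbed copy
# Locality sensitivity: similar inputs agree on most hash bits.
print(np.mean(neuron_like_hash(a) == neuron_like_hash(b)))
```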

Hello,

I’m working on my own project: NEUWON.
The purpose of this project is to create a framework for making simulations of the brain.
Disclaimer: it is a work in progress, but I certainly hope it does not take 10 more years to complete.

Here are some details about it:

  • Writing computer simulations is easy, writing high quality simulations is difficult, and using graphics cards is even harder.
    So I made a framework for implementing simulations, which is (mostly) agnostic to the type of thing being simulated.

  • Then using my new framework, I’m working on implementing a simulation of neurons, dendrites, synapses, etc.
    The simulator has two parts: Mechanisms and Diffusion/Electricity.

    • Mechanisms represent chemical reactions or other molecular changes.
    • Chemical diffusion & Electricity are the primary means of communication between different mechanisms and between different areas of the brain.

    This design forces all of the mechanisms to use biologically realistic inputs and outputs, even if internally they are implemented in an unrealistic way.
    And it allows you to more easily mix and match different mechanisms because they’re all using a common interface (see the sketch after this list).
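To make the “common interface” point concrete, here is a rough guess, in Python, at what such a contract could look like. To be clear, this is not NEUWON’s actual API; every class, method, and parameter name below is invented for illustration:

```python
from abc import ABC, abstractmethod

class Mechanism(ABC):
    """A mechanism may keep any internal state it likes (even an
    unrealistic one), but it can only read and write biologically
    meaningful quantities: membrane voltage and chemical
    concentrations."""

    @abstractmethod
    def advance(self, voltage_mv, concentrations, dt_ms):
        """Advance internal state by dt_ms and return this mechanism's
        effects on the shared state: a trans-membrane current and a
        dict of chemical concentration changes."""

class LeakChannel(Mechanism):
    """A trivially simple mechanism: a passive ohmic leak channel."""

    def __init__(self, conductance=0.3, reversal_mv=-54.3):
        self.g = conductance
        self.e = reversal_mv

    def advance(self, voltage_mv, concentrations, dt_ms):
        current = self.g * (voltage_mv - self.e)  # ohmic leak current
        return current, {}                        # releases no chemicals
```

The point of the design, as described above, is that any mechanism honoring a contract like this can be dropped into the simulation regardless of how its internals are implemented.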


@Falco
I dropped it because I have to focus 1000% on dropshipping.

This would make a great deal more sense if someone (anyone?) had a working AI, so you’d know what you were trying to do. For that matter, the field is littered with projects that started well enough, but most of them crashed and burned against the combinatorial wall.

Reading the history, you may notice a fairly common theme: a good definition, a good set of assumptions about what needed to be done, well-planned execution, good implementation, and failure in the “scale up” phase. This stuff is harder than the experimenters thought it would be.

I do find it endlessly amusing to hear people talk about what an AI will or won’t do, as nobody has ever made one.


A significant problem lies with definitions, many of which have been coined by philosophers who couldn’t write a line of code if their lives depended on it. In this instance, what do you mean by AI? I have a bunch of them scattered around the house.

Me (speaking) “Echo, are you conscious?”
Alexa (replying) “I know who I am.”

If that scenario had been shown to me when I was a grad student in 1980, I would have said, “Praise God! We have attained machine sentience!”

Well, we all know that did not happen. What did happen is impressive, but all of it has been driven by business models (Helloooo, Numenta) and not science. Oh, and science isn’t off the hook; it is the source of all the confusion.


Agreed. Over many years I’ve seen it go like this:

  1. A computer can’t do X. X requires human intelligence, or AI.
  2. Oh, we found a way to get a computer to do X.
  3. That’s not AI, and a computer still can’t do Y.

Rinse and repeat. The take-home is that we’ll keep on chipping away and eventually we’ll wake up with “real” AI. I don’t buy it.

My definition of “true” AI is all about being general or adaptive, at the same level as a wild animal running a maze or persuading an experimenter to part with food.

Think about it: a self-driving car equipped with a credit card and a few scripted dialogues could survive indefinitely in a city and even make a living as a taxi with less AI than your average chimp or raven.


I am aware; that’s why I called them “proposals”. I didn’t imagine the source code would appear in front of my eyes as I spoke.

I am talking about what AI SHOULD do, a philosophical question: what is necessary and sufficient? I don’t know, but people must be aware of some grand-scale phenomena (something they observed in their kids’ development, for example) which can be reduced to a set of implementable local rules, common across every activity humans do. That’s my big assumption.
The truth about this can only reveal itself (a necessary condition) if we talk about it. So I want to talk about what it is to be human at every scale. How do people model themselves? How do you model yourself, for example? If you are self-aware (it starts around 9 months and matures around 2 years), you must have modeled yourself, right? The modeler’s model of the modeler himself is a good place to start, imo. Even if it is wrong, it must be an honest mistake.

I disagree.

The AI problem will be solved by science and engineering, not by conversation and introspection.

AI is a property of every animal with a cortex (and many birds too). Build software that can run a maze or push a lever for food as well as a lab animal and you have AI.


Well, all science problems are solved by looking at, analyzing, and communicating about the target of the scientific inquiry. In this particular case, the subject of inquiry being our own minds, how do you expect the science/engineering to work by looking elsewhere than at the target we’re investigating, and communicating about something other than that same target?


Yeah, we know that we model every-thing, and if you think about it, it would be really, really stupid to leave the single most important thing of all unmodeled, you know. That is already obvious. The problem is that the modelling technology we are trying to replicate is still very much a black box. We hear some noises, see some light bulbs turning on and off… and of course we experience it popping out models of every other thing. Now what?


This isn’t like most science. No one will ever observe an “object recognition”. Neurons just do whatever they do, and people choose how to interpret that. We shouldn’t treat even the most basic, common-sense interpretations as if they were scientific facts.

We need to apply science etc. to conversation and introspection. Like it or not, that’s what solving the AI problem is. I just think we should be as clear as possible about what we’re trying to make.

I think we need ideas about that, but then we need to check if we were right based on the brain. We can’t really confirm that sort of thing about intelligence as a whole, but that can be done for more specific brain functions like object recognition.


Any observation in any type of scientific experiment is technically ALWAYS just correlation data if you don’t, in the end, pose a theoretical model that explains those observations. And there is always a chance that your observations are true but your model is wrong, since there is always another causal link that explains your data (likely or unlikely). So sharing models is technically what you always do, and all you can do, in the end (aside from sharing the raw data).
Neuroscience has been producing excellent results for YEARS. There is a wealth of observations sitting there; they just need to be turned into a model of causality, and that can only happen if you, the model creator (the intelligent being), start explaining them. We do it by language. So everything, from the macro to the micro level, is, and must be, the subject of this process.
Considering the fact that the ideas you don’t share and don’t scrutinize through the eyes of others can always stay as muddy fragments of ideas, what we need is observe observe observe and talk talk talk talk talk. In this thread I wanted to ask people what the necessary and sufficient conditions of being human are. I think the method is right.


No, the essence of science is experimental data and a theory to explain and predict it. The essence of engineering is reliable science and practical technique. Introspection and conversation give you neither.

Agreed, a fruitful conversation is one that proposes the setup of the next scientific test and predicts its findings.


Then you will never solve it. Period.

Can you clarify what you mean?