10-15 Year Plan to Artificial Human Intelligence? Do you have any system proposals?

I think we can explain them at the level of "look, this neuron fired, our model predicts the mouse must be planning to turn into the left corridor; look, this happened, our model says it will turn into the other one," etc. As in, finding a group of cells and correlating it with behavior.
But I can agree that there is a big difference between a system and these circuit models. TBT is trying to fill the gaps, but are those all the gaps? Maybe you observed something unique, that others were blind to, that MUST be explained by the current model, and if the model can’t explain it, that means the model is wrong. And you can keep scrutinizing the model and adjusting it (by grounding to past papers, of course, and future experimental settings). That means we need MANY scrutinizing phenomena, a list of acceptance tests, at every level. The more the merrier.
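
A toy sketch of the "correlate a group of cells with behavior" idea above: fit a decoder on per-trial spike counts and see whether it predicts the turn on held-out trials. The data and names here are made-up placeholders, not a claim about any real experiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: spike counts for a group of cells on each trial,
# plus which corridor the mouse actually turned into (0 = left, 1 = right).
rng = np.random.default_rng(0)
spike_counts = rng.poisson(lam=5.0, size=(200, 30))  # 200 trials x 30 cells
turn_choice = rng.integers(0, 2, size=200)           # placeholder labels

# "This neuron fired, so our model predicts a left turn": fit a decoder on
# past trials, then check its predictions on held-out trials.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, spike_counts, turn_choice, cv=5).mean()
print(f"held-out decoding accuracy: {accuracy:.2f}")  # ~0.5 for this random data
```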

How do you build a theory, if you’ll excuse me? How do you design experiments? Can you show me ONE example in the whole history of humanity where a theory and an experiment didn’t require ANY introspection AND any conversation? Please?
Reminder: any “thinking out loud” or “inner voice” is both introspection and conversation at the same time. Even children, before language acquisition, design their experiments using introspection, since the model of how the world works is inside your brain, and your cognition uses that world model (and action, to update its configuration) to predict an outcome for the senses. If those predictions match the senses every time, your model is right; if you are surprised, the model is wrong. Experiments always require “what is my model?”; then you package your model in the zip of language and share it with other people so that they can use it. Always, always, whatever you do, it is introspection and conversation. Always.
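
A minimal sketch of that predict-compare-update loop, with an assumed one-parameter "world model" and made-up numbers; it only illustrates the "surprise means the model is wrong, so adjust it" idea:

```python
import random

TRUE_EFFECT = 3.0      # how the world actually maps an action to a sensed outcome
model_effect = 0.0     # the internal world model's current guess
learning_rate = 0.5

for step in range(200):
    action = random.uniform(-1.0, 1.0)
    predicted_sense = model_effect * action        # "what do I expect to sense?"
    actual_sense = TRUE_EFFECT * action            # what the senses actually report
    surprise = actual_sense - predicted_sense      # mismatch: the model is wrong here
    model_effect += learning_rate * surprise * action  # adjust the model, not the world

print(f"learned effect ~= {model_effect:.2f} (true value {TRUE_EFFECT})")
```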

1 Like

Before creating mechanisms, I think we need to know how the brain goes about doing things. “Design intelligence” is a pretty stupid prompt for a STEM topic.

Models can help explain how the brain does things, but even if they work, they’re not proof. Correlations are problematic, because brain functioning is intricate, but even more so because of that stupid prompt.

As a quick example, I used to interpret projections to the spinal cord as motor output. The cortex actually targets the sensory part of the spinal cord too, not just the motor part. That leads to things like distinguishing self-motion from motor commands. My notions about motor control were too vague. They probably still are.

Here’s a longer example of how correlations could cause problems. Attention is a vague common sense notion. Let’s say you think a class of neurons responds to attended stimuli. Somehow, you have an experiment which determines whether the animal attends a stimulus. The neurons correlate very well with attention.

That doesn’t necessarily mean they’re involved in attention, although it points in that direction. For example, they might participate in a sensory submodality with attention-worthy properties. Even if they participate in attention, that doesn’t mean they represent attentionally filtered stimuli. For example, they might be involved in surprise. If so, it could be for behavior, perceptual detection, brain state, etc. They could also only be tangentially related to attention, or involved in a mechanism which doesn’t perfectly match the notion of attention. For example, attending something could mean anchoring locations to it.

Would you be willing to soften your viewpoint to include a scientific investigation into how introspection works and what it can and cannot offer into understanding mental processes?
Keep in mind that this could include invasive instrumentation that finds neural correlates to various mental states.

Understandable; any sensory neuron can use motor commands to improve its prediction. V1 takes motor input as well. That’s an excellent observation about perception-action loops. Probably the exact ratio is in the connectomes and not something we can work out analytically.
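
A toy illustration of that motor-input-to-sensory-prediction point, in the spirit of an efference copy: subtract the sensory change predicted from your own motor command, and what remains is the externally caused change. The linear gain and the numbers are assumptions for illustration only:

```python
# Assumed mapping from a motor command to the self-generated sensory change.
MOTOR_GAIN = 2.0

def perceived_external_motion(total_sensed_slip, motor_command):
    predicted_self_motion = MOTOR_GAIN * motor_command  # prediction from the efference copy
    return total_sensed_slip - predicted_self_motion    # residual = motion caused by the world

# The eye moves (command 1.5) while an object also drifts by 0.7:
total_slip = MOTOR_GAIN * 1.5 + 0.7
print(perceived_external_motion(total_slip, 1.5))  # -> 0.7, the external part only
```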

Agree, long road ahead; this is an entry under the “Pitfalls of Correlation” title, for example :smiley:

1 Like

I’m not sure L5 is sending motor commands to the sensory spinal cord. It probably does, but it could be other things, like attention / detection or modelling self-motion.

1 Like

Can you show me an example of anything built with introspection and conversation and no experimental data or technology?

It ain’t going to happen. The AI problem is one of science and engineering and those of us who get that will do the work and find the answers where others are still just navel gazing and chatting.

All thinking is “introspection”; it’s just a matter of depth and breadth. Problem is, the introspection of most people is quite shallow, and it’s not any deeper for most coders. No, @Bitking, I don’t think previous attempts were based on good definitions, I think they were garbage. A good definition should have scalability built in. Anyway, here is my proposal, top-to-bottom: http://www.cognitivealgorithm.info/. Just in case there is some one-in-a-billion deep-introspection guy here.

4 Likes

If clever mechanisms do something neat and are consistent with the brain, I’m sure the brain breaks problems into pieces the same way as you do.

That’s bs. It’s not bad to think about what intelligence is. It’s bs on its own, of course, but not once it meshes with neuroscience facts, or when it points out stealth bs.

1 Like

Not so. Not even close.

" Introspection is the examination of one’s own conscious thoughts and feelings"

The thinking process in science and engineering is the examination of the external physical world and the principles on which it operates. It is objective, and explicitly excludes one’s own thoughts and feelings.

All of your knowledge of the external world is internalized, by definition. “You” is the sum of your knowledge. Some of that knowledge is value-charged; that’s your feelings. Consciousness has nothing to do with it. Objective / subjective is shades of grey. Emotions: curiosity is the driver, the rest gets in the way.

1 Like

Even when you’re trying to create a brain-like AI? Brains think. They also perceive. I can introspect about what I perceive. It’s not like animal experiments never consider the animal’s perceptions. Often, they’re basically just conscious perception too (perceptual stuff reported by behavior).

1 Like

I didn’t say no experiment, I said:

there MUST be experiments before; I can reevaluate them, I can suggest them. But somewhere along the line, while you try to come up with a unifying model, you have to specialize in the engineering area, where you read many papers, design models, argue with every other engineer to the teeth, agree on something, predict outcomes, and try to prove the predictive power so that the actual scientists go and do additional experiments to see whether that predictive power can be exercised in the real world. I meant that we mostly argue about anything and everything and do small simulations, while scientists do lots of experiments and discuss the results in one last paragraph, so why not complement the two?

Yes, exclude the subjective experience to reach the objective truth… so that you can reach the subjective experience back where it no longer exists? Illogical. You can’t build an experience machine - an animal - by averaging over the experiences of the animals that you want to reverse-engineer.
I am not saying the lies people tell themselves matter; they don’t. I am saying that by cross-examining those experiences, you can find common denominators and (non)linearly independent phenomena that can be turned into symbols, which then can be communicated, which then can find other examples in other experience machines, and you can build confidence about their existence, which then guides you to look for them in experiments. If you don’t have introspection and conversation, YOU CANNOT design experiments and YOU CANNOT evaluate them. Every time you do something, you use your beliefs about the unobservable parts of the universe.

Development of the wheel… sometimes some things just don’t require an explanation or deep thought. They create an activation sequence that joins up areas that then cascade into the “Eureka” moment…

I’m not quite sure you have a concept of how the cortex works (along with all of us here, me included), since any activation sequence through the cortex is what you are labelling “introspection”: any activation of prior learning of any type is effectively what you’re labelling introspection. Without any of this sequence activation we would revert to animal instinct behaviour dominated by the “old brain” areas and likely just fight each other to be Alpha rather than create anything new that is not needed for basic survival or procreation.

Agree with the depth. Instinct (Eureka) moments are less of a recursive process and are not what I would call introspection, because the activation sequence is different and not as long; it’s more like a wider cascading event. Instincts can also be different because they can be more heavily influenced by activations outside of the cortex (e.g. body language micro-expressions, dodging an object thrown at you).

With the current GPU approach I think the majority of “use” is sort of like teaching someone about a hammer and giving them nails to do a job, then later giving them a handful of screws and wondering why the screws were hammered into the wall.
I have seen (and inadvertently hired) developers who write some horrendously inefficient code that they see as fine because it runs on leading-edge hardware, gets results, and they don’t see a problem with it. Scale is what breaks “all” code at some point. AI is quite a few orders of magnitude beyond the scale problems the majority of developers can deal with.

My view on this is a little different because I believe (conjecture open to being flamed) that we only need to develop a certain level of intelligence that is a subset of human capabilities which would then evolve the next generation. Trying to create a human replica as an AI is like asking Henry Ford to have made a car with legs. It can be done but is going to take way longer to achieve and be way less efficient.

Just my thoughts, and matches for the heap of combustible material…

But the eureka IS an introspection. OHHHHH, I get it: when you people say “introspection” you mean just the term “meta-cognition”, without how it actually works. When I say “introspection” I mean “thinking about thinking”; I include the second “thinking” because it is the input of the process, and the output of that process is the invention.

Continuing with your example: when you see in nature that more “circular” rocks travel further on the dirt, you remember this experience (perception & memory); you see this many times and create an iconic associative memory to recall it faster (world modeling/cognition). That is the thinking part; it happens in the moment of observation, in sleep (or in a separate introspection of your memory, whatever, not the point). When you are after an easier method of locomotion/carrying (because you get tired, introspection #2), you try to generalize using your past experiences (the meta-cognition starts here). You remember some things were moving more easily in the past (easy=further is another analogy, introspection #3, but not the point either), and you try to pinpoint why this “behavior” occurs in things with circular contours. Your hypothesis is that in all the experiences you could remember, the outer contour of the rocks was circular; maybe if you force something else to be circular, it will behave the same! Your experiment setup is: a rock, probably brute force, and some dirt. It worked! Now you imagine yourself on it (imagination), so in a similar way you say: “I am big, it is small, I can’t be on it, it must be big too.” You made the first wheel! But is it enough to be an invention? No! You can’t place yourself on it so that it can carry you like your mother carried you (introspection #4). It needs to have another one, like 2 feet (another analogy)! Didn’t work; do it with 4 wheels, like other animals, another analogy!
Hence eureka moments cannot happen without you. Other people saw the same thing as well, other people probably had the same memory as you do, but you tried to find patterns, weed out the unimportant ones, focus on certain features in your memory; when 1 wheel wasn’t working, YOU made modifications, not others. Clearly a brainstorming was going on in your brain. And that’s pure introspection.

I explained my model above. In it, some things are perception, some things are action, some things are cognition, some things are memory, and cognition using past cognition is meta-cognition, which I call “introspection” (since it is my own thoughts, my “previous” working memory: not the current binding state but the previous ones, the consciously accessed ones).

Weird drifting away examples, but ok.

Not true AT ALL; you clearly didn’t have enough courage to think about scaling. The von Neumann architecture and 100% safe message passing are THE PROBLEM; reading memory in order and the clock are the problem. Having the instruction sets elsewhere and the existence of error conditions are the problem; writing object-oriented code is the problem. Instead, have async processes all trying to predict and encode the local information without a clock, using their local memory and local messages, with no error conditions if something slipped away, and no object-oriented approach. Add a couple of slow regulatory hormones. That is nature’s way: 86B neurons living happily ever after.
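
To make the contrast concrete, here is a minimal sketch (mine, not a model of real neurons) of units that run with no shared clock, keep only local memory, update a local prediction from whatever messages arrive, and treat dropped or missed messages as normal rather than as errors:

```python
import asyncio
import random

async def unit(name, inbox, outbox=None, steps=20):
    prediction = 0.0                                  # local memory only
    for _ in range(steps):
        try:
            value = await asyncio.wait_for(inbox.get(), timeout=0.05)
        except asyncio.TimeoutError:
            continue                                  # a missed message is not an error
        prediction += 0.3 * (value - prediction)      # local predictive update
        if outbox is not None and random.random() > 0.1:
            outbox.put_nowait(prediction)             # ~10% of messages just get lost
    print(f"{name}: final local prediction {prediction:.2f}")

async def main():
    sensor_to_a = asyncio.Queue()
    a_to_b = asyncio.Queue()

    async def sensor():
        # A "sense organ" emitting noisy values around 5.0 at its own pace.
        for _ in range(30):
            sensor_to_a.put_nowait(5.0 + random.uniform(-1.0, 1.0))
            await asyncio.sleep(random.uniform(0.0, 0.02))  # no shared clock

    await asyncio.gather(sensor(),
                         unit("A", sensor_to_a, a_to_b),
                         unit("B", a_to_b))

asyncio.run(main())
```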

Nothing on this Earth evolves with the content of its binding state. It becomes obsolete so fast; you don’t ever need your ancestors’ memories (also don’t forget that the binding state is closely related to the configuration of synapses, which is not passed down either). You need their local learning rules, neural architectures, and regulatory processes. And if you think we are not using every capability of our brains, every network, every circuit, in any task, you are dead wrong. Circuits get silenced so that you have one clear working memory, you use your body representation all the time, your memory is a gate to the future, your predictions affect your emotions, you choose actions based on their utility. Every ability is used all the time, and evolution only selects the rules, not the content. Every binding state in every animal will be literally shit before “passing to the next generation”. This is not Assassin’s Creed. This is nature; you can’t adapt to new conditions if your bowl is already full (that’s why even individuals’ deaths are a part of humanity’s cognitive process).

I agree, but for a different reason: once anyone uses biology to actually make money, it becomes a lot easier for outsiders to invest in this technology. And neuroscience does not need to accomplish very much; it just needs to compete with backpropagation in order to redirect the funding and the hype away from those old unrealistic AIs and towards the study of biological intelligence.

Let us assume for discussion that HTM/TBT, as it stands, correctly describes the function of the cortical fabric.
Is anyone here moving up the hierarchy and out to the sub-cortical connections?

2 Likes

Not mine.

But I’ve wondered about the idea of using genetic algorithms to search the parameter space for HTM configurations or hierarchy arrangement.

One of the challenges would be determining the virtual world, and what is considered a ‘success’. For this fella’s implementation, he simply defined certain regions so that his creatures would evolve to migrate to those survival regions, thus passing on their genes.

I might go and do this myself, but at least I wanted to float the idea here.
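
A bare-bones sketch of what that search could look like. The parameter names, ranges, and the fitness function are placeholders; `evaluate_htm_agent` stands in for whatever simulation actually builds the HTM creature and scores how well it survives in the virtual world:

```python
import random

# Placeholder ranges for a hypothetical HTM configuration genome.
PARAM_RANGES = {
    "columns":              (512, 4096),
    "cells_per_column":     (8, 64),
    "activation_threshold": (8, 20),
}

def random_genome():
    return {k: random.randint(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(genome, rate=0.3):
    child = dict(genome)
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            child[k] = random.randint(lo, hi)
    return child

def evaluate_htm_agent(genome):
    # Stand-in fitness: a real run would build the agent from the genome and
    # score how well it migrates to the survival regions of the virtual world.
    return -abs(genome["columns"] - 2048) - abs(genome["cells_per_column"] - 32)

population = [random_genome() for _ in range(20)]
for generation in range(10):
    population.sort(key=evaluate_htm_agent, reverse=True)
    survivors = population[:5]                                       # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best configuration found:", max(population, key=evaluate_htm_agent))
```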

2 Likes

At long last. At least 500 years of science and 10,000 years of engineering tell us that sitting on your bum and introspecting butters no parsnips: you have to do the experiments and create the theories and build the models. Nothing else works.

You may have subjective experiences, but they are not experimental data and they are worthless to science. There is a path, but it does not go this way.