Action origination, old brain vs cortex

@Bear The thing that stops humans from making progress in visual and language processing, at least for me, is the fact that we don’t understand what it is to “understand.”

You might be surprised to discover how much is actually known.

The brain is very complicated: this is a problem where the more you know, the more you discover things you did not know you didn’t know. I have been doing this for decades and I am just coming to the point where I can see the hazy outlines of how it all works together, and how the small parts have to work to support this model.

The parts that are known about the various sub-systems put constraints on how the connected systems have to work. It is clear that much of what is missing in systems that should display “common sense” originates in the sub-cortical modules - an area that most current AI work ignores.

Please keep studying and feel free to ask about what is known. There are many people here who can point you to where some of the answers you need may be found.

3 Likes

It may originate there, just to speed things up. But then the cortex takes over the function, and performs it far more efficiently.

Not to argue - but I offer that the cortex is utterly unable to initiate action.
I would welcome any example you can offer where the cortex initiates action.

From what I have seen, it is purely responsive to input.
This input could be from the external senses OR another connected map OR the lizard-brain sub-cortical structures. Or some combination of these inputs.

Yes, the cortex is a wonderful processor of sensory input that gives the lizard brain a version of the world it can understand, and yes, the cortex is capable of taking the orders of the lizard brain and elaborating them into very complex patterns - but the lizard brain is firmly in the driver’s seat.
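To make that division of labor concrete, here is a toy Python sketch of the architecture I am describing. All class names, need values, and signals are invented for illustration - the point is only that the cortex is modeled as a pure function of its inputs, while command origination lives in the sub-cortical side:

```python
from dataclasses import dataclass, field

@dataclass
class Cortex:
    """Purely responsive: output is a function of input, never self-initiated."""

    def process(self, senses, other_maps, subcortical):
        # Combine whatever inputs arrive into a simplified representation;
        # with no input there is nothing to do - the cortex stays silent.
        inputs = [x for x in (senses, other_maps, subcortical) if x]
        return {"summary": inputs} if inputs else None

@dataclass
class LizardBrain:
    """Holds need state; the only component that originates commands."""
    needs: dict = field(default_factory=lambda: {"thirst": 0.8, "comfort": 0.3})

    def issue_command(self, world_summary):
        # Initiation comes from internal drives, not from the input itself.
        strongest = max(self.needs, key=self.needs.get)
        return f"address {strongest}"  # a very general command

cortex, lizard = Cortex(), LizardBrain()
summary = cortex.process("cup on table", None, None)
print(lizard.issue_command(summary))     # -> address thirst
print(cortex.process(None, None, None))  # -> None: no input, no output
```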

1 Like

I don’t think that’s a problem. There is no point in initiating an action when you don’t even know what’s going on.

The senses convey what is going on to the lizard brain through the cortex. The cortex gets better at this over time.

Much the same way - the lizard brain starts out “babbling” through the forebrain. Watch any baby to see the process in action. The feedback through the sensory stream allows the lizard to gain control of the body. In the beginning, the lizard controls “simple” things like feeding and forming social contacts.

Within a few months the cortex gets good enough at this that the lizard gets a very simplified version of what the senses are perceiving and issues very general commands that are turned into very detailed actions.

This is the essence of my dumb boss/smart advisor model.
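Here is a toy sketch of that babbling loop, with invented numbers: the lizard issues general commands, the cortex’s command-to-movement mapping starts out wrong, and sensory feedback gradually corrects it:

```python
import random

# Toy motor babbling: the lizard issues a desired position, the cortex
# translates it through a learned gain, and sensory feedback corrects
# that gain with a simple delta rule. All numbers are invented.
gain, lr = random.uniform(0.1, 2.0), 0.1   # initial mapping is wrong
for step in range(50):
    desired = random.uniform(-1, 1)        # lizard "babbles" a general command
    actual = gain * desired                # cortex elaborates it into movement
    error = desired - actual               # sensory stream reports the mismatch
    gain += lr * error * desired           # cortex refines its mapping
print(f"learned gain ~ {gain:.3f} (target 1.0)")
```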

2 Likes

That is what I am thinking about too, because otherwise animals like reptiles and fish wouldn’t be able to initiate actions. :grinning:

2 Likes

I agree that this is how it works in both ontogeny and phylogeny. But we don’t need to repeat either one.
We already know where it all ends up, so why not take a shortcut? As for master/slave, well, revolutions happen and progress goes on.

I have been following your work to make a fancy version of the standard edge finder algorithm, and your general approach of extracting a sanitized version of what the cortex does through layered action.
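(For reference, a “standard edge finder” in the Sobel family looks something like this minimal Python/NumPy sketch - illustrative only, not your kernel:)

```python
import numpy as np

# Sobel gradients on a grayscale image: edge strength = gradient magnitude.
def sobel_edges(img: np.ndarray) -> np.ndarray:
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                               # same kernel rotated for vertical
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx, gy = (patch * kx).sum(), (patch * ky).sum()
            out[i, j] = np.hypot(gx, gy)    # combined edge strength
    return out

edges = sobel_edges(np.random.rand(8, 8))   # toy image; real use: a photo
```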

I have to ask - to what purpose? What is it you are trying to do? Is there a homunculus that will view it?

This pursuit of extracting some type of information from a static image (have you been working on saccades and I just missed it?) misses the bigger question of what will use this information. Any kind of calculation must have some overall purpose or function. Perhaps this feature extraction will be useful in its own right, but I don’t see it playing much part in artificial intelligence work.

I say that the purpose of most of the cortex (the back, middle, and sides) is to feed the lizard brain what it needs to see in order to work. That is the basic processing that is performed. The big forebrain is there to untangle the rantings of the lizard brain and turn them into useful actions. The two parts are strongly connected and work together for these purposes.

Once you view the processing that way many odd things about the brain make more sense.

Are there shortcuts? :thinking::thinking::thinking:

It’s fancy and sanitized because this simple kernel is supposed to be used recursively. SDR is a simple idea too, actually a lot simpler than mine. As I’ve been trying to explain, the purpose is scalable discovery of incrementally complex patterns. That’s the definition of learning, exploration, science. I feel this purpose is superior to feeding the lizard brain.
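For comparison, an SDR (sparse distributed representation) is just a long, mostly-zero binary vector, and similarity between two SDRs is the overlap of their active bits. A minimal sketch - the sizes here are typical HTM choices, not a spec:

```python
import numpy as np

# An SDR is a high-dimensional, mostly-zero binary vector; similarity
# between two SDRs is simply the count of shared active bits.
rng = np.random.default_rng(0)
n, active = 2048, 40                        # ~2% sparsity, a common HTM choice

def random_sdr():
    sdr = np.zeros(n, dtype=bool)
    sdr[rng.choice(n, size=active, replace=False)] = True
    return sdr

a, b = random_sdr(), random_sdr()
print("overlap:", int((a & b).sum()))       # near 0 for unrelated patterns
print("self-overlap:", int((a & a).sum()))  # = 40, a perfect match
```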

1 Like

We are so close to asking and answering the same question, yet we keep passing each other:

When you are done discovering the incrementally complex pattern - what do you do with it? What will make sense of this finely detailed model of the perception?

A huge problem in much of machine learning is sensor fusion and making sense of the river of data flowing into the sensors. None of these systems has any “common sense” even with the addition of a mound of heuristics and hand-tuning.

My take is that instead of making the data more detailed and complex, you go the other way and break it down to a bag of features. At the end of the process, the outcome is something so simple that a lizard can understand it. There is a lot of data reduction along the way, but that is not important to the lizard - it knows what the body needs (its very special talent) and picks what it thinks is the best choice from the simple menu that the cortex is offering.
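Here is a toy sketch of that reduction, with made-up features, needs, and affordances: the cortex collapses a detailed percept into a short menu of labels, and the lizard matches that menu against its need state:

```python
# Toy data-reduction pipeline: detailed percept -> small bag of features
# -> need-driven choice. All features, needs, and weights are invented.
def cortex_reduce(percept: dict) -> set:
    # Thousands of pixels and edges collapse into a handful of labels.
    return {label for label, present in percept.items() if present}

def lizard_choose(menu: set, needs: dict) -> str:
    # The lizard ignores the lost detail; it only matches menu to needs.
    affordances = {"cup": "drink", "chair": "rest", "face": "socialize"}
    options = {item: affordances[item] for item in menu if item in affordances}
    for need, _ in sorted(needs.items(), key=lambda kv: -kv[1]):
        for item, action in options.items():
            if action == need:
                return f"{action} (using the {item})"
    return "keep looking"

menu = cortex_reduce({"cup": True, "chair": False, "face": True})
print(lizard_choose(menu, {"drink": 0.9, "socialize": 0.4, "rest": 0.1}))
# -> drink (using the cup)
```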

Going back to your fancy processing - how do you see it being used in a larger system? I ask these questions all the time in my work as a system designer. What will use this data? How will it interpret this data? How will that result in the selection of actions? I think these questions have to be addressed in any evaluation of the utility of the process.

2 Likes

It will never be done, not until the heat death, if any. Making sense is what it’s all about, and there is no need for some homunculus watching it.

“A huge problem in much of machine learning is sensor fusion and making sense of the river of data flowing into the sensors”

That’s what I meant by scalability.

Does the sense of “superior” come from the limbic system or frontal lobe?
Or somewhere else? Is this sense the reason for action, or the result of reasoning?
:thinking::thinking::thinking:

1 Like

Superior because cognition is the most general-purpose instrument. All human motives - instincts, conditioned values, pure curiosity - are instrumental; otherwise they would not have evolved. The difference is in the degree of their instrumental generality. More general instruments ultimately win out, by definition.

2 Likes

By god? Or by humans? If it is by humans, how could we be the judges of our own standard?
How could we see the ultimate? Or is it only the ultimate in human eyes?
:thinking::thinking::thinking:

Let’s try to stay on topic! :slight_smile:

5 Likes

By themselves, in my mind or in any other arena, of any scope. This is what I call meta-evolution, no judge or jury.

Sorry, my thinking is really divergent :woozy_face:

1 Like

That’s a separate question. It can be separated precisely because of the instrumental generality of pattern discovery.

This flowery talk is all well and good.

I take your fancy kernel and feed it a picture of a Numenta cup with a plain background.
What secrets of the universe would you expect to come from that?
Will it be able to extract the shape? Purpose? Properties?
I can’t see how.

The lizard brain / cortex system may deduce that it is thirsty and reach to pick up the cup as an implement to address this need. Or that it needs comfort and again, reach for the familiar cup. Or be curious and think “OMG - is that a real Numenta cup!” and reach for it to examine the details of this fine vessel. I can’t say for sure as I don’t know the internal need state(s) of that agent.

In what sort of setting do you expect your model to be able to take an image and produce anything of value?