Brain Building - Q1. Define Intelligence

yeah, for fun.

I think I see what you mean here: robots don’t need human emotions to do my homework. In industrial settings there are clearly defined tasks that should motivate robots.

However, if your robot interacts with the world at large, then it’s going to need to know how to deal with people, or else people will exploit it.

which means that the machine already has self-awareness. I don’t know how many people would choose to make such a robot if it already had such capabilities.
If we could make a psychologist robot without additional problems, we would of course make it.
I don’t want to participate in discussions around self-awareness :D.

One thing I have been learning is that, as a novice in the field, I keep repeating the same mistake of using an unfit example to illustrate my point. Thanks for pointing that out. Learning calculation is definitely too advanced and high-level to relate to my objective.

For that reason, I hope you are OK with me switching to another example to see if I can further understand whether emotion is a necessity or a nice-to-have in intelligence. But before I proceed, I want to emphasize again that

is never my intention at this stage, and I have emphasized before that I am NOT building human intelligence. I just want to build a software simulation of the most basic biological intelligence for now.

So instead of calculation, I would like to use object recognition and differentiation to find out if emotion is a necessity in intelligence. Place an orange in front of a newborn with intelligence (not necessarily a human newborn) who has no prior model of an orange, then take it away and place a different orange in front of the newborn. With my limited knowledge, I believe the newborn can learn and conclude that the second orange is similar to the first one without even knowing it is an orange. And if we replace the orange with an apple, I believe the newborn can learn and conclude that the apple is not the same as the orange it saw before. And I believe this is all accomplished without any social interaction, without anyone telling it right or wrong, and without any negative social reinforcement. Yet the intelligence can learn, adapt, and conclude which objects are similar and which are different. The motive behind the learning is minimizing prediction error (or, as Friston puts it, the free energy principle).
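
To make that concrete, here is a minimal sketch in Python of the kind of learning I have in mind (the feature vectors and the similarity threshold are made up for illustration): no labels, no social reinforcement, just minimizing prediction error against stored prototypes.

```python
import numpy as np

class NoveltyLearner:
    """Unsupervised 'same or different' judgments driven only by
    prediction error -- no labels, no social reinforcement."""

    def __init__(self, threshold=0.5):
        self.prototypes = []         # prior models of objects seen so far
        self.threshold = threshold   # error level that counts as 'different'

    def observe(self, features):
        if not self.prototypes:
            self.prototypes.append(features.copy())
            return "first object -- stored as a new prototype"
        # Prediction error: distance to the closest stored prototype.
        errors = [np.linalg.norm(features - p) for p in self.prototypes]
        i = int(np.argmin(errors))
        if errors[i] < self.threshold:
            # Small error: 'similar' -- refine the existing model.
            self.prototypes[i] += 0.2 * (features - self.prototypes[i])
            return f"similar to prototype {i} (error {errors[i]:.2f})"
        # Large error: 'different' -- store a new model.
        self.prototypes.append(features.copy())
        return f"different -- new prototype (error {errors[i]:.2f})"

# These vectors stand in for whatever the senses extract
# (color, roundness, size); the exact numbers are invented.
baby = NoveltyLearner()
orange1 = np.array([1.0, 0.9, 0.5])
orange2 = np.array([0.9, 1.0, 0.5])
apple   = np.array([0.1, 0.8, 0.6])
print(baby.observe(orange1))   # first object
print(baby.observe(orange2))   # similar
print(baby.observe(apple))     # different
```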

And even with your argument back on the calculation side, my interpretation is that emotion can assist learning, but that does not mean learning cannot occur without emotion. Say someone taught me 1+1=2, then 1+1+1=3, and then someone tells me 1+2=4. Without anyone telling me good or bad, and without any social reinforcement, my mind will continuously tell me something doesn’t match up and will keep deriving information until it aligns with my prediction.

Please don’t get me wrong. I do think emotion plays a part in higher-order intelligence. Just think of how our brain can interpret something as funny in one context but not another; that is a highly complicated process. I just do not believe emotion is an essential part of very basic intelligence (again, my objective is NOT human intelligence).

So you are making simple object recognition a marker for intelligence.
Does that make the face recognizer in my Canon camera intelligent?
It can spot faces with a variety of presentations and scales.

Following up on your simplification of learning as something that is not itself a complex, high-level activity: at the same time as you are putting fruit in front of the baby, she is looking all over at everything. That fruit is just one object in the constellation of objects in the environment. Her eyes are being driven by subcortical structures to scan the shapes of objects and to segment the visual field into objects, the clusters of features that make up an object. The cortical structures that hold an object distribute its features over many maps, and that coding and decoding is itself a fairly complex activity.

Continuing on the drive/effect trail and learning:

Just to clarify first: I never described object recognition as “simple”, because I do not believe it is simple, even though it is so intuitive to us. Nor am I making it a marker for intelligence, because I don’t know what the marker is yet. With my exercise I have been thinking a lot about how I can strip away as much as possible to hopefully arrive at the bottom layer and build upward from there. For that I have been focusing on what the brain can do in early development, when it starts from a blank slate, and I think object recognition and differentiation could be part of that.

If your Canon camera had no prior built in to recognize faces, were able to use the same approach to recognize oranges, apples, chairs, and dogs and understand the differences well enough to differentiate them from one another, and used the same mechanism to deal with sound (again with nothing built in beforehand), then yes, I do believe your Canon camera would be expressing a sign of intelligence. More importantly for my objective, if the underlying mechanism of your camera were based on the biological design, with growing neurons and synapses driving the learning, understanding, and decision making, I would even say it was expressing a simulated biological intelligence. If not, then I believe we would be unfairly denying that this agent is intelligent. Wouldn’t you agree?

I don’t believe I ever suggested that recognizing objects is not a complex, high-level activity. I am merely suggesting that handling mathematical calculation would be too advanced for what my current objective aims to achieve. Many biological species can recognize objects, but not many can handle mathematics.

I have followed your posts, but I still do not get how emotion is involved in recognizing and differentiating objects visually. What would count as positive and negative reinforcement in the process of recognizing objects? If an infant (not necessarily human) looks at an apple for the first time, and the apple is then rotated and then replaced with an orange, the infant will be able to recognize the same apple even when rotated and to differentiate it from the orange. I don’t see where positive or negative reinforcement, or emotion, is involved in the process. Unless my understanding of emotion is somewhat different from what most experienced neuroscientists have.

There is built-in face recognition coupled with pleasure in humans. This seems to reside in the limbic system; you know - the “emotional” part?
Yes.
Just like in my camera, face recognition is built into humans. As are the shapes of, and fear of, dangerous animals. I think this genetic gift is built into most of the animal kingdom. So your requirement that the camera learn faces is not really a good one. It already knows, just like most critters.

The mechanisms that point the eyes, deciding WHAT to look at and HOW to scan it, are almost completely subcortical. You experience what your subcortex decides you will look at. The subcortex has a bunch of built-in salience filters that pick things based on cues like motion and novelty. Once all that is settled, you get to remember whatever it picked. Those scanning algorithms are mostly hard-coded in very old brain structures. You don’t learn voluntary control until much later in the development process. And just to make sure we lay the whole “object as a picture” thing to rest: you remember a basket of features, not a picture. These features are scattered over many levels of the cortical hierarchy.

You ask about the emotion part of perception.

Here is something for you to ponder.
Why does the infant bother to look at anything at all?

Why do humans play and explore? What do they get out of it? What is that subcortex up to in the first place? Is there pleasure in exploring? Why does the infant look at the object? Why the apple and not just the table it is sitting on? Why does the novelty of the apple and orange draw the attention in the first place?

One of the most basic human behaviors is to seek out novelty in controlled doses. In HTM theory we say that a difference between perception and memory triggers bursting, or learning. Put another way: we say that surprise is the trigger of learning. When something is familiar we don’t pay much attention to it. The brain is structured around learning everything around it - we are learning machines, and the whole process is mostly automatic.
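
If it helps, here is a toy caricature of that idea in Python. It compresses minicolumn bursting down to a dictionary of transitions, so it is nothing like the real SDR machinery, but it shows surprise as the trigger of learning: predicted input passes quietly, unpredicted input “bursts” and gets learned.

```python
from collections import defaultdict

transitions = defaultdict(set)   # memory: symbol -> symbols seen to follow it
prev = None
for symbol in "ABAB" + "ABX" + "ABAB":
    if prev is not None:
        if symbol in transitions[prev]:
            print(f"{prev}->{symbol}: predicted, familiar, little attention")
        else:
            print(f"{prev}->{symbol}: SURPRISE -- burst, learn this transition")
            transitions[prev].add(symbol)
    prev = symbol
```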

Connections from the maps back to the limbic system are part of what triggers exploring behavior. If you think of the various maps/areas in the cortical hierarchy, there is some innate desire to put something in those maps - to explore and, by exploring, to add things to those storage areas. The end effect is that maps “want” to burst - to be filled with new information.

We clever humans have learned to present small batches of information that fill as many maps as possible with about as much as you can learn in one day. See school schedules for examples. For this to work you have to see personal relevance so you will be interested. Without this there is poor attention and learning. The sleep mechanism consolidates this new information so you are ready for more.

Exploration is one of the built-in behaviors. Learning is rewarded with a feeling of satisfaction.

There is survival value in this playing and exploring - to add useful behaviors and knowledge of the environment to be the stuff of adaptive behavior later. To know where the food and shelter necessary for survival are to be found and how to use them. To know about the social structures that are part of being a social animal. These are built in drives. Exploring and playing bring pleasure. Exploring with your eyes and learning bring pleasure.

Being shut off from exploring brings pain - the essence of punishment by incarceration. If you think of the featureless gray walls in prison - this is all part of the punishment.

I have had exchanges with many AI newbies on many occasions and on various levels of depth. Most seem to start out with some sort of “folk wisdom” about what intelligence is and some ideas about how it might work from introspection.

Almost every one of them starts out with some idea about which parts of the laundry list will make up a usable AI. I have yet to see one who has really sat down and thought through what they really want, or what they will get with the proposal they put forward. Most reject emotion, and many downplay the critical nature of the command and control structures built into the older parts of the brain.

I went through much the same mental evolution, so it is a bit easier to see it when others are walking the path. It goes something like this:

I want a magic calculator that can do or solve anything.
You mean like excel? Won’t that take a lot of programming?

NO - I want it to have a powerful built in learning mechanism so it can program itself.
You mean like skynet?

NO - I want to put limits on it so it can only do good things.
You mean - if you make a mistake in programming it will gain self-control and because it really is powerful it will destroy us all!

NO - It won’t have any [fill in the blank - no sense of self, no emotion, no xxx] so it can’t runaway and kill us all.
Every one of the limits imposed really would not work. For example - without a sense of self, how will it interact? It has to have a marker for “me” in every interaction so it knows who you are talking to; it has to have a physical location to run end effectors, … Even Alexa has to know you are speaking to it so it knows that you want it to do something. A really smart AI will have a strong sense of self and self-history, or it will be very limited.

This goes on for several rounds but in most cases the person I am talking to does not want a person - they just want a magic genie that can do no wrong.

Looking at the only example of functioning human-level intelligence we have (humans), we see that one of the critical features is strong socialization. When a baby is frustrated they are SO angry; ask any parent. If they had the capabilities of a fully grown human they would wreak great destruction on the source of their frustration - usually a fellow human. It’s a good thing we socialize humans before they get big and powerful. We have to build in a sense of right and wrong during development: which actions are and are not acceptable. It has to be built in and reinforced all through the development process.

One very good piece of advice is not to try to interact with wild animals. They are NOT socialized, and their behavior set does NOT include any bias against attacking humans. If the situation calls for defending against a human, or harvesting one for food, they do it. Animals don’t screw around; killing is very much part of the built-in behaviors in the wild. You don’t want any powerful machine without this key feature of right and wrong.

I offered this definition of intelligence before, and most people seem to think it is too simplistic. I would invite you to consider it with an open mind and think of how it could be developed into a functioning AI.
Intelligence - the quality of the processing of the sensory stream (internal and external) that ties that stream to prior learning in useful ways. The end result is the selection of the best action for the perceived situation.
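
Read as code, a minimal (and admittedly toy) version of this definition might look like the following. The situations, actions, and outcome scores are all invented for illustration; the point is only the shape of the loop: tie the sensory stream to prior learning, then select the best action.

```python
import numpy as np

memory = [
    # (situation features, action taken, how well it worked before)
    (np.array([1.0, 0.0]), "approach", +1.0),
    (np.array([0.0, 1.0]), "flee",     +0.8),
    (np.array([1.0, 0.0]), "ignore",   -0.5),
]

def select_action(sensed):
    # Tie the sensory stream to prior learning: weight each memory
    # by its similarity to the current situation, then pick the
    # action whose weighted remembered outcome is best.
    def relevance(entry):
        situation, _, outcome = entry
        similarity = 1.0 / (1.0 + np.linalg.norm(sensed - situation))
        return similarity * outcome
    _, action, _ = max(memory, key=relevance)
    return action

print(select_action(np.array([0.9, 0.1])))   # -> "approach"
```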

Your definition of intelligence fits my definition of perception! :thinking:

Do you see a difference between intelligence and perception?

Yes - there is one key difference.
On the perception thing we are together.

The intelligence part is the selection of the best action.

We both see action as a part of the perception process.
Would that mean that intelligence is required for perception in your view?

Maybe the difference lies in the temporality of what we call actions:

  • Instantaneous actions (~100 ms alpha cycle) → perception
  • Planned actions (>100 ms from now) → intelligence? cognition?

After putting a lot of thought into how the Numenta word “predict” fits into a definition of intelligence, a good test was to use it to describe the goal of the Jeopardy game: to correctly predict the question for a given answer.

It’s then a matter of describing, in as few words as possible, the basic components required by HTM theory, Watson, and the Heiserman Beta-class algorithm. I ended up with: a memory system to store confidence-level-controlled motor action guesses.

In more detail, this is what I have been tweaking for over a decade:

Behavior from a system or device qualifies as intelligent by meeting all four of the circuit requirements for this ability:

1. A body to control, either real or virtual, with motor muscle(s), including molecular actuators, motor proteins, speakers (linear actuators), writing to a screen (arm actuation), or motorized wheels (rotary actuators). It is possible for a biological intelligence to lose control of the body muscles needed for movement yet still be aware of what is happening around it, but in this condition it cannot survive on its own and will normally soon perish.
2. Random Access Memory (RAM), addressed by its sensory sensors, where each motor action and its associated confidence value are stored as separate data elements.
3. A confidence (central hedonic) system that increments the confidence level of successful motor actions and decrements the confidence value of actions that fail to meet immediate needs.
4. The ability to guess a new memory action when the associated confidence level sufficiently decreases. For flagella-powered cells, a random-guess response is designed into the motor system: reversing motor direction causes the cell to “tumble” towards a new heading.
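
A toy sketch of those four requirements in Python might look like this (the action set, confidence arithmetic, and guessing threshold are mine for illustration, not Heiserman’s exact published algorithm):

```python
import random

class BetaClassLearner:
    """A toy sketch of the four requirements above. The body (1) is
    whatever executes the returned action; memory (2) is addressed by
    the sensed state; confidence (3) rises on success and falls on
    failure; a new action is guessed (4) when confidence drops too low."""

    def __init__(self, actions, guess_below=0.0):
        self.actions = actions          # the motor repertoire of the body
        self.memory = {}                # sensed state -> [action, confidence]
        self.guess_below = guess_below  # confidence level forcing a re-guess

    def act(self, state):
        # (2) memory addressed by the sensors; (4) guess when unsure
        if state not in self.memory or self.memory[state][1] < self.guess_below:
            self.memory[state] = [random.choice(self.actions), 0.0]
        return self.memory[state][0]

    def feedback(self, state, success):
        # (3) the confidence (central hedonic) system
        self.memory[state][1] += 1.0 if success else -1.0
```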

For machine intelligence, the IBM Watson system that won at Jeopardy qualifies as intelligent. Word combinations for hypotheses were guessed, then tested against memory for confidence in each being true, and in whether it was confident enough in its best answer to push the buzzer. Watson controlled a speaker (a linear-actuator-powered vocal system), and arm-actuated muscles guiding a pen were simulated by an electric-powered writing device.

The Thousand Brains Theory distributes complex predictions among cortical columns, made of social cells that differentiate into complex neural circuits. Even slime molds are surprisingly fast learners:

Social cells are already excellent at learning to respond properly to environmental stimuli ahead of time, i.e. to make predictions. HTM models code that into each cell without needing to know exactly how the underlying cellular biology works.

I would judge the usefulness of a definition of intelligence by how quickly it gets someone modeling a system that learns to make correct predictions; it should contain what readers most need in order to try it for themselves.

Sounds pragmatic :+1:

I put this one as the expression of intelligence. Without some behavior there is no way to determine if something is intelligent. As much as it might annoy some people on this forum, you have to judge expressed behavior in some way to determine how intelligent something is.

I had not considered the role of intelligence in active perception, but that makes a great deal of sense - the ability to correctly parse a complex environment, leading to the selection of actions, would also be some measure of intelligence. As you learn the features of the environment you should get better at understanding it: parsing out the objects, actors, and relationships, and predicting the possible outcomes of actions.

This directly points to the deep intertwining of structure and content in intelligence. It’s not enough to have a structure capable of intelligent behavior - it has to be filled with useful data to complete the construction of intelligent behavior. Without this programming/learning it is no more intelligent than a common stone.

Capacity for intelligent behavior alone is not enough.

Yes, there are a number of models to qualify as “intelligent” (or not): for a robotics beginner it’s David Heiserman; for someone who wants to input game-show poetry and all that it’s Watson; and in between, for very neuroscientific methods, it’s HTM and the associated Thousand Brains theory.

After accounting for Arnold Trehub’s simplified diagram of the human brain I get this comparison (illustration from: https://sites.google.com/site/intelligencedesignlab/home/ScientificMethod.pdf).

What makes a simple Heiserman circuit/interaction able to produce complex behavior is what happens temporally, over time, while racing from one timestep/thought to the next, in a direction that depends on what it was thinking about minutes or seconds ago. Action outputs become pulse/spike-train signals for fine control of muscle forces. When everything is going well, it is switching high-confidence motor signals back and forth to maintain balance at a given (circular or linear) navigational angle. One tiny thing it has not yet experienced can cause a total change in what it is doing before it goes back to the earlier task - or not, depending on whether it starts off a new memory location with the current motor actions as data (the normal case), or guesses entirely new actions.

The model that demonstrates navigational behavior has to predict, ahead of time, the motor actions needed to stay out of the way of an approaching shock zone. The shock sends confidence in the usual actions to an all-time low, which causes re-guessing; at first this leads to avoiding the food while getting zapped. The virtual critter then gains the common sense to go around the zone and wait for the food to be in the clear. There are also dilemmas where wanting to do two conflicting things at the same time causes amusing impatience, as in living animals. It will never win a genius-level game show, but this is an excellent test of navigational-level intelligence. At the “intellectual” level are vocal motor actions and associated networks for “drawing a picture” of something we can navigate and predict from in our minds. We need both, to talk and walk at the same time - not a single thing controlling both.
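
Here is a tiny runnable version of that shock-zone story, using the same kind of learner sketched in my earlier post. The two states and three actions are invented for illustration; the point is watching confidence collapse under the zaps and a re-guess take over.

```python
import random
random.seed(1)  # deterministic for the demo

class BetaClassLearner:
    """Condensed from my earlier sketch: memory addressed by the sensed
    state, holding a motor action and its confidence level."""
    def __init__(self, actions, guess_below=0.0):
        self.actions, self.memory, self.guess_below = actions, {}, guess_below
    def act(self, state):
        if state not in self.memory or self.memory[state][1] < self.guess_below:
            self.memory[state] = [random.choice(self.actions), 0.0]
        return self.memory[state][0]
    def feedback(self, state, success):
        self.memory[state][1] += 1.0 if success else -1.0

critter = BetaClassLearner(["forward", "wait", "detour"], guess_below=-2.0)

def world_reacts(state, action):
    # Going forward while the zone is active means getting zapped;
    # detouring, or waiting for the food to be in the clear, succeeds.
    if state == "zone_active":
        return action in ("detour", "wait")
    return action == "forward"   # zone clear: go get the food

for trial in range(40):
    state = random.choice(["zone_active", "zone_clear"])
    action = critter.act(state)
    critter.feedback(state, world_reacts(state, action))

print(critter.memory)  # zapped actions lose confidence; re-guesses take over
```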

More information is at:

Numenta has an interesting model that was easy to qualify as intelligent in this way, which is what drew me to this forum. Adding “prediction” raised the bar another notch, but it’s more like something that a properly functioning system I’m familiar with produces all together, not a new thing that has to be added to an existing circuit diagram or algorithm. I want my definition, within its given limits, to work with what others have and to make sense to the areas of cognitive science it applies to. It’s something I put a lot of thought into and needed to share with you in this thread.

Slightly related to the topic at hand:
https://www.nature.com/articles/s41598-019-53510-w

I would start by building something, show its tricks to a friend or some humans, and judge their reactions instead. I think this is a simpler solution without a long laundry list. Just my 2 cents; I prefer this because I’m an engineer: start simple, evaluate results, iterate, evaluate results, and so on.

Because programs usually don’t surprise us if there’s no new idea; no need to code it.
I don’t think the OP lacks programming skills. Actually, this is a very basic question: what is intelligence? What are we going to do?

BTW, I saw something unbelievable in the posts above, ha ha ha ha ha ha :laughing: