Brain Building - Q1. Define Intelligence

Is this really what you are doing when you stand in front of the refrigerator and “shop” for a snack?
Maximum reward?

1 Like

I don’t know how the human brain works in that case, but maximum expected reward could provide a decision. Let’s find past experiences associated with a hungry state. There is a set of experiences of getting food from the refrigerator and of shopping for a snack. Every event in those experiences has some value associated with punishment and reward - for example, a snack from the market tastes better than one from the refrigerator. Multiply that reward by a weight that reflects the frequency of making such a decision, and we get the maximum expected reward in this case.

However, to avoid recalculating next time, there is a need to build direct connections between hunger and responses. I believe there is more than one decision-making system: at least one that compares options to reach a maximum expected reward, and another that directly reflects the last calculation and gives a quick response.
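To make this concrete, here is a minimal Python sketch of those two systems, assuming toy experience records of (action, reward, frequency weight); every name and number is illustrative, not a claim about how a brain stores anything:

```python
# Sketch of the two decision systems described above (all names are
# hypothetical): a slow system that computes a frequency-weighted
# expected reward over past experiences, and a fast system that
# caches the last result so no calculation is needed next time.

from collections import defaultdict

# Past experiences for the "hungry" state: (action, reward, frequency_weight)
experiences = [
    ("snack_from_fridge", 0.6, 0.7),   # convenient, chosen often
    ("snack_from_market", 0.9, 0.3),   # tastes better, chosen rarely
]

reflex_cache = {}  # state -> action: the fast "direct connection"

def deliberate(state, experiences):
    """Slow path: pick the action with maximum frequency-weighted reward."""
    scores = defaultdict(float)
    for action, reward, weight in experiences:
        scores[action] += reward * weight
    best = max(scores, key=scores.get)
    reflex_cache[state] = best  # store the result for the fast path
    return best

def decide(state, experiences):
    """Fast path first; fall back to deliberation on a cache miss."""
    return reflex_cache.get(state) or deliberate(state, experiences)

print(decide("hungry", experiences))  # deliberates: snack_from_fridge
print(decide("hungry", experiences))  # reflex: same answer, no calculation
```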

I assume you are still referring to the role of emotion in intelligence. Maybe I will use this as a case study to see if we can arrive at the same conclusion about whether emotion is a necessity or a nice-to-have in intelligence. If I learn 1 + 2 = 3, then 2 + 1 = 3, then, along with other associations, I am later able to derive and conclude that 1 + 1 + 1 = 3 without having priors on 1 + 1 + 1. I don’t believe this has to involve emotion (although emotion could potentially speed up or delay the learning process). From this, I think I can safely conclude that emotion is not a necessity but a nice-to-have. In particular, adding emotion would drastically complicate my objective of building a software simulation of the most basic biological brain with the most basic form of intelligence.

Humans don’t form the concept of number until rather late in development. The early math facts about numbers are shape and pattern recognition: the shapes of groups of things. Math facts are sequences of objects. It is only much, much later that you learn symbolic manipulation. By that time you have learned the representation of objects and the relationships of objects in the real world. Math flows from this, not the other way around. One of the common tests of cognitive development is “more” and “less.”

So - your agent says one plus one equals 9, and your teacher says bad, wrong answer, try again.
You feel bad because you did not please your teacher, so you learn that there are good and bad answers.
Why is 1+1=2 a good answer and 1+1=3 a bad answer?
The learning was not some math fact but the reinforcement of your social role in reading the markers of a happy teacher. A bad answer gives negative social reinforcement and a good answer gives positive social reinforcement. A good answer motivates you to continue with the whole pointless affair of reciting useless math facts (and only certain ones!) instead of the much more useful activities of eating or drinking.

A real AGI will have to have drives and motivations. Right and wrong will have some reinforcement values.

Objects in the environment will have salience. Why are some objects good or bad?
How do you intend to code for “hand in fire is bad”? And “a correct answer is good”?

I propose that this coding of good or bad (and shades of why it is good or bad) is a key part of learning every object and relation that forms a memory. If you don’t use my method of coding good and bad feelings about everything, how WILL you add this salience?

Please do not offer some sort of logical reasoning as your method. Even critters we consider to have very low intelligence don’t work through some sort of logical puzzle as they walk around looking for food and fleeing predators. They like things and fear things. Fear happens to be the most basic, as it is also the safest stance toward something unknown.

Any useful reinforcement learning will have some notion of a good or bad answer. You don’t have to call it happy and sad, but that is the answer nature came up with to label good and bad. You will end up emulating this, so why not cut to the chase and call it emotion in the first place? Once you do that, you can look at how nature uses this reinforcement learning - it clearly evolved for a useful purpose.
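As a minimal illustration of that point, here is a toy tabular value-learning sketch (standard epsilon-greedy learning, nothing brain-specific; the scenario and numbers are invented): whatever we choose to call the scalar reward, the learner cannot function without some “good/bad” label.

```python
# Minimal tabular learning sketch: whatever we name the reward signal
# ("happy/sad", "good/bad", "valence"), any useful reinforcement
# learner needs it to label answers.

import random

ANSWERS = [1, 2, 3, 9]         # candidate answers to "1 + 1 = ?"
q = {a: 0.0 for a in ANSWERS}  # learned value of each answer
ALPHA, EPSILON = 0.5, 0.2

def teacher(answer):
    """The social reinforcement signal: +1 for approval, -1 for disapproval."""
    return 1.0 if answer == 2 else -1.0

for _ in range(200):
    # epsilon-greedy: mostly exploit the best-known answer, sometimes explore
    if random.random() < EPSILON:
        answer = random.choice(ANSWERS)
    else:
        answer = max(q, key=q.get)
    q[answer] += ALPHA * (teacher(answer) - q[answer])

print(max(q, key=q.get))  # converges on 2, the "good" answer
```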

Please note that understanding social roles has been an important goal for robots for a long time - how can you have a machine that interacts with humans if it doesn’t understand the markers of approval and disapproval?

3 Likes

Thanks for explaining this with easy examples!

I agree with your answer, except on your use of emotion-related words.
For instance, I suggest a slight rewording of those sentences:

They approach/do things and escape/avoid things. Escape happens to be the most basic as it is also the safest view to deal with something unknown.

why not cut to the chase and call it drive/affect in the first place


“Fear” is a human emotion/feeling, and we have no objective reason to project it onto other creatures. If critters have emotions, they develop their own, which should have their own dedicated words. However, it is reasonable to speculate that critters have affects/drives but don’t have emotions.

Maybe you will say that it is only vocabulary. Right, but I think it is useful. This distinction is promoted by Lisa Feldman Barrett and Joseph LeDoux in their respective recent books (if I haven’t misunderstood their explanations!).

NB: This remark doesn’t change the thrust of your answer, with which I agree!

3 Likes

You are addressing half of what I am describing; your reducing it to the clinical “drive/affect” makes that clear.

The “other” part is the embedded judgement. This is what is missing from current AI attempts like deep learning and symbolic reasoning systems: the common sense that everyone points to as the failing of deep learning.

Yes, your memory contains patterns and transitions of patterns. But there is more. At the intersection between the cortex and the sub-cortical command and control center is the vital HC/EC complex. This is where your feelings intersect with your experience. In this encoding center the output of your limbic system - your feelings about an experience - is combined with the what and where of the experience. EVERYTHING that you experience! This blending of the buffered experience with how you feel about its outcome is running 24/7. Everything you experience gets a grade, and it is not just good or bad; emotion has multiple dimensions. During recall and mental operations this coloring has weight in your deliberations.

In effect, the judgement is built right into the memory. The recall of an object brings the judgement right with it - there does not have to be any logical reasoning from first principles. The combination of objects in mental manipulation has this weighting built in, so you tend to make good judgements without any reasoning at all.
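One way to picture this “judgement built into the memory” claim is a sketch like the following; the record layout and the particular affect dimensions (valence, arousal, fear) are illustrative assumptions, not a model of the actual HC/EC encoding:

```python
# Illustrative sketch: every stored experience carries an affective
# grade alongside its "what/where" content, so recall returns the
# judgement with the memory and no reasoning from first principles
# is needed. The three affect dimensions are an arbitrary choice.

from dataclasses import dataclass, field

@dataclass
class Memory:
    what: str
    where: str
    # Multi-dimensional grade attached at encoding time
    affect: dict = field(default_factory=dict)

store = []

def encode(what, where, **affect):
    """Blend the experience with the limbic 'grade' at storage time."""
    store.append(Memory(what, where, affect))

def recall(what):
    """Recall brings the judgement right along with the object."""
    return next(m for m in store if m.what == what)

encode("fire", "campsite", valence=-0.9, arousal=0.8, fear=0.9)
encode("apple", "kitchen", valence=0.6, arousal=0.2, fear=0.0)

m = recall("fire")
print(m.affect["valence"])  # -0.9: "hand in fire bad", no logic required
```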

I get that some people think the brain is like a symbolic computer, working along the lines proposed by Gary Marcus. I say that building the logic of good and bad into the coding fed to the network is how the brain does it with connectionism, so I side with Yoshua Bengio!

It’s not just a symbolic house; it’s (pattern & relationship & value-weighting) bricks too!

1 Like

Machines that have emotion/free will are dangerous and uncontrollable. This may be why some people worry that AGI is a dangerous thing. A machine like that is not AGI; it is an artificial human being.
What should AGI look like? - Machines provide solutions, and humans make choices/judgments.
I don’t need a machine to have all the common sense of human beings. In fact, the common sense of different people is not the same (think of the hero of the film Rain Man). Emotion is not on the laundry list unless you want to include it for fun.

So - no judgement in your AGI?
No idea if something is a good or bad idea?
Is a wrong or impractical answer just as good as a right one? If I say “take out the trash,” think of ALL the bad responses that could fulfill that request. It boggles the mind how many ways that could turn out badly.

When you say emotion, I think you are imagining anger, greed, and hate - and maybe love and other, more positive things. These are cartoon ideas that have been promoted by bad sci-fi writing.

I am thinking that there is some cluster of judgements stored along with objects and sequences. They don’t have to be human judgements, but they should be a utility function stored with the objects and actions.

1 Like

Yes, this is so true. Humans are dangerous and uncontrollable.

However, if you can make a Rottweiler then you could also make a Golden Retriever.

1 Like

yeah, for fun.

I think I see what you mean here: robots don’t need human emotions to do my homework. In industrial settings there are clearly defined tasks that should motivate robots.

However, if your robot interacts with the world at large, then it’s going to need to know how to deal with people, or else people will exploit it.

3 Likes

Which means that the machine already has self-awareness. I don’t know how many people would choose to make such a robot if it already had such capabilities.
If we could make a psychologist robot without additional problems, we would of course make it.
I don’t want to participate in discussions around self-awareness :D.

One thing I have been learning is that, as a novice in the field, I keep repeating the same mistake of using unfit examples to illustrate my point - thanks for pointing that out. Learning calculation is definitely too advanced and high-level to relate to my objective.

For that reason, I hope you are OK with me changing to another example to see if I can further understand whether emotion is a necessity or a nice-to-have in intelligence. But before I proceed, I want to emphasize again that

is never my intention at this stage, and I have emphasized before that I am NOT building human intelligence. I just want to build a software simulation of the most basic biological intelligence for now.

So instead of calculation, I would like to use object recognition and differentiation to find out whether emotion is a necessity in intelligence. Suppose we place an orange in front of a newborn with intelligence (not necessarily a human newborn) who has no prior model of an orange, then take it away and place a different orange in front of the newborn. With my limited knowledge, I believe the newborn can learn and conclude that the second orange is similar to the first, without even knowing it is an orange. And if we replace the orange with an apple, I believe the newborn can learn and conclude that the apple is not the same as the orange before. I believe this is all accomplished without any social interaction - without anyone saying right or wrong, and without any bad social reinforcement - yet the intelligence can learn, adapt, and conclude which objects are similar and which are different. The motive behind the learning is minimizing prediction error (or, as Friston puts it, the free-energy principle).
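A toy sketch of that prediction-error idea, with hand-made feature vectors standing in for whatever the newborn actually extracts (every feature, number, and threshold below is invented for illustration):

```python
# Toy prediction-error sketch: the agent keeps a prototype of what it
# has seen; a small prediction error means "similar", a large one
# means "different". Feature vectors here are invented stand-ins.

import math

def error(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# (roundness, color_hue, size) -- arbitrary illustrative features
orange_1 = (0.95, 0.30, 0.50)
orange_2 = (0.92, 0.33, 0.60)
apple    = (0.85, 0.05, 0.55)

prototype = orange_1   # the first encounter forms the model
THRESHOLD = 0.2        # arbitrary novelty threshold

for name, obj in [("orange_2", orange_2), ("apple", apple)]:
    e = error(prototype, obj)
    verdict = "similar" if e < THRESHOLD else "different"
    print(f"{name}: prediction error {e:.2f} -> {verdict}")
```

No reward signal appears anywhere in this loop; the only driver is the mismatch between the stored model and the new input.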

And even with your argument back on the calculation side, my interpretation is that emotion can assist learning, not that learning cannot occur without emotion. Say someone taught me 1+1=2, then 1+1+1=3, and then someone tells me 1+2=4. Without anyone telling me good or bad, and without any social reinforcement, my mind will continuously tell me something doesn’t match up and will keep deriving information until it aligns with my prediction.
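A tiny sketch of that internally driven mismatch detection, assuming the prior associations are simply tallies of 1s (a deliberate simplification); no reward or social signal appears in it:

```python
# Toy illustration of internally-driven error correction: with no
# reward signal, the agent expands numerals into tallies of 1s (its
# prior associations) and checks whether a taught statement is
# consistent with what it already knows.

def tally(n):
    """Represent a number as a count of 1s, e.g. 3 -> '1+1+1'."""
    return "+".join(["1"] * n)

def consistent(lhs_terms, rhs):
    # Expand each left-hand term into 1s and compare tally lengths.
    expanded = sum(len(tally(t).split("+")) for t in lhs_terms)
    return expanded == rhs

print(consistent([1, 1], 2))     # True: 1+1=2 fits the priors
print(consistent([1, 1, 1], 3))  # True: 1+1+1=3 fits the priors
print(consistent([1, 2], 4))     # False: "something doesn't match up"
```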

Please don’t get me wrong: I do think emotion plays a part in higher-order intelligence. Just thinking of how our brain can interpret something as funny in one context but not another shows what a highly complicated process that is. I just do not believe emotion is an essential part of very basic intelligence (again, my objective is NOT human intelligence).

So you are making simple object recognition a marker for intelligence.
Does that make the face recognizer in my Canon camera intelligent?
It can spot faces with a variety of presentations and scales.

1 Like

Following up on your treating learning as something that is not itself a complex, high-level activity: at the same time as you are putting fruit in front of the baby, she is looking all over at everything. That fruit is just one of a constellation of objects in the environment. Her eyes are being driven by subcortical structures to scan the shapes of objects - to segment the visual field into objects, the clusters of features that make up an object. The cortical structures that hold an object distribute its features over many maps and codings, and that decoding is itself a fairly complex activity.

Continuing on the drive/affect trail and learning:

1 Like

Just to clarify: first, I never described object recognition as “simple,” because I do not believe it is simple, even though it is so intuitive to us. Nor am I making it a marker for intelligence, because I don’t know what the marker is yet. With this exercise I have been thinking a lot about how to strip away as much as possible, hopefully arriving at the bottom layer and building upward from there. For that reason I have been focusing on what the brain in early development can do from a blank slate, and I think object recognition and differentiation could be one such capability.

If your Canon camera had no prior built in to recognize faces, was able to use the same approach to recognize oranges, apples, chairs, and dogs and to tell them apart well enough to make decisions, and used the same mechanism to deal with sound (again, without any prior built in), then yes, I would believe your Canon camera was expressing a sign of intelligence. More importantly for my objective, if its underlying mechanism were based on the biological design, with growing neurons and synapses driving the learning, understanding, and decision making, I would even say it was expressing a simulated biological intelligence. To deny that would be discriminating against this agent by refusing to call it intelligent. Wouldn’t you agree?

I don’t believe I ever suggested that recognizing objects is not a complex, high-level activity. I am merely suggesting that handling mathematical calculation would be too advanced for what my current objective aims to achieve. Many biological species can recognize objects, but not many can handle mathematics.

I have followed your posts, but I still do not see how emotion is involved in recognizing and differentiating objects visually. What would count as positive and negative reinforcement in the process of recognizing objects? If an infant (not necessarily human) looks at an apple for the first time, and the apple is then rotated and afterwards replaced with an orange, the infant will be able to recognize the same apple even rotated and to differentiate it from the orange. I don’t see where positive or negative reinforcement comes in, or how emotion is involved in the process - unless my understanding of emotion is somewhat different from that of most experienced neuroscientists.

There is built-in face recognition coupled with pleasure in humans. This seems to reside in the limbic system; you know - the “emotional” part?
Yes.
Just like in my camera, face recognition is built into humans. As are the shape and the fear of dangerous animals. I think this genetic gift is built into most of the animal kingdom. So your requirement that the camera learn faces is not really a good one. It already knows, just like most critters.

The mechanisms that point the eyes, deciding WHAT to look at and HOW to scan it, are almost completely subcortical. You experience what your subcortex decides you will look at. The subcortex has a bunch of built-in salience filters that pick things based on properties like motion and novelty. Once all that is settled, you get to remember whatever it picked. Those scanning algorithms are mostly hard-coded in very old brain structures. You don’t learn voluntary control until much later in the developmental process. And just to lay the whole “object as a picture” idea to rest: you remember a basket of features, not a picture. These features are scattered over many levels of the cortical hierarchy.

You ask about the emotion part of perception.

Here is something for you to ponder.
Why does the infant bother to look at anything at all?

Why do humans play and explore? What do they get out of it? What is that subcortex up to in the first place? Is there pleasure in exploring? Why does the infant look at the object? Why the apple and not just the table it is sitting on? Why does the novelty of the apple and the orange draw attention in the first place?

One of the most basic human behaviors is to seek out novelty in controlled doses. In HTM theory we say that a difference between perception and memory triggers bursting, or learning. Put another way, surprise is the trigger of learning. When something is familiar we don’t pay much attention to it. The brain is structured around learning everything around it - we are learning machines, and the whole process is mostly automatic.
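Sketched very loosely in code (this is a caricature of the “surprise triggers learning” idea, not real HTM - actual bursting happens over sparse distributed representations, not strings):

```python
# Loose sketch of "surprise is the trigger of learning": inputs that
# match an existing prediction pass quietly; a mismatch "bursts" and
# gets learned. A caricature of HTM bursting, not real HTM.

memory = set()

def perceive(pattern):
    if pattern in memory:
        return "familiar - little attention paid"
    memory.add(pattern)            # the burst: novelty gets stored
    return "surprise! bursting and learning"

print(perceive("apple on table"))   # surprise! bursting and learning
print(perceive("apple on table"))   # familiar - little attention paid
print(perceive("orange on table"))  # surprise! bursting and learning
```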

Connections from the maps back to the limbic system are part of what triggers exploring behavior. If you think of the various maps/areas in the cortical hierarchy, there is some innate desire to put something in those maps - to explore and, by exploring, to add things to those storage areas. The end effect is that maps “want” to burst - to be filled with new information.

We clever humans have learned to present small batches of information that fill as many maps as possible with about as much as you can learn in one day - see school schedules for examples. For this to work you have to see personal relevance so you will be interested; without that there is poor attention and learning. The sleep mechanism consolidates this new information so you are ready for more.

Exploration is one of the built in behaviors. Learning is rewarded with a feeling of satisfaction.

There is survival value in this playing and exploring: to add useful behaviors and knowledge of the environment as the stuff of adaptive behavior later; to know where the food and shelter necessary for survival are to be found and how to use them; to know about the social structures that are part of being a social animal. These are built-in drives. Exploring and playing bring pleasure. Exploring with your eyes and learning bring pleasure.

Being shut off from exploring brings pain - the essence of punishment by incarceration. Think of the featureless gray walls of a prison: they are all part of the punishment.

3 Likes

I have had exchanges with many AI newbies on many occasions and at various levels of depth. Most seem to start out with some sort of “folk wisdom” about what intelligence is and some ideas, drawn from introspection, about how it might work.

Almost every one of them starts out with some idea about which parts of the laundry list will make up a usable AI. I have yet to see one who has really sat down and thought through what they really want or what they will get with the proposal they put forward. Most reject emotion, and many downplay the critical nature of the command and control structures built into the older parts of the brain.

I went through much the same mental evolution, so it is a bit easier to see it when others are walking the same path. It goes something like this:

I want a magic calculator that can do or solve anything.
You mean like excel? Won’t that take a lot of programming?

NO - I want it to have a powerful built in learning mechanism so it can program itself.
You mean like skynet?

NO - I want to put limits on it so it can only do good things.
You mean - if you make a mistake in programming, it will gain self-control and, because it really is powerful, it will destroy us all!

NO - It won’t have any [fill in the blank - no sense of self, no emotion, no xxx] so it can’t runaway and kill us all.
Every one of the limits imposed really would not work. For example: without a sense of self, how will it interact? It has to have a marker for “me” in every interaction so it knows when it is the one being addressed; it has to have a physical location to run end effectors, … Even Alexa has to know you are speaking to it so it knows that you want it to do something. A really smart AI will have a strong sense of self and self-history, or it will be very limited.

This goes on for several rounds but in most cases the person I am talking to does not want a person - they just want a magic genie that can do no wrong.

Looking at the only example of functioning human-level intelligence we have (humans), we see that one of the critical features is strong socialization. When a baby is frustrated it is SO angry; ask any parent. If babies had the capabilities of a fully grown human they would wreak great destruction on the source of their frustration - usually a fellow human. It’s a good thing we socialize humans before they get big and powerful. We have to build in a sense of right and wrong - which actions are and are not acceptable - and it has to be built in and reinforced all through the development process.

One very good piece of advice is not to try to interact with wild animals. They are NOT socialized, and their behavior set does NOT include any bias against attacking humans. If the situation calls for defending against a human, or harvesting one for food, they do it. Animals don’t screw around; killing is very much part of the built-in behaviors in the wild. You don’t want any powerful machine without this key feature of right and wrong.

I offered this definition of intelligence before, and most people seem to think it is too simplistic. I would invite you to consider it with an open mind and think about how it could be developed into a functioning AI.
Intelligence - the quality of the processing of the sensory stream (internal and external) that ties that stream to prior learning in useful ways. The end result is the selection of the best action for the perceived situation.
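Taken at face value, the definition can be transcribed directly into a processing loop. Every function below is a placeholder of my own naming - the definition deliberately says nothing about how any of them would be implemented:

```python
# Direct transcription of the definition above into a loop skeleton.
# All function names are placeholders, not a proposed implementation.

def agent_step(sensory_stream, memory):
    perception = process(sensory_stream)                  # processing of the stream...
    matches = tie_to_prior_learning(perception, memory)   # ...tied to prior learning...
    return select_best_action(matches)                    # ...ending in action selection

def process(stream):
    return stream  # stand-in: real processing would segment and encode

def tie_to_prior_learning(perception, memory):
    return [m for m in memory if m["situation"] == perception]

def select_best_action(matches):
    if not matches:
        return "explore"  # nothing learned yet: fall back to exploration
    return max(matches, key=lambda m: m["value"])["action"]

memory = [{"situation": "hungry", "action": "open fridge", "value": 0.4},
          {"situation": "hungry", "action": "go to market", "value": 0.3}]
print(agent_step("hungry", memory))  # open fridge
print(agent_step("tired", memory))   # explore (no prior learning matched)
```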

1 Like

Your definition of intelligence fits my definition of perception! :thinking:

Do you draw a distinction between intelligence and perception?