Yes, if a system can generate new memories without deleting old ones, forgetting is unnecessary.
Forgetting is definitely necessary for storing and recalling memories efficiently.
Forgetting could emphasize critical memories. For example, when trying to solve a math problem, a person wishes the most relevant solutions would come to mind. However, if the person also remembers the environment in which he solved the problem before, that memory could interfere with the current task. When working out the sum of 1+1, the color of the pen, the lighting of the house, and the texture of the table occupy one's mind instead of how to do a summation.
I think emotion could influence memory storage, and memory then contributes to most intelligence tasks. Emotion (maybe via neurotransmitters) acts as the reward, and synapse strength as the frequency.
I haven't done massive reading on this yet, but I do agree some form of forgetting is important. I don't know whether forgetting is erasure of the memory or just a weakening of it. I have chosen weakening for now and might tweak that later.
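As a minimal sketch of what the weakening option could look like (the decay rate, recall boost, and threshold below are placeholder numbers of mine, nothing established):

```python
# Minimal sketch: forgetting as weakening, not erasure.
# A memory's strength decays over time and is boosted on recall;
# memories below a threshold stop surfacing but are never deleted.

DECAY = 0.95        # per-step retention factor (assumed)
RECALL_BOOST = 0.3  # strengthening on successful recall (assumed)
THRESHOLD = 0.1     # below this, a memory no longer surfaces

class Memory:
    def __init__(self, content, strength=1.0):
        self.content = content
        self.strength = strength

    def tick(self):
        """One time step: the memory weakens but is never removed."""
        self.strength *= DECAY

    def recall(self):
        """Return the content only if the memory is still strong enough."""
        if self.strength < THRESHOLD:
            return None          # effectively forgotten, yet still stored
        self.strength = min(1.0, self.strength + RECALL_BOOST)
        return self.content
```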
I don't disagree that emotion could have an influence, but for my objective I do not believe emotion is a necessity. I believe intelligence can exist without emotion.
I think of it as a label on experience. It is useful when calculating the maximum expected reward and when drawing analogies between two experiences.
Is this really what you are doing when you stand in front of the refrigerator and “shop” for a snack?
Maximum reward?
I don't know how the human brain works in that case, but maximum expected reward could provide a decision. Find the experiences associated with a hungry state: there is a set of experiences of getting food from the refrigerator and of shopping for a snack. Every event in those experiences has some value associated with punishment and reward; for example, a snack from the market tastes better than one from the refrigerator. Multiply that reward by a weight that reflects the frequency of making such a decision, and we get the maximum expected reward for this case.
However, to avoid redoing the calculation next time, there is a need to build direct connections between hunger and responses. I believe there is more than one decision-making system: at least one that makes comparisons to reach the decision with the maximum expected reward, and another that directly replays the last calculation to give a quick response.
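A rough sketch of those two systems; the actions, rewards, and frequency weights are invented for illustration:

```python
# Slow path: frequency-weighted expected reward over past experiences.
# Fast path: a cache that replays the last decision for a quick response.

experiences = {
    "hungry": [
        # (action, reward, frequency_weight) -- invented numbers
        ("raid_refrigerator", 0.6, 0.7),   # convenient, decent snack
        ("shop_for_snack",    0.9, 0.3),   # tastier, but done less often
    ],
}

decision_cache = {}  # state -> action chosen last time

def decide(state):
    """Return an action: fast path if cached, slow path otherwise."""
    if state in decision_cache:            # direct connection, no calculation
        return decision_cache[state]
    options = experiences[state]
    # expected value = reward * frequency weight; pick the maximum
    action, _, _ = max(options, key=lambda o: o[1] * o[2])
    decision_cache[state] = action         # build the direct connection
    return action

print(decide("hungry"))  # slow path -> 'raid_refrigerator' (0.6*0.7 > 0.9*0.3)
print(decide("hungry"))  # fast path, cached
```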
I assume you are still referring to the role of emotion in intelligence. Maybe I will use this as a case study to see if we can arrive at the same conclusion about whether emotion is a necessity or a nice-to-have in intelligence. If I learn 1 + 2 = 3, then 2 + 1 = 3, then along with other associations I am later able to derive and conclude that 1 + 1 + 1 = 3 without having any prior on 1 + 1 + 1, I don't believe this has to involve emotion (although emotion could potentially speed up or delay the learning process). From this, I think I can safely conclude emotion is not a necessity but a nice-to-have. In particular, adding emotion would drastically complicate my objective of building a software simulation of the most basic biological brain with the most basic form of intelligence.
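As a toy illustration, that derivation can run on stored associations alone, with no reward signal involved (the fact table is invented):

```python
# Toy derivation: conclude 1+1+1 = 3 from stored associations alone,
# with no reward signal and no social feedback involved.

facts = {("1", "1"): "2", ("1", "2"): "3", ("2", "1"): "3"}  # learned pairs

def derive(terms):
    """Reduce a list of terms left-to-right using only known facts."""
    result = terms[0]
    for term in terms[1:]:
        result = facts[(result, term)]  # look up the stored association
    return result

print(derive(["1", "1", "1"]))  # '2' from 1+1, then '3' from 2+1 -> '3'
```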
Humans don't form the concept of number until rather late in the learning sequence. The early math facts about numbers are shape and pattern recognition: the shapes of groups of things. Math facts are sequences of objects. It is only much, much later that you learn symbolic manipulation. By that time you have learned the representation of objects and the relationships of objects in the real world. Math flows from this and not the other way around. One of the common tests of cognitive development is "more" and "less."
So - your agent says one plus one equals 9 and your teacher says bad, wrong answer, try again.
You feel bad because you did not please your teacher, so you learn that there are good and bad answers.
Why is 1+1=2 a good answer and 1+1=3 a bad answer?
The learning was not some math fact but reinforcement of your social role in seeing the markers of a happy teacher. A bad answer gives negative social reinforcement and a good answer gives positive social reinforcement. A good answer motivates you to continue with the whole pointless affair of reciting useless math facts (and only certain ones!) instead of the much more useful activities of eating or drinking.
A real AGI will have to have drives and motivations. Right and wrong will have some reinforcement values.
Objects in the environment will have salience. Why are some objects good or bad?
How do you intend to code for hand in fire bad? And correct answer is good?
I propose that this coding of good or bad (and shades of why it is good or bad) is a key part of learning every object and relation that forms a memory. If you don't use my method of coding good and bad feelings about everything, how WILL you add this salience?
Please do not offer some sort of logical reasoning as your method. Even critters that we consider to have very low intelligence don't work through some sort of logical puzzle as they walk around looking for food and fleeing predators. They like things and fear things. Fear happens to be the most basic as it is also the safest view to deal with something unknown.
Any useful reinforcement learning will have some notion of a good or bad answer. You don't have to call it happy and sad, but that is the answer nature came up with to label good and bad. You will end up emulating this, so why not cut to the chase and call it emotion in the first place? Once you do that, you can look at how nature uses this reinforcement learning; it clearly evolved it for a useful purpose.
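In reinforcement-learning terms, this good/bad label is just a signed scalar folded into a value estimate. A minimal sketch (the learning rate and valences are placeholder numbers):

```python
# Minimal sketch: reinforcement with a signed "good/bad" label.
# Whatever we call it, each answer's value is nudged toward the
# valence the environment hands back.

ALPHA = 0.5  # learning rate (assumed)
values = {}  # answer -> learned value

def reinforce(answer, valence):
    """valence > 0 means 'good answer', valence < 0 means 'bad answer'."""
    old = values.get(answer, 0.0)
    values[answer] = old + ALPHA * (valence - old)

reinforce("1+1=2", +1.0)   # teacher smiles
reinforce("1+1=9", -1.0)   # teacher frowns
reinforce("1+1=2", +1.0)
print(max(values, key=values.get))  # -> '1+1=2'
```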
Please note that understanding social roles has been an important goal for robots for a long time - how can you have a machine that interacts with humans if it doesn't understand the markers of approval and disapproval?
Thanks for putting this with easy examples!
I agree with your answer, except on your use of emotion-related words.
For instance, I suggest a slight rewording of those sentences:
They approach/do things and escape/avoid things. Escape happens to be the most basic as it is also the safest view to deal with something unknown.
why not cut to the chase and call it drive/affect in the first place
“Fear” is a human emotion/feeling, and we have no objective reason to project it on other creatures. If critters have emotions, they develop their own that should have their own dedicated word. However, it is reasonable to speculate that critters have affects/drives but don’t have emotion.
Maybe you will say that it is only vocabulary. Right, but I think it is useful. This distinction is advocated by Lisa Feldman Barrett and Joseph LeDoux in their respective recent books (if I haven't misunderstood their explanations!)
NB: This remark doesn’t change the thrust of your answer on which I agree!
You are addressing half of what I am describing, and your reduction of it to the clinical "drive/affect" makes that clear.
The “other” part is the embedded judgement. This is what is missing from current AI attempts like deep learning and symbolic reasoning systems. The common sense that everyone points to as the failing of deep learning.
Yes, your memory contains patterns and transitions of patterns. But there is more. At the intersection between the cortex and the sub-cortical command-and-control center is the vital HC/EC (hippocampus/entorhinal cortex) complex. This is where your feelings intersect with your experience. In this encoding center, the output of your limbic system - your feelings about an experience - is combined with the what and where of the experience. EVERYTHING that you experience! This blending of experience with how you feel about the outcome of the buffered experience is running 24/7. Everything you experience gets a grade, and it is not just good or bad: emotion has multiple dimensions. During recall and mental operations this coloring has weight in your deliberations.
In effect, the judgement is built right into the memory. The recall of an object brings the judgement right with it; there does not have to be any logical reasoning from first principles. The combination of objects in mental manipulation has this weighting built in, so you tend to make good judgements without any reasoning at all.
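Read as a data structure, that might look like the following; the valence dimensions here are invented labels for illustration, not a claim about the limbic system:

```python
# Sketch: judgement stored with the memory itself.
# Each encoded experience carries a multi-dimensional valence vector,
# so recall returns the "grade" with no reasoning step. The dimensions
# (pleasantness, safety, usefulness) are invented for illustration.

memory_store = {}

def encode(what, where, valence):
    """Bind content and context to a feeling about the outcome."""
    memory_store[what] = {"where": where, "valence": valence}

def recall(what):
    """Recall brings the judgement along with the object."""
    return memory_store.get(what)

encode("fire",  "campsite", {"pleasantness": -0.9, "safety": -1.0, "usefulness": 0.4})
encode("snack", "kitchen",  {"pleasantness":  0.8, "safety":  0.9, "usefulness": 0.6})

print(recall("fire")["valence"]["safety"])  # -1.0: no reasoning required
```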
I get that some people think the brain is like a symbolic computer, working along the lines proposed by Gary Marcus. I say that building the logic of good and bad into the coding fed to the network is how the brain does it with connectionism, so I side with Yoshua Bengio!
It’s not just a symbolic house, it’s (pattern & relationship & value weighting) bricks too!
Machines which have emotion/free will are dangerous and uncontrollable. This may be why some people worry that AGI is a dangerous thing. A machine like that is not AGI; it is an artificial human being.
What should AGI look like? - Machines provide solutions, and humans make choices/judgments.
I don't need a machine to have all the common sense of human beings. In fact, the common sense of different people is not the same (think of the hero of the film Rain Man). Emotion is not on the laundry list unless you want to include it for fun.
So - no judgement in your AGI?
No idea if something is a good or bad idea?
A wrong or impractical answer is just as good as a right answer? If I say "take out the trash", think of ALL the bad responses that could fulfill that request. The mind boggles at all the ways that could turn out badly.
When you think emotion I think that you are thinking anger, greed, and hate - and maybe love and other, more positive things. These are cartoon ideas that have been promoted by bad sci-fi writing.
I am thinking that there is some cluster of judgements stored along with objects and sequences. They don't have to be human, but they should act as a utility function stored with the objects and actions.
Yes, this is so true. Humans are dangerous and uncontrollable.
However, if you can make a Rottweiler then you could also make a Golden Retriever.
yeah, for fun.
I think I see what you mean here: robots don't need human emotions to do my homework. In industrial settings there are clearly defined tasks that should motivate robots.
However, if your robot interacts with the world at large, then it's going to need to know how to deal with people, or else people will exploit it.
That would mean the machine already has self-awareness. I don't know how many people would choose to make such a robot if it already had such capabilities.
If we could make a psychologist robot without additional problems, we would of course make it.
I don’t want to participate in discussions around self-awareness :D.
One thing I have been learning is that, as a novice in the field, I keep repeating the same mistake of using an unfit example to illustrate my point, and thanks for pointing that out. Learning calculation is definitely too advanced and high-level to relate to my objective.
For that reason, I hope you are OK with me changing to another example to see if I can further understand whether emotion is a necessity or a nice-to-have in intelligence. But before I proceed, I want to emphasize again that building human intelligence is never my intention at this stage; I have emphasized before that I am NOT building human intelligence. I just want to build a software simulation of the most basic biological intelligence for now.
So instead of calculation, I would like to use object recognition and differentiation to find out whether emotion is a necessity in intelligence. If we place an orange in front of a newborn with intelligence (not necessarily a human newborn) who has no prior model of an orange, then take it away and place a different orange in front of the newborn again, then with my limited knowledge I believe the newborn can learn and conclude that the second orange is similar to the first without even knowing it is an orange. And if we replace the orange with an apple, I believe the newborn can learn and conclude that the apple is not the same as the orange before. I believe this is all accomplished without any social interaction, without anyone telling them right or wrong, and without any bad social reinforcement. Yet the intelligence can learn, adapt, and conclude which objects are similar and which are different. The motive behind the learning is minimizing prediction error (or, as Friston states it, the free-energy principle).
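A toy sketch of that kind of label-free similarity judgement, driven only by prediction error; the feature vectors and threshold are made up:

```python
# Sketch: unlabeled similarity judgement via prediction error.
# No one tells the learner right or wrong; novelty alone drives memory.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.5  # assumed: below this, "same kind of thing"

known = []  # prototypes seen so far, no names attached

def observe(features):
    """Return True if this matches something seen before."""
    for proto in known:
        if distance(features, proto) < THRESHOLD:  # low prediction error
            return True
    known.append(features)  # novel: remember it, still unnamed
    return False

orange1 = [0.90, 0.80, 0.10]  # made-up (roundness, orange-ness, red-ness)
orange2 = [0.85, 0.82, 0.12]
apple   = [0.90, 0.10, 0.90]

print(observe(orange1))  # False: first encounter
print(observe(orange2))  # True:  similar to the first orange
print(observe(apple))    # False: doesn't match, a new kind of thing
```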
And even with your argument back on the calculation side, my interpretation is that emotion can assist learning, but it is not the case that learning cannot occur without emotion. Say someone taught me 1+1=2, then 1+1+1=3, and then someone tells me 1+2=4. Without anyone telling me good or bad and without any social reinforcement, my mind will keep telling me something doesn't match up and will keep deriving until it aligns with my prediction.
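A toy sketch of that mismatch-driven checking, with no reward or social signal anywhere; the fact encoding (numbers reduced to tallies of ones) is invented:

```python
# Sketch: prediction mismatch as the drive to keep deriving.
# Terms are reduced to tallies of 1s so any sum can be checked against
# prior structure; a mismatch is flagged with no reward signal at all.

def tally(expr):
    """'1+2' -> total count of ones implied by the expression (1+2 -> 3)."""
    return sum(int(t) for t in expr.split("+"))

beliefs = {"1+1": 2, "1+1+1": 3}  # previously learned, both consistent

def hear(expr, claimed):
    """Check a new claim against what prior structure predicts."""
    predicted = tally(expr)  # what the learned structure implies
    if predicted != claimed:
        return f"doesn't match up: expected {predicted}, heard {claimed}"
    beliefs[expr] = claimed
    return "fits my prediction"

print(hear("1+2", 4))  # -> doesn't match up: expected 3, heard 4
```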
Please don't get me wrong: I do think emotion plays a part in higher-order intelligence. Just think of how our brain can interpret something as funny in one context but not another; that is a highly complicated process. I just do not believe emotion is an essential part of very basic intelligence (again, my objective is NOT about human intelligence).
So you are making simple object recognition a marker for intelligence.
Does that make the face recognizer in my Canon camera intelligent?
It can spot faces with a variety of presentations and scales.