Discussion about Emotions

An AI does not need emotions; it needs a system of motivations, which in humans we call emotions. There is no need to call it that in AIs; we can stick with the dispassionate “system of motivations”.


Agreed… it makes me much less fearful. However, unraveling the emotional piece in humans might help us understand what the useful motivations are! It is probably a good idea to consider the unintended consequences of motivations when designing them: keep the good, get rid of the bad. It seems long-term self-preservation drives are for the most part good; short-term drives often lead to the bad.
Self-driving cars are a good example. Compare everyone merging on a long-term motivation, what is in the interest of the common good (everyone gets where they need to go in the most efficient way), with a short-term motivation, what will get me there first (no one goes anywhere, because no one is playing by the same rules or with the same information). We poor humans then perceive traffic as a threat, so we make driving decisions based on fear or rage.
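A toy sketch of that merging point, nothing rigorous, just the claim in code (the model and the numbers are invented for illustration): if drivers zipper-merge, one car clears the merge per time step; if everyone lunges for the same gap, roughly half the steps are wasted on contention and throughput halves.

```python
import random

def merge_time(n_cars, cooperative, seed=0):
    """Toy merge model: n_cars queue up at a single merge point.

    Cooperative drivers zipper-merge: exactly one car clears the merge
    per tick. Greedy drivers all lunge for the same gap, so a tick is
    often wasted on contention (a crude stand-in for stop-and-go).
    """
    rng = random.Random(seed)
    remaining, ticks = n_cars, 0
    while remaining > 0:
        ticks += 1
        if cooperative or rng.random() >= 0.5:
            remaining -= 1      # one car gets through this tick
        # else: two cars contested the gap, both braked, tick wasted
    return ticks

print("zipper merge:", merge_time(20, cooperative=True))    # 20 ticks
print("greedy merge:", merge_time(20, cooperative=False))   # ~40 ticks on average
```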

I now see this post with a clarity I previously did not… I suppose we should make some sort of a catalogue of motivations? And do we want to teach these systems what their motivations should be or do we want them to discover them?

Been done:

http://catalogue.pearsoned.co.uk/samplechapter/0132431564.pdf


I figured as much as soon as I wrote it, but I figured someone like you would be able to serve it up… so thank you, I now have some good reading ahead!

If emotions are a survival aid shortcut (a heuristic?) that has evolved due to our high decision latency, wouldn’t they ultimately be unnecessary once we can implement intelligence in hardware we are able to optimise?

In computer/software engineering, when hardware is a bottleneck we have the choice of dumbing down the algorithm or optimising/scaling the hardware. Once we figure out the intelligence algorithm, there are bound to be situations where we are faced with this choice.

I think you are missing the point.
The perception of emotion is analogous to the perception of color.

If you say that something is red or green colored, you can decide to do things based on color.
How would you program around “color” and deal with visual perception?

By the same token, you can color the perception of things with good or bad. Once you can see that, you can extend it: how will you deal with decisions without a tag for “good” or “bad”? Looking at the larger catalog of emotions I just posted, you can see how that would feed into decisions. How do you propose to “code around” this?


I don’t propose that… I propose that many of those emotions or motivations are irrelevant to a machine, so the question is rather which ones are relevant in the kinds of contexts we wish to apply AI to. I have to deal with all of them… you just have to figure out which ones are important to just learning.
I think drives exist to preserve life, preserve the stability of life-support systems, and propagate life by passing on the informational code, in that order. The primary drives to preserve our own lives make us nasty, the drives which preserve stability make us cooperative, and the drives that make us propagate make us… well… there are probably a lot of quirks of human nature tied up in that which we wouldn’t want in a learning system either.

Why limit the AGI to “just learning”?

Looking at this list of “emotions”, you can imagine that bundling some of this into the action-selection process could make a far more sophisticated machine. I can see that you could make a machine that “likes” music or art. I can see a much better human companion; perhaps even a skilled therapist.

You may want to keep a lid on the rage/anger bits.

I agree completely… I think our latest posts got out of sync… have a look. I think AI could and should include much of the diagram, but I think much of the diagram evolves from basic learning principles… it doesn’t need to be hard-coded. It is learned over time… wisdom?
In education we seem to have the reverse problem of neuroscience: we have a mountain of theory with some evidence, but the catch is that the evidence is data skewed in favour of political agendas. When some new trend is spotted in the data, everyone piles on and the theories proliferate. What’s needed is a theory which unifies… hence my quest. I think many of the things in the chart you posted are relevant, but many are suited more to self-preservation and propagation, as biology needs to dictate for obvious reasons, namely that you and I are debating what it means to learn. I admit I have not fully digested all that’s in the papers you sent me; however, I find this discourse far more time-efficient, as I inevitably seem to be coming to the same conclusions before they are fully digested.

What about Zipf’s law and the 80/20 rule? I guess what I’m wondering is whether there might be some basic decision-making algorithm which operates on this principle in some way… the decisions could be made increasingly more complex if the NC was used to put increasingly more complex information in front of this judge. However, the NC has another trick up its sleeve: as it passes the information down, it runs it through a filter that puts an emotional bias on it to help the judge. I’m not sure how the 80/20 weighting works, but I’m pretty sure it’s in there somewhere. Is it as simple as 80 self, 20 common good? If traffic operated on the inverse principle, wouldn’t it flow?
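To make that “judge” idea concrete, here is a minimal, purely hypothetical sketch: each option gets a score that is a weighted mix of benefit to self and benefit to the common good, plus an optional emotional-bias tag from the filter. The option names, weights, and numbers are all invented for illustration, not anything from the thread or from neuroscience.

```python
def judge(options, w_self=0.8, w_common=0.2, bias=None):
    """Score options as w_self * self_benefit + w_common * common_benefit,
    optionally nudged by an additive emotional-bias tag, and pick the best."""
    bias = bias or {}
    scored = {
        name: w_self * s + w_common * c + bias.get(name, 0.0)
        for name, (s, c) in options.items()
    }
    return max(scored, key=scored.get), scored

options = {
    "cut in front": (0.9, 0.1),   # gets me there first, jams everyone else
    "zipper merge": (0.5, 0.9),   # slightly slower for me, traffic keeps flowing
}

print(judge(options, w_self=0.8, w_common=0.2))   # 80 self / 20 common -> "cut in front"
print(judge(options, w_self=0.2, w_common=0.8))   # inverse weighting   -> "zipper merge"
```

With the 80 self / 20 common weighting the selfish move wins; invert the weights and the cooperative merge wins, which is the “inverse principle” question about traffic above.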

Context is king. It shapes what is possible to know as true.

If I’m walking down the street and a stranger slaps me in the face…
If I’m in martial arts class and a classmate slaps me in the face…

The exact same action, but TOTALLY different meanings. What the action CAN mean changes depending on the context placed around it.

There was a baseball player who said every once in a while a pitch would come at him with “home run” written on it. It wasn’t a conscious conclusion, because at >90 mph there is no time to “think”. But his “context” pulled him toward extraordinary results. Who he was before arriving at the plate, the context he was “being”, determined the potency of his actions.

An AGI maybe needs to arrive on the scene having its actions flow out of a given context of betterment of everything around it. Betterment of human lives. Betterment of environmental conditions, etc.

…but benignly without perverse instantiation. The context would be the comparative measure of feedback for its actions, and the guiding principle upon which to evolve future actions?

How about that?


Wow… how do you code that, and does 80/20 or some better ratio fit in? How is it coded in our brains, and why don’t we have it coded the right way round? Not enough time for evolution?

I don’t know what you’re referring to by “80/20”. Is there some reading you can point me to?

I didn’t make that stuff up. It comes from modern philosophers and human-potential experts. But in that field of study, a human being can program themselves toward extraordinary action by first clearing themselves of hidden untruths: clearing themselves and realizing that there is no meaning to things inherently. It is what Zen masters call “centering”.

First creating nothing. (An AGI would have a head start because it wouldn’t have to deal with inherited truths and misnomers - it could start out “empty” of mind).

Then, because you can’t create from change: change requires keeping the old stuff around, since you have to keep the starting point in order to know things have changed; you have to keep the old around to compare things to. This is why “change” is ineffective.

So first we have to be able to create “nothing”. Then we can actually create on top of it, and not on a canvas that is already full of paint.

The prime directive or context for an AGI might be that which improves the substrate in which it exists?


https://youtu.be/fCn8zs912OE
This made the rounds a week ago… interesting for sure… connected, I think… why else would it be so pervasive?


That’s awesome - still watching…

20% novelty gets 80% attention, and vice versa… or 80% benefit to self vs. 20% altruism? Or is it just 80/20 from the start, applied to increasingly complex choices?

I think there is the realization that altruism and self-help are indistinguishable… In the same way, if I live in a neighborhood dominated by Hondas and I go out and buy a Rolls-Royce, I then have to have a garage and an alarm system and other protective measures to ensure the security of my disproportionately expensive property. So the more valuable something is compared to what others might have, the less freedom I have with it.

So freedom is inversely proportional to the pervasiveness of a thing. Therefore, real prosperity can’t exist in an isolated condition because its appreciation gets constrained inversely according to its conspicuousness.

So in order to experience true prosperity - everyone around us must be prosperous as well… so altruism is maybe actually the way to benefit one’s self?

Could be… maybe long-term benefit vs. short-term gain? I don’t disagree… still, it seems so many problems could be solved if we put our short-term selves second to our long-term selves… the penalty for too little is much higher than the penalty for too much, from a biological standpoint. There has to be some simple principle at work here that works its way up to the neocortex… otherwise how does a frog or a stink bug or a phytoplankton do it?

I’ve heard someone here say (was it Jeff Hawkins?) that there is movement toward a beneficial gradient: in what direction is there a more robust opportunity for a beneficial environment and food?
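That “beneficial gradient” idea is simple enough to sketch. Below, assume a hypothetical agent that samples a benefit field (food, favourable conditions) at the neighbouring positions and steps toward whichever scores highest; the field and the step rule are invented for illustration, and a frog or a phytoplankton presumably does something far messier.

```python
def benefit(x, y):
    """Hypothetical benefit field: richest conditions at (5, 5)."""
    return -((x - 5) ** 2 + (y - 5) ** 2)

def climb(x, y, steps=20):
    """Greedy hill climbing: repeatedly step to the best neighbouring cell."""
    for _ in range(steps):
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        x, y = max(neighbours, key=lambda p: benefit(*p))
    return x, y

print(climb(0, 0))   # -> (5, 5): the agent drifts up the beneficial gradient
```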