Ok, I get it, I’m the emotions guy… here’s why: your network can’t really learn other than by patterning unless you build a judgement component into it. Emotions serve this purpose, for better or worse, in the human brain. In this paper - HTMs, Time Locking, and Neural Oscillation (Brain Waves) - you may find the reason why we learn.
I do not suggest that you try to build a computer with emotions, but rather that you consider the value judgement that emotions bring to any decision. An extra dimension? If I understand it correctly, boosting and inhibition are supposed to serve these functions, but I think these ideas about whether to boost or inhibit come from emotions… and that emotional state data is mapped onto memory to help in making value judgements in the future.

SEEKING is a drive based on emotional state, and it naturally leads to a PLAY drive provided that the machinery to PLAY actually exists. Your model does not include PLAY. The seeking is handled by the temporal pooling, but the play - where memories are compared against similar memories to return a novel result - is not accounted for. What if one memory that was similar to another memory was compared against it, and the result was fed forward to make a prediction? Would that not aggregate the information from both memories into one prediction, and then, if it were borne out, would it not boost the new memory? (A toy sketch of what I mean is below.)

It seems to me from observation (no data) that the amygdala and the neocortex share things weighted disproportionately… the amygdala seems to carry heavy weight in the NC and vice versa. That way, if the NC says “seen it” and passes the message to the amygdala, the amygdala says “OK, I guess it must be true”; conversely, if the amygdala says “I’ve seen this”, it passes a weighted verdict to the NC. Am I making sense?
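To make that concrete, here is a toy sketch - my own invention in plain Python, not anything from HTM/NuPIC - of memories as SDRs (sets of active bit indices), PLAY as comparing two similar memories, feeding their shared core forward as a prediction, and boosting the aggregated memory if the prediction is borne out:

```python
# Hypothetical sketch only: names, thresholds, and the boost rule are
# invented for illustration; real HTM boosting works at the column level.

def play(mem_a: set, mem_b: set) -> set:
    """Aggregate two similar memories into one prediction: their shared bits."""
    return mem_a & mem_b

def borne_out(prediction: set, observed: set, threshold: float = 0.8) -> bool:
    """Confirmed if most of the predicted bits show up in the next input."""
    return len(prediction & observed) / max(len(prediction), 1) >= threshold

boosts = {}  # aggregated-memory id -> boost factor

def feed_forward(prediction: set, observed: set, mem_id) -> None:
    if borne_out(prediction, observed):
        # the new, combined memory wins more easily next time
        boosts[mem_id] = boosts.get(mem_id, 1.0) * 1.1
```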
Dunno. Sorta thought I was in this post:
Emotions are concepts in the neocortex in exactly the same way as concepts like chair, cat, money or red. Ref: Prof. Lisa Feldman Barrett’s fabulous new book “How Emotions Are Made”.
The neocortex predicts the future (and remembers the past) via all eight senses - vision, touch, smell, taste, audition, balance, proprioception and interoception. All those senses are mapped onto the neocortex.
The ‘judgement component’ you are talking about is not emotions per se; it is using a prediction of an emotion concept (from all senses) in the neocortex to send signals to the interoceptive system for body balance - i.e. balancing arousal, heart rate, and pleasant and unpleasant feelings.
Yes I believe if you look at what I said you will see that’s what I was saying…and it is an excellent book.
I think if you look at the linked paper you will see what I’m talking about.
I don’t blame anybody for our widely pervasive inaccuracy; it’s an inherited misperception… but…
Saying, “Human beings have emotions”, is like going to a restaurant and eating the menu!!!
First, an analogy: (Football game in a stadium)
Coordination, statistics, skill, training, heart, passion, preparation, etc., are all things the sportscaster may say ABOUT what is happening on the field. Said more technically, these are all statements made in the conceptual domain. Concepts represent a thing, but they are not the thing itself.
Running is a description of an action. But… When you say, “…running”, you are not actually running.
Now…
There is a whole field of study regarding words that are not actually in the conceptual domain, but in the ontological domain. (Ontology is the study of being.) Words such as “promise”, when used in a sentence, can be actual actions.
For instance: “I promise to try my best to confront consensus understanding (risking mounds of automaton ‘Earth is flat’, Galileo-style retaliation) regarding distinctions we take for granted which are really misconceptions.”
If I were to make that statement, I would be actually doing something; and, not just talking about something. It would be a statement made from the ontological domain. I would actually be promising something. The word “promise” in that statement is a speech-act.
Back to my analogy.
When I look down on the football field, I don’t see any skill or coordination (if there is any, try pointing to it?). Now, that is not to say that there is no value in seeing things from the conceptual domain! It deepens appreciation; makes things exciting and is fun to listen to - BUT - in most cases it has ZERO impact on the quality of play on the field.
Now, the coach also talks. But the coach has learned to talk from a very special domain. When the coach talks, he/she can actually have an impact on future behavior. The coach can (but not always), speak from the generative domain. But that’s an entirely different conversation - I just thought I’d mention that domain.
Back to emotions…
Almost everything human beings have to say after the word “is” - is a lie. (Even that statement). Why? Because we are taking something from the Ontological domain and putting it in the Conceptual domain. “I am hungry” (no, I am not the phenomenon hunger); “She is smart” (no, she is not the concept intelligence) etc…
Why am I risking life and limb to make this distinction? Well because if we go forward thinking that we need to recreate emotions in our artificially intelligent algorithms - we need to be very clear about what we’re trying to reproduce, or we’re going to waste a lot of time!
It may lend insight to speak about emotions from the conceptual domain, but we need to identify it as exactly what it is - and not misperceive it to be “real” or Ontologically bound - as if we need to actually inject an emotion into something! We characterize the release of hormones and physical sensations and a repertoire of behaviors as “emotions” - but to what extent are they actually there?
Hell, when we talk about having a stomach ache - most people couldn’t point to their actual stomach organs to save their lives!
Not saying that our consensus reality regarding the things we “know” is bad or wrong. Just saying that we have to have more rigor with these things if we are actually going to reproduce intelligence, maybe?
Please discuss?
Emotions are a way we talk about our motivational system - a condensed shorthand. “I am scared of that dog” is a quick way to say: the last time I walked by that dog it bit me, and it hurt, and I do not want that to happen again, so I will avoid the dog.
From the link I posted above:
In Rita Carter’s book “Mapping the Mind”, chapter four starts out with Elliot, a man who was unable to feel emotion because the corresponding emotional-response areas had been deactivated by a tumor removal. Without this emotional coloring, he was unable to judge anything as good or bad, and was unable to select the actions appropriate to the situation. He was otherwise of normal intelligence.
Emotional coloring drives the decision-making process.
We may be able to “reason” and “think” but in the end - we make good decisions by picking “good” things and avoiding “bad” things.
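As a toy illustration (mine, not from the book): think of “emotional coloring” as a learned valence tag on each option - reasoning can propose the candidates, but the final pick rides on the good/bad weights. The option names and weights below are made up.

```python
# Hypothetical valence tags learned from past outcomes (e.g. the dog bite).
valence = {"walk_past_dog": -0.8, "cross_street": +0.2, "stand_still": 0.0}

def decide(options):
    # Without valence every option scores 0.0 and, like Elliot, the agent
    # has no basis to choose; with it, the choice is immediate.
    return max(options, key=lambda o: valence.get(o, 0.0))

print(decide(list(valence)))  # -> "cross_street"
```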
In the movie Serenity, a population of humans has its emotions turned off by chemical means. They then just sit and starve to death, as that is as logical an action as any other.
This is the problem with Spock in Star Trek. Logic tells you nothing about meaning.
Yes, this is what I have been on about… seeking is a drive which is controlled by emotions. As mentioned in the quoted paper, there are seven of these drives, which seem to be able to circumvent the conscious mind and directly initiate behaviour. I think what I have been proposing is that these drives can also help to drive conscious learning. Emotions, or feelings, are the complex semantic representation of an attitude towards an object or event… I believe the SEEKING and PLAY drives initiate thinking in the NC - a kind of active sifting through the puzzle pieces to see what fits with what. Fear limits behavioural interactions to focus attention on passive data streams… sights and sounds. Boredom is a feeling or mood (emotional state) intended to get the NC to sort of leaf through the photo albums (LTMs) to see if any of them kind of go together… If something returns a hit, seeking takes over and we start to learn.
Most of the other drives don’t interest me that much, as I don’t think they are that useful in learning; they seem more for self-preservation and propagation… I am puzzled, though, why we would group panic and grief together as one drive. I think moods are predictive emotional states intended to colour or bias the patterning of information based on previous experience with similar things (a rough sketch of what I mean is below). I think all living things have a process for generating a seeking drive, and that as they evolve, the seeking drive generates increasingly complex behaviour, culminating in internal cortical behaviours which make the so-called intuitive leaps.
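Something like this hypothetical scoring rule - invented here for illustration, not part of HTM theory - is what I have in mind by “mood as a bias on patterning”: each stored pattern carries the emotional state it was learned under, and the current mood tilts the match scores toward patterns from similar states.

```python
def biased_match(overlap_score: float, memory_mood: float,
                 current_mood: float, bias_strength: float = 0.3) -> float:
    """Moods in [-1, 1]; memories learned in a similar mood score higher."""
    mood_similarity = 1.0 - abs(memory_mood - current_mood) / 2.0  # 0..1
    return overlap_score * (1.0 + bias_strength * (mood_similarity - 0.5))
```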
Agreed
You are all right in various ways… a mountain of evidence without a theory that connects it all… sound familiar? In order to make predictions, the NC needs input from the older structures; otherwise your HTM is only patterning based on the observable data, and not on what the observable will become “if I do this” - whatever that action happens to be at the moment you decide on it, or involuntarily do it.
@Bitking Maybe he was unable to “care” about the good/bad-ness of things? I doubt I need to feel anything to know that certain things are “bad” and certain things are “good”. If I see a terrorist act, I think I could classify it without, and before, feeling anything.
I’m wondering whether an artificial intelligence would require emotions in order to bias states or provide a motivation to learn. Human beings sometimes require motivation and discipline because they have the capacity for apathy or laziness. An artificial intelligence, however, wouldn’t be constrained by this?
There will have to be some sort of bias to drive decisions.
Without the pairing of body sensations with chemical messengers, it may not work the same as emotions. It will still be good/bad, but from an “alien” point of view. This could land us right smack in the middle of the uncanny valley. I worry that making an AGI so alien that you can’t understand where it is coming from (motivation) may be an insurmountable problem.
There is already an ongoing problem in social media where the recipient does not see the intent of the sender and badly misunderstands what is being said. Emoticons are supposed to be the cure but they really don’t help that much.
On the plus side - there is considerable progress in the emotional-AI field; perhaps the AGI will do an acceptable job of faking it.
Perverse Instantiation could be a problem, but I think an AI should only be in an “advisory” position and not have access to directly impacting the future of humanity without at least consulting a human. (That’s regarding AGIs tasked with improving the human condition.)
For those AGIs not tasked to impact the human condition, I really think they won’t have cause to negatively impact humans. When’s the last time (since childhood, maybe) you went out of your way to step on an ant? It’s just not of interest, because an AI could live anywhere in the universe and won’t be bound to our planet. I think a super-AGI would just be benignly indifferent to us…
Perhaps SEEKING and PLAY are the emotional drives needed, as they help to encourage the formation of more informationally dense representations? If I am seeking something and I hear or see something of interest, I play with this new information by involving more of my senses to create a more informationally dense representation, which will in turn be more searchable and have more points of correlation when making comparisons to future streams of new data. As an example, in your hot gym simulation the data streams are the energy output of the gym machines, the time of day, and the day of the week, and these can be patterned to make predictions about whether it is a weekend or a weekday. What if the room temperature, the smell, the towel use, and the auditory level were added into the data stream through a PLAY drive to seek more information to incorporate into the SDR (see the sketch below)? Would this make the SDR more searchable or comparable?
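Roughly what I’m picturing, as a hypothetical plain-Python sketch (not NuPIC; the field names, ranges, and encoder sizes are made up): encode each field into a small binary vector and concatenate them into one SDR, so the extra “played-with” fields add more points of overlap when comparing two moments in the gym.

```python
def encode_scalar(value, vmin, vmax, n=64, w=8):
    """Simple bucketed scalar encoder: w contiguous active bits out of n."""
    value = min(max(value, vmin), vmax)
    start = int((value - vmin) / (vmax - vmin) * (n - w))
    return [1 if start <= i < start + w else 0 for i in range(n)]

def encode_moment(energy_kwh, hour, temp_c, noise_db):
    return (encode_scalar(energy_kwh, 0, 100) +
            encode_scalar(hour, 0, 23) +
            encode_scalar(temp_c, 10, 35) +      # extra field via "PLAY"
            encode_scalar(noise_db, 30, 90))     # extra field via "PLAY"

def overlap(a, b):
    return sum(x & y for x, y in zip(a, b))

weekday_evening = encode_moment(42, 18, 22, 70)
weekend_evening = encode_moment(15, 18, 26, 45)
print(overlap(weekday_evening, weekend_evening))
# the added fields give the two moments more bits on which to agree or
# differ, making the SDRs more comparable
```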
> I’m wondering whether an artificial intelligence would require emotions in order to bias states or provide a motivation to learn. Human beings sometimes require motivation and discipline because they have the capacity for apathy or laziness. An artificial intelligence, however, wouldn’t be constrained by this?
Yes, I see your point… but I think the emotional state is used in humans to bias us towards active or passive learning… that is, using short-range senses to provide additional information density in non-threatening, active learning… and long-range senses in passive (potentially threatening) situations. I believe this learning bias has repercussions for non-behaviour-based learning, which is merely an unfortunate side effect of emotional learning control.
Would we necessarily include this artifact in our artificial models? Are these useful to non-biological entities?
I’m sorry, I think you provided an explanation above… Sorry I didn’t see that you had written two responses…
I agree that adding additional dimensions (emotional content) can deepen the distinctiveness of representations, for sure! Is there a cost, though, do you think, in including these states? Couldn’t they get “messy”?
Wow, hard to say! I think fear and rage are definitely dangerous… is grief useful in making a machine empathetic? Lust is all about propagation; do we want these machines to multiply? I’m kind of glad I am only looking at the human side of this… I feel like on the human side I am not likely to make things any worse by helping people to unravel how emotional decision making works.
I think it’s also important to distinguish between drives and moods. I can see boredom as kind of useful in allowing the mind to wander and sort of flip through memories and compare them to one another to see if something pops - sort of like how a DJ flips through an old crate of records to come up with a remix (a playful sketch of this is below). Frustration as a mood might initiate a kind of random but more aggressive version of similar behaviour, intended to get less predictable results? There are large ethical implications to making emotional machines…
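Here’s that playful sketch of “boredom” as random memory browsing - again my own toy code, with memories as SDR sets and an arbitrary threshold: sample pairs of stored memories, and if an unexpected pair overlaps enough, hand it off to SEEKING to follow up on.

```python
import random

def bored_browse(memories, trials=100, threshold=10):
    """Randomly compare pairs of stored SDRs; return the pairs that 'pop'."""
    hits = []
    for _ in range(trials):
        i, j = random.sample(range(len(memories)), 2)
        if len(memories[i] & memories[j]) >= threshold:
            hits.append((i, j))  # something pops -> hand off to seeking
    return hits
```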
My focus right now is on creating more information density as people learn. When I teach plant identification, the plants which are easiest to remember are the ones with higher density - the ones you can taste, smell, or touch, or the ones that have great stories or are distinct in some major way. If a student refuses the tactile or olfactory opportunities, recall is much lower. If a student is in an emotional state (fearful, depressed, unhappy) which suppresses the behaviour needed to engage with these things, then recall or memory formation seems diminished. Connecting learning to prior experience through analogy seems to work pretty well, provided that the prior learning is actually rich enough.