Discussion about Emotions

@rhyolight - Any thoughts on this?

I’m not looking at it in isolation…just not at the same resolution. I see something, I try something. I have some stories to tell…I need some researchers to verify them. You have much knowledge, but the granularity is too high for me…I’m trying to keep up, but our goals are different. I was hoping you could tell me if the idea fits with brain theory in a general way…assuming a feed-forward/feedback system between the neocortex (NC), limbic system, and amygdala (AMG).

In the system I just outlined, your sensory cortex learns stuff. Your need sensors (the limbic system) riff on the sensory systems to drive attention and activity.

Which need sensor wins depends on where you are in a “Maslow space” and on your prior experience, which shapes your perception of the environment.
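A tiny Python sketch of one way to picture that competition; the need names, weights, and scoring rule below are all hypothetical illustrations, not a claim about the actual circuitry:

```python
# Toy sketch of the "which need sensor wins" idea. All need names, weights,
# and the scoring rule are hypothetical; this is not a model of real circuitry.

NEEDS = ["safety", "hunger", "social", "curiosity"]

def winning_need(sensory_features, maslow_weights, learned_relevance):
    """Pick the need sensor that wins the competition for attention.

    sensory_features : dict feature -> activation (what the sensory cortex reports)
    maslow_weights   : dict need -> urgency given where you are in "Maslow space"
    learned_relevance: dict (need, feature) -> association strength from prior experience
    """
    scores = {}
    for need in NEEDS:
        # Each need sensor "riffs" on the sensory scene: it sums the features it
        # has learned to care about, scaled by how urgent that need is right now.
        relevance = sum(learned_relevance.get((need, f), 0.0) * a
                        for f, a in sensory_features.items())
        scores[need] = maslow_weights.get(need, 0.0) * relevance
    return max(scores, key=scores.get)

# Example: a kitchen scene read by a hungry person vs. an alarmed person.
scene_hungry = {"smell_of_food": 0.9, "loud_noise": 0.1}
scene_alarmed = {"smell_of_food": 0.2, "loud_noise": 0.9}
experience = {("hunger", "smell_of_food"): 1.0, ("safety", "loud_noise"): 1.0}
print(winning_need(scene_hungry, {"hunger": 0.8, "safety": 0.2}, experience))   # hunger
print(winning_need(scene_alarmed, {"hunger": 0.3, "safety": 0.9}, experience))  # safety
```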

Better?


Thanks…now I have the motivation to sift through all that you’ve given me. I’ll take it as a yes as far as pursuing the line of thought. I really do appreciate the detail and breadth of knowledge…far more time-efficient than sifting through journals randomly…you could say you’re my own AI. I only need to get close to understanding, and if I try a few things in the classroom and they work, then I’ll go from there. I trust that the biology makes it happen in humans…I don’t think it’s a teaching or a learning problem but a communication problem. Thanks.

I really liked the way you said that, @Bitking…the limbic system riffs. Do you see the different parts of the cortex riffing on one another too? Could that be how the SDR is formed? I guess I’m just having a problem understanding the structure here…are the hexagons doing the riffing to produce the chord, and does the chord then get riffed with other chords?

On a different note…a few key takeaways for the psychology fans.

Emotions were, and are, the semantic representations of the language we used to communicate before there was language.

This emotional communication still exists today, but we humans are largely illiterate in it and, as a result, automatically filter it out or grossly misinterpret it, except in some very explicit situations.

The emotional dialogue can be understood if both participants are honest and open…and they use words to express things more precisely.

I think the above explains a lot about us and our perceived limitations.

In a nutshell:
The SDRs are the little patterns each dendrite senses. One cell may have dozens of dendrites, and each is keyed to record a few patterns. Please note that these patterns are local to the cell: they only extend as far as the dendrite arbor can reach. For cells in the cortex that range is between 0.3 mm and 3.0 mm.
Each column (a cluster of 100 or so cells) may sense hundreds or thousands of little patterns local to the column. The grid is the structure that ties these little pattern sensors together into a larger pattern on a single map. Grid spacing is about 0.5 mm.

The entire cortex after unfolding is about 1000 mm x 1000 mm, so individual SDRs don’t cover very much of the brain. The brain is thought to be composed of about 100 areas of local processing; doing the math gives each area about 100 mm x 100 mm. Since the reach of individual cells is small even in relation to these smaller maps, the grid structure gives a mechanism whereby the cells can work together to recognize a pattern that covers the larger area of a map.
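For anyone who wants the arithmetic spelled out, here it is in a few lines of Python (same figures as quoted above; nothing new is measured or claimed):

```python
# The back-of-the-envelope numbers from the post, worked out explicitly.
cortex_side_mm = 1000            # unfolded cortex ~1000 mm x 1000 mm
num_areas = 100                  # ~100 areas of local processing

area_mm2 = (cortex_side_mm ** 2) / num_areas   # 10,000 mm^2 per area
area_side_mm = area_mm2 ** 0.5                 # ~100 mm x 100 mm per area

dendrite_reach_mm = (0.3, 3.0)   # reach of a single cell's dendritic arbor
grid_spacing_mm = 0.5            # spacing of the proposed grid structure

# Even the largest dendritic reach covers only a few percent of one area's width,
# which is why a grid tying many columns together is needed to span a whole map.
print(area_side_mm, max(dendrite_reach_mm) / area_side_mm)   # 100.0  0.03
```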

Not all areas work to form these larger patterns - some work to refine a local pattern.

If you compare a grid-forming area to one that is not grid-forming, I see the primary difference as whether or not the layer II neurons have reciprocal connections at 0.5 mm spacing. Without these connections there is still competitive action to recognize a local pattern, but no influence to extend that pattern beyond the local area.
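A purely illustrative sketch of that difference, under the assumption that the reciprocal connections can be caricatured as mutual reinforcement between columns one grid step apart (the numbers and scores are made up):

```python
# Illustrative only (not a model of real layer II physiology): columns on a
# 1-D strip each score a local pattern; optional reciprocal links at one grid
# step let agreeing neighbours reinforce each other so a pattern can extend.

def settle(local_scores, reciprocal=True, grid_step=1, rounds=2, boost=0.3):
    scores = list(local_scores)
    if reciprocal:
        for _ in range(rounds):
            new = scores[:]
            for i in range(len(scores)):
                for j in (i - grid_step, i + grid_step):   # reciprocal neighbours
                    if 0 <= j < len(scores):
                        new[i] += boost * scores[j]
            scores = new
    return scores

local = [0.9, 0.8, 0.9, 0.2, 0.1]       # three columns see the pattern, two barely do
print(settle(local, reciprocal=False))  # scores stay local: [0.9, 0.8, 0.9, 0.2, 0.1]
print(settle(local, reciprocal=True))   # the agreeing patch pulls ahead of the rest
```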

And yes - I think these larger patterns do a riff on each other in the cortex.

@gmirey - you may find this bit interesting.



If this topic interests you, don’t forget to check out the links in the article. I found them very informative.

I think I’m starting to get it in a global sense now…man is it elegant. The way things kind of layer one on top of another.


Totally makes sense…could these perceived preferences have more to do with which cortical regions are at a higher state of development at that particular moment (when the VARK was taken) because of the context of the individual? Example: two learners, one who spent a great deal of time outside playing in the woods, the other who spent a great deal of time playing video games. Should it really be a surprise that the outdoor kid has a more developed motor cortex and the indoor kid is better with his/her visual cortex? It doesn’t mean that the other regions can’t develop, just that those particular regions are further along at that moment because they got used more. The unsupervised learning creates the context…since the teacher was not present during the unsupervised learning, the teacher doesn’t know the context.

Thanks for those thoughts. I still need lots of info about this kind of stuff. The reciprocal thing is indeed one of the questions I had. Is it known by direct experiment that whole areas have such reciprocity while others don’t, or is it only inferred from the areas where grid cells are preferentially found versus others?

I somehow have difficulty finding precise diagrams or descriptions showing both the axonal connectivity “rules” and the dendrite extent. Even the very “look” of axonal branching is quite blurry to me. Some popularization sources will say “generally a single axon per cell” without saying anything about its branching abilities at the tip, while others present it as an already quite well-stocked tree. My ability to derive anything certain about topological interactions is thus quite impaired.

But now that I’m writing this, I believe the people at, for example, Blue Brain would have data for most of that stuff…? I don’t know where to look, in fact.

https://qph.fs.quoracdn.net/main-qimg-7d4a5bfd9fee5c4b67eba86503c39209

Looks like the axon has some arbor. Maybe for depressing the neighbors?


See page 48 and diagram on top of page 51.

Diagram:

The best papers on this are behind paywalls or in books.


Thanks @Ed_Pell and awesome material @bitking, as usual.

Interesting. I think emotions serve the purpose of giving direction to and bringing about behavior as required. Regarding emotions being the judgement factor, I think the system is capable of judging information on the basis of its function even without emotions, and emotions help direct further actions (of course, emotions in general play a wider role). Tagging patterns with emotions, and so remembering what to do when such a situation occurs again, is another function. In this case the judgement is already done, and emotions are just a tag and a behavioral paradigm.
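One toy way to restate “emotions are just a tag and a behavioral paradigm” in code; the pattern names, tags, and behaviors below are invented for illustration only:

```python
# Toy sketch of "emotion as a tag and a behavioral paradigm": a recognized
# pattern carries an emotional tag, and the tag selects a stored response.
# The names, tags, and behaviors are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class TaggedPattern:
    name: str       # the recognized pattern (the judgement was already done upstream)
    emotion: str    # the tag attached when this pattern was first learned

BEHAVIOR = {                      # emotional tag -> behavioral paradigm
    "fear":      "withdraw and increase vigilance",
    "affection": "approach gently",
    "curiosity": "explore and gather more input",
}

def respond(pattern: TaggedPattern) -> str:
    # The tag does not re-judge the situation; it just routes to a stored
    # way of behaving when this kind of pattern occurs again.
    return BEHAVIOR.get(pattern.emotion, "observe and do nothing")

print(respond(TaggedPattern("unfamiliar object", "curiosity")))
```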

I would say to a large extent. Emotions are not purely abstract concepts and must be groups of high-order patterns, including certain activation patterns from different cortical areas. It is established that abstract emotions are the creation of the motor cortex, and hence they must be pretty elaborate (in what they encapsulate), even though they might feel vague to us.

I think emotions can serve as a grounding for intelligent machines in how to behave with people or other objects of interest. This has wide applications. A person doesn’t hurt a child even if the child ends up hurting them, because of emotional analysis, among other factors. If we consider how we evolved as social beings, then this sort of behavior is a must. Likewise, an AI can be programmed to be gentle towards new objects instead of randomly choosing to do whatever it likes (this example is really bad, but I hope the idea behind it is conveyed properly).

All meaning can be logically understood. This seems like an understatement.

Emotions can serve as motivations as well as behavioral paradigms for safety and learning. The nature of the manifestations of human emotions is certainly not the only way emotions can manifest. If we talk about emotions as the abstractions they are, tagged with physical and physiological responses and patterns, then one can think of the human emotional system as dispassionate as well. You cannot separate motivation from emotional coding just by directing a system to do something by hardwiring it.

Agreed. But they can serve as useful tools for self-supervision of the AI.

I agree. This opens up a lot of possibilities. One can hardwire and choose the abstract types of patterns to which the machine will respond emotionally (and this need not be survival-related).

Context here is special, since a pattern can bring about a certain emotional state that is contradicted by the emotional state brought about by the context. Context is important, but the analysis of current patterns and the resulting emotions (you wouldn’t want to slap the person in the martial arts class just because he too knows martial arts) affects the decisions as much as context does.

I think this question boils down to whether emotional states and their functional significance are important for a self-aware intelligence. And because we humans are the only highly self-aware organisms, and we evolved from emotion-based circuits, it is difficult to neglect their importance.

But they lack the power to exert force. Maturity is proportional to influence in that manner. Their system probably wants everything it likes because they are in a better survival position that way (I can see this becoming vague, but not completely).

Our brains probably are in that stage.

One can argue that’s something more than learning itself.

I don’t think this is the case. One can learn graphical symbols without spelling them out or saying them and articulate using those symbols. But this limits the applications and one would probably need good visual memory.

I disagree completely. There is a post regarding this but my point is that more working space will eventually overpower the need for symbolic reduction.

In what way is it a considerable form of communication? Emotional behavior of another person lets us understand the kind of perception and decision-making that might be happening in their mind and what their response is to a certain situation. But whatever communication happens this way, and in related ways (like speaking out of emotional states), doesn’t really convey essential information other than the emotional state itself. And the rest of it is conveyed only because it was generated due to the prevalence of that particular emotional state.

Good point, but I personally think that it is our own fault, and we can’t really blame our monkey brains for behaving like monkeys. The neocortex seems to fit in here.

Like humans: lots of mutations boiling down to the stable ones (or erupting, like humans), I think.

Your explanation before this is interesting, but I will have to read a lot more in order to grasp things entirely. Please elaborate on this particular mechanism.


Can you be more specific? Which idea or phrase, so I can try to explain? There is too much there to restate the whole thing.

Regarding the needs of the frontal lobe and their processing, and the interaction of the limbic system and cortex.


There is much to digest above, which I will need some time to do…I think, though, that in simple terms we are realizing that to learn more complex things we need the right motivation to learn them, and that motivation stems from the perceived utility of what will be learned.

We seem to learn best and fastest when we predict something…apply a process based on that prediction…and the outcome is what we predicted. Emotions predict, through moods, the motivations and drives that are most likely to be useful in what’s about to happen. If the prediction comes true, we remember what worked…if the prediction does not come true, we have to either forget everything…or consciously remember the parts that we believe (predict) might have utility in the future.

Teachers take note…the job is to point out this utility so the learner can discover their motivation. Another thing that has been bothering me is that the Pareto principle seems to suggest that only a small amount of what could be learned is actually remembered. I think in unsupervised learning this means that we only remember what works…the emotions seem to be telling us what works, sometimes in error. We can remember what didn’t work, but I think only if we make a conscious effort to do so. In other words, we forget things which are scary or sad unless we can see the utility in remembering them in a conscious way.

My emotions may get me geared up for a fight by predicting a faster heart rate, imagining how good it feels to win, and predicting the kind of aggression it will take to do that to another person. If I win, that aggressive behavior is patterned as a winning solution to a fight…if I lose, though, I need to forget the pain and humiliation of having lost…otherwise it wouldn’t take too many of these encounters before I was no longer a functioning person. I consciously decide to remember who I lost to, though, because there is great utility in keeping that knowledge for avoiding future encounters. The next time I see this person, my emotions tell me to heat up for a fight, but logic tells me to avoid the fight because I consciously predict the outcome without emotionally predicting all the painful details.
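Taking the retention rule described above literally, it could be caricatured like this (a sketch of one reading, not a validated model; the episode labels are invented):

```python
# Rough sketch of the retention rule described above: keep what matched the
# prediction; drop mismatches unless they are consciously flagged as useful.

def retain(prediction, outcome, consciously_flagged=False):
    """Decide whether an episode gets stored."""
    if outcome == prediction:
        return True                    # it worked: remember the pattern
    return consciously_flagged         # it failed: forget unless deliberately kept

memories = []
episodes = [
    ("win fight", "win fight", False),   # prediction confirmed -> stored automatically
    ("win fight", "lose fight", False),  # painful mismatch -> forgotten
    ("win fight", "lose fight", True),   # "remember who I lost to" -> kept on purpose
]
for prediction, outcome, flagged in episodes:
    if retain(prediction, outcome, flagged):
        memories.append((prediction, outcome))

print(memories)   # [('win fight', 'win fight'), ('win fight', 'lose fight')]
```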


Emotions help us sort out what to remember and what to forget. Happy emotions automatically form memories, unhappy ones inhibit memory formation, and really unhappy ones make us consciously digest what just happened to make some useful meaning out of the situation. This way little unhappinesses are forgotten but big traumas form memories. It’s a fairly efficient way to save storage space: figure out what needs keeping before keeping anything long-term. I would bet this kind of emotional bias is actually applied right at the sensory input level, to sharpen some senses and blunt others in anticipation of what’s about to happen.

Does the winning boxer in a fight actually remember much of the pain his opponent inflicted…or is it more about aggression and the motor memory of the punches and combinations? Come to think of it…if the winner was almost certain of victory and his opening combo gave him reason to reinforce that belief…could the brain respond by blunting pain perception in anticipation that the fight would be painless? A way of compressing the overall data aggregate of an event by predicting what doesn’t need to be part of the picture before you even see what you’re painting. A way of establishing how the input space needs to be set up, with the proposed resolution for different data streams decided in anticipation of what we predict the useful picture will be.
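The gating rule just described, rendered literally (the valence scale and thresholds are invented; this only restates the idea and does not model the biology):

```python
# A literal rendering of the gating rule sketched above, with a made-up
# valence scale: +1.0 very happy ... -1.0 very unhappy.

def memory_gate(valence):
    if valence >= 0.0:
        return "store automatically"           # happy events form memories by default
    if valence > -0.7:
        return "inhibit storage"               # little unhappinesses are dropped
    return "conscious processing, then store"  # big traumas get digested and kept

for v in (0.6, -0.3, -0.9):
    print(v, "->", memory_gate(v))
```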

Assuming this theory of emotions, their cause, and use is correct, how would we go about implementing this in hardware/software? How could we closely model it on biology? How does a system then self-classify what is (simplistically) good/bad?

I suspect this basic challenge is what we’d need to figure out before we could even think to implement the Three Laws.


Another example…if I win most of the fights I get into, not because of what I hear from my opponent but because of what I say and do, can I predict that hearing and decoding language will not be useful the next time? If I did that, then I could devote less input space to it the next time and, as a result, have a diminished capacity to be talked out of anything…which kind of explains stubbornness and probably a lot of other human quirks.