Discussion about Emotions

These connections are diffuse throughout the lower brain - as I intimated above, the body's needs are sensed and registered in the hypothalamus. The nuclei of the hypothalamus are strongly connected to other regions of the lizard brain, and their signals make it through to the cortex as second- and third-order connections. I don't have a single authoritative reference on these connections, so let me walk you through what I have; sip from the firehose, anyone?

First - meet the Hypothalamus.


Note the table of functions; through lesion studies, the individual functions of the nuclei have been well mapped.
The output has been tied to projections throughout the body. This should come as no surprise, as this is kind of a big deal for staying alive:

This is kind of hokey but it works for me:

The connections from this area to other brain areas have been traced using chemical markers and diffusion as described here:

The end result is a complicated series of pathways. I like this presentation, which gives a nice overview, but it does not do much with the pathways to the PFC:
https://nba.uth.tmc.edu/neuroscience/m/s4/chapter01.html
The rest of the linked series is pretty good too.

Another summary - note that in figure 9, the bundle off to the forebrain almost looks like an afterthought:
http://www.neuroanatomy.wisc.edu/coursebook/neuro2(2).pdf

The most direct connections out of the limbic system are from the amygdala:

Note that many of these are digested forms of the body-sensor signals after bouncing through first- and second-order connections from the hypothalamic nuclei.
Some accessory details of these connections:


Let’s zoom in on the connections between the amygdala and the PFC:

and in particular, we finally get to the PFC:

As I said earlier, these functions are much older than the cortex and are well conserved in mammals.
I hope this passage will inspire you to read the whole thing:

“This is a direct result of how reward-seeking and misery-fleeing behaviour was regulated by the brain of our earliest vertebrate ancestors, and is reflected by the regulation of the emotional response in humans (8,9,22,56). The corticoid part of the amygdala receives massive sensory input (Fig. 7) (21,57) and selects the information which is most relevant for current well-being: this selection process is termed ‘salience’. In interaction with contextual (memorised) details supplied by the hippocampus, it activates hypothalamic and brainstem centres to produce a relevant emotional response (like fear, anger, love, appetite, sexual desire or power dominance)”. From the hypothalamus, connectivity also exists between the thalamus and the mesial frontal part of the neocortex. This is probably the mechanism which affects the motor output of higher vertebrates, including humans, by inducing the drive to seek food, warmth, comfort, etc., or to escape from pain, thirst, misery, etc.


This is where I got one of the diagrams in an earlier post on this thread.
If you collect PDFs for your library you may like this one better:

And the effects of the limbic system on memory:

I have literally dozens of other papers on these connections, but I think that this set highlights the major points. These pathways may seem rather trivial, but please keep in mind that your entire auditory world reaches the cortex through a single nerve bundle. We seem to get a lot out of that single bundle.

Happy reading!


Ha ha…I'll do my best…how does this sound for a learning algorithm…look for patterns everywhere…when you find one, make a memory. Use your memory to make a prediction. If your prediction came true, form a memory; if it didn't, forget it. If the boss complains too much, analyze what just happened by looking for patterns to find out what the boss is on about… (he's kind of a jerk, but he gets these feelings about things that turn out to be right a lot). If you find someone the boss trusts who has more info than you do, learn all you can. If the boss doesn't trust them, ignore it. Once you have a bunch of patterns stored, just kind of leaf through them every once in a while (when the boss says it's ok) to see if any of them kind of go together…if they do, form a memory and repeat from "make a prediction".
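Just to make that recipe concrete, here is a rough Python sketch of the loop as I read it. Everything in it (the `PatternMemory` class, the `boss_complaint` signal, the consolidation rule) is my own stand-in for illustration, not anything taken from HTM or from the post above:

```python
import random
from collections import defaultdict

# Purely illustrative sketch of the learning loop described above.
# All class/function names and thresholds are made up for illustration.

class PatternMemory:
    def __init__(self):
        self.strength = defaultdict(float)    # (context, outcome) -> confidence

    def remember(self, pattern, amount=1.0):
        self.strength[pattern] += amount      # "when you find one, make a memory"

    def forget(self, pattern, amount=1.0):
        self.strength[pattern] -= amount      # "if it didn't, forget it"
        if self.strength[pattern] <= 0:
            del self.strength[pattern]

    def predict(self, context):
        # "Use your memory to make a prediction": strongest pattern for this context.
        candidates = [p for p in self.strength if p[0] == context]
        return max(candidates, key=lambda p: self.strength[p], default=None)

    def consolidate(self):
        # "Leaf through them every once in a while to see if any go together":
        # chain patterns whose outcome matches another pattern's context.
        patterns = list(self.strength)
        for a in patterns:
            for b in patterns:
                if a[1] == b[0]:
                    self.remember((a[0], b[1]), 0.5)

def step(memory, context, outcome, boss_complaint=0.0):
    prediction = memory.predict(context)
    if prediction is not None and prediction[1] == outcome:
        memory.remember(prediction)           # prediction came true: strengthen it
    elif prediction is not None:
        memory.forget(prediction)             # prediction failed: weaken/forget it
    if prediction is None or boss_complaint > 0.5:
        # Nothing predicted, or "the boss complains too much":
        # analyze what just happened and store the new pattern.
        memory.remember((context, outcome))

# Toy usage: observe (context, outcome) pairs and learn the pairing.
memory = PatternMemory()
for _ in range(20):
    ctx = random.choice(["warm", "cold"])
    out = "seek_shade" if ctx == "warm" else "seek_sun"
    step(memory, ctx, out, boss_complaint=random.random())
memory.consolidate()
print(dict(memory.strength))
```

The "boss" here is just a placeholder for whatever affective or salience signal forces a re-analysis; swapping in a different rule for it doesn't change the overall shape of the loop.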

Makes sense.

My thoughts are as follows…the human brain is the culmination of a series of computing hardware solutions which successively tackled problems in a more and more robust way. Each processor is layered on top of the existing one while always remaining backward compatible. Each new layer builds on the capacity of the last structure by adding patterning and processing capacity. Simple inputs can acquire complex meaning through patterns and processes.

So are the mammillary bodies like relays between the AMG, the thalamus, and the NC…sort of transforming the signal of each to be compatible with the other? I think it's sort of clear now that the neocortex processes emotional and physiological information about what's going on and incorporates this into our memory of an event along with all the raw sensory information.

Why do you consider this to be different from other sensory data?

I don’t…I just didn’t see it in HTM models.

Yet


Right…but in terms of how a brain works…wouldn't the output of some of the older structures need a bit of a transformation to fit the newer structure, or vice versa? I assume most of this can be coded in HTM, but the brain has a hardware compatibility issue each time it evolves a new structure…no? I would think at the very least there are some speed differences, if not bit rates, data storage, etc. Almost like grabbing bits from tape drives, floppies, and SSDs, or different clock speeds?

Don't kid yourself. There is much massaging of the other sensory data before the cortex gets it.
Even visual input gets a good deal of processing in the eye before V1 sees it.

What I thought…it's all the cross talk that makes it so hard to sort out…btw thanks for the reading :wink: I was thinking about this preprocessing before V1 and I wonder accordingly…does the preprocessing boil it down to edges…or does it split it into edges, colors, shapes, smoothness, and such before it even hits the cortex…I would think this would help establish a digital picture much faster. Another thought was about input space allocation…could it vary based on predictive ideas about how much might be needed…adjusted on the fly?

I haven't personally been following this thread with much interest, but I had to comment on this particular point. I think it is important to remember that Numenta's goal is NOT to try to create an AGI, as @Bitking appears to be. Numenta's focus is on studying, theorizing, and modeling in software specific individual pieces of the "intelligence" puzzle. If you look at it from that perspective, I think it makes a lot more sense why they may rely on "mysterious" external properties like location signals without going down every rabbit hole tangential to the piece being studied. If you want to focus in on a particular small piece, you need to be able to get a foothold somewhere and abstract away some other important system-level functions.


Good point Paul.

As long as we are clearing things up - I am trying to understand the only working example of an AGI presently available - the human brain.

I am hoping to be able to bootstrap this understanding into extracting the minimum functions necessary to achieve some form of useful AI even if it is not a full AGI; perhaps a very clever chatbot.

I am working through the various subsystems to understand how the local functions must fit into a larger system. I expect that from this I will gain insights into what the inputs and outputs are and, further, place constraints on these local functions. There are many known properties that must all be satisfied simultaneously. This acts as a powerful filter to weed out the non-starters in theory space.

In my view, a strict focus on the local functions of one small part of a very complex system makes it very difficult to determine what that part is doing, since you have no idea what it is actually trying to do.

In the Numenta approach, I have seen an intense focus on learning objects, and all the bits of the cortical layers are examined to fit that narrow interpretation. The H of HTM is not an add-on to be solved later - it clearly should be an integral part of the structure. A larger computation should be doing something with the object recognition, and I really don't see that there is any "next step" after that object recognition; I am unable to see how this builds to a larger framework. Perhaps this is a personal failing and there is a clear systems model that I just don't understand. If so - I would like someone to show me this big picture.

I certainly have no idea what goes on in the Numenta offices - all I have to go on is the published papers and various videos. I will offer that I have not seen every one of the videos so I may have missed something.


Yep, same here – just basing my observations off what has been published and videoed. The focus appears to be understanding how each layer of the cortex might be applied to specific problems, and iterating/refining when the theory doesn't pan out. We've seen lots of modifications to the theory throughout the current round of study (and still a lot more to go).


Hey @Bitking…just a thought…as I try to work my way through the reading assignment…I was looking at the ACC and it seems like it does a "learn from mistakes" function. As I mentioned earlier, we seem to remember what worked and forget what didn't…unless our prediction was so far off the mark or had such a degree of unpleasantness that we should probably pay attention to it. Could this in fact be what the ACC does? A kind of parity check with an emotional bias to make us replay something that didn't go as planned.
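Purely as a toy illustration of that "parity check with an emotional bias" idea (not a claim about what the ACC actually computes), it could look like this: an episode is only flagged for replay when the prediction error plus the unpleasantness pushes its salience over a threshold. The function name, weighting, and thresholds below are all invented.

```python
# Toy illustration of the "parity check with an emotional bias" idea above.
# This is NOT a model of what the ACC actually does; the weighting and
# threshold are invented for illustration.

def flag_for_replay(predicted, actual, unpleasantness,
                    affect_weight=1.0, threshold=0.6):
    """Return True if the episode should be kept/replayed rather than forgotten.

    A large prediction error OR a strongly unpleasant outcome (or a mix of
    the two) pushes the salience over the threshold -- the "emotional bias".
    """
    error = abs(predicted - actual)                  # how far off the mark we were
    salience = error + affect_weight * unpleasantness
    return salience > threshold

# A mildly wrong, harmless prediction is dropped...
print(flag_for_replay(predicted=0.8, actual=0.7, unpleasantness=0.1))   # False
# ...a badly wrong one is replayed even if painless...
print(flag_for_replay(predicted=0.9, actual=0.1, unpleasantness=0.0))   # True
# ...and an only slightly wrong but very unpleasant one is replayed too.
print(flag_for_replay(predicted=0.6, actual=0.5, unpleasantness=0.9))   # True
```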

I don't think that anyone has the definitive answer.

Clearly, the area receives input from the subcortical structures, but tying that to function is dicey.

I realize that, but I don't think trying a few generalizations hurts in this case…it will take a long time to map all the circuitry and we might find out we're wrong, but an imperfect understanding might help…no? Isn't that how Jeff started?

Not arguing - this happens to be one of the murkier areas.
I make my fair share of claims on the edge of what I am sure of - this is a step further.
The intersection of how emotion colors memories and how that comes into play in recalling memories is very important, and also one of the areas where the literature seems the least sure when it comes to mechanisms. Plenty of conjecture - no papers that show for sure how.

I am certain that all "nouns" carry some good/bad kind of weighting; clusters of adjectives apply to everything. I think that this is vital to how the forebrain selects actions. I think that this mechanism is present in most of the animal kingdom - this is what critters that don't have speech and symbolic reasoning use to select actions.
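As a toy sketch of what that kind of valence weighting could look like when selecting actions - the "nouns", the numbers, and the expected outcomes are all made up here, and this is not a claim about the actual forebrain mechanism:

```python
# Toy sketch of valence-weighted action selection.
# The "nouns", valence values, and expected outcomes are all invented.

valence = {                 # good/bad weighting attached to each learned "noun"
    "food": +0.8,
    "shelter": +0.5,
    "predator": -0.9,
    "cold": -0.4,
}

actions = {                 # what each candidate action is expected to bring about
    "approach_smell": ["food", "predator"],
    "stay_in_burrow": ["shelter", "cold"],
    "wander": [],
}

def choose_action(actions, valence):
    """Pick the action whose expected outcomes carry the best net valence."""
    def net(outcomes):
        return sum(valence.get(noun, 0.0) for noun in outcomes)
    return max(actions, key=lambda a: net(actions[a]))

print(choose_action(actions, valence))   # -> "stay_in_burrow" with these numbers
```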

This is one of three main areas (along with how the brain does symbolic reasoning and how the lizard brain runs the body w/o the cortex) that I am spending much of my time trying to work out.


I want to turn my classroom into a lab!

Do you think anyone will mind if we stick electrodes into their brains and hook them up to recording machines?

Asking for a friend.

Probably…ha ha…but I have often suggested I should be issued a cattle prod…does that count?