Could nociceptors be relevant/critical to developing intelligence?

Well, I have a somewhat fundamental question: I have been wondering what objective function the brain is trying to optimize. HTM gives an elegant neocortex formulation, but I don’t think the brain is just about modelling sequences. I think modelling sequences is just a means, not an end. Recently, I started to wonder if there’s some form of label or objective function that the brain might be trying to optimize in a global sense, which then leads to sequence modelling and feature extraction or encoding at the lower level.

Having had a baby recently, I noticed that the first thing she did after birth was cry. A few minutes later she started moving her mouth around looking for food. As days went by, she started crying when she felt hot or cold. What I’m saying is that, without much ability to process sensory inputs, she was acting based on pain signals in her stomach, skin, bones, or elsewhere.

Could it be that the brain maps sensory data into a pain space, and when it’s outside the design boundary (specified by biology), it controls the body to act until the pain equivalent of the sensory data falls back within the biological specification? That would mean the brain needs to learn:

  1. A sensory-data-to-pain transformation
  2. A comparison of the result to the biological specification
  3. An optimal action policy that returns the recognized or predicted pain level to the level required by biology

The hypothesis implies that an intelligent organism must have this pain specification within its biology; otherwise it would not work. It means the brain would first learn a sensory-data-to-pain regression, using the output of the nociceptors as the target, and then learn an optimal action policy that returns the body to its biological specification.
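To make the hypothesis concrete, here is a minimal Python sketch of that three-step loop. Everything in it is invented for illustration (the temperature set point, the `PAIN_SPEC` bounds, the linear pain regression, the crude corrective policy); it is a toy under those assumptions, not a claim about any real mechanism:

```python
import random

# Hypothetical fixed "biology specification": the acceptable pain range.
PAIN_SPEC = (0.0, 0.2)
SET_POINT = 37.0  # invented set point, e.g. body temperature in Celsius

def nociceptor_signal(state):
    """Toy nociceptor: pain grows as the state drifts from the set point."""
    return abs(state - SET_POINT) / 10.0

class Agent:
    def __init__(self):
        self.w = 0.0  # step 1: learned sensory-to-pain regression weight

    def predict_pain(self, sensed):
        return self.w * abs(sensed - SET_POINT)

    def learn_pain_model(self, sensed, pain, lr=0.1):
        # Step 1: regress predicted pain toward the nociceptor target.
        err = pain - self.predict_pain(sensed)
        self.w += lr * err * abs(sensed - SET_POINT)

    def act(self, sensed):
        # Steps 2-3: if predicted pain exceeds the spec, act to reduce it.
        if self.predict_pain(sensed) > PAIN_SPEC[1]:
            return -0.5 if sensed > SET_POINT else 0.5  # crude policy
        return 0.0

agent, state = Agent(), 40.0  # start "too hot"
for _ in range(100):
    pain = nociceptor_signal(state)
    agent.learn_pain_model(state, pain)
    state += agent.act(state) + random.uniform(-0.05, 0.05)
print(f"final state {state:.2f}, final pain {nociceptor_signal(state):.3f}")
```

The only point of the sketch is that the nociceptor signal, not the raw sensory data, supplies both the training target and the stopping condition for action.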

My questions:

  1. Is there evidence of a pain specification encoded in mammalian bodies?
  2. Is there evidence of the brain learning associations between sensory inputs and nociceptor signals?
  3. Finally, would this formulation be useful in building intelligent machines?

Please feel free to share your comments.


I think it’s more complicated than optimizing a single function. Pain doesn’t indicate things like hunger, so there are multiple drives but only one behavior at a time. Different parts of the brain deal with different motives, but in a complex way rather than one brain area per type of motive. At least some of those complexities arise from the fact that emotional states and cognitive states (such as sleep versus rest versus activity) impact processing of motivational stimuli. For example, you wouldn’t feel nearly as much pain or hunger if you were running from a lion.

To answer your questions:

  1. I don’t know if there is any one area which deals with pain. It appears that areas which deal with pain also deal with other things.
  2. There is probably some sort of associative learning between sensory input and nociceptor signals. There almost has to be, or else the organism couldn’t learn to avoid painful stimuli.
  3. Minimizing pain could certainly help build useful intelligent machines, but I don’t think it’s a prerequisite for intelligence. Pain doesn’t convey much about the world, so it’s best to understand the world and then use that understanding to minimize pain. I guess intelligence is a means rather than an end, just like modelling sequences is just a means.

Thanks Casey for your response. Let me humor you a bit: I believe the reason you are running from the lion is basically to avoid the pain of being eaten by it. Fear might just be a mechanism the brain has developed over time to motivate us to avoid pain. I think the global objective function would be more about avoiding and minimizing pain, where avoiding forces you to be predictive and minimizing forces you to act.

I guess what I’m saying is that intelligence seems to be a necessary invention/evolution developed by a living organism to ensure it doesn’t violate its biological specification. I think it has less to do with learning about the world. I believe the organism has found itself in the world and has to extract just enough information to ensure it doesn’t violate its biological specification and thus doesn’t feel pain.

I hope it is not preposterous to say that the diversity of pain, i.e. the complexity of pain detectors, will determine the complexity of the kind of intelligence that an organism will develop. I’m actually surprised that we are so much in love with the brain and how it processes information that we may not realize that the purpose of the brain might just be to ensure that an organism learns to interact with the world in a way that does not violate its biological specification. And this specification might be contained in different pain/discomfort sensors placed all over the organism, outside of the brain.

So I’m thinking that if we study the structure and design of these pain/discomfort sensors, we might be able to simulate the kind of intelligence that would be developed by a system that tries to ensure these sensors are not turned on, or are turned off as early as possible, when interacting with the environment and other organisms. I’m not sure if HTM can attempt to do this at this time, but I believe it might be worth it to see what happens…

I think that biological organisms have been selected by evolution to be stable towards survival. Despite external perturbations (from simple environmental variations to predators), an organism manages to adapt and continue surviving, at least during the standard duration of life for the species.

In order to adapt for survival, the organism has developed various receptors, and it so happens that the receptors we call “pain receptors” have been associated (through millions of years of evolution) with events that threaten survival. Therefore we can say that stability towards survival is equivalent to taking intelligent action that leads to a minimized state of pain.

I would also say that we don’t run away from “pain” because it’s unpleasant (that would be a circular definition); we run away from it because that’s simply how we are genetically encoded to behave. The feeling of pain is only imaginary and is technically not different from pleasure :slight_smile:

A higher cognitive version of pain could be fear. The intelligent organism has taken the next step and can predict well in advance that an event causing pain might occur (when seeing a lion); this specific prediction is fear, and it also invokes actions minimizing the state of fear (running away until you feel safe).
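That reading of fear can be phrased as a one-step lookahead: score each candidate action by the pain predicted to follow it, before any pain is actually felt. A toy sketch, with an invented distance-based pain model and action set:

```python
def expected_pain(distance_to_threat):
    # Invented pain model: the closer the threat, the more pain is expected.
    return max(0.0, 1.0 - distance_to_threat / 10.0)

def choose_action(distance):
    # "Fear" here is just the pain predicted to follow each action.
    outcomes = {"approach": distance - 1, "stay": distance, "flee": distance + 1}
    return min(outcomes, key=lambda a: expected_pain(outcomes[a]))

for d in (2, 8, 15):
    print(d, choose_action(d))
# Flees while any pain is expected; beyond 10 units every action predicts
# zero pain and the choice is an arbitrary tie.
```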

@dorinclisu I suppose we are both pointing in a similar direction. What I was trying to say, in essence, is that it might be very limiting to think the brain just chaotically develops intelligence on its own by processing sensory inputs. I’m starting to think of the brain as more of a manager of these pain (pleasure) receptors or detectors designed and specified by biology through evolution.

In fact, I believe the brain’s ability to understand what each receptor does dictates the growth and development of brain tissue. I would think the brain starts out reactive and grows slowly until it becomes predictive, but the objective function still never changes. That might be a possible explanation for the pruning of brain neurons and synapses during early development. It could mean that as a person grows, the brain becomes less reactive and more predictive; therefore most of the reactive cells are no longer required, as connections are now strengthened towards the prediction of activation of these receptors.


@oolabiyi
Oh, I think I see what you’re saying: that the brain evolved to minimize pain. Not necessarily just some internal mechanism which reduces pain, but that the internal mechanisms ultimately reduce pain.

I’m not sure everything ultimately serves the reduction of pain or any global reward level. Forming an intelligent understanding of the world usually helps maximize good things and minimize bad things, but sometimes we form intelligent understandings just because that’s what we do. Sometimes it’s completely useless for survival. Similarly, some reward mechanisms usually aid survival, but sometimes they have no impact or even a negative impact. For example, we crave sugar, which was useful before modern times but is generally detrimental to human survival in modern times. I don’t think there is any global optimization function except in evolution, which isn’t the same as the product of evolution. The product is more of a conglomerate of useful mechanisms which almost always aid survival, but only because they evolved that way, not because they are directly controlled by a global optimization function.

There’s no reason to evolve to extract just enough information for survival. We learn about everything we sense (except things which we already have learned about, which can get filtered out by attention). That’s not because every new thing is useful for survival. It’s just because every new thing could be useful. We can’t just learn a certain amount of information about each new stimulus, because there’s no way for us to evolve that ability without knowing everything about what is useful and what isn’t.

In response to your next post:
The brain as a whole might manage pain/pleasure, but that doesn’t mean every part of the brain does so. Much of the neocortex could be purely perceptive, especially certain layers. For example, layers 5 and 6 might use the pure perceptive processing from other layers to try to optimize pain/pleasure. It is probably still capable of understanding the world without reward-focused mechanisms. So intelligence can be reverse-engineered independently of behavior.

@dorinclisu
Not all receptors which are associated with events that threaten survival are pain receptors. For example, the hypothalamus can detect when blood sugar levels (and, if I recall correctly, calcium) get too low. Unless you call these various receptors pain receptors (even though they have drastically different effects), there isn’t any one thing intelligent action is attempting to minimize. There are many things it attempts to minimize and maximize.

Fear is not just a predictive mechanism. For example, there are neurons which specifically respond to snakes because it’s useful to automatically recognize a snake and avoid it. Proximity to a snake might predict pain, but there is no internal prediction. Just recognition and an automatic response.


@Casey
I think you just alluded to the concern that I had when I started this discussion:

“Much of the neocortex could be purely perceptive, especially certain layers. For example, layers 5 and 6 might use the pure perceptive processing from other layers to try to optimize pain/pleasure.”

I think HTM is based on this premise too: that the brain learns only online and is self-trained based on coincidences between sensory inputs. The only problem is that these coincidence detectors are also inside the brain. It’s like someone believing in their own delusion, which they can also choose to change at random. I think you will end up with just a memory but nothing close to intelligence.

On the other hand, pain detectors are outside of the brain and are pre-trained by evolution, i.e. their parameters do not depend on the incoming sensory data. We should think about where the real learning will take place here: with self-propagated truth, or with truth confirmed by an external teacher (which cannot be corrupted by self-propagated belief)?
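In machine-learning terms, the contrast drawn here is between a learner trained on its own self-generated labels (which can drift) and one trained against a fixed signal it cannot rewrite. A minimal sketch of the “fixed external teacher” case, with an invented hard-wired threshold standing in for evolution-tuned nociceptors:

```python
# The "external teacher": a fixed function whose parameters never change
# with experience (standing in for nociceptors tuned by evolution).
def nociceptor(x):
    return 1.0 if x > 5.0 else 0.0  # hard-wired pain threshold

# The learner fits itself to the teacher's verdicts; it can never relabel
# the data to match its own beliefs, so it cannot drift into "delusion".
threshold = 0.0  # the learner's current guess at where pain begins
for _ in range(50):
    for x in range(11):
        pred = 1.0 if x > threshold else 0.0
        # Nudge the guess whenever it disagrees with the fixed teacher.
        threshold += 0.1 * (pred - nociceptor(x))
print(f"learned threshold ≈ {threshold:.1f} (teacher is hard-wired at 5)")
```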

The link below shares some thoughts on why evolution has preserved pain through many generations of development. Learning is one of the key reasons.
http://www.socrethics.com/Folder2/Biology.htm

What I’m trying to say is that learning is inherently directed by pain detectors, since these detectors are not subject to the delusion of the brain. So I don’t think there’s a concept of pure perception. I believe the brain processes perception based on its own biases, which are pre-determined by evolution and, I’m thinking, are encoded in these external detectors.

Also, you mentioned: “It is probably still capable of understanding the world without reward-focused mechanisms. So intelligence can be reverse-engineered independently of behavior.”

This is exactly how it seems on the surface: that the brain is mostly trying to understand the world for the purpose of understanding it. However, it’s not difficult to see that the underlying motivation for intelligence is to avoid pain and attract pleasure. So, in trying to avoid and minimize pain, the brain learns as much as possible about the environment that might be inflicting pain on it now or in the future, and it directs the body of the organism to act to prevent the ongoing and the predicted future pain. It acquires the right memories, pays attention to the right things, maneuvers its body parts correctly, and develops good language to communicate, all in an effort to avoid and minimize pain. Unfortunately, while it tries to avoid and minimize one form of pain, it gets into another, and the vicious cycle continues…

Also, it seems the concept of pleasure exists only in the brain itself and therefore cannot help in learning. In fact, I think pleasure is just a negative magnitude of pain (conjured by the brain to control the body or implement/reinforce learning) and is not directly observable like pain. The concept of pain, on the other hand, is not defined by the brain but is well understood by it using other sensory modality information.


@oolabiyi
Isn’t this what reinforcement learning is trying to achieve? Use reward/punishment signals to modulate the intelligence towards a goal we choose.
I would say we’ve got to be careful about how we define pain for the machine, because if we define it in the literal sense, we might well see the machine evolve to avoid it, and pulling the plug would be the most “painful” thing. :smile:
But it’s also possible that if we don’t define it in the literal sense, it will never really become intelligent. Is this what you’re implying?
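For concreteness: in the standard reinforcement-learning framing, “pain” is nothing more than negative reward. A minimal tabular Q-learning sketch on an invented one-dimensional world, where the hazard layout and all constants are made up for illustration:

```python
import random

# Six positions; position 0 is a "hazard" that delivers pain on contact.
N, HAZARD = 6, 0
q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def step(s, a):
    s2 = min(N - 1, max(0, s + a))
    return s2, (-1.0 if s2 == HAZARD else 0.0)  # pain = negative reward

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(2000):
    s = random.randrange(N)
    for _ in range(10):
        if random.random() < eps:                  # explore
            a = random.choice((-1, 1))
        else:                                      # exploit
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2, r = step(s, a)
        # Standard Q-learning update toward reward + discounted future value.
        best_next = max(q[(s2, -1)], q[(s2, 1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# States near the hazard learn to move away from it (action +1).
print({s: max((-1, 1), key=lambda act: q[(s, act)]) for s in range(N)})
```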

I think I’m misunderstanding something. What do you mean by the delusion of the brain and self propagated belief?

Are you saying that intelligence is learned through evolution rather than experience? Isn’t it nearly impossible to evolve intelligence, because intelligence involves so many bits of knowledge/applications of that knowledge and requires knowledge of specific situations?

We might have different definitions of intelligence. I’m not really sure what intelligence is, so it might be best to talk about requirements for and components of intelligence instead of intelligence as a whole. I’m not exactly sure what these are.

HTM doesn’t assume the brain learns entirely without reward, nor even the neocortex. It just assumes that layers 3 and 4 do not require reward to function. I take back what I said (that intelligence can be reverse-engineered independently of behavior). Behavior is required to learn (at a reasonable rate) about things which don’t change on their own. But people can start to reverse-engineer intelligence without considering behavior yet.

Even though the brain’s ultimate goal is not to understand the world, not all mechanisms required for intelligence require reward mechanisms. The brain can make predictions without reward, for example. Acquiring the right memories is dependent on reward, but that doesn’t mean acquiring memories requires reward. Instead, the brain likely has underlying mechanisms to acquire memories, and it can decide whether or not to remember something. That decision depends on both reward and novelty. Since there are situations in which memories are formed not because of reward, reward likely has no central role in the mechanisms to form memories. A similar argument could be made for attention. You pay attention to behaviorally-relevant stimuli, but you also pay attention to new things even if it turns out that they are not behaviorally relevant.
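That “reward and novelty decide what gets stored” point can be written as a simple gate on memory writes. A toy sketch with invented thresholds; the claim is only that reward modulates storage without being part of the storage mechanism itself:

```python
memory, seen = [], {}

def maybe_store(event, reward, novelty_limit=1, reward_threshold=0.5):
    """Store an event if it is still novel OR the outcome was strongly good/bad."""
    is_novel = seen.get(event, 0) < novelty_limit
    if is_novel or abs(reward) >= reward_threshold:
        memory.append((event, reward))
    seen[event] = seen.get(event, 0) + 1

maybe_store("new place", reward=0.0)   # stored: novel
maybe_store("new place", reward=0.0)   # skipped: familiar, nothing at stake
maybe_store("new place", reward=-0.9)  # stored: familiar but painful
print(memory)
```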

Even though its attempts to minimize pain from a known source can lead to another source of pain, that doesn’t always happen. There is no vicious cycle, and avoiding pain only rarely leads to another source of pain because we generally know (instinctively or by learning from experience) that certain ways to avoid that pain will lead to more pain, and we select a good option. If we try an option which leads to pain, we’ll try something else, even if that new pain is less than the old source of pain.

I don’t understand your distinction between pain and pleasure. Both are in the brain and both can be triggered by receptors (like taste receptors for sweet things). Both are subjective and are modulated by the brain. If you feel sick, you won’t get as much pleasure from sweetness. If you don’t expect a shot to hurt, it won’t hurt as much.

On point 3, I beg to differ. The need to understand the world requires just too much analytical thinking for a baby. It’s rather the case that the baby brain tries to minimize pain and keep “stability of the system” (its body); then it notices over the next months and years that it can manipulate the world as a means to keep itself satisfied. Think of a baby starting to cry when the source of food and pain relief (the mom’s breast) moves away from its mouth. From there, having to understand the world is just one step away.

This kind of story for what is happening offers the path of least resistance as a theory. And simple theories are better.

Hello all, first-time poster here. I’ve thought a lot about this and think I can offer at least a partial explanation. The important distinction is the neocortex vs. what Numenta calls the ‘old brain’. Non-mammals have no neocortex but clearly show many of the behaviors you discuss: pain, fear, etc. I believe our ‘old brain’ can be thought of as our programming: seek food, avoid pain, keep breathing. The neocortex observes the same stimuli and the actions of the old brain and learns from them. Eventually it begins to assert control and issue its own motor commands (or maybe this happens right away, just poorly). But to what end?

I started thinking about this after seeing Ogma’s self driving car demos (just stumbled on this yesterday, absolutely fascinating; I know the team are HTM community members). The car is able to learn visual patterns and avoid obstacles after a learning period. But in inference mode, the motor is always running. What would happen if direction AND speed were controlled? Why would it go forward at all? It would need to want to (for lack of a better word), presumably driven by some risk/reward tradeoff to drive the course and not hit anything. In the context of old/new brain, I think the desire to drive the course would be ‘old brain’ programming. Put this way, pain (say, a signal indicating hitting the walls) would be perceived by both parts of the brain, but with a programmatic penalty in the old brain program. The cortex learns to associate this signal with this penalty and works to avoid it.
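One way to read that old-brain/new-brain split in code: the ‘old brain’ supplies a fixed forward drive and a fixed penalty on wall contact, while the ‘cortex’ learns which sensory readings predict that penalty and vetoes risky moves. A toy sketch with an invented 1-D corridor; this is not how Ogma’s demo actually works, just the shape of the idea:

```python
import random

# Invented 1-D corridor: wall at position 10; contact near it is "painful".
WALL, DRIVE, PAIN = 10.0, 1.0, -5.0

def proximity(pos):
    """Toy range sensor: 0.0 far from the wall, 1.0 at the wall."""
    return max(0.0, 1.0 - (WALL - pos) / WALL)

# Learning period: random exploration. The "cortex" records which sensor
# readings co-occurred with the fixed "old brain" pain penalty.
experience = []
for _ in range(200):
    pos = random.uniform(0.0, WALL)
    pain = PAIN if pos >= WALL - 0.5 else 0.0
    experience.append((proximity(pos), pain))

# Simplest possible association: the smallest reading ever paired with pain.
pain_onset = min((s for s, p in experience if p < 0.0), default=1.0)

# Inference: the fixed forward drive runs, but the learned association
# vetoes any step predicted to land inside the painful zone.
pos = 0.0
while proximity(pos + DRIVE) < pain_onset:
    pos += DRIVE
print(f"stopped at {pos:.1f}, short of the wall at {WALL:.0f}")
```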

Hope this makes sense. In case it’s not clear, I’m way more engineering than math or theory.

I think what you are trying to say correlates with the old brain and its function, which is essentially genetically hardwired. The neocortex doesn’t have autonomy over the entire conscious space and all your actions and motivations. I think behavior-specific actions of humans (especially those of babies, and especially avoiding pain and other survival-related things) pertain more to the hardwired regions of the brain than to the neocortex.
We can ignore those phenomena when we are talking about intelligence here. I don’t think the neocortex is trying to optimize any particular ‘function’.

While I agree that some portions of learning are actively controlled by genetically disposed factors (maybe reinforcement learning plays a role here), I don’t think that learning as a whole is inherently directed by anything, especially in the neocortex. It might be that neocortical inferences are biased by those factors, but I don’t think that a developed neocortex (or its inferences) is controlled or directed in any considerable way by pain detectors.

The underlying motivation behind the evolution of the neocortex is debatable, although I agree it’s directed by survival. Despite all this, I don’t think there is any reason for the neocortex itself to be biased by these conditions, since the old brain has considerable control over the overall behavior of a human.

Could you please elaborate on this? Isn’t the concept of pain only in the brain itself as well?
Aren’t there neurotransmitters and other chemicals associated specifically with pleasure? Pleasure is symbolically the opposite of pain, but I don’t think the analogy is similar to that of temperature, where cold is the absence of heat. I think a particular ratio of neural chemicals is important for normal perception and functioning of the brain.

My take is that the old brain structures are configured to work somewhat like a Hopfield or Boltzmann network with each of the nodes being one of the basic behavior states like feeding, fleeing, fighting, mating, seeking shelter, sleeping, drinking, exploring, grooming, nesting, whatever else you can think of.
The various body sensors are inputs to this, tipping the current state to whatever has the strongest input.
The sensory system has memory associated with it that registers the elements of the environment with straight Hebbian learning that takes some time to consolidate.
The memory system can have the learning rate modulated by various sensations such as “reward” or “punishment” which both increases the learning rate and has specific memory receptors to color the stored sensations as good or bad.
Future recognition of the same sensation keys into both recall of the specific sensation and the paired good/bad weighting.
A simple example that most people are very familiar with: you eat something and forget all about it after a short while. It turns out that it makes you violently ill and you vomit. After that event, seeing or smelling that food may make you queasy, and you have absolutely no desire to eat it. It’s as if the last thing you ate was held in a buffer somewhere, and when you experienced pain from the last meal, the learning rate and negative association were boosted in the memory consolidation.
This is mostly below the level of the cortex but the amygdala does exert an influence in modulating the learning rate in the prefrontal cortex and coloring the judgement of experience as good or bad.
This adds weight to various learned motor programs and influences the selection of the appropriate program for the situation at hand.
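Here is a toy sketch of that architecture: behavior states competing winner-take-all on weighted sensor input, plus a Hebbian-style memory whose learning rate is boosted by reward or punishment and which colors stored sensations good or bad. All states, weights, and signals are invented for illustration:

```python
# Fixed sensor-to-drive wiring ("old brain"); all values invented.
STATES = ("feeding", "fleeing", "sleeping")
WEIGHTS = {
    "feeding":  {"hunger": 1.0,  "threat": -0.5, "fatigue": 0.0},
    "fleeing":  {"hunger": 0.0,  "threat": 2.0,  "fatigue": -0.2},
    "sleeping": {"hunger": -0.3, "threat": -1.0, "fatigue": 1.5},
}

def current_behavior(sensors):
    """Winner-take-all: the behavior state with the strongest input wins."""
    return max(STATES, key=lambda s: sum(WEIGHTS[s][k] * v
                                         for k, v in sensors.items()))

# Hebbian-style memory: stimulus -> (familiarity, good/bad valence).
memory = {}

def learn(stimulus, reward):
    # Learning rate is boosted when the outcome is strongly good or bad.
    rate = 0.1 * (1.0 + 5.0 * abs(reward))
    fam, val = memory.get(stimulus, (0.0, 0.0))
    memory[stimulus] = (fam + rate, val + rate * reward)

# The food-poisoning example: a bland meal barely registers, while a
# sickening one is written in strongly, with negative valence, afterwards.
learn("bread", reward=0.0)
learn("bad oyster", reward=-1.0)
print(memory)
print(current_behavior({"hunger": 0.8, "threat": 0.1, "fatigue": 0.2}))
```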
In the Rita Carter book “Mapping the Mind”, chapter four starts out with Elliot, a man who was unable to feel emotion because the corresponding emotional-response areas were inactivated by a tumor removal. Without this emotional coloring he was unable to judge anything as good or bad, and he was unable to select the actions appropriate to the situation. He was otherwise of normal intelligence. [1]


This evaluation and flavoring of your perception is done by subcortical structures below the level of consciousness.

[1] Mapping the Mind - Rita Carter
