How does the brain sense “intensity” if neurons are binary?

I’ve been watching some neuroscience videos on YouTube, and I just finished the chapter on the eye. The videos get into the different parts of the eye, the types of neurons in the eye, the rods/cones, the red/green/blue cones, etc.
But one thing they didn’t explain is how the intensity of light is communicated to the brain. Since real neurons (unlike the point neurons used in deep learning) either fire or don’t fire, rather than sending analog values to each other, I don’t see how a single cone or rod can signal anything to the brain other than which color is being activated.
So here’s my theory, and please tell me if I’m wrong or right: do individual cones/rods have different reaction rates to different intensities of light? For instance, say I have three red cones I label “a”, “b”, and “c”. Cone “a” will only fire at very low levels of red light, cone “b” will fire at mid-range levels of red light, and cone “c” will fire at high intensities of light.
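The theory above can be sketched as a toy threshold code. The cone names and threshold values here are made up for illustration, not physiological measurements:

```python
# Toy model: three hypothetical red cones, each firing only above its
# own intensity threshold. Thresholds are illustrative, not measured.
THRESHOLDS = {"a": 0.1, "b": 0.4, "c": 0.8}  # low, mid, high

def active_cones(intensity):
    """Return which cones fire at a given light intensity (0..1)."""
    return [name for name, t in sorted(THRESHOLDS.items()) if intensity >= t]

print(active_cones(0.2))  # dim light    -> ['a']
print(active_cones(0.9))  # bright light -> ['a', 'b', 'c']
```

The brain could then read intensity off the *population*: how many of the cones are firing, even though each one is individually all-or-nothing.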

YES, James: real neurons DO send ANALOG values to each other, i.e., the “timing gaps” between action potentials. It remains to be shown that ANY digital (i.e., discrete) values are importantly transmitted over centimeter distances, at the low level of cell-to-tissue microanatomy.
It seems simply illogical to translate the fact of fire-or-don’t-fire physiology into the notion that “the brain is digital.”
At least as long ago as 1969, in as public a forum as Scientific American, the notion has been current that the key medium for “knowledge transmission” in a nervous system is PHASE.
If we use phase (rather than a “digital” metaphor) as the essence of inter-nervous-system influence, then the “digital nature” of axonal firing seems adequately explainable in such terms as: “the precision of the analog (continuous-valued) phase signal is best preserved by punctuating these continuous-valued signals with very quick transitions.”
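The “timing gaps” idea can be sketched as a toy inter-spike-interval code: the spikes themselves are identical binary events, and the analog value rides on the continuous time between them. The numbers (a 10–100 ms interval range) are assumptions for illustration, not physiology:

```python
def encode_isi(value, t0=0.0, base=0.010, span=0.090):
    """Encode an analog value (0..1) as two spike times: the gap
    between spikes varies continuously from base to base+span seconds.
    Stronger signal -> shorter gap (illustrative convention)."""
    gap = base + (1.0 - value) * span
    return [t0, t0 + gap]

def decode_isi(spikes, base=0.010, span=0.090):
    """Recover the analog value from the inter-spike interval."""
    gap = spikes[1] - spikes[0]
    return 1.0 - (gap - base) / span

spikes = encode_isi(0.7)
print(round(decode_isi(spikes), 3))  # -> 0.7
```

The point of the sketch: every spike is all-or-nothing, yet the channel carries a continuous value, and the sharp (fast-transition) spike edges are exactly what makes the timing, and hence the analog value, precise.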
I am not the first to point out the illogic required by the pop-neurophysiological notion of a “digital brain.” I seem to recall William Powers, around 1970, devoting several chapters of “Behavior: The Control of Perception” to this analysis.


Within the HTM process, the changing of weights is a rough proxy for temporal timing. I’m not sure whether this update process correctly maps both the LTP and LTD equivalents in biology, because the current model is very much time-stepped, so some temporal resolution and subtlety is always lost.

For larger back-propagated parallel networks, changing the weights en masse in parallel (whether they would have been LTP- or LTD-signalled in a biological equivalent) is not biologically plausible and creates a distorted network that is inherently slower to learn. The weights are again a proxy for time, but they are all changed a little at a time because the approach does not track which specific weights need changing (it does not track time).

Sensory input may also vary the burst timing as another analog equivalent (firing at, say, 40 Hz for one strength and 60 Hz for another). The sensory input may also create 2-pulse or 4-pulse bursts as another proxy for relative analog values.
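As a toy illustration of the burst idea above (the 0.5 cut-off and the specific rate/pulse values are made up, taken only from the numbers mentioned in the post):

```python
def burst_for_strength(strength):
    """Map an input strength (0..1) to a (rate_hz, pulses_per_burst)
    pair, mimicking the 40 Hz / 60 Hz and 2-pulse / 4-pulse idea.
    Cut-off and values are illustrative assumptions only."""
    rate_hz = 40 if strength < 0.5 else 60
    pulses = 2 if strength < 0.5 else 4
    return rate_hz, pulses

print(burst_for_strength(0.3))  # -> (40, 2)
print(burst_for_strength(0.8))  # -> (60, 4)
```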

You also have to consider that what you “perceive” internally is just an illusion created within the brain from sensory input, one that at times is nowhere near the reality of the senses (i.e., we only “see” changes, not a static image). This is where you have to change your perspective from the world we “see” to a strange one of only detecting and dealing with differences.

I believe that the hippocampus is just a very smart phase engine, working on the short-term memory buffer to re-align (all) our sensory inputs, something along the lines of this (missing lots of lines…):

Every link should then have a relative time (or a weight as a proxy) to correctly track both relative signal timing (the LTD/LTP window) AND strength (burst characteristics). This, I believe, should then carry the correct analog equivalent.

A 2017 paper says little is known about the neural representation of light intensity, so I guess there’s no exact answer.

It doesn’t necessarily need to be represented in a single instant of time, like an SDR. It could be a firing-rate code, for example. Neurons are pretty fast; e.g., a window of 1/20th of a second is a common duration. It could also be different neurons having different intensity thresholds; I don’t know. It’s probably super complicated.
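A firing-rate code can be sketched by counting spikes in a short window. This is a deterministic toy (evenly spaced spikes, a 50 ms window per the 1/20th-of-a-second figure above), not a model of real spike statistics:

```python
def spike_train(rate_hz, window_s=0.05):
    """Evenly spaced spike times for a given firing rate within a
    short window (the 1/20th of a second mentioned above)."""
    n_spikes = int(rate_hz * window_s)  # spikes that fit in the window
    return [i / rate_hz for i in range(n_spikes)]

def estimate_rate(spikes, window_s=0.05):
    """Decode the analog value back by counting spikes in the window."""
    return len(spikes) / window_s

print(len(spike_train(100)))  # 5 spikes in 50 ms
print(len(spike_train(40)))   # 2 spikes in 50 ms
```

Here, again, each spike is binary, but the count per window is a graded quantity that downstream neurons could integrate.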

Animal brains create models of reality. What you perceive is a time-delayed view of a model, with gaps filled in as you pay attention to them. Introspection is a very poor guide to what’s really going on.

That’s one perspective, but it’s a direct contradiction of what seems to be the main thesis of HTM: that we see some sort of match between expectation and the input. Both have some validity, but they are mutually exclusive: change is what you don’t expect. I am guessing that change and confirmation are formed as alternative representations of the input, where salience is the degree of change | confirmation. Change would create a new representation, mapping to some previously blank / insignificant cortical patch, while confirmation reinforces an existing patch. The patch would be some sort of column; I am not sure whether it’s mini or macro at this point.


Yes and no. For simplicity, I’m referencing the visual cortex from the perspective of saccades, which create a short burst of activity during the settling after the move (flashing the whole frame), and peripheral vision triggered by motion (predator or prey detection). The straw-eye view of the world.

HTM is a cortex model, and the persistence of new/current sensory input does not really occur there (as that would imply that a full synapse is formed in a single LTP event); it’s the short window of repeated hippocampal replay (the short-term window of “now”) which to me is where the interesting persistence is. At this point, think of the way we hold words or transitory senses in that buffer so we can understand them (fit them into a new pattern or recall an existing pattern match) once we have enough of a sequence. It’s a blend of existing cortex pattern recognition/residue with an ongoing sensory stream.

Intensity is where I believe the current HTM code does not differentiate correctly, merging two neuronal characteristics into one weight value: “relative signal timing (the LTD/LTP window) AND strength (burst characteristics).” On this point I’d be happy to contradict the current HTM code, because while the burst characteristics can stay the same for an input, the relative timing changes, even if it’s just from considering different dendritic path signal latencies.

And with the brilliant efficiency of the way the brain works, if you only ever deal with changes (be they new or existing known/expected), then your efficiency is much better and you will survive winter starvation through conservation/minimisation of energy expenditure. First you have to check all the incoming sensory changes. The bulk of current modelling tends to take an everything-all-the-time approach, with the resulting massive compute requirement of repeatedly checking everything all the time, which is another biological implausibility and just inefficient.

In my previous experiments with language (a pre-labelled sensory stream; other senses just don’t have a label attached and are no different…), I misunderstood what was coming out of it. What I hope is becoming clearer is that the split is one of which words phase-shift and which ones don’t. I believe that this is what the hippocampus learns: phase shift.

The list on the left is candidate splits of unconfirmed type, the center is phase-shifting words, and the right is just cortex-known columns/hard senses/concepts.

It might just be pure statistics, or not, but that type of split can be seen within one book of input, which to me seems closer to biological efficiency.


How can you hear a noise from my piano if all I can do with it is press individual keys or not? Joke aside: rate coding.

I don’t think it’s the hippocampus; HM didn’t seem to have any language problems. STM is probably in whichever cortical area is currently spotlighted by the TRN in the thalamus. The hippocampus is probably something like medium-term memory: from several minutes to day-long.


The conversations with HM would be the type of conversations you have with the likes of GPT, where they are (cortex) pattern-type replays of existing knowledge: existing patterns and knowledge learnt by the age of 27 in HM’s case. You don’t need to learn anything new to hold a conversation if you’re just talking about what you already know.

If STM were just thalamic spotlighting (attention) of the cortex, then sequencing persistence would be required for short-term memory, which would imply either a split of LTP processes in the cortex or that the thalamus learns sequencing (it biologically lacks adequate resolution).

HM shows that there is more than one type of memory. He was able to learn new non-declarative skills and was amazed that he knew these things, as he had no memory of learning them.

I thought the decerebrate walking cat was the clearest example of non-declarative memory, which did not need a cortex at all, only a spinal cord. Memory without HTM at all…

For HM: “These results indicate that acquisition and retention of a visuomotor skill rely on substrates beyond the MTL region.” (Corkin 2002; see also Corkin, S. Acquisition of motor skill after bilateral medial temporal-lobe excision. Neuropsychologia 6, 225–264 (1968).)

“H.M.’s anterograde amnesia manifests as deficient acquisition of episodic knowledge (memory for events that have a specific spatial and temporal context) and of semantic knowledge (general knowledge about the world, including new word meanings).” (Corkin 2002)

Spinal function: this is an example of a third type of “memory,” in this case, genetic programming.

Is that ‘memory’? Yes, there are certainly algorithms tuned over millions of years to benefit survival, but why call it ‘memory’?

If so, then every cell in the body has that kind of ‘memory’.

The spine and brain stem are full of “memory” that has been created and refined through evolution.
Example: the amygdala recognizes snakes even though you may never have seen one before.

I think of it as firmware.
