This is a simple question about color perception; maybe HTM theory faces the same question.
We use three kinds of photoreceptors (RGB) to receive color stimuli, but how do higher-level layers distinguish two pure color stimuli?
For example, suppose a green input and a red input have the same strength and fire at the same frequency, and both are pure colors (meaning they do not need to combine two or more photoreceptor inputs). How do we then know which is red and which is green?
I think Spatial Pooling faces the same issue.
I think you may be talking about the abstract concept of things being different colors, or color as an attribute of an object. I don’t think the high-level pattern is going to look anything like the low-level patterns that actually identify different colors in the field of view. The color red as a concept would look much different than the SDRs representing reds as they unfold closer to the sensory input.
The short answer* is that a particular input line always represents just one color. The identity of the input carries the information about which color it is, and the firing rate says something about the intensity. (* "short" here means wrong, but it's not a terrible approximation.)
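To make that concrete, here is a minimal sketch of the "labeled line" idea: which wire is active tells you the color, and the rate only tells you the intensity. The line names are illustrative, not anything from real anatomy or NuPIC.

```python
# Hypothetical sketch: the identity of the active input line carries the
# color label; the firing rate carries only intensity.

def decode(spikes):
    """spikes: dict mapping input-line name -> firing rate (Hz).
    Returns only the active lines; their *names* identify the color."""
    return {line: rate for line, rate in spikes.items() if rate > 0}

# Two pure stimuli with identical strength and frequency are still
# distinguishable, because they arrive on different lines.
red_stimulus   = {"L_cone": 40.0, "M_cone": 0.0, "S_cone": 0.0}
green_stimulus = {"L_cone": 0.0, "M_cone": 40.0, "S_cone": 0.0}

print(decode(red_stimulus))    # only L_cone is active
print(decode(green_stimulus))  # only M_cone is active
```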
A much better answer is in a book by David Hubel called Eye, Brain, and Vision (text available online!). Chapter 8 is all about color but I do recommend going through the chapters in order if you’re interested enough in vision to want to understand how colors are coded in the brain.
Thanks so much for the reference.
"Which input it is carries the information of which color it is" — this is the key point, but who knows which input carries which color information? Suppose you are a neuron in a high-level layer with two inputs connected to you: the first is a red detector (it fires when red arrives), the second is a green detector. Now the first one fires. How do you know that means red was seen? You don't know that the first input is a red detector; all you know is that you have two inputs and that one of them fired.
I think this is a more general problem. As we know, SDRs representing different concepts are very different, so a high-level layer can easily distinguish them. But this breaks down when the input is very simple, like the two base colors (red and green): in the retina, the LGN, and even V1, only two kinds of cells fire for them respectively. E.g., in V1 there is one kind of neuron that fires when red fills its receptive field and another kind that fires when green fills its receptive field. So a higher visual area just sees some neurons fire when we view a big red field (we can simplify the environment to a single screen in front of our eyes, showing either red or green). The question then becomes how we distinguish red from green, unless the brain encodes different colors in very different ways. For example, one way to distinguish colors would be to encode the wavelength of light as a firing rate, but the brain does not seem to compare two colors that way. Does the reference have the answer to this?
To @rhyolight again: how does an HTM network represent different colors in sensory input? Just RGB values? That would be a different way from how human beings do it.
There are still people working on this in nupic.vision, but I don't know much about it. Older versions of HTM did video processing, but that work was abandoned in favor of the newer, more biologically constrained theory implemented today.
I don’t remember the book specifically addressing your underlying question, but it’s well worth reading (and will clarify some points on how color processing works in primate brains). I think the answer to your deeper question is that “red” is the label we learn to assign to particular patterns of input. There’s no intrinsic red-ness to red-on/green-off cells, you can’t tell them apart from green-on/red-off cells until you start showing different colors and examining the resulting activity. Over months of early experience you learn that these patterns are present when the Big People say “red”, and also when the round crunchy sweet food is in front of you, and gradually you weave your own personal fabric of associations around that specific pattern of input.
“red” is the label we learn to assign to particular patterns of input
Yes, that’s right! The key is particular patterns.
What is the difference between the red pattern and the green pattern?
In physics, the only difference is that they have different wavelengths.
Different photoreceptors have different pigments and thus respond to different wavelength ranges. Most of us have three varieties of cones and so three different color signals that project to the next stage. So far as anyone can tell, the arrangement of projections is random and so it’s a learning problem (the first learning problem, really) to turn the input into behaviorally relevant information.
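A toy sketch of that last point (my own illustration, not from the thread): the wiring from cones to the next stage is a random permutation, so a downstream area must *learn* which wire means which color by correlating wire activity with experience. All names here are made up.

```python
import random

# The wiring from cone types to downstream "wires" is random but fixed,
# like a random permutation settled during development.
random.seed(0)
cones = ["L", "M", "S"]
wires = ["wire_a", "wire_b", "wire_c"]
wiring = dict(zip(cones, random.sample(wires, 3)))

# Experience: stimuli paired with labels ("the Big People say 'red'").
experience = [("L", "red"), ("M", "green"), ("S", "blue")] * 5

# Learning: associate whichever wire was active with the co-occurring label.
learned = {}
for cone, label in experience:
    learned[wiring[cone]] = label

# After learning, the initially arbitrary wiring carries behaviorally
# relevant information: each wire now has a color label.
print(learned)
```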
Sorry, I didn't understand this sentence. Does it mean that, in biology, different colors get different random representation values?
There is a theory about this issue called labeled line theory.
Labeled line theory: different qualities of a percept, such as hot versus cold or sweet versus salty, are encoded by separate groups of cells. The information within a series of cells transmitting information to the brain remains within separate channels, or lines. @karchie
The cells in the retina (which is itself part of the brain) are arranged in layers, but very differently to the neocortex. On the outside, photoreceptors change their activity in response to light, and a complex circuit of cells in the middle layers use integration and inhibition to feed signals to the output retinal ganglion cells on the inside of the retina. Some of these cells encode the color of the image, but only a small number of these encode the intensity of red, green and blue (as happens in a digital camera). Most of them rather encode the opposition of two colours (red vs green, blue vs yellow). As @dfx mentions, we have cells which each respond to one direction of these axes (as well as the ON-off, OFF-on axis rod-fed cells), so there are four types: R-g, G-r, B-y, Y-b.
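A rough sketch of that opponent coding (an assumed simplification, not a physiological model): most retinal ganglion cells signal red-vs-green and blue-vs-yellow *differences* rather than raw R, G, B intensities, with one cell type per direction of each axis.

```python
# Simplified colour-opponent coding from cone activations.

def opponent_channels(L, M, S):
    """L, M, S: cone activations in [0, 1]. Returns rectified opponent signals."""
    rg = L - M                 # red-vs-green axis
    by = S - (L + M) / 2.0     # blue-vs-yellow axis
    # Separate cell types carry each direction of each axis (R-g, G-r,
    # B-y, Y-b), so opposition is signalled by which cell is active,
    # not by negative firing rates.
    return {
        "R-g": max(rg, 0.0),
        "G-r": max(-rg, 0.0),
        "B-y": max(by, 0.0),
        "Y-b": max(-by, 0.0),
    }

print(opponent_channels(L=0.9, M=0.1, S=0.1))  # red field: R-g cell active
print(opponent_channels(L=0.1, M=0.9, S=0.1))  # green field: G-r cell active
```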
Perhaps the best coverage of recent discoveries in retinal structure and function and colour vision is in talks by Christof Koch.
The individual projections of RGCs are indeed random, but starting in early development, "training programs" of spreading activation are run in the fetal retina. These send patterned signals up through the visual pathways and so help organise the specialisation of groups of cells from V1 upwards; the organisation in cortex therefore looks very organised (mosaic-like) and not random (see around 1 hr into the second talk).
Wow, thanks a lot, that's a clear description of the basic structure. But I still have some questions.
How does it encode a color? Does this mechanism put any special information into the color encoding? Is that information the same for the same color and different for different colors?
@fergalbyrne good point on the spatial arrangement: early learning is a self-organizing process that, starting with random arrangement, produces still-random but statistically consistent spatial structure in the projections (e.g., center-surround in retina/thalamus, blobs in V1). Ken Miller did really pretty theory on this 20 years ago for ocular dominance and orientation columns.
Well, the notion of “value” is a little suspect here. There are photoreceptors, each kind has its own color sensitivity (there’s the start of your labeled lines), and then where those lines go is random but shaped by a learning process (as @fergalbyrne mentioned). Over time the brain learns to make sense of these inputs.
@dfx In general, a neuron "encodes something" when its firing correlates with the presence of that thing. Depending on what you're encoding, how you're measuring, and over what timescale, this might involve its rate of firing, its probability of firing, or simply whether it's firing at all. Different sensory neurons have evolved to change these in different ways; for example, rod photoreceptors lower their rate of firing as the number of photons increases, while some force-sensitive skin receptors raise theirs as the force increases.
To simplify in HTM we just say that a neuron is a 1 (firing) or a 0 (not firing) in a given (artificial) timestep, indicating the binary presence or absence of a feature. This models the kind of non-cortical preprocessing which prepares sensory signals for the neocortex.
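A minimal sketch of that binary simplification (the channel names and encoding scheme are illustrative, not NuPIC's actual encoder API):

```python
# Each sensory feature becomes a 1 (firing) or 0 (not firing) per timestep.

def encode_color(color, channels=("red", "green", "blue")):
    """Return a binary vector with a 1 on the channel matching `color`."""
    return [1 if c == color else 0 for c in channels]

assert encode_color("red")   == [1, 0, 0]
assert encode_color("green") == [0, 1, 0]
# Both vectors have the same "strength" (a single active bit); it is the
# *position* of the bit that downstream layers learn to treat as the label.
```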
In the case of “colour-encoding” neurons, they usually produce spikes for the onset of a particular “colour”, and spike faster in proportion to its rate of intensity increase (or vice versa for offset-sensitive neurons). By “colour” I mean the broad spectrum response of the corresponding cone receptor.
It’s a little complicated. Neural coding tends to be more about contrasts and edges (center-surround and on/off response types, to start) and temporal changes than a direct representation of the external world. So a given cell increasing its firing rate might mean “this part of the visual field is now more red and/or less green than it was a little while ago.” The same cell will decrease its rate if that same place becomes more green/less red, or if nothing changes.
@karchie is correct. For reasons of efficiency, most sensory information is in the form of rates of change (spatial/temporal), or differences. These signals are integrated higher up to produce more persistent representations. See https://www.youtube.com/watch?v=QxJ-RTbpNXw for a camera based on this idea.
I am moving this topic into the new Neuroscience subcategory, since it doesn’t really have anything to do with SP, but is more about the neuroscience.
Yes, firing rate could represent stronger or weaker, lighter or darker. But when the same firing rate occurs for different kinds of sensory input, how do you know which input comes from which type of receptor?
For example, in HTM, when two neurons are both 1, how do you know which neuron represents a red color, a skin press, or a sound input?
In other words, without labeled information, how does a robot brain know whether a sensory input comes from its ears, its eyes, or just lower-level neurons?