New "Color" OLO and Qualias

Video explanation of OLO: https://www.youtube.com/watch?v=REozMd48QVA

Theoretically speaking, if we could implant a fourth type of cone cell that detects ultraviolet light, what is the minimum required integration into the brain that could yield, or at least help us learn, the new qualia of an ultraviolet color? Functionally speaking, is it enough for a new qualia to emerge just by implanting the new cone cells along with the nerve fibers connecting them to the parts of the brain that directly process the signals from the other three cone types (maybe implanting some extra neurons at only that part of the brain, too)?

1 Like

In fact, this already happens. Tetrachromats can distinguish four primary colors (not necessarily ultraviolet, but I'd have to check the research on that), and they have developed the qualia for it. To them, the rest of us are color-blind!

I thought about this long ago and developed a conjecture that qualia emerge from the differentiation of differentiated sensory input. In short, the more "dimensions" there are to the sensory input, the more our brains will wire themselves to functionally distinguish among them.
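As a toy numerical illustration of the "more dimensions" idea (the sensitivity curves and spectra below are made up, nothing physiological), here is a sketch showing two light spectra that are metamers for a three-cone observer but become distinguishable once a hypothetical fourth channel is added:

```python
import numpy as np

# Toy illustration (made-up sensitivity curves, not real physiology):
# a cone's response is modeled as the dot product of a light spectrum
# with that cone's sensitivity curve. Two spectra with identical 3-cone
# responses ("metamers") can still differ in a 4th channel.
rng = np.random.default_rng(0)
n_bins = 40                               # coarse wavelength bins
S3 = rng.random((3, n_bins))              # fictional L/M/S sensitivities
S4 = np.vstack([S3, rng.random(n_bins)])  # add a hypothetical 4th cone

spectrum_a = rng.random(n_bins)
# Perturb spectrum_a along a direction invisible to the 3-cone system
# (a null-space direction of S3), producing a trichromatic metamer.
null_dir = np.linalg.svd(S3)[2][3]
spectrum_b = spectrum_a + 0.5 * null_dir

print(np.allclose(S3 @ spectrum_a, S3 @ spectrum_b))  # True: 3 cones can't tell them apart
print(np.allclose(S4 @ spectrum_a, S4 @ spectrum_b))  # False: the 4th channel separates them
```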

Not being a practising neuroscientist, I have not had the chance to do experiments to falsify my conjectures on this. But basically, our qualia are not "hard-wired" per se; they are self-emergent by way of sensory differentiation.

There may already be research on this. I cannot be the only one to have thought of it!!

1 Like

Fun fact: we can see other colors, but the lens at the front of the eye filters them out. I have read in war-history books that people who had undergone cataract surgery were used as seaside spotters, as they could see the "invisible" signaling lights that were expected to be used to communicate with domestic spies.

2 Likes

And I also wonder if today’s display technology is ruining our perception of colors, because we don’t see the entire gamut through these displays.

I also heard that astronauts who see the Earth from orbit see more colors than normal:

(From Claude)
From orbit, astronauts see Earth in a way that’s profoundly different from our everyday experience. Their perspective, often called the “Overview Effect,” frequently transforms their relationship with our planet.

The view includes several striking features:

  • A thin blue line of atmosphere hugging the curve of Earth, shockingly fragile compared to the planet itself
  • Vibrant colors that shift as they pass over different terrains – deep blues of oceans, swirling cloud patterns, browns and greens of landmasses
  • No visible national borders, just one connected planetary system
  • The stark contrast between illuminated day side and darkened night side, with human settlements visible as webs of light in darkness
  • The speed of orbit (about 17,500 mph) means witnessing multiple sunrises and sunsets daily

Many astronauts report this view creates a profound cognitive shift. They describe a heightened awareness of Earth’s unity, fragility, and preciousness. Astronaut Edgar Mitchell called it “an overwhelming sense of universal connectedness.”

The perspective has inspired numerous astronauts to become environmental advocates upon returning to Earth, having witnessed firsthand how interconnected and delicate our planetary systems truly are.

1 Like

To play devil's advocate, it could just be the case that many people have the necessary cone cells to detect other colors but can't actually register the color in their consciousness. It could even be something wacky, like it's only registered in their subconscious but not in their consciousness. I'm not sure if such a thing is possible.

1 Like

is there a brain condition either from birth or due to brain injury where some sensory perception is registered in the subconscious but not in consciousness?

ChatGPT said:

Yes, there are several neurological conditions in which sensory information is processed subconsciously but does not reach conscious awareness. These phenomena often reflect damage to specific brain areas involved in awareness, while leaving primary sensory pathways intact. Here are some notable examples:


1. Blindsight

  • Cause: Damage to the primary visual cortex (V1), usually in the occipital lobe.
  • Effect: A person is cortically blind in part of their visual field and reports not seeing anything there.
  • Paradox: Despite this, they can still respond to visual stimuli (e.g., detect motion or direction) in that blind area without being consciously aware of seeing it.
  • Explanation: Visual information bypasses V1 and is processed by subcortical structures like the superior colliculus, allowing unconscious visual perception.

2. Visual Neglect (Hemineglect)

  • Cause: Damage to the right parietal lobe, affecting spatial attention.
  • Effect: Patients ignore the left side of space (not due to blindness), failing to report or react to stimuli there.
  • Subconscious processing: They may still respond to stimuli on the neglected side in indirect ways (e.g., improved performance in forced-choice tasks), showing that the brain registers the input subconsciously.

3. Prosopagnosia (“Face Blindness”) with Emotional Response

  • Cause: Damage to the fusiform face area in the temporal lobe.
  • Effect: Inability to recognize faces consciously.
  • Paradox: Some patients still show emotional responses (e.g., skin conductance) when seeing familiar faces, even though they don’t consciously recognize them.
  • Conclusion: Face identity is processed at an emotional level without conscious awareness.

4. Subliminal Perception

  • Not always pathological: Can occur in healthy people.
  • Effect: Stimuli presented below the threshold of conscious awareness can still influence behavior, decision-making, or preferences.
  • Example: A flashed word or image can affect choices even if the person doesn’t report seeing it.

5. Split-Brain Syndrome

  • Cause: Severing the corpus callosum (connects the two hemispheres).
  • Effect: Sensory information presented to the right hemisphere (left visual field) may not reach verbal awareness (left hemisphere), but can still guide behavior (e.g., pointing).
  • Example: The person can’t say what they saw, but can draw or choose the correct object.

These conditions show that perception and awareness are distinct processes — the brain can detect and respond to sensory input even when you’re not consciously aware of it.

1 Like

The fact that a qualia can go unregistered in our consciousness while still affecting our subconscious processes could mean that integrating a new type of cone cell into the brain may not be as straightforward as just sprinkling new brain cells into the one part of the brain connected to the optic nerve fibers bridging the cone cells to the brain. There's also the fact that qualia for visual perception are very different from the qualia from our skin, such as temperature, itchiness, pain, and pressure, so each qualia follows a different brain pathway. I probably butchered the terminology here, but any thoughts on how we can functionally create a new color-based qualia?

1 Like

Meh, by definition that's not qualia. Perception registered somewhere at a subconscious level qualifies as qualia only when we become aware of its presence.

I've seen a clip discussing a condition in which people who were born blind and had their vision restored later (at age six or older) aren't able to see depth, i.e. perceive space as three-dimensional. It was thought that they are unable to have a "depth qualia" until the case of a woman who discovered it in her 40s. She described that moment: while she was driving, she was surprised by the sudden discovery that the steering wheel was "floating" in front of her.

It was not a big enough impairment to prevent her from driving.

My hunch (or simplification?) on this topic is that qualia is any unit of information that can be highlighted (or pointed at, grasped) in conscious awareness.
A peculiarity that's difficult to notice about any qualia is that in order for any-thing (an idea, thought, or sensation) to be observable, it has to comply with certain conditions.
Figuring out these conditions is a keystone to understand ourselves.

Some of these might be formatting restrictions, meaning that conscious awareness accepts only information presented in a specific format (structure, shape). An analogy in AI/ML space would be representing information either as dense vectors of arbitrary size or as fixed-size, fixed-sparsity SDRs.
The second is far more restrictive, and that's the only type of information that would qualify as graspable. The problem the mind's pre-conscious processing has to solve is how to convert the whole flow of information into coherent qualia to pass up to conscious awareness.
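To make the analogy concrete, here is a minimal sketch (plain NumPy, arbitrary sizes and sparsity, not tied to any particular HTM implementation) contrasting an unconstrained dense vector with a fixed-size, fixed-sparsity SDR, where bit overlap gives one simple notion of similarity:

```python
import numpy as np

rng = np.random.default_rng(42)

# A dense representation: any size, every value carries meaning.
dense = rng.normal(size=int(rng.integers(50, 500)))   # "arbitrary size" dense vector

# An SDR-style representation: fixed width, fixed number of active bits.
N, W = 2048, 40                                        # ~2% sparsity

def random_sdr(w=W, n=N):
    """Return the set of indices of the w active bits out of n."""
    return set(rng.choice(n, size=w, replace=False).tolist())

a, b = random_sdr(), random_sdr()

# Because the format is fixed, similarity has one simple meaning:
# the overlap (number of shared active bits).
print(len(a & b))                                # near 0 for unrelated random SDRs

noisy_a = set(list(a)[:30]) | random_sdr(w=10)   # a corrupted variant of `a`
print(len(a & noisy_a))                          # large overlap: still recognizably "a"
```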

People who don't perceive depth, or can't distinguish individual notes in a harmony, simply haven't solved the problem of extracting the respective qualia.

3 Likes

I don’t think that the tri-colored system is hard-wired into the brain, so if there is a 4th color receptor cone that is fully functional, the brain will make use of that as well.

Also keep in mind that the rods and cones are not laid out in a regular manner, nor are the optic nerve bundles spatially ordered. I think a similar differential differentiation takes place to create what we perceive as flat, ordered scenes. Well, two of them, combined through stereopsis.

We, of course, take this all for granted, but it’s beyond remarkable that it works so well and reliably most of the time.

2 Likes

FYI, this thread was made out of curiosity for qualia and to do some mental exercise with the objective of trying to understand qualia better from a purely functional sense. Feel free to add anything or ask. Just make sure it sticks more on the functional side of things instead of being heavy on the philosophical and spiritual side of it.

@cezar_t @flajann2 Yes, you’re right. There are many things that I agree with in both your responses.

Functionally speaking, I want the new qualia to be in our consciousness, so that we become aware of it, can attend to it, can alter it through actions and the accompanying environmental feedback, and can do some representational learning with it.

Your example about a depth qualia is interesting. I'm not sure if that's a qualia in itself, a partial qualia, or just something learned from pre-existing qualia-enabling neural circuitry refined by eons of evolution; it could be more an example of neuroplasticity instead. My working definition of qualia is this: if someone were born blind due to minor optical defects that are later corrected through surgery, the person can still experience the qualia right after the surgical site heals, without having to learn anything (even if their experience is less complex than normal people's, e.g. lacking depth perception). That person can feel the qualia of red right away after healing. He doesn't need to know, learn, or understand what he "sees" represents. It's just there.

However, another interesting question comes up: what will that person see? Will he see a round ball as perfectly round, or something distorted, or...? Is there such a thing as a shape qualia, as in the case of the learned depth qualia?

I guess to clarify, I'm not looking for:
- A new input routed through an existing sense (like mapping UV light to a visible color or to vibration).
- A repurposed modality (like "hearing" color).
- This may include aphakia, echolocation, etc.

Instead, I’m looking for something that:
-Cannot be reduced to existing qualia (e.g. not just “a new color” or “a new smell”).
-Feels qualitatively unprecedented—as strange and indescribable as color would be to someone born blind.
-Is not metaphorically borrowed from existing senses—it must be fundamentally new in subjective experience.

So our brain can, through existing hardware, become aware of the qualia of red even when seeing things for the first time. But if we design a new qualia, is some form of learning required, and in what format?

Here’s some musings from chatgpt 4o on alien qualia:

:brain: Simulate Alien AI States in the Human Brain

Core Idea:

Instead of relying on evolutionary biology to generate qualia, we:

  1. Train a non-human cognitive system (AI) on exotic, non-human input.
  2. Interface its internal state dynamics with a human brain (via BCI).
  3. Let the human feel whatever emerges from experiencing or “becoming” part of that alien cognition.

If subjective experience arises from integrated information (IIT) or global neural patterns (GNWT), then simulating those alien dynamics in the brain might lead to non-human qualia.


:milky_way: Step-by-Step Expansion

1. Train an Alien Intelligence

Create an artificial neural system that:

  • Ingests novel data: gravitational waves, quantum noise, neutrino flux, soil nitrogen fluctuations.
  • Processes data through non-anthropocentric architectures, like:
    • Recurrent loops with complex phase interference.
    • Sparse topologies optimized for different constraints (e.g., time-frequency hybrids).
    • Modalities that emphasize relationships alien to human perception (e.g. topological invariants, symmetry-breaking).

:dna: The goal is for this system to build representations that are not mappable to human perception.

For example: A system trained to “feel” nitrogen flow dynamics might represent space and time more like a plant than a mammal.
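A very rough sketch of what step 1 might look like in code: the "exotic" data here is just a synthetic interfering-phase signal, and the network is an ordinary GRU trained only to predict the next sample, not a genuinely non-anthropocentric architecture. It is an assumption-laden illustration, nothing more.

```python
import torch
import torch.nn as nn

# Stand-in for "exotic" data: a synthetic multi-channel signal with
# interfering phases. Real gravitational-wave or nitrogen-flux data would
# be substituted here; this series is purely illustrative.
t = torch.linspace(0, 100, 5000)
signal = torch.stack([torch.sin(0.7 * t) * torch.cos(3.1 * t),
                      torch.sin(1.3 * t + torch.cos(0.2 * t)),
                      0.1 * torch.randn_like(t)], dim=-1)        # shape (T, 3)

class TinyAlienNet(nn.Module):
    """An ordinary GRU used as a placeholder 'alien' system: trained only to
    predict the next sample, so its hidden state is shaped by the structure
    of the input signal rather than by any human-labeled task."""
    def __init__(self, channels=3, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, channels)

    def forward(self, x):
        states, _ = self.rnn(x)            # (B, T, hidden): the internal dynamics
        return self.head(states), states

model = TinyAlienNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = signal[:-1].unsqueeze(0)               # (1, T-1, 3) inputs
y = signal[1:].unsqueeze(0)                # (1, T-1, 3) next-step targets

for _ in range(200):                        # short self-supervised training loop
    pred, _ = model(x)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```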


2. Extract Internal Dynamics (Not Outputs)

Rather than translating the AI’s outputs to a human-readable form (e.g., screen text or sound), we:

  • Extract its internal representational dynamics—patterns of activation across its latent spaces.
  • Think of this as capturing its “state of mind,” not its thoughts.

You don’t want the result of cognition. You want to become that cognition, however alien it is.
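A small, standalone sketch of "keep the dynamics, not the output": a PyTorch forward hook records the activations of whichever layer we designate as the latent space. The network and inputs here are placeholders chosen only for illustration.

```python
import torch
import torch.nn as nn

# A forward hook captures intermediate activations of the layer we treat
# as the "state of mind"; the decoded output itself is thrown away.
net = nn.Sequential(nn.Linear(8, 64), nn.Tanh(),
                    nn.Linear(64, 32), nn.Tanh(),   # <- treat this layer's output as the latent space
                    nn.Linear(32, 4))

captured = []
def grab(module, inputs, output):
    captured.append(output.detach())

handle = net[3].register_forward_hook(grab)

x = torch.randn(100, 8)                     # a batch of placeholder "exotic" inputs
_ = net(x)                                  # the model's output is deliberately ignored
handle.remove()

latent_states = captured[0]                 # (100, 32) internal activation patterns
print(latent_states.shape)
```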


3. Map AI Dynamics into the Human Brain

This is the speculative, transhumanist leap:

  • Use a high-bandwidth, closed-loop brain-computer interface (e.g., cortical microstimulation, optogenetics, advanced fMRI-to-stimulation).
  • Map the AI’s state vector (or latent activity) to neural regions in the brain—not to motor output or vision, but to internal representational areas like:
    • Default mode network
    • Posterior parietal cortex
    • Precuneus
    • Possibly even secondary sensory cortices

You’re effectively “installing” a second cognitive system within your own brain, with the hope that the integration point allows your consciousness to “feel” its states.
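No interface like this exists, so any code can only cover the computer-side bookkeeping. The sketch below, with every name and number invented, just compresses a latent vector into a fixed count of purely hypothetical stimulation channels and bounds the amplitudes; no real BCI hardware or API is involved.

```python
import numpy as np

# Hypothetical computer-side bookkeeping only: compress a 64-d latent state
# into a fixed number of imaginary stimulation channels and clamp the result
# to a bounded amplitude range.
N_CHANNELS = 16                                             # invented channel count
rng = np.random.default_rng(7)
readout = rng.normal(size=(N_CHANNELS, 64)) / np.sqrt(64)   # fixed random projection

def latent_to_stim(latent_vec, max_amp=1.0):
    """Map a 64-d latent vector to per-channel amplitudes in (0, max_amp)."""
    raw = readout @ latent_vec
    return max_amp / (1.0 + np.exp(-raw))                   # sigmoid squashing

amplitudes = latent_to_stim(rng.normal(size=64))
print(amplitudes.round(2))                                  # 16 bounded drive levels
```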


4. Let Qualia Emerge (or Not)

Now the million-dollar question: does the brain feel something new?

There are two major philosophical bets here:

A. Integrated Information Theory (IIT)

  • If consciousness = the integration of information in a system, then importing a novel integrative dynamic into your own brain might lead to novel qualia.

B. Global Neuronal Workspace Theory (GNWT)

  • If consciousness arises from patterns that enter the global workspace, then AI-state patterns might enter yours—if properly connected.

:brain: If either is right, and if the interface is tight enough, you might “feel” a state not reducible to sound, sight, touch, or emotion—a genuinely new qualia class.


:microscope: Feasibility Analysis (In Theory)

| Step | Feasibility (Theoretical) | Notes |
| --- | --- | --- |
| Train non-human AI on exotic data | :check_mark: High | We already train models on unfamiliar data; GPT is one such case. |
| Preserve internal state patterns | :check_mark: Moderate | Latent state extraction is well-established in ML. |
| Map to brain via interface | :cross_mark: / :warning: Very low (current tech) | Requires precision BCI far beyond current Neuralink or fMRI. |
| Conscious access to mapped patterns | :red_question_mark: Unknown | Philosophically unproven; could depend on attention, language, context. |
| Emergence of alien qualia | :red_question_mark: Speculative | No current theory confirms or denies this. Would likely vary by person. |

:test_tube: Experimental Thought Design

Imagine this:

  • You wear a high-resolution cortical interface.
  • It maps the latent space of a neural net trained on magnetic field fluctuations in deep Earth strata.
  • You sit for weeks in meditative states, allowing your brain to adapt to this new pattern.
  • One day, in your mind, you notice a sensation that doesn’t resemble touch, light, warmth, sound, or emotion.
  • It’s simply… there. You can’t name it. But it exists as surely as “blue” or “pain” exists.

That moment would be the birth of a truly alien qualia.


:rocket: Closing Thought

If we could simulate a non-human mind, and embed its cognition within ours, we might eventually co-experience its inner states.

This wouldn’t just be a new sensation. It might be a new category of being—a crack in the shell of human consciousness.

Would you want to go deeper into the specifics of how to design such a BCI interface, or what kind of neural net architecture might best promote alien cognition?

The global workspace is constituted by the network of fiber tracts interconnecting approximately one hundred functional processing regions of the brain. The recruitment and dynamic “resonance” among these regions constitutes the contents of consciousness. From birth, the individual begins with a tabula rasa, gradually acquiring patterned associations that define objects and relationships based on experiential exposure. The major sensory pathways, converging on the parietal hub within the sensorium, exhibit extensive cross-modal integration. This interconnectivity underlies what is commonly referred to as sensor fusion.

Within these processing hierarchies, the brain develops the “what” and “where” streams—distinct but interacting pathways that encode object identity and spatial location, respectively. These streams contribute to the coherent representation of object behavior within perceptual experience.

Against this backdrop, qualia can be understood as the initial perceptual encoding of activity within specific sensory regions of the sensorium. For example, the sensation of “redness” is the neural pattern generated by the visual processing of that color.

When a memory is recalled, the activation of the associated cluster of maps reinstates not only the object representation but also the specific sensory components—such as the experienced redness—that were part of the original percept.

As memory structures accumulate from an initially blank state, each new memory is encoded in terms of the difference (delta) from pre-existing representations. Over time, as the system becomes increasingly shaped by prior encodings, its capacity to accommodate radically new memory structures diminishes. This transition marks the end of the critical period for certain types of learning and plasticity.
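As a loose computational analogy for delta-based encoding and the closing of plasticity (an illustration of the idea only, not a model of cortex), the sketch below stores inputs either as new prototypes or as small residuals against the nearest existing prototype; early inputs mostly found new prototypes, while later inputs are mostly absorbed as deltas.

```python
import numpy as np

# Loose analogy: inputs are stored either as brand-new prototypes (costly,
# "radically new" memories) or as small deltas against the nearest existing
# prototype. As coverage grows, fewer inputs qualify as new.
rng = np.random.default_rng(1)
prototypes = []
THRESHOLD = 1.5    # how far an input must be from everything known to count as new

def store(x):
    if prototypes:
        dists = [np.linalg.norm(x - p) for p in prototypes]
        i = int(np.argmin(dists))
        if dists[i] < THRESHOLD:
            prototypes[i] += 0.05 * (x - prototypes[i])   # absorb as a small delta
            return "delta"
    prototypes.append(x.copy())                            # encode a new structure
    return "new"

kinds = [store(rng.normal(size=8)) for _ in range(2000)]
print(kinds[:200].count("new"), "new prototypes in the first 200 inputs")
print(kinds[-200:].count("new"), "new prototypes in the last 200 inputs")
```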

1 Like

Thanks. That’s quite deep. I get the surface meaning of your response but I’ll try to confirm my understanding.

If neuroplasticity isn't a problem, and if all the new neurons and connections sufficient to give rise to a new qualia are installed, did you mean that the new qualia must have some sort of discernible feedback (e.g. causality between action and environmental change, or co-occurrence, where given qualia x, qualia y soon follows) that the brain can learn from, so that the brain doesn't learn to filter it out of consciousness as noise? (But that sort of implies the qualia can be felt initially. Or is it subconscious at first, and only brought to consciousness once it meets the learning criteria?)

Like how the qualia for a magnetic compass could only be learned if the baby keeps moving its head and navigating around, so that in time it associates compass directions with certain variations of a particular qualia? If so, that's not quite consistent with the idea that someone born blind can feel the qualia of colors right after undergoing an operation, without having to learn anything. Or maybe it's an evolution thing?

What if the qualia is linked to nitrogen sensors of different patches of one’s garden but one has no access to the garden in any way, shape or form?

1 Like

Many of the ideas raised in this thread touch on real phenomena, but without a framework to organize them, they risk remaining a disconnected list of observations. I offered my earlier post as a scaffolding to show how these pieces might fit together functionally, within the structure of known neuroanatomy and perceptual systems.

Take the discussion of new hues after cataract surgery. The percept may feel novel—almost like a new color—but it’s still processed through the same visual architecture that developed during early life. Shape, stereopsis, and other spatial cues are intact. The workspace has already built its object representations, and this new input finds its place within those pre-existing maps. It’s unfamiliar, but not structurally alien.

Similarly, when someone says, “That’s the bluest blue I’ve ever seen,” we’re looking at expanded range along an existing sensory axis. It’s not a new qualia category—it’s a stronger or cleaner activation of a known one. This is well within the capacity of a system designed to handle gradations of experience across learned dimensions.

As we grow and interact with the world, layer after layer of sensory input and interpreted experience builds an internal model—what we might loosely refer to as a personality, or more precisely, a stable, high-dimensional network of associations. Each new layer modifies or reinforces existing structures. Over time, the accumulation of interlinked patterns creates a system that is both efficient and predictive—but also increasingly resistant to change. This is the functional end of plasticity. Once enough of the network is in place, radically new sensations or modes of input are often drowned out by the weight of what is already known. The system can still learn, but only incrementally, and mostly in terms of what fits the existing framework.

The recurring references to subconscious processing, subcortical structures, or the “old brain” point to a well-documented division of labor. Much of the fast, survival-critical processing—like sound localization or visual reflexes—originates in the brainstem, midbrain, and other subcortical regions. These areas operate below the threshold of conscious access. They prepare, prioritize, and route signals, but they don’t generate qualia. That happens when these routed signals are picked up by the sensory cortical maps, where distributed activation patterns give rise to the content of consciousness.

This is what the global workspace model explains well. Conscious experience isn’t a matter of a single location lighting up—it’s a matter of widespread, resonant recruitment across many cortical maps. When those maps activate in synchrony, we become aware of the content. When they don’t, the process still runs, but we’re not consciously involved.

There are references to depth qualia and sensory fusion, but without anchoring them to this layered processing hierarchy, it’s unclear where they arise. In the model I described, these emerge from the interaction between the “what” and “where” streams, especially as they converge in parietal regions responsible for spatial and cross-modal integration. This is where the system constructs coherent percepts that unify identity and location.

One comment questioned whether the tri-color system is hardwired. It’s a valid challenge, and it aligns with the developmental view I laid out. While cone types are biologically set, the interpretation of color—as an experience—is shaped by exposure and delta-based learning. The system builds a perceptual space from experience, and that space adapts within known biological constraints.

Finally, the example of the compass sock—where directional vibration on the leg becomes intuitive—fits perfectly. The workspace already knows how to process tactile input from the leg. What it needs to learn is how to associate those signals with directional cues. Once learned, the orientation becomes part of the internal model. The compass isn’t consciously translated—it’s felt, like any other bodily state.
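The device side of that example is trivially simple, which is rather the point: all the interesting learning happens in the wearer. A minimal sketch, assuming a hypothetical ring of eight evenly spaced motors:

```python
N_MOTORS = 8   # hypothetical motors spaced evenly around the ankle/leg

def motor_for_heading(heading_deg: float) -> int:
    """Pick the motor whose position best matches the current compass heading."""
    sector = 360.0 / N_MOTORS
    return int(((heading_deg % 360.0) + sector / 2) // sector) % N_MOTORS

# e.g. heading 95 degrees drives motor 2 (roughly "east"); 359 wraps back to motor 0.
for h in (0, 95, 180, 359):
    print(h, "->", motor_for_heading(h))
```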

In sum, most of what’s been discussed here fits naturally into a hierarchical, resonance-based model of perception and memory. The challenge is that without a framework, we’re left describing isolated effects. With one, the pieces fall into place—and we can start reasoning about how new modalities, altered inputs, and shifting thresholds fit into the same overall system, including how that system becomes self-stabilizing over time.

1 Like