Qualia Disqualified

Yes and no.
This question of qualia springs from the basic formation of “a theory of mind.” Once you know that other people have thoughts different from your own, this naturally leads to the question of how far that goes.
Do they have the same Cartesian theater as me? Do they see things the same way? While these are natural questions, they have about as much validity as asking about the sound of one hand clapping.

Ask a silly question …


I am not talking about people coming up with something as complicated as a theory of mind. I’m talking about how every human sees every other human acting sentient, so it thinks it is sentient too, which then causes it to act sentient.

Um, that is what the “theory of mind” is all about. If you had read the link I posted you would have seen this:

False-belief task

One of the most important milestones in theory of mind development is the ability to attribute false belief: in other words, the understanding that other people can believe things which are not true. To do this, it is suggested, one must understand how knowledge is formed, that people’s beliefs are based on their knowledge, that mental states can differ from reality, and that people’s behavior can be predicted by their mental states. Numerous versions of the false-belief task have been developed, based on the initial task created by Wimmer and Perner (1983).

In the most common version of the false-belief task (often called the “‘Sally-Anne’ test” or “‘Sally-Anne’ task”), children are told or shown a story involving two characters. For example, the child is shown two dolls, Sally and Anne, who have a basket and a box, respectively. Sally also has a marble, which she places into her basket, and then leaves the room. While she is out of the room, Anne takes the marble from the basket and puts it into the box. Sally returns, and the child is then asked where Sally will look for the marble. The child passes the task if she answers that Sally will look in the basket, where Sally put the marble; the child fails the task if she answers that Sally will look in the box, where the child knows the marble is hidden, even though Sally cannot know this, since she did not see it hidden there. To pass the task, the child must be able to understand that another’s mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding.

Another example is when a boy leaves chocolate on a shelf and then leaves the room. His mother puts it in the fridge. To pass the task, the child must understand that the boy, upon returning, holds the false belief that his chocolate is still on the shelf.

The results of research using false-belief tasks have been fairly consistent: most typically developing children are able to pass the tasks from around age four.
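For the programmers here, a minimal toy sketch of the bookkeeping the Sally-Anne task probes (plain Python; all names are invented for illustration): each agent’s belief is stored separately from the world state, and it only updates on events that agent actually observes.

```python
# Toy model of the Sally-Anne false-belief task (illustrative only).
# Each agent keeps its own belief about where the marble is, and a
# belief only changes when the agent is present to see the change.

world = {"marble": "basket"}                     # ground truth
beliefs = {"Sally": {"marble": "basket"},
           "Anne":  {"marble": "basket"}}

def move(item, place, observers):
    """Move an item; only the agents listed as observers see it happen."""
    world[item] = place
    for agent in observers:
        beliefs[agent][item] = place

# Sally leaves the room; Anne moves the marble while Sally is away.
move("marble", "box", observers=["Anne"])

# Passing the task amounts to predicting from Sally's belief,
# not from the true world state.
print("World state:          ", world["marble"])              # box
print("Sally will look in the", beliefs["Sally"]["marble"])   # basket
```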

Or rather, the act of formulating a ToM induces a mind.

What I meant to say was that the reason I myself am sentient is that I believe others are… not that I think others are sentient because I observed them.

The brain is constantly in the business of building models. As it builds a model of sentience, it wrongly conflates acting sentient with being sentient. Since this is the model the self has of itself, it becomes fertile ground for building a bridge, or something that can manipulate this model more easily than if the model were wrong… If an AGI were never to contemplate sentience and associate it with itself, that low-hanging fruit of developing further toward having or being sentient would not have a chance to develop.

That’s funny - I came to exactly the opposite conclusion! If a stroke knocks out those neurons and Mr I can no longer experience or remember redness, then to me that says the experience is localised, and there’s nothing really mysterious about it. The experience of redness corresponds directly to activity in those neurons.

I think I’ve reached my own personal verdict on qualia anyway. It seems to me that from a consciousness/AGI perspective, they’re noise. I don’t see that they’re necessary for intelligence (Mr I, and all blind people, have no concept of redness, and don’t seem any the less intelligent for it). It’s good news from an HTM perspective, as I couldn’t see a natural way to include this ghost-in-the-machine stuff into HTM.


I certainly agree with your conclusions. Now I am wondering if high-functioning autistic folks (very intelligent) have much sense of qualia. I confess that I don’t know much about qualia other than its dismissal in my work.

I agree. Qualia are about subjectivity, feelings, emotion. They’re the old brain showing its appreciation of what the new brain is up to. AGI doesn’t need them. Maybe they will be replaced by our (subjective) appreciation of what it does.

I am not convinced that my toy won’t have qualia! If it is using the same sort of neural processing, and is layered following the general floor-plan of the human brain, the sensory and processing hardware will learn red and both report and recall it on command.
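As a crude illustration of what “learn red and both report and recall it on command” could mean, here is a toy associative lookup in plain Python. It is nothing like the real machinery (or HTM), and every name in it is made up for the example.

```python
# Crude associative store: learn a label for a sensory pattern,
# then report the label for a pattern, or recall a pattern from its label.

learned = {}   # pattern (tuple) -> label

def learn(pattern, label):
    learned[tuple(pattern)] = label

def report(pattern):
    """'What is this?' - name the pattern currently on the sensors."""
    return learned.get(tuple(pattern), "unknown")

def recall(label):
    """'Imagine red' - reproduce a stored pattern from its name."""
    for pattern, name in learned.items():
        if name == label:
            return list(pattern)
    return None

red_input = [1, 0, 0]       # stand-in for whatever the colour sensors produce
learn(red_input, "red")
print(report([1, 0, 0]))    # red
print(recall("red"))        # [1, 0, 0]
```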

Why would you expect an emotional subjective response to be localised to the neocortex? Or do you think we need to model the entire brain: limbic system, hindbrain, etc.?


Yes, I do think it is necessary to model the functions provided by those parts.
As I have stated before, these components mark memory formation with good/bad labels for rapid judgement in selecting affordances.
This is the foundation for many of the mental operations that power cognition. You literally can’t make judgments without this. We know this because some humans have damage here and we can see what effects it has on cognition.
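As a rough sketch of the idea (my own illustrative framing, not a claim about the actual limbic circuitry): memories carry a good/bad valence assigned when they are formed, and candidate affordances can then be ranked quickly against those tags before any slower deliberation.

```python
# Illustrative only: memories tagged with a valence at formation time,
# then used for fast ranking of candidate affordances.

from dataclasses import dataclass

@dataclass
class Memory:
    situation: str
    action: str
    valence: float   # +1 good outcome, -1 bad outcome, set when the memory forms

memories = [
    Memory("hot stove", "touch",     -1.0),
    Memory("hot stove", "back away", +0.8),
    Memory("ripe fruit", "eat",      +1.0),
]

def pick_affordance(situation, candidates):
    """Rapid judgement: prefer the action whose stored valence is best."""
    def score(action):
        return sum(m.valence for m in memories
                   if m.situation == situation and m.action == action)
    return max(candidates, key=score)

print(pick_affordance("hot stove", ["touch", "back away"]))  # back away
```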

Are qualia a causal dead end? I have trouble believing that anything which doesn’t cause anything exists. For one, they’ve caused us to talk about them, but that seems like a fraud to me.

Thinking about it, that’s a valid effect. References to qualia simplify communication and make for less obtuse language, and hence thinking, in a society.

You seem to be missing the concept of contributory causes, e.g. something that is necessary but not sufficient. Here is a good overview: Causality - Wikipedia. In complex systems, causes are typically multi-factor. The brain is obviously in the category of complex systems.
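As a trivial toy illustration of “necessary but not sufficient” (my own example, not from the linked article): in a multi-factor system, a contributory cause is one whose removal stops the effect, even though it cannot produce the effect by itself.

```python
# Toy multi-factor cause: fire needs fuel AND oxygen AND heat.
# Each factor is necessary (remove it and the fire goes out),
# but none is sufficient on its own.

def fire(fuel, oxygen, heat):
    return fuel and oxygen and heat

print(fire(True, True, True))    # True  - all contributory causes present
print(fire(True, False, True))   # False - oxygen is necessary...
print(fire(False, True, False))  # False - ...but not sufficient by itself
```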

Regarding AI, it seems reasonable that an intelligent machine does not require qualia. But to simulate human cognition it will need to simulate qualia. Then people will assume the machine really has qualia (and the engineers will show how it only pretends to have qualia - this has already happened with the robot Sophia, see Sophia (robot) - Wikipedia).

The best scientific explanation of consciousness that I’ve seen so far is from Stephen Grossberg. He provides a mechanism associated with consciousness (resonance) and a function (an added degree of freedom for decision making). But the correlation with qualia is less clear to me, even though he has very detailed explanations of mechanisms that lead to visual qualia. I’ve not seen detailed explanations of why particular qualia are normally associated with particular experiences and why/how that breaks down (e.g. synesthesia). I think a scientific understanding of qualia would be able to answer questions about Inverted Qualia (Stanford Encyclopedia of Philosophy).

You mean chemistry?

No, not chemistry.

I am referring to a mechanism that performs the same essential functions.

Thanks for the wiki link. We can definitely agree that the brain is a complex system! But qualia can’t all be necessary factors, or losing one of them - e.g. colour - would mean you couldn’t have cognition, contradicting the example of blind people. So I wondered which ones you think are necessary? Losing any of those would then directly affect cognition, which seems unlikely to me! I haven’t yet come across an example of a quale that is necessary, but I’d be very interested to hear some, of course!
So the argument for them being necessary looks weak to me.

I’d agree they seem unlikely as sufficient factors (that argument really is a stretch!), so at the moment my money is on qualia being neither necessary nor sufficient for cognition, which makes me think they’re unrelated to it. Probably an evolutionary hangover from the old brain, as david.pfx suggested.


This seems to confuse two things:

  1. Are local computations (eg neurons in a dish) sufficient for qualia? I think not.
  2. Are qualia necessary for cognition? I think it depends on the type of cognition.

It seems obvious that without visual qualia you can’t understand things in a visual way (e.g. visualize complicated interactions).


This argument can also be played against neurons: since we can remove any single neuron and still have a functioning brain, that means none of them is essential, and hence we could get rid of them all and still have a perfectly functioning brain.