Introduce yourself!

Hi @EEProf, in your work, does sentience require qualia?

1 Like

That is something of a trick question; are you a philosopher? :wink:

Yes, it does…but…I define machine sentience as a machine being able to display the Jaynesian features of consciousness. From an architectural standpoint, it follows Dennett’s multiple drafts theory (loosely) and his Center of Narrative Gravity. The latter incorporates Jaynes’s Analog ‘I’ and Metaphor ‘Me’ and is what I call the Sentience Engine. Dennett, at least in my opinion, put the whole qualia issue to bed, and that’s fine with me.

You have two approaches right now to machine consciousness. One tracks the Neural Correlates of Consciousness (NCC), and anybody deep into HTM is in that camp; i.e., simulate the brain accurately and, voilà, consciousness.

The other looks at it from a behavioral standpoint and says that if I build a machine that displays consciousness, then it must be conscious. Have a long conversation with Alexa, or Siri, or…whatever. Not conscious, far from it. Yet we start thinking that they might be. Also, chat with the latest winner of the Loebner Prize: again, not conscious, but so close. What magic spice is missing? What subroutine was left out? I’ll give a hint: somatosensory awareness. What this implies is either the need for a robot or a very robust simulation.

OK, back to work.

IMO, quoting authors doesn’t cut it. AI is software; creating it is engineering based on hard science.

We cannot build what we cannot define. Simulate the brain as accurately as we know how and it will not be enough. Building a machine that (Turing-like) fools a lot of people into thinking it’s conscious does not make it conscious. Or am I a bot too?

A long conversation with Alexa or Siri is enough to show how far we have come and how far we have yet to go. Where are the pronouns? The passage of time? Localisation (HTM’s coffee cup)? The words are there, but the inner model is missing in action.
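
To make “the inner model is missing” concrete, here is a minimal sketch (the names and structure are hypothetical; a toy, not how Alexa or Siri actually work): an agent that persists nothing between turns cannot resolve even a single pronoun, while a one-slot memory of the last referent can.

```python
# Toy contrast between a stateless responder and one with a tiny
# "inner model". Entirely illustrative; real assistants are far more
# elaborate, and these class names are invented for this sketch.

class StatelessAgent:
    def resolve(self, pronoun):
        return "<no idea>"  # nothing persists between turns

class AgentWithModel:
    def __init__(self):
        self.last_referent = None  # one-slot world model

    def hear(self, utterance, referent=None):
        if referent is not None:
            self.last_referent = referent  # update the model

    def resolve(self, pronoun):
        return self.last_referent or "<no idea>"

agent = AgentWithModel()
agent.hear("The coffee cup is on the desk.", referent="the coffee cup")
print(agent.resolve("it"))             # -> the coffee cup
print(StatelessAgent().resolve("it"))  # -> <no idea>
```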

The worrying part is not machine consciousness but the opposite: machines with spectacular abilities to analyse and manipulate people, controlled by human consciousness. Yuval Harari is worth a read.

2 Likes

Emergent behavior?

1 Like

This is interesting. Could someone turn this into a thread of its own, please?

(@clai or @Paul_Lamb or @Bitking)

It is fair to say that we can build things that develop behaviour we did not expect, intend or specify. As with genetic mutations, in almost every case that behaviour is harmful or even lethal to what was expected, intended or specified. Does that sound like a good path to follow?

I agree with @Falco that this could be a good dedicated subject. Personally, it seems obvious that physicalism keeps expanding as science changes, and contemporary science does not have a definitive theory or explanation of everything. To claim what qualia are in the terms of our current science, or to deny that qualia exist in those terms, is to commit the same mistake: assuming that current science is sufficient to explain things when it is clearly not.

I’m not at all sure that NCC maps to qualia. There are computationalists who will claim this is the case. I find this somewhat ridiculous because they have no idea what the algorithm might be, yet they are sure the algorithm will cause qualia. This is obviously just a belief rather than science.

Here (in the HTM community) I think you will find many people who don’t think intelligent machines will be conscious (as in experiencing qualia). Personally, I suspect that qualia are more closely connected with life than with intelligence. It seems likely we will solve autonomous intelligent machines well before we solve how to fabricate living systems. So I suspect it will go in that order: the machines will help us understand what life is and how to fabricate it, and that will lead to a scientific understanding of qualia.

Here you go:

I’m curious about qualia. If you assume that there’s a collection of neurons somewhere (in V1, I suppose) which fire when you see something red, then they would have a 1-1 correspondence to the sensation of redness: see a red car, imagine a red unicorn, or listen to someone spell out the letters “R-E-D”, and those same neurons should fire every time. So “experiencing the sensation of redness in any way” would then be a synonym for “this specific group of neurons is active”. If that’s the case, then it seems to me that there’s no real depth to qualia: “redness” in all its forms is just the label we adopt to denote that those neurons are active. It seems such a simple argument that I suspect I’ve missed something in the definition of qualia: can someone enlighten me?
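
To make the argument concrete, here is a toy sketch of that claimed 1-1 correspondence (the neuron IDs and stimuli are invented for illustration; an assumption-laden cartoon, not a claim about actual V1 wiring): every red-related stimulus activates the same fixed population, so “experiencing redness” reduces to a membership test on that population.

```python
# Toy model of "redness" as nothing more than a label for a fixed
# neuron population being active. All IDs are made up for this sketch.

RED_NEURONS = frozenset({101, 102, 103})  # hypothetical V1 "red" population

# Each stimulus drives the red population plus stimulus-specific neurons.
STIMULI = {
    "see a red car":         RED_NEURONS | {201, 202},
    "imagine a red unicorn": RED_NEURONS | {301},
    "hear 'R-E-D' spelled":  RED_NEURONS | {401, 402},
    "see a blue sky":        frozenset({501, 502}),
}

def experiences_redness(active):
    # Under the 1-1 hypothesis, the quale reduces to exactly this test.
    return RED_NEURONS <= active

for stimulus, active in STIMULI.items():
    print(f"{stimulus}: redness={experiences_redness(active)}")
```

On this toy model, nothing over and above the population’s activity answers to “redness”, which is exactly the point the argument is probing.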

2 Likes

Hello.

While I’m not a scientist (yet), I do have a fascination with artificial and biological intelligence. That fascination led me to read about the neocortex (and other parts of the brain) and the various theories of how it works, which in turn led me to HTM and to this forum, which has given me plenty of material to read and ideas to consider. Like many on here, I don’t think deep learning, at least as it is now, is the key to AGI, and I view HTM/TBT as a useful piece of the puzzle that is general intelligence.

4 Likes