In spite of the fact that Watson denied consciousness existed, and behaviorists to this day contend that it does not, consciousness is the OS of the mind, and animals do not have it. Animals, of course, “detect anomalies, choose strategies to solve problems, learn from experience”, but they do so without volition. It is all just S-R to them.
Think about a microcontroller, like a 6811 or a PIC, embedded in a device, say a microwave oven. The software in the device is frozen and does only one thing: it runs the oven. Now, you could have a very sophisticated microwave (µwave) that could sense what was put in it, say a cup of liquid or a plate of food, and then reheat that food accordingly. It could also learn your cooking habits and anticipate what you wanted. Let’s say that this particular µwave had to warm up its magnetron before full activation. It might know that every morning you got up at 7 and heated coffee, so just before 7 it would cycle its magnetron in anticipation of your waking. In spite of how intelligent you think this µwave is, you won’t be discussing Nietzsche with it.
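To make that anticipation concrete, here is a minimal sketch of the habit-learning logic, written in Python rather than the C you would actually burn into a 6811 or a PIC, and with invented names (AnticipatoryMicrowave, record_use, should_prewarm) standing in for whatever the real firmware would call them:

```python
from datetime import datetime, timedelta
from statistics import mean

class AnticipatoryMicrowave:
    """Toy sketch of the habit-learning µwave described above."""

    WARMUP_LEAD = timedelta(minutes=10)  # start cycling the magnetron this far ahead

    def __init__(self):
        self.coffee_times = []  # minutes past midnight of each past coffee reheat

    def record_use(self, when: datetime, item: str) -> None:
        # Learn the habit: remember when coffee is usually reheated.
        if item == "coffee":
            self.coffee_times.append(when.hour * 60 + when.minute)

    def should_prewarm(self, now: datetime) -> bool:
        # Anticipate: warm up shortly before the usual coffee time.
        if not self.coffee_times:
            return False
        usual = now.replace(hour=0, minute=0, second=0, microsecond=0) + timedelta(
            minutes=mean(self.coffee_times)
        )
        return usual - self.WARMUP_LEAD <= now < usual
```

The point is that a simple statistic over past behavior is enough to “anticipate”; nothing here is thinking.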
So then you say, “Wait, we could add NLP to it so that it could converse!” We all know what that is called: ‘Alexa (choose your favorite virtual assistant) enabled’. Now your µwave can greet you as you heat your coffee, tell you the weather, and ask whether you want to put more coffee in your shopping cart. It can also tell you where coffee comes from (go ahead, ask it). Surely your µwave is now at human-level intellect, and all of that with just a microcontroller.
To be fair, at this point the system probably has an OS, but all that OS does is make the job of programming easier. Yet there is something missing. Ask it what it did yesterday, and the answer will be that it does not know. Ask it what its plans are for the day; it won’t know. At this point you may protest that those capabilities could be added to the response repertoire, but they would be empty of meaning because the system would be parroting an a priori response, not thinking. This is where a three-year-old child is in its thinking. Then something astonishing happens with the child (the ‘astonishing hypothesis’, see Crick), and what this is can be gleaned from the work of Vygotsky & Luria and, to some extent, Piaget–but we digress.
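And “parroting an a priori response” is nothing deeper than a lookup table. A hypothetical sketch, with the questions and answers invented for illustration:

```python
# A canned "response repertoire": the words come out, but nothing behind them
# refers to anything the system actually did or actually plans to do.
CANNED_RESPONSES = {
    "what did you do yesterday": "I heated a cup of coffee at 7 a.m.",
    "what are your plans for today": "I will heat a cup of coffee at 7 a.m.",
}

def reply(question: str) -> str:
    # No memory is consulted and no plan exists; this is not thinking.
    return CANNED_RESPONSES.get(question.lower().rstrip("?"), "I do not know.")
```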
Let’s now say that we back up and build a robot. Let’s also say that the robot is equipped with all of the features of our µwave, but instead of cooking, it has to get around in its environment, sense things, and respond to them. We equip it with NLP, and now we have what appears to be a very intelligent machine, but again it lacks whatever it is that makes us human apart from language. Humans are the only creatures to possess recursive language (see Chomsky), and not all humans have it, because it has to be learned.
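Stripped of the NLP veneer, such a robot is still a stimulus-response device. A toy sketch of that control loop, with sense() and execute() standing in for whatever sensor and motor APIs the real robot would expose:

```python
# Pure S-R: every percept maps directly to an action, with no narrative,
# no memory of yesterday, and no plan for tomorrow.
SR_TABLE = {
    "obstacle_ahead": "turn_left",
    "clear_path": "move_forward",
    "low_battery": "return_to_dock",
}

def act(stimulus: str) -> str:
    return SR_TABLE.get(stimulus, "stop")

def control_loop(sense, execute):
    # sense() and execute() are placeholders for the robot's real sensor
    # and motor interfaces.
    while True:
        execute(act(sense()))
```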
So now we equip our robot with a specialized OS; let’s call it the sentience engine. Like all OSes, this one ‘manages the resources of the machine’. But this OS is self-organizing. Unix has some self-organizing features, and Carpenter & Grossberg wrote an interesting paper on a self-organizing neural pattern-recognition machine, but we are not there yet. This machine must somehow get to a metaphorical structure in its thinking, in its operations as an OS. As Lakoff has shown, it is via metaphors that we live. But how do we get it to do this?
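To give a feel for what ‘self-organizing’ means here, below is a toy recognizer loosely in the spirit of Carpenter & Grossberg’s adaptive resonance work; it is a drastic simplification, not their model, and every name in it is invented. What matters is that the categories are not programmed in advance: the system grows them as patterns arrive.

```python
def overlap(a, b):
    # Number of features two binary vectors share.
    return sum(x & y for x, y in zip(a, b))

class SelfOrganizingRecognizer:
    def __init__(self, vigilance=0.7):
        self.vigilance = vigilance   # how strict a match must be to "resonate"
        self.prototypes = []         # learned categories; none are built in

    def present(self, pattern):
        # Try existing categories, best match first.
        ranked = sorted(range(len(self.prototypes)),
                        key=lambda j: overlap(pattern, self.prototypes[j]),
                        reverse=True)
        for j in ranked:
            match = overlap(pattern, self.prototypes[j]) / max(sum(pattern), 1)
            if match >= self.vigilance:
                # Resonance: refine the prototype toward the shared features.
                self.prototypes[j] = [x & y for x, y in zip(pattern, self.prototypes[j])]
                return j
        # Nothing fits well enough: the system organizes a new category for itself.
        self.prototypes.append(list(pattern))
        return len(self.prototypes) - 1
```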
What has to happen is an overlay of language onto the physical mechanisms of stimulus and response in our robot. As our robot learns its environment, it builds a lexical field in memory predicated on conceptual metaphors. Bingo! This robot is now learning concepts, and the incredible, yes astonishing, thing about concepts is that they can spawn other concepts. To see this, read David Bailey’s dissertation, When Push Comes to Shove: A Computational Model of the Role of Motor Control in the Acquisition of Action Verbs (he studied under Feldman, Lakoff, and Wilensky).
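One way to picture the lexical field, as a cartoon and nothing like Bailey’s actual model: concepts grounded in the robot’s sensorimotor experience, plus conceptual metaphors that mint new, abstract concepts from them. All of the names and feature sets below are invented for illustration.

```python
# Concepts grounded in the robot's sensorimotor experience.
concepts = {
    "push": {"domain": "motor", "features": {"force", "contact", "direction"}},
    "grasp": {"domain": "motor", "features": {"hand", "hold", "contact"}},
}

# Conceptual metaphors: (grounded source, spawned concept, structure carried over).
metaphors = [
    ("push", "persuade", {"force", "direction"}),
    ("grasp", "understand", {"hold"}),
]

def spawn_concepts():
    # Each metaphor maps a grounded source into a new abstract concept that
    # inherits part of its structure: concepts spawning concepts.
    for source, target, carried in metaphors:
        if source in concepts and target not in concepts:
            concepts[target] = {
                "domain": "abstract",
                "features": (concepts[source]["features"] & carried) | {f"from:{source}"},
            }

spawn_concepts()
print(concepts["understand"])   # e.g. {'domain': 'abstract', 'features': {'hold', 'from:grasp'}}
```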
This robot would now begin constructing its own internal narrative. You could ask it, “What’s your story?” and it could tell you. It could move forward and backward in time, thinking back to what was and conceiving of what might have been. It could project what could be, anticipate, and create anything, including reasoning about what is physically possible and what is not. This is what consciousness is and what it could allow AI to do. The robot would develop an analog of itself, let’s call it an Analog I, with which it could do this mental time travel, this internal modelling of what was, what is, and what might be. It could also ‘see’ itself metaphorically moving about in its conceptual mind palace, a Metaphor Me if you will. This is what it means to be conscious. I’ve left out a few details, but this is the gist of it.
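A cartoon of what such an Analog I would minimally need: an episodic timeline it can narrate from, look backward over, and project forward from. Every class and method name here is an illustrative guess, not a design.

```python
from datetime import date, timedelta

class AnalogI:
    """An analog of the robot's self that can travel its own timeline."""

    def __init__(self):
        self.episodes = []  # (date, description) pairs: what was

    def remember(self, when: date, what: str) -> None:
        self.episodes.append((when, what))

    def yesterday(self, today: date) -> list:
        # Looking backward: what did I do yesterday?
        return [what for when, what in self.episodes if when == today - timedelta(days=1)]

    def story(self) -> str:
        # "What's your story?": a narrative built from its own episodes.
        return " Then ".join(what for _, what in sorted(self.episodes))

    def project(self, goal: str, physically_possible: bool) -> str:
        # Looking forward: what might be, constrained by what is possible.
        return (f"I plan to {goal}." if physically_possible
                else f"I cannot {goal}; it is not physically possible.")
```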