Intelligence vs Consciousness

Is this in response to my question about the comparison of a machine’s conscious experience to ours? I only brought that up earlier to kind of question the idea that HTM based machines will behave like humans, “do the exact same things as we would”. There are many reasons why I don’t think they will, strong differences in perception being one. Maybe I missed the point @Gary_Gaulin was making? My apologies if so.

I don’t think the conscious experience within species is exactly the same, for reasons you listed. It seems self-evident that different species differ even further. I don’t see anything special about humans or mammals in general that would endow them with conscious experience, but exclude those organisms from which they descended. I just see mammals with an evolutionary advantage in environments where they and their ancestors might co-exist.

The point, though, is that the conscious experience and perception of all conscious agents are evolutionary survival tools, designed by natural selection in many different ways for some reason. The question is, what reason? Our scientific endeavors might only be the result of selection pressure on our neural evolution, and not at all providing our species with insight into the actual causes behind our perception, conscious experiences, intelligence, etc. fMRI, neuronal recordings, etc. could be mere correlates of our experience, not an insight into the actual cause behind it that we should actually be modelling in our theory of intelligence. I’m no evolutionary biologist, but the logic and reasoning seem sound to me.

I feel like I’m having a hard time articulating these ideas. It’s such a radical change to how we think about the world that it feels unnatural to even consider it with any veracity. I think @Gary_Gaulin made a great point above,

I think this is the best approach. I hope no one thinks I’m advocating that our research should be abandoned! That would contradict what I spend a lot of my free time doing. :slight_smile: I am thankful the community has indulged me in this discussion.


The evolutionary advantage is clear: a blackboard to evaluate the environment a critter finds itself in, with all the relevant information in a single place.

The neural plumbing puts the “highest level” of internal evaluation in the temporal lobe with short direct links to the evaluation/planning/selection portions of the forebrain.

Adding episodic memory so that this access can extend to the recent past (up to about 1 day in humans) extends the information available for this evaluation and planning.

The loop from the early planning section of the forebrain back to the sensory feed inputs into this personal memory allows selection of this memory portion for recall. This is the “thinking” and “introspection” part of consciousness.

In humans the range and connections of this feedback loop are extended to include portions of the motor drive for vocalization and perception of sounds, to the extent that we can form speech. I am certain that in non-verbal mammals there is still some level of thinking and introspection.


Natural selection only tests/challenges designs that already exist on their own accord. All designing takes place in the living things themselves, while they conduct their “arms race” to control each other.

A “fitness function” is only for applications where the virtual critters are not intelligent enough to compete against each other on their own to woo the hearts of potential mates, or whatever. Instead of starting with a desired outcome, there is (unless the code has already been run to see what happens) no way of knowing ahead of time what will ultimately develop.
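The contrast can be sketched in code. In the explicit-fitness approach the experimenter decides ahead of time what “good” means and the program merely climbs toward it, which is exactly the “starting with a desired outcome” case described above. This is a minimal illustrative sketch, not anyone’s actual model; all names and parameters are hypothetical:

```python
import random

random.seed(0)  # deterministic run, for illustration only

# Hypothetical explicit fitness function: the experimenter decides ahead
# of time what "good" means (here: genome values summing close to a target).
TARGET = 10.0

def fitness(genome):
    return -abs(sum(genome) - TARGET)  # higher is better

def evolve(pop_size=20, genes=5, generations=50):
    # Random initial population of numeric "genomes".
    pop = [[random.uniform(0, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection toward the target
        children = [
            [g + random.gauss(0, 0.1) for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The point of the sketch is that the outcome is baked in by `TARGET`; nothing here corresponds to critters designing anything themselves or competing on their own accord.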

I found it best to completely avoid the fuzzy “evo-devo” line by going with “devo” all the way, by being more specific as to whether it was the development of a zygote, egg, feature, phenotype, genotype, species, phylum, etc… It can otherwise turn into a messy argument with evolutionary biologists over whether enough change has occurred to call it “evolved” and if not then what to call it.

Our base behavior seems to simply be in the systematics of how any trial and error learning system works. It’s a movement control system with a body that in turn controls all it can in its environment. Bugs belong outside a home where people often try not to hurt them and let them be, but when they enter our controlled territory it’s suddenly OK to just squash them.

When we are on navigational autopilot we are no longer conscious of the task being performed. Yet having made a long drive (that we were unconscious of) without the vehicle acquiring any noticeable dents and without the passengers ending up in a state of panic indicates to us that there was no detectable loss of driving ability. A self-driving system (that actually works as well as hoped for) similarly does not seem to have to be conscious of the drive either. It’s best for a self-driving car to not even have the ability to “daydream”.

My instincts are strongly indicating that whether an intelligent system is conscious or not has no influence at all upon the resulting behavior.

You’re asking excellent questions, and I’m loving the challenge. It’s an area where I have some experience. Regardless of outcome the exchange should be of service to the community.

Now I have to hope no one thinks I’m advocating anti-science or saying the Darwinian theory of “evolution by natural selection” can be thrown out of science; it’s just a right-tool-for-the-job sort of thing. Welding the two together only makes both harder to use. For that reason Darwinian variables and concepts like “conscious agents are evolutionary survival tools” do not exist in the models or the theory I write from them, and should not need to be included in HTM models or theory.

What is learned from modeling how our brain works should in turn apply to the (mostly outside the nucleus) brains of our cells, with their own motors and sensors that even include hairlike antennae, like an insect’s, for sensing motion through the environment. In turn the model applies to the (mostly inside the nucleus) genetic brain and its territory-rich molecular networks. It all stands on its own scientific merit. And after drawing it all out the choice of pointer does not much matter, so I hope you don’t mind what I chose to use for this one:

For at least myself, something like the above is way more science fun to experiment towards. Numenta only has to stay focused on their part, while those who like to explain the origin of species can try to apply the same basics to genetic systems. The result is such a change from the Darwinian view that it’s like an ID dream come true, for those who are OK with something that does not leave an intelligent cause up to their imagination. Better that than nothing at all to show there was at least some truth in what the movement was saying. Numenta is in this way not working on something against the premise of the ID movement, nor promoting “Darwinism”; it’s more like a new era of science. Dream remained, and a new world will begin.


It is beginning to appear that there are three areas in the brain which form a network responsible for consciousness. So, final answer: specific brain structures working in concert are required.


I think David Chalmers defines (hard) consciousness fairly effectively. It is the sensations, such as colors, smells, and sounds that we experience when we are awake. There are many open questions as far as how to explain, detect, quantify, and localize it, but Chalmers has managed to at least partition it off from other peripheral concepts.

TED Talk

Intelligence, on the other hand, is a very nebulous term that can mean very different things to different people.

We can program computers to perform tasks such as mathematical calculations or playing chess that most would consider human intellectual capabilities, although I seriously doubt that a mechanical adding machine or a chess program experiences consciousness. In fact, you could create a game-playing program that is purely lookup-based (whenever you see this board position, make that move) that I am fairly certain would not experience consciousness, but that might appear to behave intelligently.
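The lookup idea above can be made concrete: a “player” that is nothing but a table from board positions to moves appears competent on positions it has entries for, with no reasoning happening anywhere. This is a toy sketch under that assumption; the positions and moves are placeholder strings, not real chess analysis:

```python
# A purely lookup-based "player": no search, no evaluation, just a
# table mapping board positions to precomputed moves.
lookup_table = {
    "start":            "e2e4",  # placeholder opening entry
    "sicilian-defense": "g1f3",  # placeholder reply
}

def make_move(position):
    # Appears to behave intelligently only where the table has
    # coverage; outside it, there is nothing to fall back on.
    return lookup_table.get(position, "resign")

print(make_move("start"))             # e2e4
print(make_move("unknown-position"))  # resign
```

Nothing in this program models its opponent, its situation, or itself, which is why it is hard to imagine it experiencing anything, however large the table grows.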

Anyone who has watched the occasional motions of a sleeping dog or cat would be inclined to believe that they are dreaming, and that, consequently, they probably also experience consciousness, even though they probably would not qualify for MENSA.

Despite significant continuing progress, we are still very far away from a fairly comprehensive understanding of how the human brain functions, and even further away from how those signal processing functions give rise to consciousness and thought. And due to the nested manner that the brain has evolved, I suspect that the physical reality of human consciousness is probably quite complex.

My best guess is that there could be a certain inherent level of consciousness in any system, animal or machine, IF it constructs an internal representation of itself and its surrounding environment, provided that it can remember selected parts of that model (attention) for future recall and analysis. Other parts of the world model, which may be immediately acted upon but are not remembered, might be analogous to what we call the subconscious.

One compelling reason why I believe that machines could be conscious is the hypothetical Gedankenexperiment mentioned by Murray Shanahan where your entire brain is very carefully removed, sliced up, and scanned so that an accurate and comprehensive model of all your neurons can be constructed and simulated on a computer. If you carefully interface this software process to cameras and microphones and ask it if it is conscious, it would insist that it is. It would be able to recall vividly your past experiences. If you showed it a red apple, and asked it what color it was experiencing, it would say that it is the exact same red that it experienced in the past. To it, conscious experience halted when your brain was removed, and resumed when the simulation process began running.


Now we have the problem of there being more than one way to define conscious and consciousness. From the article:

The research might still be useful for determining what is most needed to bring a model “to life” in the sense of being noticeably aware of itself in its environment, having no problem getting around, and keeping itself fed. Where that is used as the metric, the simple ID Lab critter I made a video for would qualify. In that case digital traveling waves are providing the necessary spatial intuition to wait behind the shock zone for food to be in the clear, not artificial math equations and instructions for what to do at a given time. HTM theory can eventually add cortical areas that I for now bypass by using program variables for the needed position readings. Whether electronic models like this can be conscious in the same way we would be, if given an equal amount of sensory information and cortical columns, would still be unanswered, but it’s a great start towards explaining how our (traveling-wave-involved) spatial intelligence might work.

At least there are plenty of good reasons to be thankful that consciousness is not a Numenta research topic, and is just something being discussed, for what it’s worth, in its forum.


Of course intelligence is materially dependent, but it is not matter itself. After all, intelligence is what you interpret/conclude/sense when you interact with something of different intelligence than yours; otherwise it is not there or anywhere. It is a method of reorganizing the substance.


Seems to be in line with graph theory when it comes to computing systems.

I like the idea that consciousness is defined by a contour of connected activity.


You have really enlightening thoughts on this subject, let me first say. But a potential hole in your reasoning is the assumption that there are two independent “truths” of the world: one in our heads and one outside our heads. Experiments in quantum mechanics have proven to us that external reality doesn’t exist until it’s measured (perceived). A particle’s past behavior changes based on what we perceive. Based on this theory, external reality itself is an illusion of sorts and exists only when we are looking at it, at least on the atomic scale. These quantum effects are said to be “averaged out” when you scale up to the level of classical physics, but they are undoubtedly constantly present nonetheless. So how do you resolve your line of reasoning when it is provably true that conscious perception and external reality are intimately linked and influence each other, as opposed to the brain constantly trying to model an unwavering external truth?


I posted this earlier but it seems appropriate at this point to point to it again:


I’m really loving everybody’s two cents on this topic. My take on consciousness is as follows:

I definitely don’t buy into any idea that consciousness comes as a binary, conscious or not conscious. I think my dogs are conscious. I think the birds chirping outside my window right now are conscious. Even the jellyfish in the ocean, who lack any semblance of a true brain, are conscious of something. Even a simple sunflower, in my eyes, can be said to possess consciousness (I’ll explain in a second). Consciousness to me is not a tangible facet of a system but instead a qualitative assignment that humans project onto systems that are otherwise just going about their business. The recognizability of consciousness is something that arises in animals’ brains, and even in jellyfish’s nerve nets or sunflowers, as a result of their ability to sense and interact with the world and with other recognizably conscious beings. This ability comes in various degrees given the complexity of the system performing the action, with humans having arguably the most complex of systems, including a bunch of neural hardware to sense many different things and perform all kinds of manipulations of that information. So, when a sunflower adjusts the angle of its flower to follow the sun as it moves across the sky, it’s clearly conscious of the position of the sun in the sky under my definition.

Turing’s famous response to the question “can machines think?” in “Computing Machinery and Intelligence” seems in line with my simple definition. Humans don’t monopolize consciousness, though it appears so. With this perspective, going back to the original question, machines can garner their own semblance of consciousness with the sea of information and logical manipulation available to them. I recently watched 1995’s “Ghost in the Shell” (which, by the way, is an excellent movie) that eloquently explores this concept in one of its monologues. This is spoken by a fictional program that has become sentient, after somebody claims it couldn’t possibly be alive because it is only a program:

“It can also be argued that DNA is nothing more than a program designed to preserve itself. Life has become more complex in the overwhelming sea of information. And life, when organized into species, relies upon genes to be its memory system. So, man is an individual only because of his intangible memory… and memory cannot be defined, but it defines mankind. The advent of computers, and the subsequent accumulation of incalculable data has given rise to a new system of memory and thought parallel to your own. Humanity has underestimated the consequences of computerization.”


The experiments may have been misinterpreted:


“Misinterpretation” implies the interpretation is provably incorrect, which is not the case here in either your sources or any other that I’ve ever seen. It’s a different interpretation that is equivalently possible given current evidence and theory. My original wording was kinda vague. Either way this is probably veering off topic.


I hope you don’t mind my balancing things out with QM related information. From my experience it’s only a matter of time before the topic of consciousness leads to quantum weirdness. And cellular processes take advantage of quantum behavior, which is topical in a forum for trying to figure out how (brain and related) cells work.

This is an excellent video I earlier wanted to include, but I was in a rush to get to my day job. I hope you like it too:


I see daisy chains.
Brain images display the beauty and complexity of consciousness:

Without getting deep into the philosophical swamp, I think the practically useful definition of consciousness is simply: “the ability of a system to represent itself as part of its model of the world”, i.e. it has the concept of “self”.
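One minimal way to make that definition operational (purely illustrative; the class and names are hypothetical, not anyone’s actual architecture) is an agent whose world model contains an entry for itself, so that “self” is queried through the same machinery as any other modeled thing:

```python
class Agent:
    """Toy agent whose world model includes an entry for itself."""

    def __init__(self, name):
        self.name = name
        self.world_model = {}  # thing -> observed properties
        # The key move: the agent is an object in its own model.
        self.world_model[name] = {"kind": "agent", "is_self": True}

    def observe(self, thing, properties):
        self.world_model[thing] = properties

    def has_self_concept(self):
        # The practical test from the definition above: does the model
        # contain some representation flagged as "self"?
        return any(p.get("is_self") for p in self.world_model.values())

a = Agent("robo-1")
a.observe("apple", {"kind": "object", "color": "red"})
print(a.has_self_concept())  # prints True
```

Under this sketch, a system that models apples but never models itself would fail the `has_self_concept` test, which matches the distinction the definition is trying to draw.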


Wouldn’t a higher region forwarding an input to L4 in a lower region be indistinguishable from the cortex having sensors (as in the context of the opening post of this thread)?


This is essentially what I am proposing here:

Aren’t Intelligence and Consciousness a classification of 2 very different phenomena?

Intelligence is the ability of an entity to acquire adaptability, knowledge, and skills to become more proficient in its environment. (An organism doesn’t have to have any idea that it’s acquiring abilities in order to acquire them, no?)

Consciousness is the ability to assess one’s own relationship to its reality, environment and consider itself? The ability to have meta-knowledge?