Intelligence vs Consciousness

I think it is partially answerable, and the answer is that it’s fundamentally not different.
This is basically the theory of reincarnation in Hinduism, which claims there is one single eternal entity that inhabits every sentient (and I assume intelligent?) being that ever was and will be, one at a time.


If you remove the part of your cortex that represents a specific part of your body, then you will no longer believe that version of you is true. It happens to people. You could probably remove your ENTIRE body, and you would still believe you had one.

It works in reverse. If you cut off your arm, your model will still retain that it exists and is attached: phantom limb.

So, no, I still think it’s justified to physically separate the brain from the body, and it’s accurate to say the model generating thoughts is in the brain only, primarily in the cortex.


Of course, and obviously I wouldn’t have asked the question if I didn’t believe someone else on the forum reading this could possibly have experienced the same phenomenon. :smiley:

My question is: what are your thoughts about this phenomenon (I’m not asking why it exists)? Again, it is slightly different from the point Curt is making about the separation of the thought domain from the physical senses due to lack of overlap.

For example, one might ask is it even ethical to create a conscious AI, knowing the cruel reality of what that means. Knowing the panic from self-preservation systems that I myself feel when I consider it.

Here’s the proof for the one “self” theory (partly derived from Wittgenstein):

  1. All things are either context or content. The universe is composed of either things (can be belief, ideas, emotions, or any other thing distinguishable with language) - or the space in which the things reside.

  2. Content (things) depends on its boundaries (that which it is not) in order to exist. A tree without boundaries would be just a lumpy part of the landscape. In fact that is what creation is. Creation is the distinction of something by giving it boundaries. (…and also there is another component which is to distinguish it with language)

  3. Context is the space for things to exist (not quite space because that is just another thing, but that is the easiest way to think of it). It doesn’t have boundaries, (if it did, it would be another thing or content). Context is not the space one gets when all of the content is collected together or observed - it cannot be derived by assembling pieces together or pieces of content - it IS the space - it has no boundaries - and doesn’t depend on boundaries in order to exist (as does content).

  4. Human beings are not trees. Human beings are not ideas. Trees are trees. Ideas are ideas. Ideas are not Trees and Trees are not ideas. One thing is not another thing - it is only itself.

  5. Therefore… The usual way we take ourselves to be, the usual way we conceive of ourselves, which is as a collection of traits and characteristics (he/she’s smart, he/she’s athletic; he/she’s old; he/she’s shy etc.) - is an illusion. We cannot be things other than ourselves. We are not something else, we are who/what/where we are.

  6. So if we are not the things with which we associate ourselves (i.e. our traits and characteristics, our names, histories, likes/dislikes etc.) - then we are the only thing that is left: the Space for things to exist / Nothing. (Remember, if the ontological form of context - or the BEING form - is NOT a thing but the space for things, then it doesn’t have boundaries - and therefore it is Nothing - the ontological form of nothing, not the linguistic or conceptual form, which IS a thing.) I don’t want to devolve into the silly arguments about the distinction between the way ideas are communicated (using things) and the things themselves.

  7. So if it is true that we are not the content we associate with ourselves and use in everyday life to identify ourselves, and what we actually are is the context or space for those things to exist (everything/nothing) -

Then I can’t have my context while you have yours!

Because that would make our contexts things! (i.e. they would have boundaries)

  8. So there must only be one context. We must all be one! I’m not religious, but if Jesus existed, this would be what he referred to as us all being “brothers and sisters”.

  9. The challenge is this: how to reconcile what may be the truth about existence with our experiential data, which is the experience of ourselves as located in, near, around, or at our bodies, and as separate phenomena! We don’t really experience life this way, we just think we do. We have inherited this way of talking and thinking about it.

(#9 is the goal of religion and philosophy in my opinion)

I find this interesting to think about… :slight_smile:

P.S. Also… It doesn’t mean anything. and… it doesn’t mean anything, that it doesn’t mean anything! :slight_smile:

P.P.S. What one does with that is not give up - but CREATE what life means as a stand for something with the knowledge that, yes it is a self-creation, but that doesn’t get in the way of us taking a stand for something and determining what life means for ourselves - we don’t have to devolve into chaos because there is no inherent objective meaning - it actually clears the canvas to allow us to create what has meaning in life for ourselves and not be a downstream effect of some other cause (i.e. paint on top of a full canvas) - be the cause (or at cause as the buddhists would say).


I’m glad to see that consciousness is on topic. That’s what I’ve expected for some years.

I had heard that Numenta prefers to avoid arguments about consciousness. However, at this point I can see that consciousness is on topic.

As you may already know, there is a consciousness theory: Integrated Information Theory (IIT; see Tononi). And though you may be surprised, IIT has a high affinity with HTM.

Please see the link below. It shows a hint of a consciousness theory, and also shows the relevance between HTM, consciousness, and IIT:


Hi @mambo_bab_e,

Just want to be clear - this is not a “Numenta topic”, as consciousness is not an area of focus for the company and research. But it’s definitely an interesting topic and discussion!


I understand. However, I believe some people will talk about consciousness on the forum again and again in the near future, because the trend will come. The time will come to think about an additional kind of mind, for a company to progress to the next step.

I expect someone in Numenta will begin to talk about consciousness, even if he (she) is only one person, because HTM’s destiny should be close to IIT’s.

I will keep researching consciousness as always.


Hi everyone!
Consciousness, emerging from neuronal networking of some kind, is a highly complicated and certainly relevant issue, but it doesn’t seem wise to start reverse engineering our brain from there. Maybe some neurones preconditioning other neurones through their axons constitute the archetypal kind of “consciousness” for the latter.


I guess I’d define a consciousness (“a” is intentional) as a system whose parts all perceive, generate, or imagine the same set of representations of underlying structures. Basically, it can only form one thought at a time.

That means the brain is a bunch of consciousnesses, each of which can merge partially and separate based on tasks and such. There is evidence for this.
When the two hemispheres are split, each side can disagree on an action. In one case, which might be unreliable, a split-brain patient had a language center on both sides of the brain, and he or she verbally argued with him/herself. So there can be multiple consciousnesses in the same brain.

Everyone probably can experience signs of other consciousnesses in the same brain. If I point my attention to something, I become conscious of it, presumably because some cortical regions send that information to whatever highly interconnected group of regions is writing this. I can also be alerted to something outside of my attention, maybe in the same way another person would alert me. I also have hunches, which probably result when another consciousness cannot fully communicate.

I don’t think most things, like rocks or muscles, can be conscious. They can react to the world, but they cannot represent the cause of their input in terms of what structure produced it, in a consistent way. Maybe they can be a little conscious, but barely.

This doesn’t mean there’s some sort of hidden mass suffering, in your brain or computers or whatever. Without emotion or similar things, it’s just a dynamic representation.


Well, put it this way - a person could be both ‘intelligent’ and conscious - but they could be as dumb as a rock. A person could also be very conscious of how dumb they are.

Consciousness could be thought of as the captain of the ship, whereas the intelligence is the subconscious that drives the rest of the ship (navigator, crew, etc.)

A person can lack consciousness but be very ‘intelligent’. Of course, a person could also be very conscious of how intelligent they are - which usually leads to arrogance.

Computationally, the brain has a lot of blurry lines.

[arg… the ambiguity in the word ‘conscious/ness’]


Consciousness may be seen as a human brain SDR but it may be the best paradigm of a non-communicable one. I am talking about consciousness as a feeling - not as a description. Learning is an experience but not all experiences are learning. All of the human experiences may be comprised of SDRs but not all SDRs are portable from one brain to another. The information that is included in the SDR of my consciousness -including both conclusions and feelings- can not be transferred to your brain network since it includes all of my past, so ostensibly, all the SDRs of my brain should be transferred in that case.
Only highly encoded audio or visual signals (e.g. words, sentences: language) may be portable SDRs, and even then, when those SDRs contain the experience of the transmitter, they need to be arbitrarily translated by the receivers into one of their own experiences. Actually, language is the best-known means of transport for everything our mind contains, but a very inaccurate one for feelings and personal experiences. Communication between human brains is much better when it ignores metaphysics (axioms), taking it for granted, and involves only the logical arguments. In this context, Mathematics seems to be the most accurate language of all. The understanding of Mathematics is also a conscious feeling, but it cannot “understand” the whole of even the simplest conscious experience.
This approach to the issue of consciousness does lead to a dualistic view of the human brain but not of the thinking process and not of the universe. When studying the human brain, we should not ignore the lower parts of it. The neocortex can describe lower brain feelings but not truly experience them. Even if the SDR view of the neocortex is a holistic one, it should take into account that even if all SDRs can be encoded in a language of some kind and thus communicated, some of them might not be accurate enough since they describe events encoded in the language of lower brain centers - not a fully translatable one.
Therefore, I do not believe that our brain thinks dualistically. Dualism emerges when thinking attempts to access feelings, and consciousness is a feeling as well as a thought. This seems to me inevitable when a feeling mammal (e.g. a human toddler) progressively wears a neocortical mathematical hat as it grows up.


I think no SDRs are directly translatable anyway without the minimum context. Let’s think about the old famous question that even kids ask:
“Do we really see the same colors?” The answer is both yes and no.

Yes - because that is how the language works. We have been exposed to the same objects and grew up associating them with the same words. When invoking the words, we invoke from memory the experiences that were imprinted by the same objects, so for all practical purposes, they are the same colors.

No - because, despite our identical physiologies in retinal receptors, the birth of ‘color’ is in the brain, and it is extremely unlikely that a certain color will produce exactly the same neuronal activations in 2 different brains. I assume there is a great deal of (systematic) randomness in the way information flows.

First, I think the optical nerve pathway performs some “hashing” due to the way the tissue has grown. Second, the neuronal activity itself has a chaotic component. Yes, once the representation of color in a given brain has formed it will stay that way, but the exact way it forms the first time is unpredictable due to the chaotic dynamics which make the system very sensitive to both minute variations in current input and to past context (bias). And yes, the activation will happen in the same visual areas of the cortex, but I am talking about the fine granularity SDR.

My point is that if you take the ‘blue’ SDRs from each brain, the topological overlap between brains will be minimal. While the SDRs are logically identical, they are hashed differently. “De-hashing” happens automatically through speech, and that is how two brains can agree they both see blue. But if we want to, say, read an SDR from brain 1 and implant it into brain 2 by external (artificial) means, we need to know the hashing functions to perform the translation - and that might be a very difficult task. This issue might have implications for mind uploading.
I do not have evidence for the differences at fine granularity, and if that is not so, please correct me. But we know at least that cortical neurons are not hardwired to color: a study on monkeys found that although their retinas had no ‘red’ receptors, after a gene mutation converted some receptors to ‘red’, they started recognizing red objects.
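The claim that two independently formed ‘blue’ SDRs would barely overlap topologically can be illustrated with a toy sketch. The SDR size (n=2048) and number of active bits (w=40) are conventional HTM-style numbers, assumed here purely for illustration:

```python
import random

def random_sdr(n=2048, w=40, seed=None):
    """A toy stand-in for an SDR: w active bits chosen out of n.
    The sizes are illustrative HTM-style assumptions."""
    rng = random.Random(seed)
    return set(rng.sample(range(n), w))

# Two brains independently form their own 'blue' representation.
blue_brain1 = random_sdr(seed=1)
blue_brain2 = random_sdr(seed=2)

overlap = len(blue_brain1 & blue_brain2)
print(f"shared active bits: {overlap} of 40")
# For independently chosen bit sets, the expected overlap is only
# w*w/n = 1600/2048 ≈ 0.8 bits, so the two 'blue' SDRs share almost nothing.
```

The two representations are functionally “the same color” only because each brain’s downstream circuitry interprets its own bits consistently; mapping one bit pattern onto the other is exactly the unknown-hashing-function problem.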

Now, if we consider the conscious experience as a mystical soul possessing intelligence (which is a valid point of view, I’m not trying to put it down), then we can keep wondering about the question, and especially about what exactly we might “see” differently in the same color.


curt: To duplicate human level performance in our systems we will need to duplicate the same powers of classification the brain uses and the side effect will be an AI that has this same type of perception confusion as humans have – they will be “conscious” in the same way humans are.

Very challenging, in that my dream for AI is that it could teach us to see more than we do, analogous to inventing new colors (e.g. ‘blue’ is new). This raises the question of extending the nature of consciousness beyond classification.

Consciousness as we know it is related to connection - I wonder what the thoughts of a solitary octopus are? (some octopi are social). Beyond static classifications, there is the timing aspect - the feeling of consciousness for me is intimately tied to feeling present in the moment, and this can be heightened by the variable of another creature present exhibiting similar awareness.

In a way, I think consciousness that neural nets approach is like muscle, and consciousness as we feel it also involves neural analogs to heart, lungs, kidneys, and liver, or to a jury deciding a case.

Add random association: mention of “… a biattention mechanism [Seo et al., 2017, Xiong et al., 2017]. The biattention first computes an affinity matrix A = X Y ⊤ . It then extracts attention weights with column-wise normalization … which amounts to a novel form of self-attention when x = y.”
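For concreteness, the quoted biattention computation can be sketched in a few lines of numpy. The sequence lengths, the feature dimension, and the use of softmax as the column-wise normalization are assumptions for illustration, not code from the cited papers:

```python
import numpy as np

def softmax(z, axis=0):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy sequence encodings (lengths and feature dimension are assumptions).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))   # 5 tokens of sequence x, 8-dim features
Y = rng.standard_normal((7, 8))   # 7 tokens of sequence y

A = X @ Y.T                       # affinity matrix, shape (5, 7)
Ax = softmax(A, axis=0)           # column-wise normalization over x-tokens
Ay = softmax(A.T, axis=0)         # ... and over y-tokens

# Context vectors: each y-token attends over x, and vice versa.
Cy = Ax.T @ X                     # shape (7, 8)
Cx = Ay.T @ Y                     # shape (5, 8)
print(Cy.shape, Cx.shape)
```

When X and Y are the same matrix, A becomes the sequence’s self-affinity and, as the quote notes, the computation reduces to a form of self-attention.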

What the brain cannot find is any association between our thoughts and the external world - because the eyes can’t see neurons, and the neurons that can sense thoughts can’t see. We don’t lack sensors; rather, the sensory scopes of our sensors are isolated, like having our eyes in one room and our ears in a different room, so that what the ears hear cannot be correlated with what the eyes see.

@curt, what do you think would be the impact of removing that isolation with some fancy apparatus?

Imagine some kind of high precision, low latency brain imaging device that relayed the firing of those thought-sensing neurons back into the path of some sensory input (e.g. augmented reality glasses, or headphones).

Do you think that over time it could reduce or remove the illusion of consciousness?


Yes, I think it most certainly would remove the illusion. I’ve argued just that point many times. “All” you need is a high-resolution real time brain scanner. If the brain can “see” the activity of the neurons with the eyes, or ears, or fingers, then the brain would make the connection that our “thoughts” were physical actions of our neurons, and the illusion would vanish from that person. Timing is critical, however. The brain’s association learning system won’t link the events if the timing is delayed too much. Just as if the sound track from a film is delayed too much from the video, the voices of the people talking “disconnect” from the images of the moving mouths. I don’t know how close the timing needs to be but I assume something in the range of 100 ms or less.

It would take time, however; it wouldn’t just happen instantly. The person would need to be subjected to this experience of seeing their own brain behavior for weeks or months to allow it all to get wired correctly and remove the illusion. But maybe even a small exposure to such an experience would do a lot? Maybe just some electrodes in the brain, connected so that a given thought selected by the person would cause the neurons to “click” a speaker, would do a lot? It would be good to see experiments done to test this…


Relevant article by giants of consciousness research Christof Koch and Giulio Tononi: Can We Quantify Machine Consciousness?


Regarding IIT and quantifying consciousness, here is Scott Aaronson’s critique of IIT. He’s had an exchange of posts of sorts with Giulio that should be accessible from the text:


Excellent discussion of consciousness. It’s great to read and try to learn and understand the various theories about the topic. I’m still struggling with the idea of dualism. I don’t believe I’ve ever really thought of my brain’s thoughts as being anything but my brain and body reacting to the “associative learning” that I or my life experiences have programmed in there over time. I see the differences in people’s perceptions of their lives and experiences as simple differences in that “associative learning”.

As one who was an athlete early in life and now has sleep apnea (with associated sleep-deprivation issues), my perception of my brain’s “thinking ability” is highly variable from day to day, and I can feel it, recognize it, and even plan my technical work versus physical work around “how effective my brain is being currently”.

I can still use my body’s experienced nervous system to control a shovel, a tractor, or a screwdriver when I am aware that my higher-level cognitive skills are slightly off/slow today. So, I don’t think of my brain/consciousness as something separate from my mind/body. Is this in line with getting away from dualism, or am I just confused?


@t.farley, I agree with your observation as well. Rather than seeing myself as dual-nature, I feel as though I have another “sense” that simply works along with the other five. I am able to use it in conjunction with the other senses in making decisions. Additionally in my experience, my thoughts seem far from being a sensor “in another room”. They are very much connected to things in the physical world. Seeing a cat, for example, triggers complex thoughts of warmth, purring, friendship, etc. The thoughts are very much connected to what I sensed with my eyes. This also matches the actual circuitry – signals coming from external sensors and lower regions in the hierarchy are affecting the same networks of cells as signals from parallel regions. As such, there is a temporal correlation between these signals.


A little bit off-topic because it’s not about consciousness at all, but it does discuss intelligence from an intriguing point of view (especially because it argues against certain assumptions, in a way that seems to undermine HTM):

One takeaway is that in the quest for AGI we will inevitably stop copying biology and fork from it, creating an extension of our intelligence instead of a standalone intelligence that we would expect to magically solve our problems.

Even if we did not stop, and we created human-level intelligence, we wouldn’t be able to make it useful, for the same ethical reasons we can’t use human slaves. And instead of having limited control over it, why not just hire humans? It’d be much cheaper.
