What do we need, in your opinion, to have consciousness besides intelligence?
I think the problem of consciousness is somewhat a problem of ambiguous definitions. Ask 12 researchers what consciousness entails and you’ll get 12 different answers. Just read the Wikipedia page on consciousness to get thoroughly confused. I don’t think we know enough about consciousness to ask the right questions yet. But I do think that HTM is the right direction to get there. I am not certain HTM will naturally lead to consciousness without modeling more of the brain (perhaps sub-synaptic structures like microtubules, which seem to play some role).
One of the most quantitative theories of consciousness right now is Giulio Tononi’s integrated information theory. It assigns a quantity that measures how “interconnected and informative” the components of a system are, in some sense. In that theory, systems have consciousness in proportion to the size of their maximally irreducible informational structures.
Unfortunately all we can do right now is guess: we could correlate reports of consciousness with measures of this quantity, but it’s currently intractable to compute for any nontrivial system. But if his theory is right, then consciousness is a scalar quantity and intelligence is not really a requirement at all: a tree is conscious as well, just not as highly so.
That said, you could conjecture perhaps that the most informationally integrated systems are likely to be the intelligent ones.
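To make the “interconnected and informative” idea concrete, here is a toy sketch that scores a tiny binary system by its total correlation (the sum of the parts’ entropies minus the joint entropy). This is a crude stand-in for illustration only; Tononi’s actual Φ is far more involved (it searches over partitions and uses perturbational probabilities), and the states below are invented:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a list of hashable states."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def multi_information(states):
    """Total correlation: sum of per-unit entropies minus joint entropy.
    Zero when the units are independent; larger when more 'integrated'."""
    joint = entropy(states)
    parts = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return parts - joint

# Two-unit system where unit 1 always copies unit 0: fully integrated.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent coin flips: no integration at all.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(multi_information(coupled))      # prints 1.0
print(multi_information(independent))  # prints 0.0
```

On this crude measure a system scores higher the more its parts constrain each other, which matches the intuition that a tree would score above zero but far below a brain.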
That difficulty, I feel, comes from the attempt to qualify consciousness using some kind of abstract theory of what it is. I come down on the side of neo-philosophers and modern human-potential leaders who characterize consciousness in terms of its experiential impact.
Said another way, what is one’s experience of being conscious? Because establishing its objective nature I think is a fool’s errand.
Borrowing from that, it is said that there are different domains of “knowing” or experiencing being (i.e. consciousness). The psychological domain is metaphorically akin to the announcers and spectators in a sports arena, assessing and describing the thing itself. It is talk “about” the thing, describing it and characterizing it - assessing its historical impact and coming up with interesting analysis of the thing - all of which has absolutely zero impact on the thing itself…
It exists in the domain of an interpreter giving an account.
Then there is the experiential domain. This is where the thing is actually occurring. The “actual” experience of being conscious.
Now be careful because that domain is distinct from memories of that domain. The instant one tries to communicate that experience - that accounting is not the experience itself, but the memory of that experience. And the two things are distinct!
So anyway, I would say that things like awareness of bodily sensation, our always-running inner dialog, etc. are accurate components of consciousness…
Interesting topic, and I agree with the thoughts thus far posted.
As it may be difficult to come to agreement on what “C” really is by definition, it may be valuable to discuss what the next step above intelligence is. Trees may be highly intelligently programmed in their area of expertise (stay alive, reproduce, die), but that system seems to have a limited ability to learn, experiment, or otherwise develop beyond what it is. (I’m not dissing trees.)
Of all the known species, very few approach man’s capability to extend our own abilities. When faced with a problem we cannot solve, we generally don’t just kick out an error message. We kick the bees’ nest when we are kids, we throw rocks even when we know we shouldn’t, we apply past similar scenarios to present problems, and attempt to step forward (i.e. learning); yes, mistakes are part of that process.
It does appear to be based upon some level of intelligence, as tree roots will try to grow around cement walls in the search of water; but that would appear to be the tree just applying more of plan A, instead of trying a new plan B.
Just ramblings, on an early Thursday morning…
Consciousness appears to be a phenomenon involving self-awareness or introspective functions, where there is an ability to self-correct. In other words, inspection of its own internal detection results for anomalies. A sort of feedback system.
In Numenta terms, consciousness (or something very much like it) will be an emergent result from a system that is able to look for anomalies within its own anomaly detection results, and make some sort of adjustment based on those results that can then reduce anomalies within its own anomaly detections. Do this enough times, and see what evolves.
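As a toy illustration of that feedback idea (nothing like Numenta’s actual algorithms; the detector, thresholds, and tuning rule here are all made up), a detector that monitors its own anomaly rate and retunes itself might look like:

```python
# First-order loop: flag inputs far from a running mean.
# Second-order loop: watch the detector's own anomaly rate, and
# adjust its tolerance when that rate itself looks anomalous.

class SelfTuningDetector:
    def __init__(self, tolerance=1.0, target_rate=0.1, adjust=0.05):
        self.mean = 0.0
        self.n = 0
        self.tolerance = tolerance      # how far from the mean counts as anomalous
        self.target_rate = target_rate  # anomaly rate the meta-loop aims for
        self.adjust = adjust
        self.recent = []                # recent anomaly flags (the meta-signal)

    def observe(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        anomalous = abs(x - self.mean) > self.tolerance
        self.recent = (self.recent + [anomalous])[-20:]
        # Second-order feedback: if the detector is flagging too much
        # or too little, treat that as an anomaly in its own behavior
        # and retune it.
        rate = sum(self.recent) / len(self.recent)
        if rate > self.target_rate:
            self.tolerance += self.adjust   # too jumpy: relax
        elif rate < self.target_rate:
            self.tolerance -= self.adjust   # too quiet: sharpen
        return anomalous

detector = SelfTuningDetector()
for x in [0.1, -0.2, 0.0, 5.0, 0.1, -0.1]:
    detector.observe(x)
```

The point is only the shape of the loop: the system’s own detection results become an input it detects anomalies in and corrects against.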
I personally suspect that consciousness is a natural extension of Attention, when abstracted to the Nth degree through hierarchy (similar to recent discussions on how sensory-motor processes might apply to the transformation of ideas in abstract space). Slides from a couple of years ago showed Attention to be a function of layer 6 in the cortex (not sure if this is still Numenta’s current theory). Attention is absolutely critical for operating in an environment where the senses are bombarded from every direction with a fire-hose of data, allowing us to focus in on a specific subset of information relevant at a given point in time and “tune out” everything else.
People often place a lot of emphasis on the power of the sub-conscious, perhaps overlooking the enormous power of consciousness. The vast majority of our memories are things that we were conscious of. We usually have to go to great lengths (drugs, hypnosis, etc) to recall memories of even small details that were outside of our consciousness at the time. Since what we are conscious of is also what we are focused on, and is what dominates our memories (i.e. memories of things we were not focused on have been “tuned out”), it seems to me to align with the basic function of Attention, applied to a more abstract space.
From that perspective I believe HTM will naturally lead to consciousness once the areas of Attention and Hierarchy are worked out and resource constraints are overcome to allow modelling enormous networks.
This of course might devolve into a debate of definitions, but I personally would not equate “consciousness” to be the same thing as “self awareness”. You can be conscious of things other than yourself.
I think self awareness is the application of consciousness (or the focusing of attention, as I theorized in my last post) toward one’s self. In other words, self awareness is when we focus our attention on our own internal thoughts, using them as sensory inputs to model abstractions like “myself”, “how I feel”, etc.
Paul, what I spoke of was the experience of being, not self-awareness. Self-awareness is a concept applied to something being observed. The experience of being is just that…, an in the moment; right now experience.
Gotcha. It is easy to misinterpret what someone else means based on subtle differences in definitions.
In my case, the word “experience” is semantically very similar to “awareness” in the context of this conversation. So “experience of being” ends up in an SDR that is very similar to “awareness of being”, which has a lot of overlapping bits with “self awareness”.
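The “overlapping bits” intuition is easy to make concrete if we treat each concept’s SDR as a set of active bit indices; the specific bit assignments below are invented purely for illustration:

```python
# Each concept is a set of active bit indices; semantic similarity
# is just the size of the overlap. Bit numbers are made up.

experience_of_being = {3, 17, 42, 58, 71, 90}
awareness_of_being  = {3, 17, 42, 58, 64, 90}   # shares most bits
self_awareness      = {3, 17, 42, 99, 101, 120} # shares fewer bits

def overlap(a, b):
    """Number of bits two SDRs share."""
    return len(a & b)

print(overlap(experience_of_being, awareness_of_being))  # prints 5
print(overlap(experience_of_being, self_awareness))      # prints 3
```

So two phrases end up “meaning nearly the same thing” exactly when their representations share most of their active bits.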
People can’t agree on what consciousness is. So we can’t really answer a question like this in a way that we can hope to get some form of consensus. I, however, will share what I believe to be true.
I do know exactly what consciousness is. It’s a simple illusion of the perception system that leaves people clueless about the nature of their internal brain signals (our private thoughts and memories). This confusion about the nature of our own internal brain signals is what causes this endless debate over consciousness.
To duplicate human level performance in our systems we will need to duplicate the same powers of classification the brain uses and the side effect will be an AI that has this same type of perception confusion as humans have – they will be “conscious” in the same way humans are.
The illusion is simple. The foundation of human perception is what HTM calls “pooling”. It’s associative learning. Associative learning is the foundation of how humans “understand” the world they live in. We understand that a large collection of different sensory signals all represent a “cat” because the perception system associates all those different sensory patterns into the same category. How the brain’s perception system associates signals together form the foundation of how we understand the world we live in. A sound signal that includes a cat “meow” sound and a visual signal of a cat’s face become associated into the category of “cat”, by the perception system (pooled into a shared representation) and this allows us to respond to a sound of a cat, or the image of cat in the same way – we act as if there is a cat near us. We “understand” there is a cat in the room whether we hear it, or see it. All because the brain associates these sensory patterns as being in the same category.
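A minimal sketch of that pooling-by-co-occurrence idea (a deliberately simplified stand-in, not HTM’s actual pooling algorithm; all pattern names and the category mechanism are hypothetical):

```python
# Patterns presented at the same moment get pooled into one shared
# category, so the system later treats them as "the same thing".

class AssociativePooler:
    def __init__(self):
        self.category = {}   # pattern -> category id
        self.next_id = 0

    def observe(self, patterns):
        """Patterns that co-occur are merged into one category."""
        known = [self.category[p] for p in patterns if p in self.category]
        cat = known[0] if known else self.next_id
        if not known:
            self.next_id += 1
        for p in patterns:
            self.category[p] = cat

    def same_thing(self, a, b):
        return a in self.category and self.category[a] == self.category.get(b)

pooler = AssociativePooler()
pooler.observe(["meow-sound", "cat-face"])   # heard and seen together
pooler.observe(["bark-sound", "dog-face"])

pooler.same_thing("meow-sound", "cat-face")  # True: pooled into one "cat" category
pooler.same_thing("meow-sound", "dog-face")  # False: never co-occurred
```

The “understanding” here is nothing but the merged category: a meow and a cat’s face trigger the same internal representation, so the system responds to either as “cat”.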
Our brain’s perception system works so well, that we learn to trust it. Whatever it “tells” us about the state of the environment we learn to trust as a foundational truth about the environment we live in. Most people have great trouble separating their brain’s truth, from the environment’s truth. People hear voices in their brain and think god is talking to them because they can’t grasp that the voice is coming from their own brain. Illusions, and delusions (strong illusions) are very confusing to people. And this lack of ability to separate truth, from illusion, is why society is so confused about consciousness.
When the brain classifies a rock or stick on the ground as a “cat”, we believe we saw a cat, even though there was no cat there. Our brain will mislead us when we are not able to recognize the difference between illusion and truth. It happens at all levels.
The brain’s internal signals, our private thoughts, signals generated IN the brain, not from an external sensory signal, are analyzed by the same perception system that analyzes sensory data. And it TRIES to find associations to build categories like “cat”. But it fails to find associations because there are none. A cat creates associations between the audio meow and the visual cat-face because they happen at the same time – they are pooled into the same category because they happen at the same time. But what is there to pool between an internal “thought” of a cat and cats in the real world? Do internal thoughts of cats only happen when there is a real cat in the sensory data streams? They do not. These internally generated “behavior” signals are independent in time from what’s happening in the external environment sensed by the eyes and ears. So there is nothing to associate.
If our eyes could “see” our neurons firing in our head, that would create an association. When we “looked” at our neurons, we could see them fire, at the same time the signals in the brain were generated. The perception system would then associate the internal signal, with the visual data of “seeing” the neuron fire, and the two would merge into one, as being the “same” in our view of reality. Our “thoughts” would become “the firing of neurons” we can see with our eyes.
But we can not see our neurons fire. So there is nothing for the low-level perception system to associate. This means humans end up with no understanding of what our private thoughts ARE. They exist with NO ASSOCIATIONS to the physical events we can detect with our eyes, and ears, and fingers. This gives us all a default understanding of private thoughts as “having no known physical associations”.
Private thoughts are physical events. But we can only naturally understand those physical properties if the brain is able to find associations between the signals of our private thoughts and the data from our eyes and ears. And it can’t. So we are left with a foundational understanding of reality that thought has no physical properties – that thoughts exist in a category separate from all the things that do have physical properties (all we can sense with our eyes and ears and fingers) – what we call the physical world.
This classification of thoughts as having no physical properties is just an illusion. It’s a failure of the perception system to accurately “understand” the physical properties of thought, only because those properties can’t be detected with our eyes or ears or fingers. It’s just a lack of the needed sensory data. It’s just an accidental magic trick.
But this illusion causes massive confusions over what we are as humans. We self-identify with the parts of the body we can sense with our eyes and ears, but we also self-identify with the thoughts in our brain. Our self is both these things. But by this accidental magic trick, our perception system is telling us that it can’t find any associations between our “thoughts” and our “body”. Our low-level perception system, by how it classifies data, is telling us, that our thoughts are not physical, and that humans are dualistic creatures with a physical body and non-physical thoughts.
This separation of thoughts and body is the foundation of all the confusion over consciousness. It’s what makes people “feel” as if they have a self that is “inside” the body, but yet magically “separate” from the body. Consciousness is the word people use to talk about the fact that they sense they have a non-physical “self” trapped inside a physical body. They are “conscious” because their non-physical self is “aware of” the physical body – and “aware” that the “self” is separate from the body.
But this separation is all just an illusion – an accidental magic trick the low-level associative-learning perception system has made while trying to identify correlations between the internally generated “thought” signals and the external sensory signals. It leaves everyone believing they are “more” than just a physical body, when in fact we are nothing more than a physical body with waving arms, flapping lips, and firing neurons.
Once you understand that this is an illusion, and why it exists, then it stops being an illusion. I have not thought of myself in dualistic terms in a decade. I’m just a hunk of meat. Not a hunk of meat with a soul.
But most people I explain this to are unable to understand what I’m talking about. I understand what consciousness is and why this confusion exists. You can too, though most people I explain it to remain lost and confused.
In On Intelligence, Jeff explains consciousness in this same exact way.
A quote from it: “the cortex has no ability to model the brain itself because there are no senses in the brain. Thus we can see why our thoughts appear independent of our bodies, why it feels like we have an independent mind or soul. The cortex builds a model of your body but it can’t build a model of the brain itself. Your thoughts, which are located in the brain, are physically separate from the body and the rest of the world. Mind is independent of body, but not of brain.”
Does the illusion really disappear for you? It seems too natural to actually dispel the feeling entirely. Like, I understand how it works, but I do not lose the disconnected-thoughts sensation. I’m not convinced it can be removed from human nature.
I also still have difficulty reminding myself that my perception of reality is not the real world. Like, the objects in my brain do not actually exist as anything more than representational icons, in a Don Hoffman sense. I’m still trying to wrap my mind around that idea.
What’s disappeared for me is the dualistic view of self. I no longer think of myself as some sort of “thing” inside a body; I just think of myself as a body made up of many parts. I don’t self-identify with the voice in my head as “me” any differently than I self-identify with my hand. I don’t think of the voice in my head as different from my voice when I speak, except that my voice when I speak is the air vibrating, the ear translating that to neural signals, and the brain translating those to concept signals. When I speak in my head, it’s just part of the same physical chain of actions, but only the brain part.
I do, as you point out, still naturally think of the voice in my head as different from my external voice, but not different in the sense of being non-physical. Mostly, I just don’t think about it one way or the other. The key difference is that I don’t think of myself as being inside a physical body.
I have always been a strong materialist, but one day, about 10 years back, I was in the shower thinking about AI when all of a sudden I recognized that the way I thought of myself was different than how I thought of a machine like a robot. I thought of myself as “inhabiting” a physical body. I had never NOTICED I was doing that! It was built into me; I was thinking that way about myself without ever giving it a second thought. Once I realized I was automatically thinking of myself as this dualistic being, and figured out why and how this was happening, I re-trained myself to stop thinking that way. It took me about a month or so. It was like learning to walk again, but in this case, learning to see myself as I really am, not as how my perception system had tricked me into thinking about myself.
It’s much like how we talk about software not being hardware. We are taught to talk about software as if it is some magical non-physical part of a computer that exists in this non-real “virtual” world of computers. But software IS hardware. It exists as the parts of the hardware that can be changed to reprogram the computer. But we naturally fall into this dualistic view that software is not physical because it’s a direct parallel to how we think about ourselves – as the “information” in our brain not being physical. The more I examined all the ways I thought about the world, the more I found example after example of how we had projected this false idea of dualism out to the real world, even though it’s all invalid. We talk as if all information is non-physical when it’s all just a physical process. The error is everywhere in our language and the way we have been taught by society to talk, and think, about the world. And it’s all highly misleading.
I’ve erased most of those dualistic forms of thinking from my understanding of reality and I’m constantly on the lookout for more of the same errors. I see myself, and the world around me as the physical world it is, instead of all these projections of dualism into our own self, as well as the world outside. I see the world very differently now that I’ve “fixed” many of these errors in my thinking – all of which I believe have grown out of this fundamental perception error the brain makes about its own neurons firing.
"Your thoughts, which are located in the brain, are physically separate from the body and the rest of the world. Mind is independent of body, but not of brain.”
That’s interesting. But…
Well, no, he’s not saying the same thing there. But it’s similar. It’s a close parallel. If the brain didn’t have any sensors, how would we “know” we are having a thought? Every neuron in the brain is a type of sensor. Neurons “sense” each other firing, just like some special types of neurons in the eye “sense” light, and other neurons in the skin sense pressure. The brain can most certainly sense its own activity. This is not a problem of a LACK of sensors. It’s a problem of the brain not being able to associate the firing of our skin neurons, and eye neurons, and ear neurons, with the firing of the brain neurons that ARE our private thoughts.
It’s not a “sense” problem, it’s a classification problem. It’s not a lack of a model, it’s the problem of the brain building a FALSE dualistic model of its own activity. When a dog barks, our brain builds a model that merges the vision of a dog and the sound of a dog into one concept – the concept of “dog”. That concept is activated when we hear a dog bark, or when we see a dog. It’s the same concept because the brain “pooled” these different patterns together through associations.
As a parallel with a dog barking, our thoughts are the firing of neurons in our brain (the barking of neurons, if we want to stretch this idea), but the low-level perception system fails to build a model to reflect this. It never combines the idea of “neuron firing” (the words we hear that describe that, or the visual image of a neuron) with what our brain can sense is happening in our brain (our private thoughts). The brain’s model keeps these two concepts separate (as different things) when in the real world they are the same thing. And when the brain fails to merge these patterns into one, we get a false dualistic model of reality instead of a monist model which accurately describes the world we live in.
And I do not claim our thoughts are separate from our body. That’s the error of dualism – the error that results from the brain’s illusion – the bad model it builds. I claim our thoughts ARE the firing of neurons, and nothing more. To try and claim that our thoughts are separate from the brain is to try and argue that neurons aren’t part of our brain.
From this one quote, I don’t think Jeff is saying the same thing, but I’d have to go get my copy of the book and read the larger context to be sure.
I think it may seem like a different idea to you since you are defining sensor so generically, whereas normally it refers to low-level, subcortical sensory organs. The brain lacks such a system for itself, which would be required for this association to occur.
The neurons sense each other firing, but only in the context of the sensory input they are receiving from the environment through the periphery.
It certainly would be nice to innately know exactly which neurons are firing in our brains.
I don’t think that claim was ever made. In fact, the quote says mind is independent of body, but not of brain. Body and brain are being made distinct since the model is in the brain, but of course they are both physical parts of a person. Dualism is certainly not suggested. Thoughts don’t exist in your hand, your foot, your shoulder, or anywhere else but the brain, so it’s a justified distinction imo.
Yeah, those are valid points…
Jeff is saying “the cortex has no ability to model the brain itself because there are no senses in the brain…”
This is certainly very close to the same thing I’m saying. We are both saying that since the organs we use to sense the physical world can’t sense the activity of our neurons, the brain ends up building an invalid model of itself, and that modeling error is the cause of all the confusion about consciousness. So we are both thinking the same thing at that level.
But I take that idea a step further by being specific about what the modeling error is. In that quote, Jeff is implying the brain has no data at all about itself (can’t sense the thoughts because there are no sensors there), and that this lack of data is why the model is invalid.
But if it were true that the brain couldn’t sense its own thoughts, then we would not be aware of ourselves having thoughts. We could not talk about what we were thinking. The fact that we can say something like “I was just thinking about how to solve that puzzle” means we can, in fact, SENSE our own thoughts. And if we can sense, and react to, our own thoughts, there must be some type of SENSOR that makes this work. And sure enough, there is. Neurons are sensors. They sense the firing of other neurons.
This is not a stretch of reality to use “sensor” in this way, even though it’s not common. Every sensor in our body is a neuron, I believe. Is there a single sensor in our body that is not a neuron? I’m no expert on human physiology by any means, but I’m not aware of any sensor that is not some type of neuron. All neurons respond to some stimulus and are triggered by that stimulus into creating a spike. Rods and cones are triggered by light. Hair cell neurons in the ear are triggered by physical motion. Neurons in the brain are triggered by the activity of other neurons – they are “neuron sensors”.
The brain has no problem at all “sensing” the fact that lots of neurons are firing (we are having thoughts). The “data” of this activity is mixed up with the data from our external sensors – all the data is in the brain. There is no lack of sensing and no lack of data representing our private thoughts.
And the brain most certainly DOES build models of the brain. It has no problem at all modeling the data that represents dogs, and cats, and private thoughts of dogs and cats. If the brain didn’t build models of this, we wouldn’t be able to understand what the words “private thoughts” means. So to say the brain can not model the mind because it doesn’t have sensors is totally invalid in my opinion. It’s just wrong.
But what’s valid, is to understand that it is building a false model.
There is only one type of model the brain builds. It’s a temporal association model. All our models of reality are association models. It’s what allows us to understand the 3D nature of the world, for example. We understand the 3D world in terms of how the world changes over time. If we rotate a cube, the image of the cube changes over time, and the brain associates the sequence of patterns; that sequence of patterns (the information about what is expected to come next) is what gives us an “understanding” of the 3D nature of our environment. It’s all stored, and encoded, in associations (spatial/temporal pooling in the HTM models).
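A toy version of such a temporal association model, far simpler than HTM’s high-order sequence memory, might just learn first-order transitions between views (the view names here are invented for illustration):

```python
# Learn which pattern follows which; the learned transitions ARE the
# "understanding" of the rotating object's structure.

from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, pattern):
        """Most strongly associated next pattern, or None if unseen."""
        nxt = self.transitions.get(pattern)
        return nxt.most_common(1)[0][0] if nxt else None

memory = SequenceMemory()
# Views of a cube as it rotates: the sequence itself encodes the 3D structure.
memory.learn(["face-A", "edge-AB", "face-B", "edge-BC", "face-C"])

memory.predict("edge-AB")  # "face-B": the expected next view
memory.predict("face-C")   # None: no association learned yet
```

What the model “knows” about the cube is nothing but these stored expectations of what comes next, which is the sense of “understanding through association” used above.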
What the brain can not find is any association between our thoughts and the external world – due to the fact that the eyes can’t see neurons, and the neurons that can sense thoughts can’t see. We don’t lack sensors; the sensory scopes of our sensors are isolated, like having our eyes in one room and our ears in a different room, so that what the ears were hearing could not be correlated with what the eyes were seeing.
We have non-overlapping sensory domains that prevent the brain from understanding how the two sensory domains connect. There is no sensory data correlation for the brain to work with, since the sensory domains of the external sensors, and our “thought sensors” don’t sense a common part of the physical world.
The lack of physical overlap in the domains of the sensors causes the brain to build a model that has no overlap – which leaves our understanding of the physical world isolated from our understanding of our mental world. We can sense both, and we are aware both exist because both are being sensed and analyzed for patterns, but the brain never gets to experience overlap.
The brain is accurately modeling the data it has access to, but with a lack of data, the brain has no ability to show how our thoughts are connected to the physical world. When we have a thought, where in the physical world does it exist? We know from training it’s happening in our head, but without that training where would we think a “thought” existed? We would have no idea if the thought was in our head, or our feet, or under some rock.
If we had eyes located in one room, and the eyes could never leave, and ears in another, we would have awareness of the sights and sounds, but if the sounds being picked up by the ears never correlated with anything the eyes were seeing, how would the brain ever learn how they were connected? What if the ears were just in the next room, through door #3, but the door never opened, and the eyes never saw anything in their room that correlated to the sounds the ears were hearing (we must assume very soundproof rooms here)? The brain would be aware of the sounds of one room, and aware of the visual activity of the second, but could never build a model of how these two rooms were associated. The brain wouldn’t know that the ears were located through door 3 of the first room. This lack of knowledge all comes from the fact that the sensory domains of the ears and eyes in this example don’t overlap.
The sensory domains of our ears and eyes do overlap, however. Which is why, when a dog barks and we can watch the dog bark at the same time we hear it, the brain solves the data binding problem by creating an internal signal to represent “dog bark” that is activated by either the sound pattern or the visual pattern. The brain wires itself to represent the idea that these two very different data patterns represent “the same thing”. If the brain doesn’t tell us two things are “the same thing”, then we understand them as “two DIFFERENT things”.
That’s the problem with our mental activity and all the things that we sense with our ears and eyes and fingers. They don’t overlap, so the brain classifies them as “not the same thing” by failing to merge them into a common signal. So the model the brain builds is one where thoughts have no physical properties, and where there are no thoughts in the physical world (the eyes never see them). The model is one where mental activity is non-physical.
But at the same time, we associate this mental activity as our self, so we all automatically end up with a dualistic understanding of the world we live in, where we have this non-physical part of us (our soul). We automatically divide our thoughts and our body, into two very very very different categories because of this.
@curt, to change from your main point slightly, I am curious to hear your thoughts on another phenomenon that comes with consciousness. In particular it is the sense that “this person” is very different from any other person. This is not just limited to thoughts, but everything, including the senses. This, of course, is because “I” am the one sensing them.
This thought then leads to the terrifying realization that “I” will inevitably cease to exist. Completely and utterly, as if I had never existed in the first place. I can get a very slightest sense of this reality when I try to recall the time periods during which I was asleep.
This then leads to a rather confounding, unanswerable question, “Why then is ‘this person’ so very different from any other person that has ever existed or (as far as I know) ever will exist”?
This is possibly getting a little off topic, but it does cut at the core of one of the cruel realities of consciousness (one that I tend not to like to think about…)
Couldn’t have explained it better. This illusion of the ego (of a thinker behind the thought) is deeply ingrained in our western philosophy, culture and religion and that’s why it’s so hard to acknowledge.
But looking at eastern philosophy, we can see that they don’t have such confusion over consciousness and the soul. I highly recommend reading Alan Watts or listening to his lectures on YouTube, he is regarded as the brightest contemporary philosopher to have put eastern thinking into western terms that we can understand.
It is nonetheless interesting to think that western culture is the one that developed the science and technology that allowed an analytical study of the brain which already seems to point towards a similar understanding. Perhaps the illusion was useful in this regard?
Even abandoning the illusion, I personally don’t find that consciousness is any less magical. You can think of yourself as ‘just’ a bunch of meat and you’d not be technically mistaken, but isn’t it a big deal that this meat is able to project the external universe onto itself, and that anything exists at all? After all, you can’t deny the fact that there exists an illusion…
Now of course, even if the HTM AI will eventually emulate the full complexity of the cortex, it would probably not feel conscious in the same way a human (or even animal) does because the perception of the world is strongly influenced by the lower brain (what we know as instincts / feelings such as fear, love, hate, etc.), so the AI would need to build on top of a reptilian brain to achieve the same thing.
“Thoughts don’t exist in your hand, your foot, your shoulder, or anywhere else but the brain, so it’s a justified distinction imo.”
Well, no. It’s a weak holdover from strong dualism to do that. Your point that he didn’t argue dualism is totally fine, but I’m making a more subtle point here. This dualistic error, the false model the brain creates, keeps raising its head in our thinking even after we believe we have “fixed” it.
First, the mental activity of our central nervous system extends throughout our entire body; it’s not just in the brain. There is processing happening in every sensor of our skin, our eyes, our ears, and throughout our body. So to argue it happens only in the brain is not accurate.
“Thinking” is just neural activity, and to suggest neural activity only happens in the brain is just wrong. There isn’t one part of the brain that is doing the “thinking” while the rest does something else. The entire system is one big set of neural relays that start at external sensors and run to the muscles. All of that neural activity is “thinking”, not just part of it.
When I look around and see the table in front of me, there is data running through parts of my brain that represents that table. When I close my eyes and “think” about the table in front of me, there is data running through some of the same parts of the brain that represent “my kitchen table”. It’s not a different part of the brain being used to “think about” the table than the one that “knows I’m looking at the table”. And yet, this dualistic model the brain builds is able to isolate some patterns of CNS activity as “seeing the table” and other patterns as “thinking about the table”.
The dualism is between two types of data patterns IN THE CNS, not a dualism between the physical world and “our thoughts”. But the dualism in the data is what gives people the belief that the universe the data comes from is dualistic.
But there are no physical lines between the “thought” part of the central nervous system (the mind) and the sensing part of the CNS.
There is no dualism in our body, or in the brain, or anywhere. It’s only a dualism in the MODEL that the brain has created.
So to try and talk about the physical world as if it were actually dualistic (the mind is in the brain, but not the body), is still a dualistic error.
The brain is a unique part of the body we can point to, and cut out, and talk about, but the mind is a fiction of the brain’s model. It doesn’t exist separate from the body as “the brain” in any real sense. To try to talk like that is to try and justify the dualism error, by pointing to physical things that really are not what the dualism error is labeling.
The brain is in a data processing loop with the environment. It only works correctly, when it’s connected to the environment. A good bit of the processing that makes up intelligent behavior, is happening in the environment, not just inside our body and certainly not just in the brain. My thinking is very much in a loop with your brain, and with the web servers that process these messages between us – that’s all part of my ability to “think”. My thoughts most certainly don’t just happen in my brain.
But this invalid dualistic model the brain builds, makes us think there is a world called our “mental activity” that is isolated from the physical world. By trying to declare the isolation is “in the brain” you aren’t freeing yourself from the error of the model, you are doubling down on the error, and just changing the names to minimize cognitive dissonance with our materialistic models of reality.
The data processing of us humans happens in a loop with the environment, a loop that passes a lot of the data through the neurons of the brain. But there is no “I” hidden in that loop. That loop is not the “self”. The only “self” there is, is the ENTIRE BODY and all its parts. To identify that data processing loop as some special version of “us” is logically as silly as claiming that the blood circulating in our veins is the real “me”. To pick out one part of the body’s behavior and claim “that is me” is just illogical.
The whole body is me, and that’s the only logical version of “me”. Though we could argue for a larger version of “me”, we can’t really argue that only one part of the body is “me”. And yet it’s a common mistake, made by most people, to think of themselves as having some version of “me” that is not the body. This mistake all comes from the modeling error. Moving “me” from the body to “I am a brain” doesn’t fix the modeling error; it just extends it to the brain.
There’s lots of hype about mind uploads, where we transfer our “mind” to a computer and/or robot body and live forever. This is total nonsense, and it’s as stupid as believing in heaven. It’s based on the false premise that there is some version of “I” that is separate from the body, and there just isn’t ANY version of “I” that is separate from the body. To believe there is, is to fail to escape the modeling error that makes people believe they have a soul that goes to heaven after death. It’s all absurdly illogical.
If we build a computer that models the processing of MY brain, it won’t be “me” any more than if we build a statue of me out of granite.
You wrote: “so it’s a justified distinction imo.” I don’t think it’s justified. To me, you (or Jeff) are just projecting the dualism error onto the physical world instead of escaping from the dualism error of our brain’s false model.