Of course intelligence is materially dependent, but it is not matter itself. After all, intelligence is what you interpret/conclude/sense when you interact with something of a different intelligence than yours; otherwise it is not there or anywhere. It is a method of reorganizing the substance.
Seems to be in line with graph theory when it comes to computing systems.
I like the idea that consciousness is defined by a contour of connected activity.
You have really enlightening thoughts on this subject, let me first say. But a potential hole in your reasoning is the assumption that there are two independent "truths" of the world: one in our heads and one outside our heads. Experiments in quantum mechanics have suggested to us that external reality doesn't exist until it's measured (perceived). A particle's past behavior changes based on what we perceive. Based on this theory, external reality itself is an illusion of sorts and exists only when we are looking at it, at least on the atomic scale. These quantum effects are said to be "averaged out" when you scale up to the level of classical physics, but they are undoubtedly constantly present nonetheless. So how do you resolve your line of reasoning when conscious perception and external reality appear to be intimately linked and to influence each other, as opposed to the brain constantly trying to model an unwavering external truth?
I posted this earlier but it seems appropriate at this point to point to it again:
and
I'm really loving everybody's two cents on this topic. My take on consciousness is as follows:
I definitely don't buy into any idea that consciousness comes as a binary: conscious or not conscious. I think my dogs are conscious. I think the birds chirping outside my window right now are conscious. Even the jellyfish in the ocean, which lack any semblance of a true brain, are conscious of something. Even a simple sunflower, in my eyes, can be said to possess consciousness (I'll explain in a second). Consciousness, to me, is not a tangible facet of a system but instead a qualitative assignment that humans project onto systems that are otherwise just going about their business. The recognizability of consciousness is something that arises in animals' brains, and even in jellyfish nerve nets or sunflowers, as a result of their ability to sense and interact with the world and with other recognizably conscious beings. This ability comes in various degrees given the complexity of the system performing the action, with humans having arguably the most complex of systems, including a bunch of neural hardware to sense many different things and perform all kinds of manipulations of that information. So, when a sunflower adjusts the angle of its flower to follow the sun as it moves across the sky, it's clearly conscious of the position of the sun in the sky under my definition.
Turing's famous response to the question "can machines think?" in "Computing Machinery and Intelligence" seems in line with my simple definition. Humans don't monopolize consciousness, though it appears so. With this perspective, going back to the original question, machines can garner their own semblance of consciousness with the sea of information and logical manipulation available to them. I recently watched 1995's "Ghost in the Shell" (which, by the way, is an excellent movie), which eloquently explores this concept in one of its monologues. This is spoken by a fictional program that has become sentient, after somebody asserts it couldn't possibly be alive because it is only a program:
"It can also be argued that DNA is nothing more than a program designed to preserve itself. Life has become more complex in the overwhelming sea of information. And life, when organized into species, relies upon genes to be its memory system. So, man is an individual only because of his intangible memory… and memory cannot be defined, but it defines mankind. The advent of computers, and the subsequent accumulation of incalculable data has given rise to a new system of memory and thought parallel to your own. Humanity has underestimated the consequences of computerization."
The experiments may have been misinterpreted:
https://www.paulanlee.com/2017/04/14/consciousness-and-the-misunderstood-observer-effect/
"Misinterpretation" implies the interpretation is provably incorrect, which is not the case here, in either your source or any other that I've ever seen. It's a different interpretation that is equally possible given current evidence and theory. My original wording was kinda vague. Either way, this is probably veering off topic.
I hope you don't mind my balancing things out with QM-related information. From my experience, it's only a matter of time before the topic of consciousness leads to quantum weirdness. And cellular processes take advantage of quantum behavior, which is topical in a forum devoted to figuring out how brain (and related) cells work.
This is an excellent video I wanted to include earlier, but I was in a rush to get to my day job. I hope you like it too:
I see daisy chains.
Brain images display the beauty and complexity of consciousness:
https://www.newscientist.com/article/mg23431290-400-brain-images-display-the-beauty-and-complexity-of-consciousness/
Without getting deep into the philosophical swamp, I think the practically useful definition for consciousness is simply: "The ability of a system to represent itself as part of its model of the world", i.e. it has the concept of "self".
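That working definition is concrete enough to sketch in code. Here is a minimal toy illustration (all class and attribute names are my own illustrative assumptions, not anything from the post): an agent whose world model contains an entry for the agent itself satisfies the definition; a model that only tracks external objects does not.

```python
# Toy sketch of "a system that represents itself in its model of the world".
# All names here are illustrative assumptions.

class WorldModel:
    def __init__(self):
        self.entities = {}  # name -> attributes the system tracks

    def observe(self, name, attrs):
        self.entities[name] = attrs

class Agent:
    def __init__(self, name):
        self.name = name
        self.model = WorldModel()
        # The agent registers *itself* as one entity in its own world model:
        self.model.observe(self.name, {"kind": "agent", "is_self": True})

    def has_self_concept(self):
        # "Conscious" under the working definition above iff the model
        # contains an entry flagged as being the agent itself.
        return any(a.get("is_self") for a in self.model.entities.values())

agent = Agent("robot-1")
agent.model.observe("sun", {"kind": "object"})
print(agent.has_self_concept())  # True: the agent appears in its own model
```

The point of the sketch is only that the definition is a structural test on the model's contents, not a claim about subjective experience.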
Wouldn't a higher region forwarding an input to L4 in a lower region be indistinguishable from the cortex having sensors (as in the context of the opening post of this thread)?
This is essentially what I am proposing here:
Aren't intelligence and consciousness classifications of two very different phenomena?
Intelligence is the ability of an entity to acquire adaptability, knowledge, and skills to become more proficient in its environment. (An organism doesn't have to have any idea that it's acquiring abilities in order to acquire them, no?)
Consciousness is the ability to assess one's own relationship to one's reality and environment, and to consider oneself? The ability to have meta-knowledge?
In my work, yes, these are two different things. But for intelligence to work, it needs a consciousness system in place first, as a support structure/system.
Good read on this topic.
Lizard consciousness is simple: one reality being played out.
A better, bigger system has many realities and backup models of realities to fall back on, with neurons blocking the unwanted, incorrect realities at the last moment. How well this is done is the intelligence of the system. Boosting from other realities to assemble a reality is possible.
In wetware, many must be running in parallel. In silicon, many realities could be run one after another at high speed and the right one selected.
But for this to work you need an echo or a memory model.
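The silicon version of this idea (run candidate "realities" serially, then select the one that matches a memory "echo" of recent input) can be sketched as a simple search over candidate world models. Everything below is an illustrative assumption of mine: the models are toy linear update rules, and the "echo" is just a short list of remembered observations.

```python
# Hedged sketch: run several candidate "realities" one after another and
# select the one whose predictions best match a memory echo.
# All specifics (linear models, squared error) are illustrative assumptions.

def run_reality(model, steps):
    """Roll a candidate world model forward: next_x = a * x + b."""
    a, b = model
    x, trajectory = 0.0, []
    for _ in range(steps):
        x = a * x + b
        trajectory.append(x)
    return trajectory

def select_reality(candidates, echo):
    """Pick the candidate reality that best reproduces the memory echo."""
    def error(model):
        predicted = run_reality(model, len(echo))
        return sum((p - e) ** 2 for p, e in zip(predicted, echo))
    return min(candidates, key=error)

# The "echo": a remembered trace of what was actually observed.
echo = [1.0, 3.0, 7.0]

# Candidate realities, tried serially at high speed.
candidates = [(1.0, 1.0), (2.0, 1.0), (0.5, 2.0)]
best = select_reality(candidates, echo)
print(best)  # (2.0, 1.0) reproduces the echo exactly
```

Note how the memory model is essential, as the post says: without the stored echo there is nothing to score the candidate realities against.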
RE: Alan Watts Reflection ft. Who Am I?:
I think you nailed the key problem… we can never "describe" consciousness because, if you really think about it, we cannot describe anything completely. At any level of graininess/abstraction, describing anything completely would be describing everything (including what it is not). To communicate, we can only allude to stuff, and hope (or take for granted) that the receiver has the requisite experiential representations. In other words, could we communicate without any shared experience? (If not, physics is our only hope with aliens.)
I think intelligence (the ability to learn and create new responses) requires the inclination to respond contextually while informed by memory. Intelligence enjoys creating responses that are synthetic, i.e. more than just remembered behaviors.
The book (On Intelligence by Jeff Hawkins) starts off right by declaring that any section of the (six-layer) cerebral cortex is similar to any other, and that its plasticity even affords that vision (normally experienced in the occipital lobes) can be experienced as a kind of vision in the sensory cortex (behind the central sulcus), such as when a blind person wears a device that transduces video to a tongue-mounted display. This requires intelligence, and it also means that any form of mental activity in any piece of neocortex has the same natural physics as any other ongoing mental activity, which is the essence of consciousness.
Activity at any part of the cerebral cortex can be measured by electrical field variances - I like to refer to these fleeting bits of electrical activity as mental objects. They can be detected by EEG or probes, and when a pulse is skillfully applied to a person's brain in the same spot, the person will remark that the mental object is (more or less) present (a sensorimotor type of test in humans).
Conscious activity in the cerebral cortex is the ongoing arising and passing away of mental objects, both via senses, and from formed associative memories - (these are recallable sequences of mental objects having commonality with currently active mental objects).
A natural part of consciousness includes the continuous formation of new recallable sequences.
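The associative-memory description above (stored sequences of mental objects that are recalled when they share commonality with currently active objects, with new sequences continuously formed) can be sketched as a tiny data structure. The class and labels below are my own illustrative assumptions, not anything from the post or the book.

```python
# Toy sketch of recallable sequences of "mental objects": a stored sequence
# is recalled when it shares an object with the currently active set.
# All names are illustrative assumptions.

class SequenceMemory:
    def __init__(self):
        self.sequences = []  # each entry is a list of mental-object labels

    def form(self, sequence):
        """Continuously form new recallable sequences."""
        self.sequences.append(list(sequence))

    def recall(self, active):
        """Return stored sequences having commonality with active objects."""
        active = set(active)
        return [s for s in self.sequences if active & set(s)]

mem = SequenceMemory()
mem.form(["coffee", "cup", "warmth"])
mem.form(["rain", "umbrella"])

# A currently active mental object cues the overlapping stored sequence:
print(mem.recall(["cup"]))  # [['coffee', 'cup', 'warmth']]
```

Real cortical recall is of course far richer (ordered, temporal, probabilistic); the sketch only shows the "commonality cues recall" structure of the description.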
I would say: consciousness is experienced in the brain's dimension of memory, usually forming new memory. Some instances of consciousness are more intelligent than others.
Good. Basically, from the perspective of information theory, it is the coding of information.
Is there any explanation of thinking, attention, and consciousness based on this theory? We know that thinking is a necessary feature of AI.
Or is there any explanation of thinking not related to the neocortex?