Intelligence vs Consciousness

In my work, yes, these are two different things. But for intelligence to work it needs
a consciousness system in place first, as a support structure.

Good read on this topic.

Lizard consciousness is simple: one reality being played out.
A bigger, better system has many realities, and backup models of realities to fall
back on, with blocking neurons suppressing the unwanted, incorrect reality at the last moment.
How well this is done is the intelligence of the system. Boosting from other realities
to assemble a reality is possible.
In wetware many must be running in parallel. In silicon many realities could be run
one after another at high speed, and the right one selected.

But for this to work you need an echo or a memory model.
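To sketch how that selection could work (everything below is a toy assumption of mine, not a claim about brains): each "reality" is a candidate prediction, the "echo" is a short memory of what actually arrived, and the wrong candidates are blocked by scoring them against that memory, one after another.

```python
def score(candidate, echo):
    """Overlap between a candidate reality and the remembered input."""
    return len(candidate & echo)

def select_reality(candidates, echo):
    """Run the candidates serially (the silicon strategy) and keep the
    best match against the echo, blocking the rest."""
    return max(candidates, key=lambda c: score(c, echo))

echo = {"rustle", "shadow", "movement"}      # the echo / memory model
candidates = [
    {"wind", "rustle"},                      # reality A: just wind
    {"predator", "shadow", "movement"},      # reality B: danger
    {"prey", "movement"},                    # reality C: food
]
print(select_reality(candidates, echo))      # reality B scores highest here
```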

RE: Alan Watts Reflection ft. Who Am I?:

I think you nailed the key problem… we can never ‘describe’ consciousness because if you really think about it, we cannot describe anything completely. At any level of graininess/abstraction, describing anything completely would be describing everything (including what it is not). To communicate, we can only allude to stuff, and hope (or take for granted) that the receiver has the requisite experiential representations. In other words, could we communicate without any shared experience? ( If not, physics is our only hope with aliens :stuck_out_tongue: )

I think intelligence (the ability to learn and create new responses) requires the inclination to respond contextually while informed by memory. Intelligence enjoys creating responses that are synthetic, i.e. more than just remembered behaviors.

The book (On Intelligence by Jeff Hawkins) starts off right by declaring that any section of the (six-layer) cerebral cortex is similar to any other, and that its plasticity even affords that vision (normally experienced in the occipital lobes) can be experienced as a kind of vision in the sensory cortex (behind the central sulcus), such as when a blind person wears a device that transduces video to a tongue-mounted display. This requires intelligence, and it also means that any form of mental activity in any piece of neocortex obeys the same natural physics as any other ongoing mental activity, which is the essence of consciousness.

Activity at any part of the cerebral cortex can be measured by electrical field variances. I like to refer to these fleeting bits of electrical activity as mental objects. They can be detected by EEG or probes, and when a pulse is skillfully applied to a person’s brain in the same spot, the person will remark that the mental object is (more or less) present (a sensory-motor type of test in humans).

Conscious activity in the cerebral cortex is the ongoing arising and passing away of mental objects, both via the senses and from formed associative memories (these are recallable sequences of mental objects having commonality with currently active mental objects).

A natural part of consciousness includes the continuous formation of new recallable sequences.
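As a toy illustration of "recallable sequences" (my own simplification, with invented names and thresholds, nothing more): store sequences of sets of mental objects, and recall whichever stored sequence shares the most elements with the currently active ones.

```python
class SequenceMemory:
    def __init__(self):
        self.sequences = []                  # list of lists of frozensets

    def form(self, sequence):
        """Continuously form new recallable sequences."""
        self.sequences.append([frozenset(step) for step in sequence])

    def recall(self, active):
        """Return the stored sequence with the most commonality with
        the currently active mental objects, if any overlap exists."""
        active = frozenset(active)
        def overlap(seq):
            return max(len(active & step) for step in seq)
        best = max(self.sequences, key=overlap, default=None)
        if best is not None and overlap(best) > 0:
            return best
        return None

mem = SequenceMemory()
mem.form([{"coffee", "smell"}, {"kitchen"}, {"morning"}])
mem.form([{"rain", "sound"}, {"window"}])
print(mem.recall({"smell", "coffee", "cup"}))    # -> the coffee sequence
```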

I would say,

Consciousness is experienced in the brain’s dimension of memory, usually forming new memory. Some instances of consciousness are more intelligent than others.

Good. Basically, from the perspective of information theory, it is the coding of information.
Is there any explanation of thinking, attention, and consciousness based on this theory? We know that thinking is a necessary feature of AI.
Or is there any explanation of thinking not related to the neocortex?

“Thinking” is fuzzy enough to be essentially a useless term.
As far as the rest of your list please see if this addresses your questions:

and

Thinking includes analysis, reasoning, comparison, induction, etc.
How could it be a useless term? In fact, I don’t care much about consciousness.
Let us assume that grid cell theory can model the universe: how does it do planning or logical reasoning?

It is interesting that you include a list of activities that could well include consciousness and attention, all under the umbrella of “thinking.” It is this combination of a large number of interacting processes that makes this a useless term. What you are really asking is “how does the whole brain work?”

Consciousness is likely to be the “vehicle” that carries the thinking process but as I indicated in the linked post - consciousness is composed of many sub-activities that work together to create a final result.

I dare say that nobody in neuroscience will be able to tie all of these together into a whole until someone gets a working AGI to stand as an example.

If you feel up to the task I will be delighted to read your take on how all this works.

This is not what I want to ask.
For example, I won’t ask how to achieve emotions, because I don’t think they are a necessary feature of AGI.

While I do disagree with you on the need for emotions in a functioning AGI, I won’t engage on that in this thread - I have posted my take on this topic in numerous places in this forum.

To try and keep this discussion on-topic for the original “framework for …” I would like to turn this back to you and ask “What would an answer to your question look like?”

I proposed a description of the process of consciousness and assume that the contents of consciousness will include much of the processes that you are asking about. You have rejected that out of hand, so what would these processes look like to your way of thinking?

Both intelligence and consciousness are fuzzy concepts, reflected in their multiple definitions. All the definitions I know are post-hoc products: starting from an elusive (human) internal sense of familiarity, or a presumed function they serve, and leading us to compile a definition that complies with those motivations. The first question should be whether we could explain and model human behavior without needing to invoke these concepts at all. Nonetheless, these concepts are fascinating and addictive, as they “color” internal processing to make it stand out.
As for consciousness, a helpful model would be to see it as some sort of co-activation of a specific representation (input or output) with some sort of self-representation. This co-activation “tags” these representations for higher relevance to the self (for inputs) and for self-agency (for outputs). This in turn is valuable gain-control information for learning. For instance, our outputs are also an input for us through various sensory channels, and it is important to distinguish them from inputs that originate with other agents. This is essential for reinforcement learning.
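To make the tagging idea concrete, here is a speculative sketch (the buffer size and gain values are arbitrary choices of mine, and all names are invented): keep an efference copy of recent self-generated outputs, tag incoming signals that match it as self-caused, and use the tag to gate learning.

```python
from collections import deque

class SelfTagger:
    def __init__(self, horizon=5):
        self.efference = deque(maxlen=horizon)   # recent self-made outputs

    def act(self, output):
        self.efference.append(output)            # remember what we just did
        return output

    def tag(self, observation):
        """Co-activate the observation with the self-representation:
        True means 'this input is (probably) my own doing'."""
        return observation in self.efference

    def learning_gain(self, observation):
        # Self-caused inputs get a lower gain; surprises from other
        # agents get a higher one (useful for reinforcement learning).
        return 0.2 if self.tag(observation) else 1.0

agent = SelfTagger()
agent.act("hum")                                 # we hum...
print(agent.learning_gain("hum"))                # ...so hearing it: 0.2
print(agent.learning_gain("doorbell"))           # external event: 1.0
```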

Hard to accept it as an activation of a representation to the self.
I have often heard of the “self” applied as an agency separate from consciousness.
Memory and sensation should not be considered products for consumption by a separate self - if they were, then what is the self, where is it located, and how does that work? This leads to a too heavily layered model.

Memory and sensation are entwined in “consciousness,” AKA “normal brain function while awake or dreaming.”

Maybe I’m confounding consciousness with self-awareness and self-consciousness. Anyway, I’m not referring to some homunculus structure with mysterious and miraculous properties, just a representation that gets activated whenever the input/output is self-generated or solicited (via attention, for instance). Maybe, if I refine these thoughts: a representation locked in space-time to some entity. Similarly, working memory and episodic memory are different in nature (and in importance for learning) from semantic memory. Finally, as I stated before, consciousness could be nonexistent as we think of it, existing only as an emergent property or byproduct of a system that evolved to function in an ever-changing environment.

I think we could imagine consciousness as the driver and intelligence as the car.

Consciousness gives a direction, and intelligence delivers us to the destination.

I have an idea to represent machine consciousness as a collection of firing neurons marking the current situation in memory. (It is a collection because there could be multiple similar situations.)

Then the currently firing neurons could trigger sequential stimulation of the following events.

Each event could then trigger the release of dopamine or other neurotransmitters, which are used to reflect reward.

Then by calculating the highest expected reward, the machine makes a decision.

An example could be: the machine has the experience of using “deep learning networks” to classify pictures. It is rewarding because it produces a good result. Then the program will use the same network to solve the problem again.

The decision is produced by consciousness, and making the “deep learning network” requires intelligence :smiley:
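In code, the mechanism could look something like this toy sketch (all of the structure, names, and numbers here are placeholders I made up): the current situation activates every similar remembered episode, each remembered continuation carries a dopamine-like reward, and the action with the highest expected reward wins.

```python
from collections import defaultdict

episodes = [
    # (situation cues, action taken, reward received)
    ({"pictures", "classify"}, "deep_learning_net", 1.0),
    ({"pictures", "classify"}, "hand_rules",        0.2),
    ({"text", "translate"},    "deep_learning_net", 0.7),
]

def decide(situation):
    """Fire every episode similar to the situation, average the rewards
    per action, and pick the action with the highest expectation."""
    totals, counts = defaultdict(float), defaultdict(int)
    for cues, action, reward in episodes:
        if situation & cues:                     # 'similar' = any shared cue
            totals[action] += reward
            counts[action] += 1
    if not totals:
        return None                              # no memory fires: no decision
    return max(totals, key=lambda a: totals[a] / counts[a])

print(decide({"classify", "pictures", "cats"}))  # -> 'deep_learning_net'
```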

Hi @Bear. Welcome to the forum.

Libet’s experiment showed that that is not the case. A decision is produced hundreds of milliseconds to several seconds before we become aware of it. Consciousness is not the driver. It is the passenger.

I think you’re confounding consciousness with ego. Your memory (coded in your synapses) is the state of who you are. When you remember something about yourself, you’re conscious of that memory: you experience your ego.

“has the experience of” is what you’re trying to explain, so you can’t use it in that statement. This is circular logic.

Maybe you’re confounding consciousness with intelligence.

The representation of experience will be complicated: crossing multiple cortices, layered, and sequential. This is a simplified view. (Oh, the first two graphs are about predicted or not…)

In my program, I define consciousness as a set of neurons with firing rates.
It may be different from the consciousness that people are talking about.
I won’t be able to know the difference unless people have a mathematical definition of human consciousness.
We could be arguing over different things.

By the way, according to my model, the length of an experience and the number of similar experiences will change the time of decision making, because the program is trying to find the maximum expected reward.
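That latency claim can be made concrete. Under the assumption that the decision scans every stored experience step by step (my reading, not a given), wall time grows with both the number of experiences and their length:

```python
import time

def decide(experiences, situation):
    """Scan every stored experience and keep the best-rewarded one
    whose steps contain the current situation."""
    best, best_reward = None, float("-inf")
    for steps, reward in experiences:            # every similar experience...
        if situation in steps and reward > best_reward:   # ...is scanned
            best, best_reward = steps, reward
    return best

for n in (1_000, 10_000, 100_000):
    experiences = [(list(range(50)), i % 7) for i in range(n)]
    t0 = time.perf_counter()
    decide(experiences, situation=42)
    print(n, f"{time.perf_counter() - t0:.4f}s") # latency grows with n
```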

This is what you need:

You are correct; the bit we call consciousness is part of a closed-loop system, and isolating that specific bit is hard, as it is mixed in with so many other functions.
Let me save you some time on your investigation:

Since you seem to be a visually oriented thinker, the same thing in diagram form:

Thanks for providing me this information :slight_smile:
I am definitely thinking about incorporating visual, auditory, and all other senses.
However, at the current phase, I guess I will use symbols like “A, B, C, D” to represent events and do a test run. (simplified limbic system)

Then I am considering starting to add an NLP module once the Bayesian decision part is done.
Language falls out naturally, because the layered sequence fits well with constituency trees. (development of the frontal cortex)

I think that the first AI program should be doing NLP.

Anyway, there will be a lot of fun to have and many books to read.
I will try my best :smiley: