Intelligence vs Consciousness

“Thinking” is fuzzy enough to be essentially a useless term.
As for the rest of your list, please see if this addresses your questions:

Thinking includes analysis, reasoning, comparison, induction, etc.
How could it be a useless term? In fact, I don’t care much about consciousness.
Let us assume that grid cell theory can model the universe; how does it do planning or logical reasoning?

It is interesting that you include a list of activities that could well include consciousness and attention, all under the umbrella of “thinking.” It is this combination of a large number of interacting processes that makes it a useless term. What you are really asking is “how does the whole brain work?”

Consciousness is likely to be the “vehicle” that carries the thinking process, but as I indicated in the linked post, consciousness is composed of many sub-activities that work together to create a final result.

I dare say that nobody in neuroscience will be able to tie all of these together into a whole until someone gets a working AGI to stand as an example.

If you feel up to the task I will be delighted to read your take on how all this works.

This is not what I want to ask.
For example, I won’t ask how to achieve emotions, because I don’t think they are a necessary feature of AGI.

While I do disagree with you on the need for emotions in a functioning AGI, I won’t engage on that in this thread; I have posted my take on this topic in numerous places in this forum.

To try to keep this discussion on-topic for the original “framework for …” thread, I would like to turn this back to you and ask: “What would an answer to your question look like?”

I proposed a description of the process of consciousness and assume that the contents of consciousness will include many of the processes that you are asking about. You have rejected that out of hand, so what would these processes look like to your way of thinking?

Both intelligence and consciousness are fuzzy concepts, reflected in their multiple definitions. All the definitions I know are post-hoc products, starting from an elusive (human) internal sense of familiarity or a presumed function they serve, and leading us to compile a definition that complies with those motivations. The first question should be whether we could explain and model human behavior without needing to pull out these concepts at all. Nonetheless, these concepts are fascinating and addictive, as they “color” internal processing so that it stands out.
As for consciousness, a helpful model would be to see it as some sort of co-activation of a specific representation (input or output) with some sort of self-representation. This co-activation “tags” these representations for higher relevance to the self (for inputs) and self-agency (for outputs). This in turn is valuable gain-control information for learning. For instance, our outputs are also an input for us through various sensory channels, and it is important to distinguish them from inputs caused by other agents. This is essential for reinforcement learning.
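To make the gain-control point a bit more concrete, here is a minimal sketch in Python. Everything in it (the tagging rule, the learning rates, the numbers) is an assumption chosen for illustration, not a claim about how the brain implements this:

```python
# Minimal sketch of "self-tagging as gain control" for learning.
# The tagging rule and learning rates are illustrative assumptions only.

def tag_self_generated(observation, efference_copy, tolerance=0.1):
    """Tag an observation as self-generated if it matches a copy of our own
    recent output (a crude stand-in for co-activation with a self-representation)."""
    return abs(observation - efference_copy) < tolerance

def update_value(value, reward, observation, efference_copy,
                 lr_self=0.2, lr_other=0.05):
    """Reinforcement-style update whose learning rate (gain) depends on whether
    the observation is attributed to our own agency or to another agent."""
    lr = lr_self if tag_self_generated(observation, efference_copy) else lr_other
    return value + lr * (reward - value)

# The same reward moves the value estimate more when the observation is
# recognized as a consequence of our own action.
v = 0.0
v = update_value(v, reward=1.0, observation=0.95, efference_copy=1.0)  # self:  v = 0.2
v = update_value(v, reward=1.0, observation=0.30, efference_copy=1.0)  # other: v = 0.24
print(v)
```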

It is hard to accept this as an activation of a representation to the self.
I have often heard of the “self” applied as an agency separate from consciousness.
Memory and sensation should not be considered products for consumption by a separate self; if they were, then what is the self, where is it located, and how does that work? This leads to a too heavily layered model.

Memory and sensation are entwined in “consciousness”, AKA “normal brain function while awake or dreaming”.

Maybe I’m confounding consciousness with self-awareness and self-consciousness. Anyway, I’m not referring to some homunculus structure with mysterious and miraculous properties, just a representation that gets activated whenever the input/output is self-generated or solicited (via attention, for instance). Maybe, if I refine these thoughts, a representation locked in space-time to some entity. Similarly, working memory and episodic memory are different in nature (and in importance for learning) from semantic memory. Finally, as I stated before, consciousness could be non-existent as we think of it, existing only as an emergent property or byproduct of a system that evolved to function in an ever-changing environment.

I think we could imagine consciousness as the driver and intelligence as the car.

Consciousness gives the direction, and intelligence delivers us to the destination.

I have an idea to represent machine consciousness as a collection of firing neurons marking the current situation in memory. (It is a collection because there could be multiple similar situations.)

Then the current firing neurons could trigger sequential stimulation of the following events.

Each event could then trigger the release of dopamine or other neurotransmitters, which are used to reflect reward.

Then by calculating the highest expected reward, the machine makes a decision.

An example: the machine has had the experience of using “deep learning networks” to classify pictures. This was rewarding because it produced a good result, so the program will use the same network to solve the problem again.
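A minimal sketch of that decision step could look like the following (Python, with invented memories; the names are only placeholders for the example above):

```python
# Toy version of "decide by highest expected reward over similar situations".
# The memory contents and names are made up for illustration.

from collections import defaultdict

# Memory: past experiences stored as (situation, follow-up event, reward) triples.
memory = [
    ("classify_pictures", "use_deep_learning_network", 1.0),
    ("classify_pictures", "use_deep_learning_network", 0.8),
    ("classify_pictures", "guess_randomly",            0.1),
]

def decide(current_situation):
    """Pick the follow-up event with the highest average (expected) reward
    across all remembered experiences similar to the current situation."""
    totals, counts = defaultdict(float), defaultdict(int)
    for situation, event, reward in memory:
        if situation == current_situation:   # the "collection" of similar situations
            totals[event] += reward
            counts[event] += 1
    if not counts:
        return None                          # no relevant experience yet
    return max(counts, key=lambda e: totals[e] / counts[e])

print(decide("classify_pictures"))           # -> "use_deep_learning_network"
```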

The decision is produced by consciousness, and making the “deep learning network” requires intelligence :smiley:

Hi @Bear. Welcome to the forum.

Libet’s experiment showed that that is not the case. A decision is produced hundreds of milliseconds to several seconds before we become aware of it. Consciousness is not the driver. It is the passenger.

I think you’re confounding consciousness with ego. Your memory (coded in your synapses) is the state of who you are. When you remember something about yourself, you’re conscious of that memory: you experience your ego.

“Has the experience of” is what you’re trying to explain, so you can’t use it in that statement. This is circular logic.

Maybe you’re confounding consciousness with intelligence.

The representation of experience will be complicated: crossing multiple cortical areas, layered, and sequential. This is a simplified view. (Oh, and the first two graphs are about whether something was predicted or not.)

In my program, I define consciousness as a set of neurons with firing rates.
It may be different from the consciousness that people are talking about.
I won’t be able to know the difference unless people have a mathematical definition of human consciousness.
We could be arguing over different things.

By the way, according to my model, the length of an experience and the number of similar experiences will change the time of decision making, because the program is trying to find the maximum expected reward.
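As an illustration only (the firing patterns, the similarity rule, and the threshold are all made up), here is a toy version: the state is a set of neurons with firing rates, and the decision loop has to scan every similar experience in full, which is why more and longer experiences mean a slower decision.

```python
# Illustrative sketch, not the actual program: "consciousness" as a set of
# neurons with firing rates, and a decision loop whose cost grows with the
# number and the length of the remembered experiences.

state = {"n1": 0.9, "n2": 0.1, "n7": 0.6}      # hypothetical firing rates

# Each experience: (starting firing pattern, sequence of (event, reward) pairs).
experiences = [
    ({"n1": 0.8, "n2": 0.2}, [("A", 0.2), ("B", 0.5)]),
    ({"n1": 1.0, "n7": 0.5}, [("A", 0.1), ("C", 0.9), ("D", 0.3)]),
]

def similarity(a, b):
    """Overlap of two firing patterns (shared neurons, weighted by rate)."""
    return sum(min(a[n], b[n]) for n in a.keys() & b.keys())

def best_total_reward(state, experiences, threshold=0.5):
    """Scan every similar experience in full, so the time to decide scales with
    both how many similar experiences there are and how long each one is."""
    best = None
    for start, seq in experiences:
        if similarity(state, start) >= threshold:
            total = sum(r for _, r in seq)
            if best is None or total > best:
                best = total
    return best

print(best_total_reward(state, experiences))   # -> 1.3
```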

This is what you need:

You are correct; the bit we call consciousness is part of a closed-loop system, and isolating that specific bit is hard as it is mixed in with so many other functions.

Let me save you some time on your investigation:

Since you seem to be a visually oriented thinker, the same thing in diagram form:

Thanks for providing me with this information :slight_smile:
I am definitely thinking about incorporating visual, auditory, and all the other senses.
However, at the current phase, I guess I will use symbols like “A, B, C, D” to represent events and do a test run (a simplified limbic system).

Then I am considering adding an NLP module to it once the Bayesian decision part is done.
Language falls in naturally because the layered sequence fits well with constituency trees (development of the frontal cortex).
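A toy illustration of both points, symbolic events and a layered grouping that already resembles a constituency tree; the format here is just an assumption for the sketch, not a design commitment:

```python
# Toy format for the symbolic test run: events are just symbols, experiences
# are sequences of them, and a layered grouping of a sequence already looks a
# lot like a constituency tree. Purely an assumed illustration.

events = ["A", "B", "C", "D"]

experience = ["A", "B", "C", "D", "B"]          # one remembered sequence
assert all(e in events for e in experience)      # only known symbols are used

# A layered (nested) grouping of the same sequence:
layered = [["A", "B"], [["C", "D"], "B"]]

def leaves(tree):
    """Flatten a nested grouping back into the flat event sequence."""
    if isinstance(tree, str):
        return [tree]
    return [leaf for branch in tree for leaf in leaves(branch)]

assert leaves(layered) == experience
```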

I think that the first AI program should be doing NLP.

Anyway, there will be a lot of fun to be had and many books to read.
I will try my best :smiley:

Assume you have a quite advanced machine learning and inference engine (although nothing we would necessarily describe as having consciousness at this point), which can receive input and act/react to this input (presumably in consistent ways).

Assume any of its internally stable representations can (mechanically) be put into correspondence with a stable representation in a particular part of the device, which is tied to an IO for “words” (in the form of, say, sequences of letters or sounds), and that it can be made (by training) to associate particular words with often-experienced contexts, identifiable either by some regularity in an input pattern or by the regularity of a just-performed “action”.

Assume the machine has either a physical body which we can point a finger at, or some way to integrate the notion of its own extent.

Train that thing until it has some vocabulary, nouns and verbs, a good experience of different contexts, and is starting to handle words that seemingly require a certain degree of abstraction (i.e., it has managed to associate a single word with stimuli or actions that were distinct enough that several NN layers seem necessary between them and any stable representation).

Now ask it to describe itself.
Watch it struggle, maybe in amusement.
But… If its CPU did not overheat, and it went as far as providing an answer…
Is it conscious now?

With experience, do you mean a series of exercises until some sort of partially routine behavior is developed?

Or do you mean qualia? The experience of a color or a sound, or being someone?

I suspect you mean the former.

Is it conscious now?

To Alan Turing and Ray Kurzweil, yes.

To Christof Koch and Giulio Tononi this could be conscious, but only if the device is physical, and not software-based. (Koch has an interesting thought experiment about this).

To David Chalmers this would be a philosophical zombie.

Personally (but this is extremely speculative) I think it is missing one very specific mechanism that we fail to describe: something that we still don’t understand, to the point that we can’t even objectively measure it when it is active.

To me it’s a bit akin to understanding magnetism. You can find rocks that you suspect to be magnetic, and you can construct coils in the hope of producing an electromagnet. But if you are unaware of the existence of electrons and their properties, you simply don’t have the means to explain what causes magnetism.

I know of two serious attempts to define consciousness in an abstract manner.

Giulio Tononi’s Integrated Information Theory.

J. Kevin O’Regan’s work on Sensory Substitution.

Thanks for telling me them. I will check them out later. :yum:

I like to consider one moment of consciousness.
I think of it as a state of the program.
This state is made from environmental input.

The current state (consciousness) of the program and the current environmental input will produce the next moment’s state (consciousness).
Decisions should follow Bayesian decision theory. The program could also predict the future by looking up what will happen later according to its memory.

The reward depends on the machine’s memory (a collection of old experiences).
Thus, the expected reward will vary for different individuals.
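As a rough sketch of what I mean by following Bayesian decision theory (Python; the memories, the prior, and the reward values are invented, only the structure matters):

```python
# Toy Bayesian decision: estimate outcome probabilities from this machine's
# own memory, then pick the action with the highest expected reward.

from collections import Counter

# The machine's memory: for each candidate action, the outcomes seen before.
# (In the full model, the candidate actions would come from the current state
#  plus the current environmental input.)
memory = {
    "act_A": ["good", "good", "bad"],
    "act_B": ["good", "bad", "bad", "bad"],
}

rewards = {"good": 1.0, "bad": -1.0}   # how this particular individual values outcomes

def expected_reward(action, prior=1.0):
    """Posterior-predictive probability of each outcome (counts plus a small
    uniform prior), then the reward-weighted average under that posterior."""
    counts = Counter(memory[action])
    total = sum(counts.values()) + prior * len(rewards)
    return sum(rewards[o] * (counts[o] + prior) / total for o in rewards)

def decide():
    """Pick the action with the highest expected reward given this memory."""
    return max(memory, key=expected_reward)

print(decide())   # -> "act_A"; a different memory would give a different answer
```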

Actually I believe I was leaning more towards the latter.

When I was thinking about this, to me it wouldn’t even need to pass for a human in conversation (ruling out the Turing Test, for example). My proposal was, I believe, the possibility of (only) “self-reference using a language tied to internalized NN representations”.

I’m not super fond of any argument leading to the philosophical zombie; it seems to require a hard line between “us” and “everything else”, there by definition. Just because. “Oh, and should your golem pass it, I’ll simply raise that line higher.”

Anyway, it seems I’m not as knowledgeable as you are about those philosophers on the matter ^^’

Well, in that case, you’re smuggling in consciousness without explaining it. It’s the experience that needs to be explained. It’s fine to describe how a system compares information and labels situations. But that doesn’t explain what it experiences, or what it is, exactly, to experience something.

Ok, I reread your post and understand your case better now (thanks for the precision). But how is self-referencing then different from counting the number of boxes that roll off a conveyor belt, or testing whether the temperature in degrees reaches the threshold to start the fans? The increased level of complexity in the system you describe can’t explain why it would somehow feel.

At best it would convincingly simulate that it feels. That’s the zombie principle.

I see your point. But then understand that I have good reasons to assume a cat is conscious (even though as far as I know it’s impossible to demonstrate). That’s considerably lowering the bar, isn’t it?

And not to be smug, but if you don’t require some hard line, then what stops me from saying my thermostat is conscious? :-).
