Intelligence - what is it?

So my wife is highly intelligent?

4 Likes

Absolutely!

As for consciousness and all that silliness, until someone tells me otherwise I’m just going with the idea that EM fields have a first-person perspective of the universe; water has a particular form of that because of the way it interacts with itself, and we are essentially bags of water walking around, isolated from the rest of the environment.

Your conscious self is a water passenger on a massive robot hive mind built by bacteria who were tired of being stomped on by other more advanced bacteria with fancier chemical weapons.

I don’t think that your watery “hive mind” proposal gets me any closer to writing a functioning AGI.

I do have a fairly mechanistic take on how intelligence and consciousness work, outlined below.
And yes - it is also another “basket of attributes” answer without actual detailed mechanisms.

and

Since this is the thread for this sort of thing - what have I gotten wrong in these posts?

I must admit, reluctantly but honestly, that understanding something conveys a feeling in me. Some sort of chemical reward. Perhaps a slight dopamine boost, or a minute serotonin high. I feel happy that I found a solution.

I also have to admit that a machine or system without dopamine or other neurotransmitters probably won’t feel this rewarding feeling.

And I have to admit, that it bothers me. I’ve been thinking about this for the past four hours. And my answer is not very scientific. Perhaps not even rational at all. I am not very comfortable talking in emotional terms.

But here’s the thing: is this rewarding feeling a necessary element of the understanding? Or is it a complementary effect of the mechanistic process that happened in my head while I was understanding that something?

And can a system without chemistry not come to the same situation as me, and start behaving with the newfound logic that we call understanding, just like me and my chemistry do?

2 Likes

As much as most newbie AGI researchers would like to pull emotions out of consideration as an unnecessary complication, I see them as a feature and not a bug.


Note the key phrase:
In the Rita Carter book “Mapping the Mind”, chapter four starts out with Elliot, a man who was unable to feel emotion because the corresponding emotional-response areas had been inactivated by a tumor removal. Without this emotional coloring he was unable to judge anything as good or bad and was unable to select the actions appropriate to the situation. He was otherwise of normal intelligence.

1 Like

Episode 4 of The Brain with David Eagleman included an interview with a woman named Tammy Myers who, after an accident, had her emotion systems (though still working) become disconnected from her logical systems. Her case hints that even in the most basic situations, emotion may be a necessary component of decision making.

At a grocery store Tammy could explore various options, talk about them, and make logical comparisons. But ultimately she could not choose what to buy. Emotional flavoring allows us to place a value on each option when making a choice, so that a decision can be made.

2 Likes

I think there is a difference between emotions and neurotransmitters.

Serotonin might color a sandwich as being interesting, friendly and non-threatening, but I think there is a learned desire for food as a response to low blood sugar that has the capacity to inhibit non-food related next states.

So, I’m in a room, and a lot of possible next states of the “me in the room” model are somewhat lit up… I could open the window, sit down, flip the table, go and eat the sandwich.

There is a center in my brain that observes low sugar and wants to solve for that bad state with a desired next state “me fed”… it inhibits any of the next states of the world model that are not also associated with me being fed, so I go with the only one which is still lit up… “Eat the sandwich”… Once that actionable state is lit up, it reinforces itself while signaling sub-next-steps (walking, picking up a sandwich, eating) that are appropriate to the world model as it stands right now. Once the final step is reached, either it signals the “Eat the sandwich” plan to stop being lit up or my lack of hunger allows some other desirable next state to inhibit the currently executing plan…

All of the above would also work in a dog or a fish… I think where humans differ in part is in the depth of their call stacks… I might have a sub-plan for eating that includes placing a call to Domino’s Pizza and then a sub-plan for getting my wallet… and I can execute the stack without losing track of the original goal for some depth of stack.
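To make the mechanics concrete, here is a minimal Python sketch of that idea. It is not a claim about real neural wiring: candidate next states are all partially active, a need center inhibits the ones that don’t also solve for it, and the surviving plan expands into sub-steps on a stack whose allowed depth stands in for the “depth of call stack” difference. All names and structures are invented for illustration.

```python
# A rough sketch (not real neural wiring) of the mechanism above:
# candidate next states are all partially "lit up", an active need
# inhibits the ones that don't address it, and the surviving plan
# expands into sub-steps on a stack of limited depth.

NEXT_STATES = {
    "open the window": set(),
    "sit down": set(),
    "flip the table": set(),
    "eat the sandwich": {"fed"},
}

SUB_PLANS = {
    "eat the sandwich": ["walk to table", "pick up sandwich", "eat"],
}

def select_plan(active_needs):
    """Keep only next states that satisfy every active need; pick the survivor."""
    survivors = [state for state, satisfies in NEXT_STATES.items()
                 if active_needs <= satisfies]
    return survivors[0] if survivors else None

def execute(plan, stack=None, depth_limit=4):
    """Expand a plan into sub-steps without losing track of the original goal."""
    stack = stack or [plan]
    if len(stack) > depth_limit:          # fish/dog vs. human: depth of the stack
        return
    for step in SUB_PLANS.get(plan, []):
        stack.append(step)
        print("  " * (len(stack) - 1) + step)
        execute(step, stack)
        stack.pop()

chosen = select_plan({"fed"})             # low blood sugar -> active need "fed"
print("selected:", chosen)
execute(chosen)
```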

The feeling of understanding something seems associated with having a reward for lower processes in the stack when they picked the correct path that satisfied a higher order desire… those connections get to be reinforced (I’m guessing based on their having some chemical tag of having been selected recently… also leading to false associations when coincidences happen)…

I would guess that feelings of frustration are higher order plans signaling the lower level plan that it has failed to achieve the desired result.

1 Like

I think you are missing the forest for the trees.

While the cortex signals via various neural data paths, the limbic system also includes chemical messengers as communication tools. These messengers are released in a diffuse cloud and not as precise data patterns. Different limbic nodes use different chemical messengers. There are fields of receptive synapses in the cortex that are capable of being modulated by these chemical messengers. These are mixed in with “ordinary” synapses to add the emotional color I was mentioning earlier.

Emotions happen below the level of conscious awareness. There is considerable research to show that we sense the effects that the emotional signalling has on the body after the fact. There are also some direct cortical projections, as described in the linked papers but they work in concert with the chemical messengers.
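A toy way to picture the “diffuse cloud versus precise data pattern” distinction, under the simplifying assumption that modulation just scales the gain of a sparse subset of receptive synapses mixed in with ordinary ones (the numbers and layout are invented):

```python
# Toy contrast between point-to-point signalling and a diffuse chemical
# messenger: the modulator is a single broadcast scalar, and only the
# "receptive" synapses mixed into the field respond to it.

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_cells = 8, 4
weights = rng.normal(size=(n_cells, n_inputs))
receptive = rng.random((n_cells, n_inputs)) < 0.3   # sparse modulated synapses

def cortical_response(x, modulator_level):
    """Ordinary synapses pass x * w; receptive synapses get an extra gain
    proportional to the diffuse modulator concentration."""
    gain = 1.0 + modulator_level * receptive        # same scalar everywhere
    return (weights * gain) @ x

x = rng.normal(size=n_inputs)
print(cortical_response(x, modulator_level=0.0))    # neutral "color"
print(cortical_response(x, modulator_level=1.5))    # a limbic node releases its cloud
```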

When we call something intelligent, there must be some relation between that something and us. You will never say that a program is intelligent until you know what that program does and there is a relation with us. Can you say whether a black hole is intelligent? I say a black hole is not intelligent, but some will say it is.
I want to say something else too. You know a computer/program/football/… can’t think or feel, right? But how do you know whether I can feel or think? Why don’t you think of “me” as a machine, since you are not me, and try to decide whether I am intelligent or not?
An ant is intelligent; an ant colony is not intelligent (since I don’t know what an ant colony does). A lizard, a dog, a monkey, and a tree (since I remember a talk about an experiment where they found that trees have memory) are intelligent. A pack of dogs is not intelligent.

And what makes something intelligent? It’s about what it does. Simple.

1 Like

I think it’s important to distinguish intelligence and general intelligence. The definition of intelligence depends on the context, but general intelligence is clearer. There are probably multiple kinds, since birds and maybe other animals have general intelligence, at least as I define it.

I see the goal of general AI as creating new ideas, so that’s how I define it. If someone who hasn’t worked on the general AI can ask it a question or give it a problem to solve, and it will solve it regardless of the question (provided it has access to the right data in the form of the same sense or senses), then it’s general intelligence. It also has to be able to figure out things which humans can’t, given enough computing power and sensory input.

For the sake of this discussion, let’s say it turns out birds have the right circuitry to do anything the mammalian brain can do, but they just aren’t self aware or conscious. I’d say that’s general intelligence but it doesn’t think.

Think about whatever you just did. Did you think about your intention to do it beforehand? If you weren’t aware of your intention beforehand, then it wasn’t you that did it, depending on how you define you. But it still may have been a form of general intelligence. Thoughts just pop into the mind, although you can choose which you keep around, which influences which thoughts pop in next.

I don’t think we as minds are intelligent, we just have control over a bunch of intelligent non-thinking brain matter.

I think it’s more likely that intelligence isn’t a good way of framing it. Multiple brain regions and neurons are involved in intelligence, but they don’t completely influence one another. There’s no one region at the top of the cortical hierarchy, and there’s no reason why different brain regions would have exactly the same personality (plus there’s split brain and multiple personality stuff). We can still be intelligent after losing a lot of our brain.

Is a society intelligent or does it just do whatever its intelligent components cause it to do? I don’t know.

1 Like

Even with a diffuse release of emotional signals, if some neurons are more sensitive than others, that still replicates a network effect. It’s just across a much broader network.

My working theory is that intelligent minds are formed by having networks superimposed over each other and that chemistry is the way that one layer of network communicates with another layer. All within the same set of neurons. I also think that within a neuron (depending on type of neuron) certain chemistries affect the network weights of the other chemical’s paths…

The funny part is that whether this is how the natural system works, or the natural system is using some other mechanism to model this one, it can be built without having to replicate every detail of the natural system.

I’m guessing our layers overlay concepts like a desired state based on the current states. They also layer concepts like changes I know how to make to the model based on prior experience (which would be pulling at the mind to do something).

1 Like

I’m in a room with 5 apples.
World model is lighting up {in front of me, apples, red, food, etc.},{me, hungry}
World model neurons chemically enable possible plan neurons. [stand, pick up an apple, examine the apples, throw apples, dance, eat an apple, etc.]
Hunger inhibits plans that have not solved for hunger in the past.
Picking up an apple is the most likely sub-plan for “eat an apple” based on the world model, so that plan lights up sub-plans or actions in order as reality permits.
To pick up an apple we need to know more about the apples, and for the first time we map individual apples with attributes to more precise locations than we did before. Now it’s not just “some generic apples in front of me”.
Because I have no reason to prefer one apple over the others, my model says the closest one presents the most desirable next step, etc. (a rough sketch of this refinement step follows below).
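Here is that last refinement step in the same toy style as before: once hunger has narrowed the plan to “eat an apple”, the generic percept is refined into individual apples with attributes and locations, and with no learned preference the closest one wins. The names and distances are made up purely for illustration.

```python
# Refining "some generic apples in front of me" into individual apples,
# then falling back to proximity when no other preference exists.

apples = [
    {"id": "apple_1", "distance_m": 0.4, "color": "red"},
    {"id": "apple_2", "distance_m": 0.9, "color": "red"},
    {"id": "apple_3", "distance_m": 1.6, "color": "green"},
]

def next_target(candidates, preference=None):
    """With no learned preference, desirability falls back to proximity."""
    if preference:
        candidates = [a for a in candidates if preference(a)] or candidates
    return min(candidates, key=lambda a: a["distance_m"])

target = next_target(apples)
print("sub-plan: pick up", target["id"])   # -> apple_1, the closest one
```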

When I’m using my imagination, I am taking advantage of all of these existing maps and relations but with a layer that is chemically divorced from the actionable network. There is almost certainly a chemical mechanism for taking a simulated plan and making it actionable.

1 Like

I wanted to address this really quick by saying that I don’t think consciousness has anything to do with intelligence… it is simply a reality sink that causes there to be an observer outside of physical context. I believe that it is perfectly logical to assume you could completely simulate my intelligence without there being an out-of-context passenger along for the ride, and that intelligence would behave exactly as I would. I also believe that I have as much of a “free will” as that construct. The easiest and most obvious explanation for the passenger effect is that atoms are somehow connected to some out-of-context system that we can’t observe with current technology, and that we can ignore it as out of context while working to recreate learning intelligent systems. If anything, I think consciousness is a distraction and makes it harder to dissect what it is that our brains are doing, because we treat our first-person perspective as such an important part of intellectual existence.

Hence my contention that the easiest thing to do for the moment is to say that somehow (given that the most common physical passenger of the human body is water) water is tied to consciousness, and that any conjecture beyond that is meaningless and useless until proven otherwise.

Consciousness is the blackboard for cognition; it contains both internal and external perceptions all in one place.
How do you formulate plans without including yourself as the actor? And how useful would your plans be if they did not include your present state and remembered relevant memories?
All of these are present as contents of consciousness. If you did not have something called consciousness you would still have to have these things to have a high-function AGI.

Frankly, it amazes me what items are dismissed as unnecessary to a functioning AGI when the only working example of a human level intelligence is a human and it does have these functions.

People fault various efforts at machine intelligence for various failings such as “no common sense.” This is valuation that humans learn along with everything else they are exposed to from birth on. We call these emotions, and if you don’t have something that works like this you will end up having to make something that works about the same way. Everything has values, good and bad. You learn these at the same time as you learn food items, comfort items, grooming items, social values, etc. The current evaluation of possible scripts for utility functions includes the match to the current perceived state and the values relative to whatever the current need states are. This blackboard requires access to all these things at the same time, and consciousness fulfills this function better than anything else I am aware of.
The concentrated/analyzed contents available in the medial temporal lobe/EC/hippocampus are the most digested form of the contents of consciousness - everything that your cortex represents is available there in the most processed form.
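Here is a very rough sketch of what I mean by that evaluation: each candidate script is scored by how well it matches the currently perceived state plus the learned value it carries for whatever need states are active right now. The names and numbers are invented purely for illustration.

```python
# Scoring candidate scripts against the "blackboard": perceptual match
# plus learned value weighted by the currently active need states.

from dataclasses import dataclass, field

@dataclass
class Script:
    name: str
    context: set                                       # features the script expects to perceive
    need_values: dict = field(default_factory=dict)    # learned good/bad values per need

def score(script, perceived, active_needs):
    match = len(script.context & perceived) / max(len(script.context), 1)
    value = sum(script.need_values.get(need, 0.0) * weight
                for need, weight in active_needs.items())
    return match + value

scripts = [
    Script("eat what is in front of you", {"food", "table"}, {"hunger": 1.0}),
    Script("groom", {"mirror"}, {"social": 0.5}),
]

perceived = {"food", "table", "chair"}
active_needs = {"hunger": 0.8}
best = max(scripts, key=lambda s: score(s, perceived, active_needs))
print("selected script:", best.name)
```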

If you are going to dismiss these features as unnecessary then you are doomed to make yet another special purpose tool that is not a fully functioning AGI. Sure it talks and listens and knows stuff: we already have Alexa and it is not enough.

I have a water molecule, a light detector, a neuron and an if conditional running on a CPU.

Does warming the water molecule cause a conscious moment?
Does shining light at the detector and producing an electrical charge produce a conscious moment?
What about firing that single neuron?
The moment the CPU hits the conditional and loads the appropriate branch of code?

Do any of the above constitute an atomic quantum of consciousness? None of them? Do I have to have a stack of neurons X deep before the tiniest quantum of consciousness manifests?

I contend that the tiniest quantum of consciousness occurs when atomic matter is disturbed. I do not know if it makes sense for electron-based logical constructs to manifest the same consciousness quality that one gets with matter. I don’t know that a simulated rock would have the same “presence” in the universe that a rock made of atoms would, and I’ve got a good hunch that the quantum machinery that we’re unable to observe inside the subatomic particles (if “inside” even means anything at the quantum level) has something to do with it.

I used to contend that asking about consciousness is the same as asking “where does MS Word go when you switch off the computer”… but lately I’ve been using “I am conscious” as a first principle, and suddenly the math is all backwards… science, physics, etc. have the great quality of being consistent, but I knew I was conscious way before I knew any of those things existed, and while I can easily describe a system for modeling the sciences, we as a species have yet to come up with a useful descriptor for what consciousness is. That is as likely as not because it is a phenomenon that is out of context for this observable universe.

I am not arguing that an amoeba or worm has human-level consciousness.

I do embedded systems all the time as part of my job functions. Combining sensory systems, output systems, and some sort of control laws is well understood and in my mind should never be confused with a fully functioning AGI.

Nor should a worm or amoeba.

As I stated before - we already have tools like Eliza and Alexa and they are NOT AGIs, nor even close to it.
To get human-level intelligence it is very likely that you will end up simulating the functions that make human-level intelligence work.

I’m curious if you are going to imbue my smart electronics scale with some measure of consciousness? If you are then there must be some sort of continuum; at what point do we consider that as intelligence? What are the measures and metrics? Does more of this g quality make it smarter?

By the way - did you bother to read my explanation of consciousness above?
If this is not an outline of consciousness then what is missing for you? (post #17)

I use this description in my work as I think that it will produce useful results; it describes a particular configuration and function that can be coded in a machine. I am not sure how some “spark of consciousness in a tree” will help me in my efforts.

1 Like

:exploding_head:

I have largely bypassed this conversation by saying we are trying to understand the neocortex, and we know it bestows upon its users the type of behaviors we are trying to understand. I don’t see much use in arguing about the specific definition. Also, I made a video about this years ago.

1 Like

Interestingly, my model would say that the embodiment of your scale has the experience of being a scale, but that the electronic logic that is occurring in the wiring is not tied to consciousness.

I don’t know that there is such a thing as human-level consciousness in my model. There is obviously human level intelligence and I don’t believe that the amoeba has human level intelligence. My model suggests that consciousness of being is a fundamental aspect of stuff. I think that you are a pile of conscious stuff that is trapped into the shape of a thinking machine for the time that you spend in that shape. But I also think that if your water bubble pops, you might end up losing yourself in the fundamental existence of being a lake.

I realize this is all “make me one with everything” Buddhist metaphysical sounding nonsense, but I think this is useful because it lets us run away from what I think is a consciousness red herring.

I contend that it is possible to simulate emotional states on an electronic AI without there being anyone actually home to “experience” those emotions.

I contend that it is possible to have a thing that is not just an Alexa, that talks like a human, walks like a human, writes poetry like a human, invents space travel like a human, with that thing never having a conscious experience of existing as an entity. I don’t think that would make it any less than us, I just think that would make it different.