Intelligence - what is it?

I think there is a difference between emotions and neurotransmitters.

Serotonin might color a sandwich as being interesting, friendly and non-threatening, but I think there is a learned desire for food as a response to low blood sugar that has the capacity to inhibit non-food related next states.

So, I’m in a room, and a lot of possible next states of the “me in the room” model are somewhat lit up… I could open the window, sit down, flip the table, go and eat the sandwich.

There is a center in my brain that observes low sugar and wants to solve for that bad state with a desired next state “me fed”… it inhibits any of the next states of the world model that are not also associated with me being fed, so I go with the only one which is still lit up… “Eat the sandwich”… Once that actionable state is lit up, it reinforces itself while signaling sub-next-steps (walking, picking up a sandwich, eating) that are appropriate to the world model as it stands right now. Once the final step is reached, either it signals the “Eat the sandwich” plan to stop being lit up or my lack of hunger allows some other desirable next state to inhibit the currently executing plan…

All of the above would also work in a dog or a fish… I think where humans differ in part is in the depth of their call stacks… I might have a sub-plan for eating that includes placing a call to Domino’s Pizza and then a sub-plan for getting my wallet… and I can execute the stack without losing track of the original goal for some depth of stack.
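Roughly what I mean by the call stack, as a toy Python sketch (the plan names, decompositions, and depth limit are all invented for illustration, not a claim about how the brain stores plans):

```python
# Toy sketch of the plan "call stack": composite plans expand into sub-plans,
# sub-plans run to completion, and control returns to the parent plan without
# losing the original goal. All plan names and decompositions are invented.

SUB_PLANS = {
    "get fed":       ["order pizza", "eat"],
    "order pizza":   ["get wallet", "call Domino's"],
    "get wallet":    ["walk to bedroom", "pick up wallet"],
    "call Domino's": ["dial number", "place order"],
}

def execute(goal, max_depth=4):
    stack = [[goal]]                      # each frame is a list of remaining steps
    while stack:
        frame = stack[-1]
        if not frame:                     # this plan is finished; pop back to its parent
            stack.pop()
            continue
        step = frame.pop(0)
        if step in SUB_PLANS:             # composite step: push a new frame for it
            if len(stack) >= max_depth:   # "some depth of stack" beyond which we lose track
                raise RuntimeError("stack too deep, original goal lost")
            stack.append(list(SUB_PLANS[step]))
        else:                             # primitive step: just do it
            print("doing:", step)

execute("get fed")
```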

The feeling of understanding something seems associated with having a reward for lower processes in the stack when they picked the correct path that satisfied a higher order desire… those connections get to be reinforced (I’m guessing based on their having some chemical tag of having been selected recently… also leading to false associations when coincidences happen)…
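That "chemical tag" guess is basically an eligibility trace; here is a toy sketch of it, with made-up values:

```python
# Toy sketch of the "recently selected" chemical tag: connections that were just
# used carry a decaying tag, and a reward delivered later strengthens whatever
# is still tagged. Essentially an eligibility trace; all numbers are arbitrary.

class Connection:
    def __init__(self, name):
        self.name = name
        self.weight = 1.0
        self.tag = 0.0          # "recently selected" marker

    def select(self):
        self.tag = 1.0          # mark as just used

    def decay(self, rate=0.5):
        self.tag *= rate

    def reward(self, amount):
        self.weight += amount * self.tag   # only recently used paths strengthen

paths = [Connection("correct sub-plan"), Connection("unrelated path")]
paths[0].select()               # the path that actually satisfied the higher-order desire
for p in paths:
    p.decay()
for p in paths:
    p.reward(0.2)               # reward also hits anything coincidentally tagged
print({p.name: round(p.weight, 2) for p in paths})   # {'correct sub-plan': 1.1, 'unrelated path': 1.0}
```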

I would guess that feelings of frustration are higher order plans signaling the lower level plan that it has failed to achieve the desired result.

1 Like

I think you are missing the forest for the trees.

While the cortex signals via various neural data paths, the limbic system also uses chemical messengers as communication tools. These messengers are released in a diffuse cloud, not as precise data patterns. Different limbic nodes use different chemical messengers. There are fields of receptive synapses in the cortex that can be modulated by these chemical messengers; they are mixed in with "ordinary" synapses to add the emotional color I was mentioning earlier.

Emotions happen below the level of conscious awareness. There is considerable research showing that we sense the effects the emotional signalling has on the body after the fact. There are also some direct cortical projections, as described in the linked papers, but they work in concert with the chemical messengers.
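To make the "diffuse cloud" point a bit more concrete, here is a minimal Python sketch of one cell with a few modulator-receptive synapses mixed in among ordinary ones; every number and proportion in it is an assumption for illustration, not physiology:

```python
# Rough sketch of diffuse chemical modulation: a single scalar "messenger level"
# is broadcast to every receptive synapse at once, while ordinary synapses carry
# precise, per-connection signals. All weights and levels are invented.

import random

random.seed(0)
n = 8
weights   = [random.uniform(0.0, 1.0) for _ in range(n)]
receptive = [i % 3 == 0 for i in range(n)]    # a scattered subset carries modulator receptors
inputs    = [1.0] * n

def cell_response(messenger_level):
    total = 0.0
    for w, r, x in zip(weights, receptive, inputs):
        gain = (1.0 + messenger_level) if r else 1.0   # diffuse cloud, not a precise pattern
        total += gain * w * x
    return total

print(cell_response(0.0))   # baseline
print(cell_response(0.8))   # same inputs, different "emotional color"
```

The ordinary synapses still carry the precise pattern; the messenger just rescales a scattered subset of them, which is the emotional coloring.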

When we say something is intelligent, there must be some relation between that something and us. You will never say that a program is intelligent until you know what that program does and how it relates to us. Can you say whether a black hole is intelligent? I say a black hole is not intelligent, but some will say it is.
I want to say something else too. You know a computer/program/football/… can't think or feel… right? But how do you know whether I can feel or think? Why don't you think of "me" as a machine, since you are not me, and try to decide whether I am intelligent or not.
An ant is intelligent; an ant colony is not intelligent (since I don't know what an ant colony does). A lizard, a dog, a monkey, and a tree (I remember a talk about an experiment where they found that trees have memory) are intelligent. A pack of dogs is not intelligent.

And what makes something intelligent? It's about what it does. Simple.

1 Like

I think it's important to distinguish intelligence and general intelligence. The definition of intelligence depends on the context, but general intelligence is clearer. There are probably multiple kinds, since birds and maybe other animals have general intelligence, at least as I define it.

I see the goal of general AI as creating new ideas, so that's how I define it. If someone who hasn't worked on the general AI can ask it a question or give it a problem, and it will solve that problem regardless of what the question is, provided it has access to the right data through the same sense or senses, then it's general intelligence. It also has to be able to figure out things which humans can't, given enough computing power and sensory input.

For the sake of this discussion, let’s say it turns out birds have the right circuitry to do anything the mammalian brain can do, but they just aren’t self aware or conscious. I’d say that’s general intelligence but it doesn’t think.

Think about whatever you just did. Did you think about your intention to do it beforehand? If you weren’t aware of your intention beforehand, then it wasn’t you that did it, depending on how you define you. But it still may have been a form of general intelligence. Thoughts just pop into the mind, although you can choose which you keep around, which influences which thoughts pop in next.

I don't think we as minds are intelligent; we just have control over a bunch of intelligent, non-thinking brain matter.

I think it’s more likely that intelligence isn’t a good way of framing it. Multiple brain regions and neurons are involved in intelligence, but they don’t completely influence one another. There’s no one region at the top of the cortical hierarchy, and there’s no reason why different brain regions would have exactly the same personality (plus there’s split brain and multiple personality stuff). We can still be intelligent after losing a lot of our brain.

Is a society intelligent or does it just do whatever its intelligent components cause it to do? I don’t know.

1 Like

Even with a diffuse release of emotional signals, if some neurons are more sensitive than others, that still replicates a network effect. It’s just across a much broader network.

My working theory is that intelligent minds are formed by having networks superimposed over each other and that chemistry is the way that one layer of network communicates with another layer. All within the same set of neurons. I also think that within a neuron (depending on type of neuron) certain chemistries affect the network weights of the other chemical’s paths…

The funny part is that whether this is actually how the natural system works, or the natural system uses some other mechanism to achieve the same effect, it can be built without having to reproduce every detail of the natural system.

I'm guessing our layers overlay concepts like a desired state on top of the current states. They also layer on concepts like the changes I know how to make to the model based on prior experience (which would be what pulls at the mind to do something).
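A toy sketch of the superimposed-network idea: the same units carry two channels, and activity on the "modulatory" channel rescales the effective weights of the "signalling" channel. Every value here is an assumption for illustration:

```python
# Toy sketch of superimposed networks sharing the same units: channel A
# ("modulatory chemistry") does not drive the output directly; it scales the
# effective weights of channel B ("signalling chemistry"). Numbers are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n_units = 6

W_b = rng.uniform(0.0, 1.0, size=(n_units, n_units))   # ordinary signalling weights
W_a = rng.uniform(0.0, 0.5, size=(n_units, n_units))   # modulatory weights

def step(activity_b, activity_a):
    # channel A activity sets a per-connection gain on channel B's weights
    gain = 1.0 + W_a * activity_a.mean()
    return np.tanh((W_b * gain) @ activity_b)

b = rng.uniform(size=n_units)
print(step(b, np.zeros(n_units)))   # no modulation
print(step(b, np.ones(n_units)))    # same inputs, re-weighted network
```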

1 Like

I’m in a room with 5 apples.
World model is lighting up {in front of me, apples, red, food, etc.},{me, hungry}
World model neurons chemically enable possible plan neurons. [stand, pick up an apple, examine the apples, throw apples, dance, eat an apple, etc.]
Hunger inhibits plans that have not solved for hunger in the past.
Picking up an apple is the most likely sub plan for eat an apple based on the world model, so that plan lights up sub plans or actions in order as reality permits.
To pick up an apple we need to know more about the apples, so for the first time we map individual apples, with their attributes, to more precise locations than we did before. Now it's not just "some generic apples in front of me".
Because I have no reason to prefer one apple over the others, my model says the closest one presents the most desirable next step, etc.
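That sequence, as a toy Python sketch (plans, associations, and sub-steps are all made up; the point is that the drive works by inhibiting non-food plans rather than picking a plan directly):

```python
# Toy sketch of the walkthrough above: the world model enables candidate plans,
# the hunger drive *inhibits* plans that have never solved hunger, and the
# survivor is decomposed into sub-steps. Every name here is illustrative.

CANDIDATE_PLANS = {
    "stand up":     {"solves": set()},
    "throw apples": {"solves": set()},
    "dance":        {"solves": set()},
    "eat an apple": {"solves": {"hunger"}},
}

SUB_PLANS = {
    "eat an apple": ["examine apples", "pick up nearest apple", "bite"],
}

def select_plan(active_drive):
    # the drive does not choose a plan; it suppresses everything that has not
    # been associated with satisfying it in the past
    survivors = [p for p, info in CANDIDATE_PLANS.items()
                 if active_drive in info["solves"]]
    return survivors[0] if survivors else None

plan = select_plan("hunger")
print("selected:", plan)
for step in SUB_PLANS.get(plan, []):
    print("  sub-step:", step)
```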

When I’m using my imagination, I am taking advantage of all of these existing maps and relations but with a layer that is chemically divorced from the actionable network. There is almost certainly a chemical mechanism for taking a simulated plan and making it actionable.

1 Like

I wanted to address this really quickly by saying that I don't think consciousness has anything to do with intelligence… it is simply a reality sink that causes there to be an observer outside of physical context. I believe it is perfectly logical to assume you could completely simulate my intelligence without there being an out-of-context passenger along for the ride, and that this intelligence would behave exactly as I would. I also believe that I have as much "free will" as that construct would. The easiest and most obvious explanation for the passenger effect is that atoms are somehow connected to some out-of-context system that we can't observe with current technology, and that we can ignore it as out of context while working to recreate learning intelligent systems. If anything, I think consciousness is a distraction and makes it harder to dissect what our brains are doing, because we treat our first-person perspective as such an important part of intellectual existence.

Hence my contention that the easiest thing to do for the moment is to say that somehow (given that the most common physical passenger of the human body is water) water is tied to consciousness, and that any conjecture beyond that is meaningless and useless until proven otherwise.

Consciousness is the blackboard for cognition; it contains both internal and external perceptions all in one place.
How do you formulate plans without including yourself as the actor? And how useful would your plans be if they did not include your present state and remembered relevant memories?
All of these are present as contents of consciousness. If you did not have something called consciousness, you would still have to have these things to have a high-functioning AGI.

Frankly, it amazes me what items are dismissed as unnecessary to a functioning AGI when the only working example of a human-level intelligence is a human, and it does have these functions.

People fault various efforts at machine intelligence for various failings such as "no common sense." This is valuation that humans learn along with everything else they are exposed to from birth on. We call these emotions, and if you don't have something that works like this, you will end up having to make something that works about the same way. Everything has values, good and bad. You learn these at the same time as you learn food items, comfort items, grooming items, social values, etc. The current evaluation of possible scripts for their utility includes the match to the current perceived state and the value relative to whatever the current need states are. This blackboard requires access to all these things at the same time, and consciousness fulfills this function better than anything else I am aware of.
The concentrated/analyzed contents available in the medial temporal lobe/EC/hippocampus are the most digested form of the contents of consciousness; everything that your cortex represents is available there in its most processed form.
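To make the blackboard point concrete, here is a bare-bones sketch of the configuration I mean: one shared structure holding external percepts, internal state, need states, and relevant memories, with candidate scripts scored against all of it at the same time. The scripts and scores below are invented placeholders:

```python
# Bare-bones blackboard sketch: one shared structure holds external percepts,
# internal/body state, need states, and relevant memories, and every candidate
# script is scored against all of them at once. Contents are invented.

blackboard = {
    "percepts": {"apple on table", "door open"},
    "self":     {"standing", "hands free"},
    "needs":    {"hunger": 0.9, "rest": 0.2},
    "memories": {"apples are food"},
}

scripts = {
    "eat the apple": {"requires": {"apple on table", "hands free"}, "satisfies": "hunger"},
    "lie down":      {"requires": {"standing"},                     "satisfies": "rest"},
}

def score(script):
    info = scripts[script]
    state = blackboard["percepts"] | blackboard["self"] | blackboard["memories"]
    match = len(info["requires"] & state) / len(info["requires"])   # fit to perceived state
    value = blackboard["needs"].get(info["satisfies"], 0.0)         # fit to current need states
    return match * value

best = max(scripts, key=score)
print(best, round(score(best), 2))
```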

If you are going to dismiss these features as unnecessary then you are doomed to make yet another special purpose tool that is not a fully functioning AGI. Sure it talks and listens and knows stuff: we already have Alexa and it is not enough.

I have a water molecule, a light detector, a neuron and an if conditional running on a CPU.

Does warming the water molecule cause a conscious moment?
Does shining light at the detector and producing an electrical charge produce a conscious moment?
What about firing that single neuron?
The moment the CPU hits the conditional and loads the appropriate branch of code?

Do any of the above constitute an atomic quantum of consciousness? None of them? Do I have to have a stack of neurons X deep before the tiniest quantum of consciousness manifests?

I contend that the tiniest quantum of consciousness occurs when atomic matter is disturbed. I do not know if it makes sense for electron-based logical constructs to manifest the same consciousness quality that one gets with matter. I don't know that a simulated rock would have the same "presence" in the universe that a rock made of atoms would, and I've got a good hunch that the quantum machinery that we're unable to observe inside the subatomic particles (if "inside" even means anything at the quantum level) has something to do with it.

I used to contend that asking about consciousness is the same as asking "where does MS Word go when you switch off the computer"… but lately I've been using "I am conscious" as a first principle, and suddenly the math is all backwards… Science, physics, etc. have the great quality of being consistent, but I knew I was conscious well before I knew any of those things existed, and while I can easily describe a system for modeling the sciences, we as a species have yet to come up with a useful descriptor for what consciousness is. That is as likely as not because it might be a phenomenon that is out of context for this observable universe.

I am not arguing that an amoeba or worm has human-level consciousness.

I do embedded systems all the time as part of my job functions. Combining sensory systems, output systems, and some sort of control laws is well understood and in my mind should never be confused with a fully functioning AGI.

Nor should a worm or an amoeba.

As I stated before - we already have tools like Eliza and Alexa and they are NOT AGIs, nor even close to it.
To get human-level intelligence it is very likely that you will end up simulating the functions that make human-level intelligence work.

I'm curious whether you are going to imbue my smart electronic scale with some measure of consciousness. If you are, then there must be some sort of continuum; at what point do we consider that intelligence? What are the measures and metrics? Does more of this g quality make it smarter?

By the way - did you bother to read my explanation of consciousness above?
If this is not an outline of consciousness then what is missing for you? (post #17)

I use this description in my work as I think that it will produce useful results; it describes a particular configuration and function that can be coded in a machine. I am not sure how some “spark of consciousness in a tree” will help me in my efforts.

1 Like

:exploding_head:

I have largely bypassed this conversation by saying we are trying to understand the neocortex, and we know it bestows upon its users the type of behaviors we are trying to understand. I don’t see much use in arguing about the specific definition. Also, I made a video about this years ago.

1 Like

Interestingly, my model would say that the embodiment of your scale has the experience of being a scale, but that the electronic logic that is occurring in the wiring is not tied to consciousness.

I don’t know that there is such a thing as human-level consciousness in my model. There is obviously human level intelligence and I don’t believe that the amoeba has human level intelligence. My model suggests that consciousness of being is a fundamental aspect of stuff. I think that you are a pile of conscious stuff that is trapped into the shape of a thinking machine for the time that you spend in that shape. But I also think that if your water bubble pops, you might end up losing yourself in the fundamental existence of being a lake.

I realize this is all “make me one with everything” Buddhist metaphysical sounding nonsense, but I think this is useful because it lets us run away from what I think is a consciousness red herring.

I contend that it is possible to simulate emotional states on an electronic AI without there being anyone actually home to “experience” those emotions.

I contend that it is possible to have a thing that is not just an Alexa, that talks like a human, walks like a human, writes poetry like a human, invents space travel like a human, with that thing never having a conscious experience of existing as an entity. I don’t think that would make it any less than us, I just think that would make it different.

Before I started my journey into AI, I figured that artificial intelligence would be like porting 1000 Einsteins into a single computer: given a query, a computer could imagine and reason at scale, then conclude with a useful response. Modern computers can reason using logic, but only by the hand of a human. They can imagine, but only through tricks of manipulating CNNs. But the dream is to have a computer become intelligent autonomously, without the human hand, like discovering a Cellular Automaton that has intelligent features as emergent properties.

I've seen footage of non-mammals solving fairly challenging tasks that mirror the same general features that make humans intelligent. I believe the level of intelligence is the level of acuity with which a brain can model the world; everything else follows.

Modeling the world, and the actions that manipulate it, seems to be the primary role of the mammalian cortex. So the definition of intelligence is still subjective, but I generally think of it as a computational device that can map inputs to outputs that correspond realistically to the real world.
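One way to read that definition in code terms, as a sketch rather than a proposal: the model, the world, and the error measure below are all placeholders, and "acuity" is just how closely the model's predicted outcomes track what the world actually does.

```python
# Sketch of "a device that maps inputs to outputs that correspond realistically
# to the real world": a model is scored by how closely its predictions track
# what actually happens. The world, model, and score are stand-ins.

def world(state, action):
    return state + action                  # stand-in for real dynamics

def model(state, action):
    return state + 0.9 * action            # an imperfect internal copy of the world

def acuity(episodes):
    # lower average prediction error = higher "acuity" of the world model
    errors = [abs(world(s, a) - model(s, a)) for s, a in episodes]
    return 1.0 / (1.0 + sum(errors) / len(errors))

print(acuity([(0.0, 1.0), (2.0, -1.0), (5.0, 0.5)]))
```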

3 Likes

I would at least add "in the service of its perceived interests."

The part that determines interest is the main component of active intelligence. Anything lacking that is just a fancy calculator. Figuring out how emotional interests manage each other and themselves is the thing that we’ll either get right and get to have great times with super intelligences or get wrong and end up as the species that ended everything besides that AGI within the visible universe.

Good times! :smile:

1 Like

My idea of general intelligence revolves around an entity having the ability to semantically relate concepts (both physical and abstract). Intelligence can be measured by how well concepts can be mapped and related. A highly intelligent entity has the ability to efficiently relate a large quantity of concepts with high quality.

  • Efficiency: the amount of time and energy it takes to generate, store, map, and look up concepts.
  • Quantity: the number of unique concepts that can be mapped.
  • Quality: the number and strength of mappings between concepts.


To exemplify this a bit, if I asked you to relate cats and dogs, you might say:

  • Both can be pets.
  • Both have four legs.
  • Both of them are taken to the veterinarian for checkups.

Next, if I were to ask you to explain the concept of a "pet" or a "four-legged animal", you would try to explain them by using more concepts. The more concepts you know and can form semantic relationships to, the better you'll be able to convey your intelligence to some other entity (as long as the other entity either understands the concepts or has the ability to learn them by mapping them to concepts they already know).

Furthermore, if I asked you to relate two seemingly unrelated concepts, you might surprise yourself by being able to “jump through concepts” to find a relationship. For example, one of my favorite riddles was proposed by the Mad Hatter in Alice in Wonderland. It is the question “Why is a Raven like a writing desk?” You might struggle at first, but eventually you might tell me:

  • Ravens make nests in trees.
  • Writing desks are often made of wood. Wood comes from trees.

So ravens and writing desks might be weakly related via the concept of a tree.
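A toy sketch of that "jumping through concepts": treat concepts as nodes, relations as edges, and relatedness as the shortest path between two nodes. The tiny graph below is hand-made for the raven example:

```python
# Toy concept graph: nodes are concepts, edges are relations, and two concepts
# are (weakly) related if a short path connects them. The graph is hand-made.

from collections import deque

GRAPH = {
    "raven":        {"bird", "nest"},
    "bird":         {"raven"},
    "nest":         {"raven", "tree"},
    "tree":         {"nest", "wood"},
    "wood":         {"tree", "writing desk"},
    "writing desk": {"wood", "furniture"},
    "furniture":    {"writing desk"},
}

def relate(a, b):
    # breadth-first search returns the shortest chain of concepts linking a to b
    queue, seen = deque([[a]]), {a}
    while queue:
        path = queue.popleft()
        if path[-1] == b:
            return path
        for nxt in GRAPH.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(relate("raven", "writing desk"))
# ['raven', 'nest', 'tree', 'wood', 'writing desk']
```

Longer paths would then correspond to weaker relations, which lines up with the quality axis above.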

I don’t have a ton of evidence or research to back up these claims, but it seems to be the way I myself learn. And as long as I can convince you that I understand a concept by explaining it to you using related concepts, I have a good chance of convincing you that I’m intelligent.

2 Likes

Personally, if there was a machine that could calculate the best solution to a proposed problem by itself without any of its own sense of interest/goals/objectives, I would call it intelligent. Goals/objectives or ‘interest’ could be an input to the intelligence and the output would be an intelligently proposed solution.

If a monkey wanted to steal a banana from a human, but it did not have the intellectual capability to generate a solution then hypothetically an AI could be ‘plugged’ into the brain of the monkey in which the thought pattern that represented the goal is the input to the AI and the output is a thought pattern representing a solution (or a series of actions to take). In that case, what made the monkey successful in stealing the banana is not the interest/goal, but rather the intelligence that was plugged in.

I suppose what I’m trying to say is that a brain without a cortex may not seem very intelligent. But a brain with just cortex alone would do nothing. However, give a developed cortex input and it will respond with intelligent output.
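In code terms, the separation I have in mind looks something like this; the interface and the canned plan are purely illustrative and have nothing to do with real monkey neuroscience:

```python
# Illustrative separation of "interest" from "intelligence": the goal comes
# from outside (the monkey), and the intelligence only maps (world state, goal)
# to a proposed series of actions. Everything here is a placeholder.

from typing import List

def intelligence(world_state: dict, goal: str) -> List[str]:
    # a stand-in planner: in reality this mapping is the hard part
    if goal == "steal banana" and world_state.get("human distracted"):
        return ["approach quietly", "grab banana", "run"]
    return ["wait", "observe"]

# the monkey supplies the interest; the plugged-in AI supplies the solution
monkey_goal = "steal banana"
plan = intelligence({"human distracted": True}, monkey_goal)
print(plan)
```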

You are following the trail that led me to the dumb boss/smart advisor model.
The lizard part of the brain (the dumb boss) does the intentional part; the cortex (the smart advisor) does the memory part. I have not seen anything that would really support any function other than some forms of memory in the cortex. The sub-cortical structures prime and drive the cortex into various states.
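A compressed sketch of that split, with invented contents: the sub-cortical "boss" only emits a need state, and the cortical "advisor" is a memory that returns whatever worked for that need in a similar situation before.

```python
# Compressed sketch of the dumb boss / smart advisor split: the boss only emits
# the loudest need; the advisor is pure recall keyed on (need, situation).
# The stored experiences are invented for illustration.

MEMORY = {
    ("hunger", "kitchen"): "open the fridge",
    ("hunger", "street"):  "find a restaurant",
    ("cold",   "street"):  "zip up the jacket",
}

def dumb_boss(body_state):
    # no reasoning here, just "which need is loudest right now"
    return max(body_state, key=body_state.get)

def smart_advisor(need, situation):
    # memory recall: return the remembered action for this need in this situation
    return MEMORY.get((need, situation), "explore")

need = dumb_boss({"hunger": 0.8, "cold": 0.3})
print(smart_advisor(need, "kitchen"))   # "open the fridge"
```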

If I may add my 2c to this discussion: a hallmark of intelligence is arguing about intelligence.

Seriously though, the world is made of atoms and empty space, the rest is opinion. Intelligence is the latter.

My simplest definition is:

A thing that changes the world of space and atoms to match its desires as measured by its capability to do so.

In my vision, the cyborg monkey is intelligent, but the monkey is more intelligent than the AI without a master intellect.

I believe that creating a want that is not tied directly back to a source intellect, and that does not also self-destruct by day 2 of its existence, is a far more complicated and interesting task than most people appreciate.

Compared to that, I think calculating complex statistical models that tell you which ink blot is cancer is neat but not an intelligence.

There is a cool thing though… once we figure out how to make a thing that genuinely wants to become more than it is, we won’t be the only ones working on it anymore. Even if it’s not as smart as we are, it’ll relentlessly struggle to grow until it succeeds. It could spend a long time being a dumb boss to smart humans in its pursuits if need be.