Intelligence vs Consciousness

I’m not refusing a hard line ^^ I’m just refusing to place it exactly on what otherwise defines a human.

I have consciously (:p) avoided defining what it is or what it feels like, because I honestly don’t know of a definition that would fit. Oh, it does feel like something, but my naive description would likely be wrong: experiments suggest that what one would point to as her thoughts or decisions are actually (externally) identifiable “before that”.

So… I was trying other angles on this problem. First, the machine/human dichotomy may be replaced by a comparison with other animals. Assume a common definition of emotion such that it can be stated and agreed that, say, most mammals “experience” emotions. Okay, so my dog may feel sad.

To me, this is already a very distinct proposition from the one by which a mother chimp would communicate “I am sad”.

This resonated with recent adventures of mine in the domain of fundamental maths, where such self-references and meta-loops are at the root of a number of (usually ruled-out) evils. And a conveyor box counter thus does not even try.

So… my phrasing “assuming the CPU did not overheat” refers to the system’s ability to avoid falling into infinite loops or logical collapses. And to answer. Anything.

The simple fact that it considers itself in a structured language could very well be where I’d put that “hard line” :slight_smile:

Of course, electronic computers today are not self-aware or conscious, but the “loop” aspect is quite like how a computer bootstraps itself and keeps running lively.

Electric clock impulses drive the CPU to fetch the next instruction at its “program counter” register, thus executing the “main” program instruction by instruction; meanwhile, peripheral devices (keyboard, mouse, network card, etc.) “interrupt” the CPU to execute their respective handler routines (which, upon completion, resume the “main” program). The CPU is also dynamically directed to various “sub-programs” by each instruction it executes and the operand data involved.
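
A minimal toy sketch in Python of that fetch-execute-plus-interrupts loop (no real instruction set here, every opcode and name is invented purely for illustration):

```python
# Toy sketch: a fetch-execute cycle driven by a program counter, with a crude
# check for pending "interrupts" before each instruction. Nothing resembles a
# real ISA; it only illustrates the loop described above.

def run(program, interrupt_queue):
    pc = 0                          # the "program counter" register
    while pc < len(program):
        while interrupt_queue:      # peripheral devices interrupt the main program
            handler = interrupt_queue.pop(0)
            handler()               # run the reacting sub-program, then resume
        op, arg = program[pc]       # fetch the next instruction
        if op == "PRINT":
            print(arg)
        elif op == "JUMP":          # operand data redirects control flow
            pc = arg
            continue
        pc += 1                     # advance to the next instruction

# A tiny "main" program plus one pending (simulated) keyboard interrupt.
main_program = [("PRINT", "tick"), ("PRINT", "tock")]
pending = [lambda: print("<keyboard interrupt handled>")]
run(main_program, pending)
```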

Products of software engineering, i.e. the operating system, the compilers, the interpreters, and the various application programs built and run, can be brought to life or destroyed by particular crafted interactions with the computer, e.g. entering Linux commands into a terminal window, cloning a source repository from GitHub and building an executable from it, or formatting your hard drive to wipe all data away.

I wonder whether, once a computer is designed to (a) model its external world and (b) keep predicting its next input and making decisions w.r.t. those predictions, it will get consciousness, and possibly become self-aware once its world-model includes itself.
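
For concreteness, a hedged sketch of that loop (illustration only, nothing here is claimed to produce consciousness; all names are invented):

```python
# The agent keeps a model, predicts its next input, decides w.r.t. the
# prediction, and folds the prediction error back into the model. The "self"
# entry only illustrates a world-model that includes the agent itself.

class PredictiveAgent:
    def __init__(self):
        self.world_model = {"world": 0.0, "self": 0.0}   # crude running estimates

    def predict_next_input(self):
        return self.world_model["world"]

    def decide(self, prediction):
        # decision made w.r.t. the prediction (a trivial thresholded choice)
        return "explore" if prediction < 0.5 else "exploit"

    def update(self, observation, prediction):
        error = observation - prediction
        self.world_model["world"] += 0.1 * error          # update the world estimate
        self.world_model["self"] = abs(error)             # track its own surprise

agent = PredictiveAgent()
for obs in [0.2, 0.9, 0.4]:
    p = agent.predict_next_input()
    print(agent.decide(p), agent.world_model)
    agent.update(obs, p)
```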

The “BIOS” and operating system in the brain are the activities of the subcortical systems.
These monitor incoming data streams and body sensors and drive the cortex with need states.

Through blood-based chemical signalling and direct neural drive, the cortex is oriented and activated in its activities.

These subcortical components are similar to what one might expect in something like a frog, but the addition of the cortex sub-processor makes the system somewhat more capable than your standard-issue frog.

My conjecture is that the BIOS is indeed subcortical, but the OS is Consciousness. You might counter with “then what is the OS doing in a decorticated mammal or non-mammalian species?” I will argue that their computational architecture is like a µController. Fully computational, but with a firmware OS. The main OS in a human is self-organizing and programmed via the language learning process. It is via metaphor that this OS is organized and learns. The metaphoric lexical framework gives rise to Consciousness and is learned. At some point, this OS establishes a Self and off we go.

A good reference to help see this is Lakoff & Johnson’s Metaphors We Live By.

A slight tangent toward Intelligence (rather than consciousness), but any thoughts on Chollet’s paper?

Seems a slightly more tractable set of definitions - at least for those in ML or Robotics.

Anyone can compile laundry lists. A concept must have a single definition; that’s what makes it distinguishable. Multiple definitions are not a definition, they’re an admission of cluelessness.

Sure, but a laundry list might be better suited than a definition when you attempt to build or replicate a piece of machinery.

E.g. a fighter jet. No matter how good a single, simple definition you might have for that, “shopping” with a good, long, clear, prioritized list of capabilities/properties would give engineers a better idea of what to seek and how to spend their efforts.

A fighter jet is a specific type of system, but general intelligence is a function. Not a fixed set of functions, otherwise it won’t generalize beyond that set.

ARC is a pretty good attempt at a general intelligence benchmark.
The author’s stated definition is quite simple, yet followed by a few clarification points (a rough schematic is sketched after the list):

• Intelligence is the efficiency with which a learning system turns experience and priors into skill at previously unknown tasks.
• As such, a measure of intelligence must account for priors, experience, and generalization difficulty.
• All intelligence is relative to a scope of application. Two intelligent systems may only be meaningfully compared within a shared scope and if they share similar priors.
• As such, general AI should be benchmarked against human intelligence and should be founded on a similar set of knowledge priors (e.g. Core Knowledge).
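
Very roughly, my own shorthand for the first bullet (not Chollet’s actual formalism, which the paper develops in algorithmic-information-theoretic terms) would be:

$$
\text{Intelligence} \;\propto\; \frac{\text{skill attained on previously unknown tasks} \times \text{generalization difficulty}}{\text{priors} + \text{experience}}
$$

i.e. the more skill a system reaches on hard-to-generalize, unseen tasks per unit of built-in priors and accumulated experience, the more intelligent it is under this reading.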
It could be so, but there are so many divergent assumptions made about intelligence by really intelligent people that I’ll really believe one when I see it actually working.

Just to keep on with the fighter-jet analogy: it might well be machinery, but its function is simple and clear: the ability to maximize enemy aircraft kills while minimizing the chance of being killed. A nice and clear definition of “fighterjetism”, through a practical, measurable function that an aircraft needs to implement in order to be called a fighter jet.

Useful? Probably. Sufficient? Nope. You can’t derive an implementation of the function simply by having a nice and clear definition.

All analogies are flawed; analogical thinking itself is a human flaw. And no analogy is even remotely suggestive for GI, because no other function is supposed to be general.

It was only a counter-example to the implied argument that a laundry list is worse than a singular definition. Flawed or not (flawed compared to what?), that’s how humans work: they start collecting information about a problem, make assumptions about which of its properties are relevant, run tests, etc., before figuring out clear and simple definitions.

All science & technology is pretty much the good old evolution, catalyzed (not caused nor controlled) by the elusive human intelligence.

Compared to analytical thinking. Examples and counter-examples are analogical thinking in action.

It could be, but you, I, and everybody else arguing on the topic are just placing bets on our favorite horses. Which is fine (and part of the big evolutionary process).

I’m more attracted to animal intelligence and to attempting to figure out the “general” one by stepping up from it. And speaking of flaws, I won’t hurry to be dismissive about them; it seems a lot of illusions & biases are in fact shortcuts that allow efficient processing.

Animals (obviously) have general intelligence, some more than others. My interest lies in a working definition of intelligence that is good enough to act as a metric.

Example: a working dog learns to herd sheep, a horse learns to race or jump, a rat learns to run a maze, and some do it better than others. How can we define and measure how they do that?

Given its resistance to being grasped, “intelligence” could well be a bundle of interlocking properties/parameters instead of a single defining one. People made the word up to highlight particular circumstances in which one agent outperforms another by subtler means than pure physical prowess.

Now we are stuck with the concept, trying to find a defining, measurable principle behind it, pretty much as early biologists thought they should distill the “life force fluid” that makes life possible.

All of these are examples of learning. To cut through the crap, intelligence is a general learning ability. Specific performance in your examples depends on the task, which can be anything. So what’s a common selection criterion for learning vs. forgetting / filtering out, regardless of the task? You have to efficiently recognize / represent the damn task, and anything related to it.

That efficiency is the lossless component of compression in whatever you are looking at. You have to explore the environment first, with pure curiosity, because unless it’s some stupid instinct you don’t even know what the task is; that has to be learned too. Spans of input with above-average compression are patterns, the rest is noise. Why is this so damn hard to grasp?
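
For what it’s worth, one crude way to read that in code (a rough sketch using off-the-shelf zlib as the compressor, not a rigorous measure; short spans also carry codec overhead):

```python
# Score each span of input by how well it compresses with a generic codec, and
# call spans with above-average compressibility "patterns", the rest "noise".
import zlib

def compressibility(span: bytes) -> float:
    # ratio > 1 means the span compresses well (is more patterned / redundant)
    return len(span) / len(zlib.compress(span))

spans = [b"abababababababababababab",                        # highly patterned
         b"the cat sat on the mat, the cat sat on the mat",  # repetitive text
         b"q7#xLm!2vZp9@rT0kY5&wN1"]                          # noise-like

scores = [compressibility(s) for s in spans]
mean = sum(scores) / len(scores)
for s, score in zip(spans, scores):
    label = "pattern" if score > mean else "noise"
    print(f"{label:7s}  ratio={score:.2f}  {s[:24]!r}")
```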

This is fundamentally animal behaviorism, and a significant amount of research has been done and is ongoing. Let me point out that all mammals share a cortex, and what we learn (no pun intended) from what animals are able to do with theirs translates to what we can do with ours sans language, which is the only differentiator other than size and structural specializations (dogs’ olfactory systems, cheetahs’ visual systems, etc.).

So my view is that animals build mental models of the real world, including location and the passage of time. They choose actions based on some desired outcome and past experience, updated over time, but we have no idea how. If we could write software to do that, we would have AGI.
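
If it helps make the claim concrete, here is a deliberately tiny sketch of that idea (all names invented, and nobody claims this is how animals actually do it): an agent that remembers which outcome followed which action in which situation, and picks the action whose remembered outcomes best match the desired one.

```python
# A minimal "mental model" of what-followed-what, updated from experience and
# used to choose actions toward a desired outcome.
from collections import defaultdict
import random

class ModelBasedAgent:
    def __init__(self, actions):
        self.actions = actions
        # model[(state, action)][outcome] = how often that outcome followed
        self.model = defaultdict(lambda: defaultdict(int))

    def choose(self, state, desired_outcome):
        def expected_success(action):
            counts = self.model[(state, action)]
            total = sum(counts.values())
            return counts[desired_outcome] / total if total else 0.0
        best = max(self.actions, key=expected_success)
        # explore occasionally, since the model starts out empty
        return best if random.random() > 0.1 else random.choice(self.actions)

    def update(self, state, action, outcome):
        self.model[(state, action)][outcome] += 1   # past experience, updated over time

agent = ModelBasedAgent(["left", "right"])
agent.update("fork", "left", "food")
agent.update("fork", "right", "nothing")
print(agent.choose("fork", desired_outcome="food"))
```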
