Research: consciousness is to predict what follows action

Our perception or labelling of consciousness would appear to more closely match the second-order feedback effects after the attention phase (which is itself an internal, subconscious triggering of attentional direction). i.e. do we really “consciously” decide where to look, if we are talking about the >100ms perception gap in the paper?

To me, “consciousness” is a non-critical label for “shallow” intelligence; however, more complex aspects (longer conceptual sequences) require that feedback loop in order to increase the temporal pool of concepts that are active. i.e. the feedback loop gets longer with higher intelligence (aggregate column sequential activation latency). The buffer feedback loop needs to be quick enough that signal fade does not fade attention (different to localised thalamic inhibitory column behaviour of attention), or what I think we then label as consciousness: an active pool of dispersed columns in differing temporal states.
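
To make that “quick enough feedback loop” idea concrete, here is a tiny toy simulation (my own sketch, not HTM code and not anything from the paper; the decay rate, threshold and refresh intervals are made-up numbers): a pool of active concepts fades passively, and a feedback loop re-excites whatever is still above threshold. The only point is the relationship between the loop period and the fade time.

```python
# Toy illustration (my own sketch, not HTM code and not from the paper):
# a pool of "active concepts" whose activation fades over time, plus a
# feedback loop that re-excites whatever is still above threshold every
# `refresh_interval_ms` milliseconds. The decay rate and threshold below
# are made-up numbers chosen only to show the effect.

DECAY_PER_MS = 0.02       # assumed fraction of activation lost per millisecond
ACTIVE_THRESHOLD = 0.3    # assumed level below which a concept is no longer "active"

def simulate_pool(initial_activations, refresh_interval_ms, total_ms):
    """Return the concepts still active after total_ms milliseconds."""
    activations = dict(initial_activations)
    for t in range(1, total_ms + 1):
        # Passive signal fade on every concept.
        for concept in activations:
            activations[concept] *= (1.0 - DECAY_PER_MS)
        # The feedback loop restores anything still above threshold.
        if t % refresh_interval_ms == 0:
            for concept, level in activations.items():
                if level >= ACTIVE_THRESHOLD:
                    activations[concept] = 1.0
    return {c for c, level in activations.items() if level >= ACTIVE_THRESHOLD}

pool = {"cup": 1.0, "kettle": 1.0, "milk": 1.0}
print(simulate_pool(pool, refresh_interval_ms=30, total_ms=500))  # fast loop: pool survives
print(simulate_pool(pool, refresh_interval_ms=90, total_ms=500))  # slow loop: pool fades to empty
```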

We can “consciously” plan a set of actions (e.g. milling a piece of metal: the tool path, or the concept of a block of metal) purely from an internal model perspective that involves no words at all, so here I fully agree with Bitking. Words are just a different set of conceptual information, no different to any other concept, as they are all just column representations / artifacts. Words are “always” attached to existing concepts the brain has already learnt, or we prompt for a fuller explanation to build the concept before we can attach a word (an external sensory transfer mechanism, aka communication).

All concepts are just triggers or inputs, be they words, sounds, senses, etc. Next time you “consciously” decide to brew a cup of tea, reflect on how many words were involved in the “conscious” process of intelligent behaviour choosing that particular cup and that amount of milk. Ok, yes, introspection…

Those are my thoughts as to the process, and that’s why I’m not even trying to explicitly implement consciousness: you can’t program it specifically into a model, as it’s a byproduct of the overall operation.

1 Like

For me there is a sharp delineation between objective conscious state and subjective consciousness (although with both there is some blurring of terminology and boundaries).

  1. OC is science, SC is not.

  2. OC can be measured, monitored, quantified, categorised, recorded, studied experimentally; SC cannot.

  3. OC has states such as aware, comatose, asleep; SC does not.

  4. SC is experienced by introspection and communicated to others by verbalisation; OC is not [you cannot experience or report being in a coma, for example]

  5. SC is associated with an inner voice, language, visual recall, thinking; OC is not.

  6. We believe (but cannot prove) that the vast majority of humans have SC and that non-humans do not.

  7. We know and can prove that the vast majority of animals (including humans) have OC and that non-animals do not.

  8. There is no reason to believe that either OC or SC has any relevance to the study of AI or the quest for AGI.

Since all the above is based on objective scientific fact I stand ready to change my mind when and if someone produces evidence to the contrary.

2 Likes

I thought we’d already put this one to bed. You never consciously decide anything. Your subconscious makes the decisions and lets you know the outcome; you become aware of the decision you already made about half a second later.

You cannot know this except by laborious science. Your introspection is making you believe otherwise, but introspection is not science.

1 Like

SC is a fact, people just confuse the hell out of it. Somewhat intentionally, to feel special about themselves.

Not to fan this flame, but yes. I personally prefer to just use Consciousness to refer to what humans, and only humans, have; but the masses insist otherwise.

If your test for C involves speech, then this should be clear. Go ahead, ask a dog about their day, or what they think of Nietzsche, or even simpler: what is their story? Dogs don’t have a story. More astounding, neither do chimps.

Oh, and it is not enough to converse. Alexa does that. I just asked ‘it’ and ‘it’ said “Hmmm…I don’t know that one.” Now think of your own story. How far back does it go? Should be the moment you achieved C. Now play your story forward in your mind. Now reverse it back to that time in Mrs. Smith’s 3rd grade class when Joey put a tack on her chair. Fast forward to today, can you visualize yourself eating lunch? That is Consciousness.

1 Like

I agree with you in part.

In my world, my use of “conscious” refers to an abstract feedback loop, because I’m just looking at the programmatic temporal state of data within a process. In HTM terms, in my world the consciousness label would just be the active state of column activations and their transitory state - a temporal snapshot of sorts. Replace conscious with this description in the following.

We “consciously” change the decisions yet to be made.

That’s what the forecast loop / temporal pool state in the cortex is for: the iterative feedback loop, and why it takes us far longer than 10-20 column sequence firing latencies (and many wave iterations across the cortex) to evaluate some outcomes. Recursive amplification (changes to the result of the next burst in marginal cases due to latent signal decay) and ripple changes to the paths on each iteration, due to latency differences from axon lengths, myelination, etc.
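
As a rough illustration of why marginal evaluations take many more wave iterations than clear ones, here is a toy leaky-accumulator sketch (my own illustration with invented numbers, not a claim about actual cortical dynamics):

```python
# Toy leaky-accumulator sketch (my own illustration with invented numbers,
# not a claim about actual cortical dynamics): each "wave iteration" feeds
# the previous net evidence back with a little decay and adds a new
# increment; a decision is only reported once the accumulated evidence
# crosses a threshold. Clear preferences cross quickly, marginal ones
# take many more iterations.

def waves_to_decide(evidence_per_wave, threshold=1.0, decay=0.95, max_waves=1000):
    """Return how many feedback waves it takes to reach the decision threshold."""
    accumulated = 0.0
    for wave in range(1, max_waves + 1):
        accumulated = accumulated * decay + evidence_per_wave  # feedback + signal decay
        if abs(accumulated) >= threshold:
            return wave
    return max_waves

print(waves_to_decide(0.40))  # clear preference: decides in about 3 waves
print(waves_to_decide(0.06))  # marginal preference: takes roughly 35 waves
```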

To say we never consciously decide anything (assuming consciousness is the latency between initiation and cortex feedback) is not strictly true when applying thalamic (and other) feedback loops from the cortex. Micro-expressions are an example of the conflict between initiators and inhibitors. At this point I’m referencing signal initiators in the cortex as part of what I believe people attach to the concept of consciousness, and awareness of them occurs after the event has occurred, due to the overall latency of aggregate signal propagation through all the columns involved.

What we are “aware” of is an interesting state of, perhaps, self-delusion after the fact. To me it’s like asking a basic computer program to rewrite itself when the basic capabilities for such a process do not exist (have not been added to the code base).

Maybe my concept of the artifacts behind the common label of “consciousness” is quite different to yours, because we all lack a common definition. Your definitions of OC and SC make no sense at all in my world, because you can’t measure an aggregate pattern. To me it’s like being asked to measure what a green field with cows in it equates to.

1 Like

Idk about that, my dog is 8 years old and suddenly started communicating quite well non-verbally. She can understand a lot of what I say and generally uses body language/barks to give yes/no answers. Interestingly, she only attempts communication if she can directly benefit from it.

1 Like

sigh

2 Likes

double sigh.

tbh, I don’t believe language is what makes us humans special; I don’t believe we are special at all.

there’s this endemic elitism about animals being inferior because of their lack of a certain cognitive skill, and when I look at this I just see a manifestation of the “us vs them” problem.

2 Likes

According to the paper, and a quote from Daniel Dennett, Libet’s experiment was preceded by William Grey Walter in 1963. That’s 20 years earlier! And Walter’s experiment was quite bizarre:

3 Likes

I don’t see the paradox here.

is it really so surprising that we experience our own decisions the same way we experience other senses/feelings?

the feeling of taking a decision is not the decision itself.

I suppose that’s useful for fast reaction times.

1 Like

Well, with the latest technological leaps, language is no longer a barrier. https://www.youtube.com/shorts/_AOS_CIPAA4

2 Likes

Who or what are you reacting to?

You are describing OC (objective C): you can observe your dog.

People claim to experience SC (subjective C), but nobody can prove it. There is no experimental data, only the subject’s words. Your dog cannot make that claim.

1 Like

When I first started reading this article I rolled my eyes: yet another “opinion” article about consciousness?
However, this one stands out as excellent for a few reasons:

  1. It summarizes the prior art, so this article should be a good starting place for learning about the topic. They also explain how their thesis is related to the prior art.
  2. The thesis is actually a pretty good idea.
4 Likes

So I think this is a rationale for silent speech / sub-vocalization. The “action” itself is inconsequential (it doesn’t get through the neuromuscular junctions), but the feedback from it forces us to think about what we would be saying. Probably the same for other types of imaginary or seemingly inconsequential actions, such as writing down things we already know and probably won’t look at again.

3 Likes

I was not talking about consciousness, just stating that the assumption “language is a prerequisite to consciousness, therefore animals aren’t conscious” must be false.

I was reacting to this part of your post.

1 Like

Oh, I see. What I meant was that apparently they got to insert electrodes into people’s neocortex and then hooked them up to a slide projector. Nowadays we raise concerns when researchers do that to rats.

Different times I guess.

But I am not at all surprised by the results or the conclusion.

1 Like

I am not convinced by using electrodes to infer a choice a few hundred ms before the human becomes conscious of the respective choice.

Here’s why:

  • let’s assume there is some voting involved in every choice we make. Some columns/areas get “activated” by recent past experiences, and they get to decide together, each sub-unit from its own narrow(er) perspective
  • during normal thinking, choices are “easy”, which means all sub-units are pretty close to a consensus.
    What does this mean? An ML example: if we have a trained classifier on MNIST, more than 90% of the time it is very confident. That means there’s a clear winner, and no “competition” over whether a digit is a “2” or a “5”.
  • let’s assume that, in order to increase precision, there are “processors” handling the 10% of ambiguous images, which might be well over 10 times more “expensive” than the simple … logistic-regression-style classifier that gets it right 90% of the time (see the sketch at the end of this post).

So, unless in the experiments above the electrode outputs predict conscious choices with 100% accuracy, they might well be tapping only into intermediate-stage predictions of the whole choosing process, which measure only a probability of the actual future choice, rather than the actual decision.

Maybe there are always consistency checks done before voting on the outputs of the many “easy” stages.
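
To make the two-stage argument concrete, here is a toy sketch (my own illustration with made-up numbers, not the experiment from the paper): a cheap fast stage decides the easy ~90% of cases, an expensive slow stage overrides it on the ambiguous ~10%, and a probe that only sees the fast stage predicts the final decision often, but not with 100% accuracy.

```python
# Toy sketch of the two-stage argument above (my own illustration with
# made-up numbers, not the experiment in the paper). A cheap "fast stage"
# handles the ~90% of easy cases; ambiguous cases are passed to an
# expensive "slow stage" that can override it. A probe that only taps the
# fast stage predicts the final decision very often, but not always,
# i.e. it measures a probability of the choice, not the choice itself.

import random

random.seed(42)

def fast_stage(x):
    """Cheap classifier: returns (guess, confidence)."""
    return x["likely_label"], x["confidence"]

def slow_stage(x):
    """Expensive disambiguation: assume it recovers the true label."""
    return x["true_label"]

def decide(x, confidence_threshold=0.9):
    guess, conf = fast_stage(x)
    return guess if conf >= confidence_threshold else slow_stage(x)

# Simulated stream: 90% easy cases (fast stage is right and confident),
# 10% ambiguous cases where the fast guess is a coin flip.
def make_case():
    true_label = random.choice(range(10))
    if random.random() < 0.9:
        return {"true_label": true_label, "likely_label": true_label, "confidence": 0.99}
    ambiguous_guess = random.choice([true_label, (true_label + 1) % 10])
    return {"true_label": true_label, "likely_label": ambiguous_guess, "confidence": 0.55}

cases = [make_case() for _ in range(10_000)]
agree = sum(fast_stage(c)[0] == decide(c) for c in cases)
print(f"fast-stage readout matches final decision {agree / len(cases):.1%} of the time")
# Prints roughly 95%: high, but not 100%, much like the electrode predictions.
```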

2 Likes

reminds me of this: