Research: consciousness is to predict what follows action

Yes, there are plenty of places where we have found that maths works, but we have absolutely no idea why. Nature doesn’t know that; Nature does what it does, and we, through our sense organs and intelligence, see patterns.

We (and other animals) see the patterns and rely on them to choose actions that help us survive. Why does that work? Depends what you believe in…

2 Likes

What I do believe is necessary for consciousness is not speech, but constraint of the inner representation. For example, some animals reportedly cannot distinguish an object from a photo in which its features have been randomly rearranged. Clearly, for visual consciousness the locations of features must be fixed in the inner representation and recognized as such, so that an object is not confused with a random rearrangement of its features.

There are humans with damage to speech production or understanding areas that are still conscious nonetheless.

The problem is the color phi phenomenon. Imagine an experiment in which people are asked to make a decision, say push a button, the moment they perceive the color change. The problem is that the perception is constructed after the event: the conscious sensation occurs after the button was (or was not) pushed, so the decision to push it, and even a last-moment decision not to, would all be made unconsciously.

1 Like

As per my prior post, my definition/understanding of the mechanics is different: it is more an inner temporal state than what we “think” it is. We appear to “think” well after the fact, and I believe awareness is something of an illusory state. Within my own activities I regard consciousness as a temporal state (<150 ms), not awareness (longer transitory states, >250 ms).

I believe consciousness* (awareness) is purely a mechanism/means/artifact to facilitate external communication, as it is a much longer, temporally complex environment compared to internal timing, while having very minimal relevance to the ability of the core to operate. Externally triggered surprise is far more complex than inner capabilities, because inner surprise can only occur through overlaps/conflicts/collisions of forecasts/predictions.

When someone is sleepwalking, are they conscious? If not (as per the legal definition, aided by science), then must all of the activities and actions a person carries out (brain-area activity) while in such a state be excluded from the definition of being conscious? What we would be left with may well be a very, very small area of inhibition origination as a definition of what we label consciousness, or even just a definition of GABA-A levels? Or is consciousness just a particular temporal state, regardless of specific brain areas?

1 Like

Agreed. It is a necessary, but not sufficient condition.

1 Like

I agree.
I think the paper is well put, and the model is simple yet covers an incredible amount of ground. This is usually a good sign, IMHO.
On the downside, the paper is a bit long, but only because of the depth of evidence.

It is logically compatible with other models (besides the many they quote) but makes empirical predictions, which can be tested. To be fair, it may have been created to resolve many existing empirical questions, including temporal ordering problems and disease in real brains. Much overlap with Anil Seth’s practical approach, I thought. The association with attention schema theory (AST) is strong but not mentioned.

The fact that the theory has an evolutionary context is, I think, also a most refreshing step.

They explain this as a memory-first model, but they expect adaptation to do other things beyond memory (e.g. mental time-travel/prediction, problem solving, abstract thought) using the same components. This feels very much like adaptive biology over evolution. Such re-use should be testable.

I thought they were very light on the implications for language, especially on the Now-or-Never bottleneck in language, which is often ignored (due to inappropriate computer analogies).

There are probably implications for how ML memory systems should be designed. Anything with a historical reach potentially predating vertebrates suggests the potential power of such a model.

I’d be very interested in seeing the future results of their work.

2 Likes

Apologies to the OP for posting something off-topic, but I’m curious whether your dumb-boss & smart-advisor metaphor could mean that a person’s high IQ is due to having a really smart dumb-boss or a really smart smart-advisor (or both)? If so, how would they think differently?

I think that this post addresses your question; spoiler alert - it’s mostly smart advisor.

2 Likes

I mentioned the overlap of this memory model with some of Anil Seths work above.

Anil seems to be funded by the European Research Council (Sussex neuroscientist wins €2.43 million grant to probe mystery of human consciousness : Broadcast: News items : University of Sussex) to develop consciousness models.

He published (May 2022) a review of what he considers the four major model types, with a view to bringing them to a more useful position where they make testable hypotheses for neuroscience.

He seemed not to know about Budson’s (the OP’s) work yet, when it was mentioned.

The main paper is here (paywalled):
https://www.nature.com/articles/s41583-022-00587-4

However if you go via his site:
https://www.anilseth.com/research/current-research-highlights/

You can see a full but read-only (DRM’d) version:
https://www.nature.com/articles/s41583-022-00587-4.epdf?sharing_token=YcY6bzXl0iqFYKrqtykdLNRgN0jAjWel9jnR3ZoTv0OlRlPtg3bVLf-Jc8wcElS4cYy8AzDVCWBxQOzhq6tjCaPtzaUOCVNudwUX_DHiGRbrwwYvSfYcJ-WgeYee3uFDjHJggIjwukEF0eyKzcSGFjW47xrxnt_yGTuxSkm_API%3D

2 Likes

For many years I’ve been developing a theory of consciousness and self which argues that all conscious experiences – of the world and of the self – are rooted in predictive models in the brain that are geared towards keeping the body alive. We are conscious ‘beast machines’, and consciousness has more to do with being alive than with being intelligent. I’ve also explored other aspects of the philosophy of consciousness, including the possibility of ‘islands of awareness’ in (for example) brain organoids.

Seth, A.K. (2021). Being You: A New Science of Consciousness. Faber/Dutton

Someone, please, just shoot me now. Julian Jaynes, now getting on for half a century ago. Yes, too difficult to read and even more so to understand.

1 Like

This just says models evolved for survival. All models are by definition predictive, and all evolution is about survival, so it doesn’t say much. Models of what?

2 Likes

Exactly! …and another one jumps onto the Consciousness bandwagon.

1 Like

Consciousness is a licence for loose talk. That word should be banned.