We suggest that there is confusion between why consciousness developed and what additional functions, through continued evolution, it has co-opted. Consider episodic memory. If we believe that episodic memory evolved solely to accurately represent past events, it seems like a terrible system: prone to forgetting and false memories. However, if we believe that episodic memory developed to flexibly and creatively combine and rearrange memories of prior events in order to plan for the future, then it is quite a good system. We argue that consciousness originally developed as part of the episodic memory system, quite likely the part needed to accomplish that flexible recombining of information. We posit further that consciousness was subsequently co-opted to produce other functions that are not directly relevant to memory per se, such as problem-solving, abstract thinking, and language. We suggest that this theory is compatible with many phenomena, such as the slow speed and the after-the-fact order of consciousness, that cannot be explained well by other theories. We believe that our theory may have profound implications for understanding intentional action and consciousness in general. Moreover, we suggest that episodic memory and its associated memory systems of sensory, working, and semantic memory as a whole ought to be considered together as the conscious memory system in that they, together, give rise to the phenomenon of consciousness. Lastly, we suggest that the cerebral cortex is the part of the brain that makes consciousness possible, and that every cortical region contributes to this conscious memory system.
The following article provides supporting evidence for the theory above.
Memories with a blind mind: Remembering the past and imagining the future with aphantasia
Our capacity to re-experience the past and simulate the future is thought to depend heavily on visual imagery, which allows us to construct complex sensory representations in the absence of sensory stimulation. There are large individual differences in visual imagery ability, but their impact on autobiographical memory and future prospection remains poorly understood. Research in this field assumes the normative use of visual imagery as a cognitive tool to simulate the past and future, however some individuals lack the ability to visualise altogether (a condition termed “aphantasia”). Aphantasia represents a rare and naturally occurring knock-out model for examining the role of visual imagery in episodic memory recall. Here, we assessed individuals with aphantasia on an adapted form of the Autobiographical Interview, a behavioural measure of the specificity and richness of episodic details underpinning the memory of events. Aphantasic participants generated significantly fewer episodic details than controls for both past and future events. This effect was most pronounced for novel future events, driven by selective reductions in visual detail retrieval, accompanied by comparatively reduced ratings of the phenomenological richness of simulated events, and paralleled by quantitative linguistic markers of reduced perceptual language use in aphantasic participants compared to those with visual imagery. Our findings represent the first systematic evidence (using combined objective and subjective data streams) that aphantasia is associated with a diminished ability to re-experience the past and simulate the future, indicating that visual imagery is an important cognitive tool for the dynamic retrieval and recombination of episodic details during mental simulation.
We’re still missing the “agentive” component. Memory and pattern recognition are vital, and likely steer cognition, but how do we account for the active aspect? Assuming a blank slate (a new baby, or earlier), there is still core programming of a sort that results in the development of the cognitive and “consciousness” system. Someone with memory defects is still “conscious” as well. So it’s one large piece of the puzzle, but not the whole puzzle. This is the part that still eludes me. It’s quite obvious that pattern recognition and a good architectural structure of recall are important, but this is the low-hanging fruit of consciousness.
For example, it’s the status quo to use a Bayesian probability framework for predictive measures. But in reality, it’s more likely that we develop our own unique version of “probability” based on our experiences. It’s not as if we have Bayesian statistics hard-coded in our brains. So what is the initial seed that leads us to develop our own predictive and probability measures?
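To make the contrast concrete, here is a minimal sketch (my own illustration, not from the thread) of a textbook Bayesian update next to a crude experience-based frequency estimate; the `0.5` fallback stands in for the unknown innate “seed” being asked about and is purely a placeholder assumption.

```python
def bayesian_estimate(successes, failures, prior_a=1.0, prior_b=1.0):
    """Beta-Bernoulli posterior mean: Bayesian statistics 'hard-coded'."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

def experience_estimate(successes, failures):
    """Naive frequency from lived experience; needs some innate default
    when there is no experience yet (here, an arbitrary 0.5 'seed')."""
    total = successes + failures
    return successes / total if total else 0.5

# After observing 3 successes and 1 failure:
print(bayesian_estimate(3, 1))    # 0.666...
print(experience_estimate(3, 1))  # 0.75
```

The two estimators converge with enough data; the open question in the post is where the prior (or the fallback) comes from in a brain.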
The problem with this study may well be that it misses one very critical aspect, namely: does visualisation inherently limit your imagination to what you can visualise, and therefore always draw the imagination back within this bound?
I have what I believe to be aphantasia (I realised not long ago, when I asked people if they could visualise a “blue horse”), but the way I “see things” is more just the abstract concepts with no image. That’s true even when thinking through the design and build of complex machinery. It’s just abstract patterns.
Where I believe it originated, for me, is a heavy focus on programming since age 10, where there is nothing to visualise and it’s all just abstract patterns and concepts. So, with plasticity, you end up with that visualisation ability weakened so much, or lost, that you can’t visualise a “blue horse” if you don’t have a prior memory of a blue horse to recall.
I can replay memories and, as I have been told, pattern-match some very odd things from prior memories, so that seems to conflict a bit with some aspects of the paper as well.
“aphantasia is associated with a diminished ability to re-experience the past” : I would disagree with this statement.
To me, using visualisation for everything would create so many problems and slow everything down, so I “see” no benefit in trying to re-train my brain that way.
I don’t hear words when I read, nor really envision the settings and clothes described in detail in stories. The facts are absorbed directly and at a much higher speed than speech - measured at 300 to 1000 WPM depending on the material. I find reading aloud to be annoying - reading to my kids was a real task for me.
Likewise, plodding through a video when I can read the text is annoying to me.
As @BrainVx relayed, I can perform things like mental rotation to describe object transformations but without “seeing” the object. The geometric relationships are done in a space “outside” of images.
I don’t have any baseline to compare my performance against others in the tasks named in the original post, but I do know that when working with other engineers, not being tied to mental images means I can often sort through engineering problems which require spatial understanding faster than my co-workers. I frequently have to stop and explain the relationships in a problem to them so they can catch up to me in brainstorming sessions.
I do feel that imagining the past and future is also done in this space “outside” of images.
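The kind of imageless spatial transformation described above can be expressed purely as relationships between coordinates. A minimal sketch (my own hypothetical illustration, not anything from the posts) of a mental-rotation step with no picture anywhere, only arithmetic on the geometry:

```python
import math

def rotate_z(point, degrees):
    """Rotate a 3-D point about the z-axis: pure coordinate arithmetic.
    No image is rendered; only the geometric relationships are tracked."""
    x, y, z = point
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

corner = (1.0, 0.0, 0.0)
print(rotate_z(corner, 90))  # ≈ (0.0, 1.0, 0.0)
```

The analogy is loose, of course, but it shows that a rotation result can be derived symbolically without ever “seeing” the object.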
I think in order to define consciousness, we probably need more info from neuroscience. People have been discussing consciousness for over 2000 years and still can’t agree on a definition. I’m not saying there aren’t good ideas, just nothing proven.
The brain has various mechanisms vaguely related to our vague concept of consciousness, so those mechanisms could gradually clarify what consciousness is. Or at least provide a better basis for defining consciousness.
In this context the word “consciousness” refers to the phenomenon of “conscious access” which is defined by the global neuronal workspace theory as follows:
[…] conscious access is global information availability (see Baars 1989): what we subjectively experience as conscious access is the selection, amplification and global broadcasting, to many distant areas, of a single piece of information selected for its salience or relevance to current goals.
And conscious access can be measured both subjectively and objectively:
As noted by Baars (1989), the experimental study of the mechanisms of conscious access requires the definition of a minimal contrast between a situation in which information is consciously accessed and a similar situation in which the same information is only processed non-consciously. Many such contrasts are now available (Kim and Blake 2005). Our own brain imaging work relied primarily on two techniques: retrograde masking, where the stimulus is flashed for a perceptible duration but is made invisible by the subsequent presentation, at the same location, of another shape, called the “mask;” and the attentional blink (AB), where a brief target, presented for a duration that would be perceivable in isolation, becomes invisible once the participants are temporarily distracted by a concurrent task. In both cases, functional magnetic resonance imaging (fMRI), magneto-encephalography (MEG), electro-encephalography (EEG) and intracranial recordings can be used to record the progression of activation in the cortical hierarchy under conditions of conscious versus non-conscious perception.

Such a research program requires a consensus on an empirical criterion to discriminate conscious and non-conscious processing. According to a long psychophysical tradition, grounded in signal-detection theory, a stimulus should be accepted as non-conscious or “subliminal” (below threshold) only if subjects are unable to perform above chance on some direct task of stimulus detection or classification. This objective definition raises problems, however (Persaud et al. 2007; Schurger and Sher 2008). First, it tends to overestimate conscious perception: there are many conditions in which subjects perform better than chance, yet still deny perceiving the stimulus. Second, performance can be at chance level for some tasks but not others, raising the issue of which tasks count as evidence of conscious perception or merely of subliminal processing. Third, the approach requires accepting the null hypothesis of chance-level performance, yet performance never really falls down to zero, and whether it is significant or not often depends on arbitrary choices such as the number of trials dedicated to its measurement. For these reasons, our research has emphasized obtaining subjective reports of stimulus visibility, if possible on every single trial (Sergent and Dehaene 2004). Such subjective reports are arguably the primary data of interest in consciousness research. Furthermore, reports of stimulus visibility can be finely quantified, leading to the discovery that conscious perception can be “all-or-none” in some masking and AB paradigms (Del Cul et al. 2006, 2007; Sergent and Dehaene 2004). Subjective reports also present the advantage of assessing conscious access immediately and on every trial, thus permitting post-experiment sorting of conscious versus non-conscious trials with identical stimuli (e.g., Del Cul et al. 2007; Lamy et al. 2009; Pins and Ffytche 2003; Sergent et al. 2005; Wyart and Tallon-Baudry 2008). Importantly, objective assessments, wagering indices and subjective reports are generally in excellent agreement (Del Cul et al. 2006, 2009; Persaud et al. 2007). For instance, the masking thresholds derived from objective and subjective data are essentially identical across subjects (r = 0.96, slope ≈ 1) (Del Cul et al. 2006). Those data suggest that conscious access causes a major change in the availability of information that is easily detected by a variety of subjective and objective measures.
Source: The Global Neuronal Workspace Model of Conscious Access: From Neuronal Architectures to Clinical Applications, Stanislas Dehaene, Jean-Pierre Changeux, and Lionel Naccache (2011) DOI:10.1007/978-3-642-18015-6_4 (free full text)
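The “objective” signal-detection criterion the excerpt critiques can be sketched numerically. This is my own illustrative code, not anything from the paper: d' (sensitivity) near zero is the classic operational test for a stimulus being “subliminal,” and the excerpt's first objection is exactly the case where d' stays above zero while the subject denies seeing anything.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    d' ≈ 0 means detection performance at chance, the 'objective'
    criterion for calling a stimulus subliminal."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A subject who denies perceiving the stimulus yet detects it above chance:
print(round(d_prime(0.69, 0.31), 2))  # ≈ 0.99 -> objectively 'seen'
print(round(d_prime(0.50, 0.50), 2))  # 0.0  -> objectively 'subliminal'
```

The hit and false-alarm rates here are made up for illustration; in a real masking study they would come from per-trial detection responses.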
I’d like to stress that aphantasia is part of the normal diversity of healthy human brains. At approximately 2-3% of the population, it is actually rather common. It makes for interesting case studies in the powers and neural correlates of imagination.
Lives without imagery - congenital aphantasia
Zeman, AZ; Dewar, M; Della Sala, S (2015)
[Participants in this study] identified compensatory strengths in verbal, mathematical and logical domains. Their successful performance in a task that would normally elicit imagery – ‘count how many windows there are in your house or apartment’ – was achieved by drawing on what participants described as ‘knowledge’, ‘memory’ and […]
The following article demonstrates a more practical application of imagination. It finds that math is normally associated with auditory/speech regions of the cortex, but after abacus training it becomes associated with visual and tactile regions as well.
A Review of the Effects of Abacus Training on Cognitive Functions and Neural Systems in Humans
Abacus, which represents numbers via a visuospatial format, is a traditional device to facilitate arithmetic operations. Skilled abacus users, who have acquired the ability of abacus-based mental calculation (AMC), can perform fast and accurate calculations by manipulating an imaginary abacus in mind. Due to this extraordinary calculation ability in AMC users, there is an expanding literature investigating the effects of AMC training on cognition and brain systems. This review study aims to provide an updated overview of important findings in this fast-growing research field. Here, findings from previous behavioral and neuroimaging studies about AMC experts as well as children and adults receiving AMC training are reviewed and discussed. Taken together, our review of the existing literature suggests that AMC training has the potential to enhance various cognitive skills including mathematics, working memory and numerical magnitude processing. Besides, the training can result in functional and anatomical neural changes that are largely located within the frontal-parietal and occipital-temporal brain regions. Some of the neural changes can explain the training-induced cognitive enhancements. Still, caution is needed when extending the conclusions to a more general situation. Implications for future research are provided.
[One case study in the review involved] examining recovery-related brain activity in an AMC user with a right hemispheric brain lesion. The participant had received 3 years of AMC training at an abacus school. After training, she kept using AMC in everyday activities, and became a finalist at a domestic abacus competition. In July 2009, the participant suffered from a right hemispheric infarct in the anterior and middle cerebral arteries. Six months after her stroke, she reported that, although her knowledge of basic arithmetic facts and related operations of a physical abacus were intact, she could not use the visuospatial imaginary strategy for either mental arithmetic or digit memory. The first fMRI scanning was conducted at that time. Language-related brain activity including Broca’s area was observed during both mental arithmetic and digit memory tasks. Thirteen months after her stroke, she reported that she was able to shift the mental arithmetic strategy from linguistic to visuospatial representations, and her superior capacity for digit memory recovered. Then a second fMRI session was conducted. Interestingly, visuospatial-related brain areas including the bilateral frontal-parietal network were activated during both mental arithmetic and digit memory tasks.
The problem I have with this is not that the conscious recalling may help in better decision making, but rather that it is dependent on the conscious phenomenology. It could very well be that the richer signal that helps the better decision making also produces conscious imagery as a side effect.
As a metaphor, consider two water channels behind a dike. One channel directs water downstream. A second one directs water through a water wheel and then directs the water downstream. This second channel is usually closed off when we don’t need the wheel to turn. When both channels are open, more water flows through, but the higher water flow is not dependent on the wheel turning. The wheel is turning because the water flows through the second channel.
Marcus Hutter collected a hundred-odd definitions of intelligence, if I remember right, and of course suggested his own, AIXI. Attempts to define consciousness are not that different in intent and yield. Try to define a bicycle.
IMHO a system (bicycle, consciousness, …) is only definable when all the subsystems and their interactions are definable (defined already).
So, all attempts to speculate on the subject are doomed. Define subsystems, explore interactions, implement a model as a ground proof (truth) - that’s the only way. Stop talking the talk, start walking the walk. “But you’re a bunch of cowboys” (c)
Just could not keep my mouth shut, sorry guys.
“Did I miss”… you kind of did, though not many people didn’t. Anyway, there is a 4th generation of a big LLM on GitHub - MasterAlgo/GPT-Teaser, posted two years ago. (Big because it grows a network at a tempo of 1 billion parameters per hour; the growth is controlled and bounded by available RAM.) That thing is built on the stated paradigm: a system is a collection of subsystems and their interactions. It is structurally and synaptically plastic. It addresses continual learning, classification, generation, knowledge transfer, probably the curse of dimensionality, and more.
It is difficult to read; I never intended it to be well documented - I can give you the reasons, but that is not important. People are not interested anyway.
To make it simpler, I have distilled it to 500 lines of Java plus some telemetry. It continually calculates the similarity of an arbitrary number of sequences of different lengths, noisy and irregularly sampled. If you’re interested: GitHub - MasterAlgo/Simply-Spiking
I have tried it with images as well, but my resources are limited. I cannot implement everything (text-to-speech, chess). Isn’t it curious that my LLM works by comparing 10^6 token vectors?
Anyway, I’m leaving for a week. No offense meant. Peace.
Maybe I am just dense, but how do I go from your code to ChatGPT?
Reasonable minds may disagree that ChatGPT really is intelligent, but I would expect any pretender to the intelligence crown to work at least as well as the current crop of LLMs to be taken seriously.