Modelling AI as an experiencing stream

Hi,
Here I’m rambling about a potential formalizing perspective on human intelligence. I don’t know if this perspective is useful (== if it will lead to an applicable theory), but… it is something that bugs me and I feel I should share it.

First consider the abstract (or artificial, if you like) concept of a qualia as a quantum of a recognizable thing: a color, a pain, an emotion, my bicycle, the general concept of a bicycle, the word “mother” or an abstract math concept like Pythagoras’s theorem.
We humans (and likely mammals and birds) live a continuous conscious experience which (presumably) can be encoded as a sequential stream of qualia, one after another.
In other words, over our lifetime our conscious experience can be encoded as a stream of qualia.

To summarize:

  • a qualia is a quantum of the stream of conscious experience. We can also call it an “atom of experience”.
  • a qualia is distinguishable from any other qualia. It might be “closer to” or “farther from” another qualia (an “apple” is close to a “pear”, and a bit further from a “plum”), but the fundamental property of any qualia is that we can “see” it is different from any other qualia.
  • a qualia can be recognizable, and during the process of acquiring experiences the vast majority of qualia we experience become recognizable. This means qualia represent the means to encode recognizable past experiences - the basis of learning.

Notes (or noticing?):

  1. I am aware the term qualia is loaded and controversial (scientifically and philosophically), but if I called it a “thing” that would leave out the fact that it is experience-able. A “thing” can be something I haven’t experienced or learned yet. We can consider a qualia a “unique pointer or identifier to something”.
  2. I’m also aware that every moment of conscious experience feels richer, like a set (a multitude) of things experienced at once - I experience the sequence of ideas I write here, the corresponding words in which these ideas are encoded, the room I’m in, the screen on which the words appear, etc… but let’s assume this conscious experience, at every moment, has a “focal point” of a single qualia: the “important” thing. It feels like there is only one “attention focus” which quickly switches sequentially from screen to words, to whatever else seems to be “nearby available” within the “current slice” of conscious experience. But what is important (or seems so to me at this time) is that there exists this one (and no more) attention focus which moves from one qualia to another in a certain order - never two at the same time.
    This is very peculiar considering how massively parallel the neural structure of the brain actually is.
  • So I have to insist on this odd peculiarity. Because on one hand the brain can do many things in parallel, on the other it spends a lot of effort to narrow everything down to a slow, very inefficient experiencing stream for the attention focus.
    How slow? Some studies say a few dozen bits per second; I’d estimate only a few qualia per second - no more than ten (see the back-of-envelope sketch just below this list).
  • Being so inefficient, it has to be important. I speculate here that understanding why it is important will help us figure out what it does and how it does it, and follow on with understanding/modelling a human-like intelligence makeup.
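To make that rate concrete, here is a tiny back-of-envelope sketch in Python. The vocabulary size and the qualia rate below are assumptions picked purely for illustration, not measurements; with them, “a few qualia per second” and “a few dozen bits per second” turn out to be roughly the same claim.

```python
import math

# Back-of-envelope: how much information does a serial stream of qualia carry?
# Both numbers below are illustrative assumptions, not measurements.
vocabulary_size = 1_000_000       # assumed count of distinguishable qualia
qualia_per_second = 3             # assumed rate of the attention focus

bits_per_qualia = math.log2(vocabulary_size)            # ~20 bits to pick out one qualia
bits_per_second = qualia_per_second * bits_per_qualia   # ~60 bits/s, i.e. a few dozen

print(f"{bits_per_qualia:.1f} bits per qualia, ~{bits_per_second:.0f} bits/s")
```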

So please indulge me to follow up on the above ideas, even if this might not be the right place for my ramblings. I do it here because the sense that somebody here might understand or even follow this exploration helps me focus. I have problems with focusing and persistence.


So, am I right in inferring that you suggest that once a qualia is learned enough to be recognized as a distinct experience, it is sort of ‘compressed’ down to the equivalent of just a few bits’ worth of memory?

One important observation is that everything we know that is outside the prior knowledge we had as newborns had to pass, at least once in our lifetime, through the stream of conscious experience.
Walking, talking, eye-hand coordination, reading & writing, typing - everything that I know or can do, I experienced it (== it was passed through the conscious attention focus) sometime in my past.

And to learn something, most of the time I also needed to re-experience it several times.

There are also things which it is sufficient to experience only once in order to later recognize them and distinguish them from other things - a prominent example is human faces. But this might not be entirely true: even when we meet a new person we do not look at their face only once. The attention focus repeatedly turns to the new face again and again.

So one important hypothesis is that new qualia are generated (assembled) by finding sequential patterns in previously experienced fragments of streams of other, already-known qualia.
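As a purely illustrative sketch of that hypothesis (not a claim about the actual mechanism), assume qualia are already reduced to integer IDs: repeatedly find the most frequent adjacent pair in the stream and give that fragment a fresh ID of its own, byte-pair-encoding style.

```python
from collections import Counter

def chunk_stream(stream, next_id, rounds=3):
    """Assemble new qualia IDs from recurring adjacent fragments (BPE-style sketch)."""
    for _ in range(rounds):
        pairs = Counter(zip(stream, stream[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:                    # nothing repeats: no new qualia to assemble
            break
        merged, i = [], 0
        while i < len(stream):
            if i + 1 < len(stream) and (stream[i], stream[i + 1]) == (a, b):
                merged.append(next_id)   # the recurring fragment becomes a new qualia
                i += 2
            else:
                merged.append(stream[i])
                i += 1
        stream, next_id = merged, next_id + 1
    return stream, next_id

# Toy stream of qualia IDs; the recurring (1, 2) fragment earns its own ID (100).
print(chunk_stream([1, 2, 3, 1, 2, 3, 1, 2], next_id=100))
```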

(Re)imagine the process of learning how to ride a bike. If you show a bike to someone who has never seen one, they quickly generate a recognizable qualia for that previously unseen artifact and associate the word “bike” with it, but they wouldn’t know that “riding the bike” is another thing unless they see (== experience the conscious stream of looking at) somebody actually riding the bike.

Once they see somebody riding it, the new “bike” artifact might become interesting, in the sense that they will want to examine it more closely. This translates into the attention focus being moved towards the bike, which produces a new series of experiences: looking at its parts, touching it, sensing its weight and learning its shape. Still one experience after another, but a stream different from any previously known one, until the bike becomes a charted territory. The qualia of the bike becomes richer, in the sense that we learn it can be assembled from/disassembled into constituent parts (fork, handlebars, wheels, pedals, frame, chain, seat, and so on), some of which could be previously unknown and some known - e.g. they might recognize the wheels as a previously known qualia.

There is this intentional exploring, which can be thought of as attention moving around, seeking a sufficiently large number of experiencing paths within the new territory which is the “bike” qualia.

A charted territory, or known thing, is a qualia about which we can make predictions: we can anticipate the experiencing stream when/if the attention focus travels around that thing along a path it hasn’t necessarily traveled before.

When we look at the front wheel we can predict what we’ll experience when the fovea moves towards the “up”, “down” or “back” of the bike.
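As a rough illustration of what “charted territory” could mean computationally, here is a minimal sketch with made-up qualia names and moves: a lookup table from (current qualia, attention move) to the expected next qualia, where a failed lookup marks uncharted territory.

```python
# Hypothetical "charted territory" for a bike: (current qualia, attention move) -> expected qualia.
bike_map = {
    ("front_wheel", "up"):   "handlebars",
    ("front_wheel", "back"): "pedals",
    ("pedals", "back"):      "rear_wheel",
}

def predict(qualia, move):
    """Return the expected next experience, or None if we are off the charted map."""
    return bike_map.get((qualia, move))

print(predict("front_wheel", "up"))    # handlebars: charted, predictable
print(predict("front_wheel", "down"))  # None: novelty, a cue for attention to explore
```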

To summarize here:

  • new qualia are learned through persistent attention moving towards and around “new things”
  • most of the time our experiencing stream travels within charted territory.
  • a characteristic of intelligence is its ability to recognize previously unknown things within the otherwise charted territory we (== our attention focus) travel through.
  • these unknown things tend to drive our attention to explore them, in order to transform them into charted territories.
  • the attention focus is either driven involuntarily or directed intentionally.
  • there has to be some selective bias which marks new things as interesting or not. The more interesting a thing is, the more it is explored by attention.

Ok, I said I would describe how we learn to ride a bike but I described how we learn what a bike is. Learning how to ride it happens in just the same way, except that the sequence of experiences we use to explore and learn the riding consists of the muscle motions we attempt to make. We learn which paths (sequences) of sensing-motion-sensing-sensing-motion-motion are useful and which are not.

One more observation - the focus of attention seems to be movable, just as muscles are.


A side note: the actual number of distinguishable-recognizable things (== qualia) we experience during our lifetime is quite finite. At 5 new qualia experienced every second of our wakeful life there can be no more than a few billion qualia that concern us. In reality the vast majority of our experiences are not new but somehow recognizable, which means a 32-bit integer (or pointer) should be sufficient to compress/encode/represent each qualia, and a lifetime of experiencing could be stored on a single hard drive.
That’s very encouraging from the perspective of an artificial intelligence: even if its internal “brain” encoding the relationships between qualia needs to be sparse and large, the actual awareness stream needed to learn or replay a past experience is quite manageable - a feature human brains do not have available.
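A quick back-of-envelope check of those numbers, with assumed lifespan and wakefulness figures; since most of the stream is repeats, the count of distinct qualia stays well below the total stream length.

```python
# Back-of-envelope with assumed (not measured) numbers.
seconds_awake = 80 * 365 * 16 * 3600               # ~80 years, ~16 wakeful hours per day
qualia_per_second = 5                              # assumed rate of the experiencing stream
stream_length = seconds_awake * qualia_per_second  # total experienced qualia (upper bound on distinct ones)

bytes_per_qualia = 4                               # one 32-bit integer / pointer per qualia
storage_gb = stream_length * bytes_per_qualia / 1e9

print(f"lifetime stream: ~{stream_length / 1e9:.1f} billion qualia, ~{storage_gb:.0f} GB")
# ~8.4 billion qualia, ~34 GB: a whole life of experiencing fits on a single hard drive.
```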


Another note I’ll leave here for future ramblings is that the above was partially inspired by recent developments in transformers and autoencoders.

@Andrew_Stephan I don’t know if it actually happens in our brains, but yeah, that would be a very handy way not only to compress it but also to assemble/index/remember recognizable streams of past experiences.


Yeah, that would be interesting. Perhaps an AI system could keep a running list of ‘experiences/qualia’ had over its lifetime, and a recognized qualia need only be identified by its index?
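Something like this minimal sketch, with hypothetical names: a registry that appends a novel experience to a running list and afterwards identifies it only by its integer index.

```python
class QualiaRegistry:
    """Running list of experiences; a recognized qualia is identified by its index."""
    def __init__(self):
        self.index = {}    # experience key -> integer id
        self.items = []    # integer id -> experience key

    def recognize(self, experience):
        if experience not in self.index:           # novel: append to the running list
            self.index[experience] = len(self.items)
            self.items.append(experience)
        return self.index[experience]              # known: reuse the existing index

reg = QualiaRegistry()
print(reg.recognize("red"), reg.recognize("apple"), reg.recognize("red"))  # 0 1 0
```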

I don’t know, but to me at least it seems this could be a fruitful approach.
What I’ve done here is explore from the inside out. Our conscious/awareness processes are so familiar that we tend not to notice them.
Close your eyes, wait a little, then try to type the letter V on your keyboard. It doesn’t happen instantly - not as instantly as when we begin to type “voluntarily”. Somehow the details of what we learned (or know how to do) too well seem to fade out of our consciously available experience.

Most attempts that I’ve seen towards reproducing intelligence tend to look from the outside - e.g. towards neurons (artificial or not) or Bayesian maths - and shy away from, or even abhor, conscious experience. I’m not saying these are not useful approaches, but they could be incomplete.

While I understand the assertion, you are mixing apples and oranges.
I see a vast difference in how information is coded in the two systems (brain and computer) that makes this comparison somewhat meaningless.

In a computer there is some logical separation between the information represented and the storage location. Much of computer program function turns on the organization of the information and getting the right information to the logical processing unit.

In the brain the location of the information is hard coded. The streams of information follow stereotyped paths and learning is embedded in these pathways. We also learn relations among these data via “side paths” (association fibers and association maps) as we learn the primary qualia.
To the point that we seem to be limited to unitary states when combining or recalling collections of qualia - the maps that hold relations have the same properties as the maps that hold qualia - so the combined action of qualia and the relations between them forms a single unitary state of cooperation: the “content of consciousness.”

Thinking of this unitary state: the subcortical structures may find some aspect of the current content of consciousness as meeting some current need state and add attention to it, adding activation from the frontal lobe. This may involve more than one map getting this attention; for example, “apple” and “approach”. Once these two maps (spatial relations and object stores) are activated, the other connected maps settle into the lowest-energy configuration, forming a new content of consciousness. This new combined qualia is processed through to the temporal lobe for the sub-cortical structures to experience, and a new loop of consciousness starts.

In humans this activation from the frontal lobe is more precise, and we have a much larger frontal lobe than most critters, so our searches tend to be better and more focused.

I agree that we learn based on what we have learned before - we have an extremely efficient delta coding system. We attempt to recall our internal representation to match our perceived external reality. The bits that are novel attract attention and it is the novel bits we add to our internal representation - delta coding.
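As a loose illustration of that delta-coding idea (not a claim about the actual neural mechanism), here is a minimal sketch with made-up feature names: compare the recalled internal representation to the perceived input and keep only the mismatching, novel bits.

```python
def delta(predicted, perceived):
    """Return only the features where perception differs from the recalled prediction."""
    return {k: v for k, v in perceived.items() if predicted.get(k) != v}

internal_model = {"shape": "round", "color": "green", "size": "small"}
perception     = {"shape": "round", "color": "red",   "size": "small"}

novelty = delta(internal_model, perception)   # the novel bits attract attention
internal_model.update(novelty)                # and only they are added - delta coding
print(novelty)                                # {'color': 'red'}
```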

As we learn more about something, the information forms pools of learning at various positions in the perceptual hierarchy and association maps. This novelty, at whatever position in the hierarchy or association maps, automatically puts the learning in the best location to hold it. As we learn more about something, what is novel may shift to new locations in the brain.

I have been posting on this basic system for a long time and there are some related posts that flesh out some of these concepts. Much of it is collected on this thread:

The key post is this one:

Search the forum for “loop of consciousness” to see many of my related posts on this topic.

Oh, and on that perception thing, this post posits the fundamental unit of perception.
I propose that a single thalamocortical resonance cycle in the cortex is the smallest quanta of human perception.

I propose that a single thalamocortical resonance cycle in the episodic portions of the cortex (those connected to the hippocampus) is the smallest quanta of human experience.

@Bitking sure, our brains do not use 32-bit encodings for qualia; what I was trying to say was that since we primarily use computers to model intelligence (we do not have many other tools available), this might be a useful shortcut to “compress” everything in computer terms - a shortcut (or more) that we don’t have available in our brains.

I’ll look into your suggested readings, even though, for me at least, it is mostly uncharted territory.

What I’ll try to emphasize, however, is that somehow “compression happens” to our conscious experience regardless of the underlying structure/encoding generating it.

And who knows - since in many ways computers/GPUs/ASICs are both quite powerful processing tools and very malleable too, sometime, somehow, someone will reproduce mind processes in a “computer friendly representation” more efficient than an exhaustive emulation of brain structures.

Regardless of what an AI will look like, I hope it will resemble our own experience in some way, otherwise it will be hard for us to relate to it, and for it to relate to us, at various levels.

I might be wrong, but I think it is important that we as a species do not wake up with “an alien intelligent something” among us, but with something closer to a “familiar intelligent someone” instead.

Or we design computers to do the brain algorithm more effectively.

It could go either way - but we know that the way the brain does it can make intelligent agents.

We don’t have any examples of von Neumann computers making intelligent agents.

I don’t even know how to define intelligence in a useful, generative, not descriptive-only manner.

OpenAI’s hide and seek agents were able to discover quite clever solutions to achieve their goals. The real drawback is that they needed to play millions of games to reach that performance. For me, any evolutionary system is intelligent, but slow. https://www.youtube.com/watch?v=kopoLzvh5jY


I would explore the whys and hows of a dense encoding (32 to 64 bits long) of what I refer to above as “qualia” in another thread. So, if you indulge me, I will discuss somewhere else the merits of computer representations, which are unfortunately so much unlike biological ones.

SDRs, on the other hand, seem to be a more appropriate encoding of qualia, for several reasons.

One main reason is the “felt sense of proximity” between related qualia. E.g. “orange” is somehow close to, and in between, “yellow” and “red”. If color sensations are encoded as SDRs, then this proximity could be reflected as more overlapping bits between SDR(orange) and either SDR(red) or SDR(yellow) than between SDR(red) and SDR(yellow) themselves.

Furthermore, the whole color spectrum could share a few common bits which would identify them as “visual sensations” and not as something else, e.g. “olfactory” or “emotion”. This could be extrapolated to encode any other domain of similarity, e.g. the similarity between “water” and “air” as both being “fluids”.

Also, SDRs allow various degrees of proximity between different qualia. They allow a gradual and intentional shift between SDR(quale_x) and SDR(quale_y), which could be a basis for exploring potential “new things” out of “known things”.

And one more interesting property of SDRs is that they allow arbitrary overlapping/composing of a few qualia - e.g. SDR(“blood”) could be a simple bit-string overlap of SDR(“red”) with SDR(“water”). A search in a (hypothetical) “space of resemblance” for “what is close to blood?” would point to both “red” and “water”.
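A minimal sketch of these properties, representing toy SDRs as Python sets of active bit positions (real SDRs would be thousands of bits with only a few percent active; every bit choice below is made up purely for illustration).

```python
# Toy SDRs: sets of active bit positions. Bits 0-1 stand for a made-up "visual modality" marker.
red    = {0, 1, 10, 11, 12, 13}
yellow = {0, 1, 20, 21, 22, 23}
orange = {0, 1, 12, 13, 20, 21}   # shares bits with both red and yellow

def overlap(a, b):
    return len(a & b)

# Proximity as overlap: orange is closer to red and to yellow than they are to each other.
print(overlap(orange, red), overlap(orange, yellow), overlap(red, yellow))  # 4 4 2

# Composition: "blood" as the overlap/union of "red" with "water"-like bits.
water = {2, 3, 30, 31, 32}
blood = red | water

# Resemblance search: which known qualia overlap most with "blood"?
known = {"red": red, "yellow": yellow, "water": water}
print(sorted(known, key=lambda name: -overlap(blood, known[name])))  # ['red', 'water', 'yellow']
```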

Note: only now did I learn that “qualia” is a plural and “quale” is its singular.

Sensation modalities are coded by connection.

Sure, but… what does that mean? When you say connections, a question which arises is: connecting what to what else? Have I always been aware of the blue-ness of the “Reply” button just under this text I’m writing right now?
Or are they encoded through the activation of connections, and SDRs are just a convenient way to represent together a set of activated connections (spiking synapses, if you like) at a certain time in a certain place?

The hypothesis I’m tossing out here is

  • there is a certain locus where all “knowable things” (qualia) can become visible.
  • this locus has a means to differentiate between different qualia,
  • but also to sense a resemblance (closeness) between related qualia.
  • and also is very capable of assembling several qualia together into new ones. Think of Sponge Bob Square Pants, who is a dude named Bob who is a sponge and has square pants. Kids can assemble imaginary creatures instantly, eyes closed, without having pre-existing connections to “encode” them. Of course there are lots of previously existing connections which encode “pants”, “person” and “sponge”, but how come, even when I have never seen such a creature, when you show me a cartoon picture without any writing I would know that it has to be the look of the character named “Sponge Bob Square Pants” some kid told me a story about?
    I mean connections are implemented through synapses, and from what I know, synapses can’t pop instantly into existence, while a new idea or concept I learn about can. How is that possible?

Another speculation one can make is that all these mental representations share some common encoding, since

  • all can “fit” the relatively narrow space of conscious awareness.
  • any qualia can be associated with any other, and through the repeated conscious process of “appearing together” some connections between the respective regions do indeed start to form.

My sense is there has to be a means to uniformly encode everything I can be aware of in this space, and SDRs happen to have properties strikingly matching the properties of qualia.


The sensory streams are joined in the association maps. There the SDRs that combine the senses can be formed.

Note that - to my way of understanding - an object is a basket of features, both spatial and temporal.

It makes a great deal of sense that this basket of features can include more than one sensory modality.

The quality processing stays in the stream where it is sensed. The micro-parsing that extracts levels of spatial and temporal information stays in the stream and presents that as the basket of features to be associated.

The counter-flowing streams act to help prime and parse the streams; a key part of prediction.

This model that is formed is the armature that is compared to the sensory stream. Any difference is novelty to trigger learning.

I would think it has to be more complex than a basket of features in order to categorize e.g. a bicycle as being a bicycle:

  • features may have relationships with each other.
  • a peculiar relationship between two features counts as a feature in itself.
  • if we look we can discern many features in a single bike, yet we need to extract a minimum-sized set of relevant ones, for various reasons.
  • by “relevant” I mean:
    – that if we pick an even smaller set - one to three key features - it would be sufficient to categorically classify that thing as unlikely to be anything but a bike.
    – missing a relevant feature - e.g. a front fork - disqualifies an object as being a bike.

E.g. the “pair of wheels” feature narrows the possibilities to a handful of categories.
But “two wheels” could also mean a sulky or a hoverboard, so a refinement would be:
“two wheels, one behind the other” - a positional feature relative to another feature, narrowing the bag of features a bicycle could have down to a set of descriptions with three important characteristics: the set tends to be as short as possible, as selective as possible, and (dis)qualifying - a missing key feature would disqualify any particular object from a category.
Interesting that as kids we like to play with riddles:

“wheel follows the wheel, but no seat for your butts”.

Reminds me of Dileep George’s idea that solving captchas is the key to visual understanding; probably the ability to solve riddles is the key to conceptual, common-sense understanding. And they shouldn’t be much different - visual and conceptual understanding should be the same process applied to different domains.


What is interesting is that a sequence of words can be used both to describe a visual, physical or abstract concept and to evoke a stream of experiences.

And both descriptions and experiences are uni-dimensional walkthroughs within multidimensional territories: “f1, f2, … fn”. Notice how simply a sequence of features can be mapped to a sequence of numbers/names/words/identifiers,
and notice how features can be nodes:
you (your attention) can walk through “f1, f2, f3” but also “f1, f2, f7” - “back wheel, chain, pedals, seat support, seat”, or you could follow “chain, pedals, diagonal brace, fork, front wheel”. “pedals” is an intersection.
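A minimal sketch of that picture, with made-up feature names: features as the nodes of a small graph, and an experience or description as one path the attention walks through it.

```python
# A bike as a small feature graph; the edges are "what attention can step to next".
bike = {
    "back_wheel":     ["chain"],
    "chain":          ["pedals"],
    "pedals":         ["seat_support", "diagonal_brace"],   # "pedals" is an intersection
    "seat_support":   ["seat"],
    "diagonal_brace": ["fork"],
    "fork":           ["front_wheel"],
}

def walk(graph, start, choices):
    """One uni-dimensional walkthrough of a multidimensional territory."""
    path, node = [start], start
    for pick in choices:
        node = graph[node][pick]
        path.append(node)
    return path

print(walk(bike, "back_wheel", [0, 0, 0, 0]))  # back_wheel, chain, pedals, seat_support, seat
print(walk(bike, "chain", [0, 1, 0, 0]))       # chain, pedals, diagonal_brace, fork, front_wheel
```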

We learn a new object through:

  • a sort of “walking” within its “experience-able territory”, stepping from one feature to another.
  • selecting “key features” and short sequences of these (see the sketch after this list), in a group that is
    – minimal (as few as possible),
    – essential (the object is disqualified if an essential feature is missing),
    – selective (as few as possible other objects/concepts share a similar bag of features).
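A minimal sketch of classifying by such a key-feature set, with made-up feature names: the set is small, and a missing key feature disqualifies the object outright.

```python
# Made-up key-feature sets: minimal, and every key feature is required (essential).
KEY_FEATURES = {
    "bicycle":    {"two_wheels_in_line", "pedals", "front_fork"},
    "sulky":      {"two_wheels_side_by_side", "shafts_for_horse"},
    "hoverboard": {"two_wheels_side_by_side", "footboard"},
}

def categorize(observed):
    """Return the categories whose every key feature was observed."""
    return [name for name, keys in KEY_FEATURES.items() if keys <= observed]

seen = {"two_wheels_in_line", "pedals", "front_fork", "chain", "seat"}
print(categorize(seen))                   # ['bicycle']
print(categorize(seen - {"front_fork"}))  # []: a missing key feature disqualifies it
```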

Perhaps when I said “Both spatial and temporal” I should have been more explicit.
Much of the sensory hierarchy is involved with extracting both spatial and temporal features, and yes, part of that spatial thing is the relation between sub-features.
This is the basket of features that are presented to the mid-parietal region to be integrated into a hex-grid pattern that stands for an object. (Or the TBT if you swing that way)

The chain and the wheel levels are each baskets of features, collected together at some level of recognition.

Recognizing (aside from tachistoscope presentation) is a serial process. You see a large, low-resolution, primitive shape. Your subcortex forces your FEF to scan this shape in a stereotyped way and your brain collects these features in a sort of “20 questions” game that feeds your WHAT and WHERE streams to build up recognition. This process is mostly automatic and happens below the level of conscious control.

The build-up of features past this point uses activation of several interlocking maps, like letters in a word. The parts lock in during the serial recognition process like the wheels in an old mechanical slot machine.

If you focus on one sub-set of features (projections from the forebrain adding activation), the rest of the connected maps will settle on the lowest-energy configuration, shifting levels. This evolution of the contents of consciousness can be considered thinking.

This shift of attention is entirely driven by subcortical structures, directing the forebrain to look at things or to drive attention to some features in the mix of perception.

This is a bit more involved than what I expressed as “spatial and temporal features” but answering everything with a wall of text gets to be a bit much. And in case you are wondering - I am working on a much larger version of this answer, somewhere between paper length and book length. It keeps growing as I work on it. Yes, I am all about the macro model.