Relevant article by giants of consciousness research Christof Koch and Giulio Tononi: Can We Quantify Machine Consciousness?
Regarding IIT and quantifying consciousness, here is Scott Aaronson’s critique of IIT. He’s had an exchange of posts of sorts with Giulio that should be accessible from the text:
Excellent discussion of consciousness. It’s great to read and try to learn and understand the various theories about the topic. I’m still struggling with the idea of dualism. I don’t believe I’ve ever really thought of my brain’s thoughts as being anything but my brain and body reacting to the “associative learning” that I or my life experiences have programmed in there over time. I see the differences in people’s perceptions of their lives and experiences as simple differences in that “associative learning”.
As someone who was an athlete early in life and now has sleep apnea (with associated sleep-deprivation issues), my own perception of my brain’s “thinking ability” is highly variable from day to day. I can feel it, recognize it, and even plan my technical work versus physical work around how effective my brain is being currently.
I can still use my body’s experienced nervous system to control a shovel, a tractor, or a screwdriver when I am aware that my higher-level cognitive skills are slightly off/slow today. So, I don’t think of my brain/consciousness as something separate from my mind/body. Is this in line with getting away from dualism, or am I just confused?
@t.farley, I agree with your observation as well. Rather than seeing myself as dual-nature, I feel as though I have another “sense” that simply works along with the other five. I am able to use it in conjunction with the other senses in making decisions. Additionally in my experience, my thoughts seem far from being a sensor “in another room”. They are very much connected to things in the physical world. Seeing a cat, for example, triggers complex thoughts of warmth, purring, friendship, etc. The thoughts are very much connected to what I sensed with my eyes. This also matches the actual circuitry – signals coming from external sensors and lower regions in the hierarchy are affecting the same networks of cells as signals from parallel regions. As such, there is a temporal correlation between these signals.
A little bit off-topic because it’s not about consciousness at all, but it does discuss intelligence from an intriguing point of view (especially because it challenges certain assumptions in ways that seem to undermine HTM): https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
One takeaway is that in the quest for AGI we will inevitably stop copying biology and fork from it, creating an extension of our own intelligence instead of a standalone intelligence that we would expect to magically solve our problems.
Even if we did not stop, and we created human-level intelligence, we wouldn’t be able to make it useful, for the same ethical reasons we can’t use human slaves. And instead of having limited control over it, why not just hire humans? It would be much cheaper.
My working theory on consciousness is fairly simple considering how complex the central nervous system is.
Let me set the table on how I see the brain working:
Focusing on the H of HTM. (Hierarchy)
Consider this in a sensory stream general framework of (raw sense) to (collected sense/association) to (name/symbol) to (collected symbols/grammar) to (working memory) to (the emotionally colored here and now) in the tip of the temporal lobe. This is the stream of sensation that you know as your personal experience.
Geographically this same path is V1, various other Vx visual areas, angular gyrus, Wernicke’s area (and a bidirectional connection to Broca’s area to aid in parsing), inferior temporal gyrus, and middle temporal gyrus to the temporal pole. For the purposes of this exposition, I will ignore the sound, touch, proprioception, and vestibular systems, but they do fit well into this general scheme.
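The staged stream described above can be sketched as a simple composition of placeholder transforms. The stage names come from the post, but the functions themselves are purely illustrative stand-ins, not models of the actual cortical areas:

```python
# A toy composition of the stages named above (raw sense -> association ->
# symbol -> grammar -> working memory -> emotional coloring).
# The lambdas are placeholders; only the pipeline structure is the point.

stages = [
    ("raw sense", lambda x: f"edges({x})"),          # V1
    ("association", lambda x: f"object({x})"),       # higher visual areas
    ("symbol", lambda x: f"name({x})"),              # angular gyrus
    ("grammar", lambda x: f"phrase({x})"),           # Wernicke's area
    ("working memory", lambda x: f"context({x})"),   # temporal gyri
    ("here and now", lambda x: f"felt({x})"),        # temporal pole
]

def perceive(stimulus):
    """Push a stimulus through each stage of the stream in order."""
    for _, stage in stages:
        stimulus = stage(stimulus)
    return stimulus

# perceive("cat") nests each stage's transform around the raw stimulus.
```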
A side note: the Retina model from the company Cortical.io corresponds very closely to Broca’s area and Wernicke’s area. Each of the areas mentioned in my exposition could be at about the level of processing complexity described in the Retina model.
The forebrain is simultaneously driven from the body-need sensors in the limbic system, with various projections around the bottom and sides. Yes, as with the sensory streams, there is a similar general hierarchy in the forebrain, running from the frontal pole to a terminus at the motor drivers along the central sulcus.
The temporal pole and forebrain are bi-directionally connected, (need state) in the forebrain to (current sensed state) in the temporal lobe.
As the forebrain bounces through the interaction between need/solution space and sense space (see below), this is unwrapped in the planning parts and, after one or two stages of processing, arrives at the area symbolically superior to Broca’s area. This is the motion-planning part that animates the speech production areas.
Let me expand on this as it is a key concept: At several points, there are connecting loops from the forebrain to points in the sensory stream.
One of these is all-important to the concept of self-awareness: the nerve bundle connecting words to grammar (arcuate fasciculus) is about in the right place to inject forebrain planning as a sensation.
As you form and process plans, these are unwrapped and projected at the appropriate point in the sensory stream, so that you are aware of your own plans as if they were part of your sensed environment; they are injected into the sensory stream and processed as if they were external stimuli.
Think of the implication of this – this sensed dialog is then processed through the same temporal lobe evaluation and presented back to the temporal pole to influence the connection between the temporal lobe and the forebrain, forming a continuous stream of consciousness thought.
Since it is injected at about the level of speech, you can talk to yourself and then evaluate what you said as if it were someone else talking. Just as many people read without verbalizing, you can perceive this sensation without necessarily forming actual words; this does not preclude perceiving this stream as words. Again, just like the reading example, even people who can read without verbalizing can sound out the words if they want to.
This self-awareness loop, from the forebrain’s high-level planning back to the sensory stream and then back through the evaluation/declarative memory area in the temporal lobe, is what I see as the basic mechanism of consciousness.
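As a rough caricature only, the proposed loop can be sketched in a few lines: a stand-in “temporal lobe” evaluates the stream, a stand-in “forebrain” forms a plan, and the plan is appended back onto the stream as if it were a sensation. All names and representations here are invented for the sketch, not actual HTM or neuroscience APIs:

```python
# Minimal sketch of the proposed self-awareness loop, with the "sensory
# stream" represented as a list of tokens. Purely illustrative.

def evaluate(stream):
    """Temporal-lobe stand-in: reduce the sensed stream to a current state."""
    return " ".join(stream[-3:])  # keep only the most recent context

def plan(current_state):
    """Forebrain stand-in: produce a plan from the current sensed state."""
    return f"respond-to({current_state})"

def think(external_input, steps=3):
    stream = list(external_input)
    thoughts = []
    for _ in range(steps):
        state = evaluate(stream)   # temporal lobe evaluates the stream
        p = plan(state)            # forebrain forms a plan
        stream.append(p)           # plan injected back into the sensory
                                   # stream (the arcuate fasciculus role)
        thoughts.append(p)
    return thoughts
```

Each iteration re-senses its own previous plan, so the output of one pass becomes part of the input to the next, which is the “continuous stream of consciousness” structure the post describes.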
Seems to me intelligence refers to potential (i.e. network), while consciousness refers to the actual, ‘simultaneous’, sustained experience of present inputs, past associations (memory), and predictive states.
This discussion is really exciting, but whatever consciousness is, I’m thinking that the relevant questions here are, first, whether consciousness adds to intelligence any critical component and then, if yes, whether it involves “non-SDR-isable” mechanisms.
Referring to my prior response - consciousness is the process of evaluation of the sensory stream and feeding back some level of planning resulting from the evaluation of the current sensory stream into part of the sensed stream, forming a loop of experience. The experience is weighted for relevance and recorded as personal biography.
Attention is an orienting mechanism, and I think it fair to say that this orientation is driven by several weighting factors. Some would be simple cues like motion and some are weighted simple evaluations based on internal need states and pre-programmed primitives at the limbic system level.
Intelligence - the quality of the processing of the sensory stream (internal and external) that ties that stream to prior learning in useful ways. The end result is the selection of the best action for the perceived situation. Some of this quality is speed, some of it is relevance, and some is the ability to search through higher-level connection dimensions to select the best internal elements in mental-manipulation space. I think it is fair to say that the manipulations themselves are stored sequences based on experience and genetic programming; that would be what areas are connected to what, how heavy those connections are, and how fast and broadly the local maps sync to the presented representations.
Two different people may both experience the same thing. They may both orient on the same thing. One may recall more about the thing, suss out its pertinent facts, tie it to relevant actions faster, and finally select better choices based on that perception.
It really could be genetic, as it’s partially based on connections and various parameters of neural network layers, i.e. how strong the training promoters are, how dense the dendritic mats, how robust the connecting loop between two maps, and so on. It’s a massive list of variables, and it would have to affect the functioning of the finished product.
And it must be partially cultural. The type of experience offered, the rituals learned such as study and exploration. The programming of patience and self-control, the learning and honing of skills. The training in good STEM tools. The connections may be there to provide the raw substrate but it has to be filled with something useful to recall as needed.
You have to take the sum of capabilities and programming to evaluate the entire machine.
These internal actions of perception and choice of appropriate action are commonly thought of as being intelligent.
Do you have a link to the discussion of the transforms in abstract space?
I assume this happens just before the motor drivers.
There isn’t really an in-depth discussion on this, but it has been brought up several times in conversations about sensorimotor integration. I don’t have a list of links, though you can probably find them by searching the forum. An example: Why Does the Neocortex Have Layers and Columns, A Theory of Learning the 3D Structure of the World
By mistaking meta-consciousness for consciousness, we create two significant problems: First, we fail to distinguish between conscious processes that lack re-representation and truly unconscious processes. After all, both are equally unreportable to self and others.
There is evidence that a reduction of integrity in the arcuate fasciculus is related to auditory verbal hallucinations in patients with schizophrenia. This seems to be consistent with your theory because (schizophrenic) patients with an arcuate fasciculus that is not able to work at its full capacity, “hear” the voices of other people and have a hard time maintaining a stream of conscious thought.
I think that consciousness as defined here:
does add a critical component to intelligence. Specifically, I think it adds the ability to evaluate and modify one’s own thoughts and behaviors.
I have a hypothesis about consciousness. I came to these ideas while studying the Basal Ganglia (BG) and Reinforcement Learning (RL), so I will first describe how I think the BG & RL works. I’m going to assume you all know how RL works.
The BG is thought to perform RL. The BG is composed of two primary structures: the Striatum and the Globus Pallidus (GP). Every area of the cortex sends axons to a corresponding area of the Striatum. The Striatum sends axons to the GP, which in turn sends axons to the Thalamus.
- The purpose of the Striatum is to find valuable things.
- The purpose of the GP is to weight the valuable things which the Striatum found and determine the net Expected Value.
- The purpose of the Thalamus is to modulate the brain in such a way as to maximize this expected value.
- You see a tasty apple with a small bruise on it. The Cortex outputs an SDR representation of this visual scene.
- Your striatum transforms the visual SDR into an SDR which encodes just the apple and the bruise on it.
- Your GP has associated a (positive) weight with the apple and a (negative) weight with the bruise on it. Your GP outputs the net result.
- Your thalamus directs you to either eat the apple or not.
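Under the assumptions in the post, and treating SDRs as plain Python sets of active cell indices, the apple example can be sketched as a toy pipeline. The cell indices, weights, and function bodies are all invented for illustration; this is not a biological model:

```python
# Toy sketch of the described Striatum -> GP -> Thalamus pipeline,
# with SDRs as sets of active cell indices. Purely illustrative.

def striatum(cortical_sdr, valued_features):
    """Find the valuable things: keep only features the striatum
    has learned to care about."""
    return cortical_sdr & valued_features

def globus_pallidus(striatal_sdr, weights):
    """Weight the valuable things and return the net expected value."""
    return sum(weights.get(cell, 0.0) for cell in striatal_sdr)

def thalamus(expected_value):
    """Gate behavior so as to maximize expected value."""
    return "eat" if expected_value > 0 else "ignore"

# The apple example: cells 1-3 encode "apple", cell 9 encodes "bruise",
# cells 20+ are irrelevant background detail in the visual scene.
visual_sdr = {1, 2, 3, 9, 20, 21}
valued = {1, 2, 3, 9}                          # striatum attends to apple + bruise
weights = {1: 0.4, 2: 0.4, 3: 0.4, 9: -0.5}   # apple positive, bruise negative

striatal = striatum(visual_sdr, valued)       # background cells dropped
value = globus_pallidus(striatal, weights)    # net value is positive
action = thalamus(value)
```

The design choice worth noticing is that the background cells never reach the valuation step: the striatum stand-in filters the cortical SDR down to what matters before any weighting happens, which mirrors the “find valuable things, then weight them” split in the bullets above.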
The Striatum gives feed-forward input to the Prefrontal Cortex (PFC).
Assuming this is true, then the PFC is an HTM which is analysing all of the good and bad things in your world.
While the sensory cortexes see the world as it physically exists, the PFC sees the world as it matters to the animal. The PFC uses as feed-forward input the things which RL has deemed important (this RL process happens in the Striatum).
This seems like a useful capability for any animal to have.
Continuing the Example: When you see an apple, your visual cortex sees a round red object while your PFC sees tasty food.
I’m not sure how this hypothesis leads to “consciousness”, but I can see how it would allow you to be more aware of your emotions. Without this you can only see what physically exists. With this you can see things which don’t physically exist but which are still important, like abstract ideas.
I’m not aware of any single task which can’t be done without a PFC connected like this. The reason is that, given enough tries, RL should figure out most problems, even if the RL is using only the sensory information in the rear half of the brain.
It stands to reason that there exists an area of the PFC which receives feed-forward input from the Striatum and is also valued by that same area of the Striatum. I wonder what’s happening in this area of the brain… In order for this area to reach a steady state of activations, it would need to form a stable feedback loop between the PFC’s analysis of the world and the estimated value of that analysis.
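Purely as an illustration of what a “stable feedback loop” between analysis and valuation could look like, here is a toy fixed-point iteration. The linear update rule is an arbitrary assumption chosen so the loop converges; nothing about it is claimed to be biologically real:

```python
# Toy fixed-point illustration of a PFC <-> Striatum loop: the PFC's
# "analysis" is repeatedly re-valued until activations settle.

def striatum_value(analysis):
    """Hypothetical valuation of the current PFC analysis.
    A contraction (slope < 1), so the loop converges to a fixed point."""
    return 0.5 * analysis + 1.0

def pfc_analysis(value):
    """Hypothetical update of the PFC analysis given its estimated value."""
    return value

state = 0.0
for _ in range(50):
    state = pfc_analysis(striatum_value(state))

# The fixed point of x = 0.5*x + 1 is x = 2, so `state` settles near 2.
```

The point of the sketch: a steady state exists only when the round trip (analysis valued, valuation fed back) is stable, e.g. a contraction; otherwise the activations would drift or oscillate rather than settle.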
Or maybe consciousness is experiencing exactly those things as an illusion, such that what goes on is realized just after it happens, and it feels as if we are doing those things rather than merely realizing we have already done them.
Let’s see if Ms. Blackmore can rearrange some of the furniture in your mind!
See her in motion:
Very interesting talk about the OP:
Christof Koch, Allen Institute for Brain Science
“Intelligence is becoming, consciousness is being”
“Consciousness arises from heterogeneity and integration”
This seems like the perfect place to bring up something that I’ve been trying to understand recently. Maybe someone in here can help me resolve an issue I seem to have.
I’ve been watching lectures given by Don Hoffman, because I find his ideas and theories very interesting. He is so incredibly precise and educated. Obviously, I cannot articulate his theory nearly as well as he or probably others can, so keep in mind that I am likely missing crucial detail for presenting these ideas. So, hopefully we have some folks familiar with his work.
Most scientists, probably including everyone at Numenta or in this community, hold a view of reality Hoffman calls “hybrid realism”. Hybrid realists believe that some perceptions, such as colors, taste, and smells, are not actually a “veridical” aspect of objective reality. We experience them, and we can communicate when we experience them, but they do not exist outside our perception. However, they believe that other perceptions, such as objects, motion, and time, are indeed veridical aspects of objective reality. For instance, he quotes Galileo as saying “I think that tastes, odors, colors, and so on are no more than mere names so far as the object in which we locate them are concerned, and that they reside in consciousness. Hence, if the living creature were removed, all these qualities would be wiped away and annihilated.” So, most scientists go with the idea that, even though certain aspects of our experience don’t actually exist outside our brain’s own construction, we can still perform scientific experiments and trust our senses to give us insight into the veridical structure of reality because space-time and objects exist beyond and independent of perception. Therefore, the human species can develop scientific theories about a material world that roughly reflect objective reality.
Now, according to Hoffman’s theory, called interface theory, NONE of our perceptions are “veridical”, or true, about the nature of reality! Space-time and physical objects do not exist in the “real world”. He utilizes the logic behind the theory of evolution of species to arrive at the conclusion that our perception evolved by effectively hiding the mathematical structure and algorithms of objective reality in order to more efficiently allow us to make decisions based on the fitness value behind the objects of our perceptions.
So, he states that human perception evolved just like every other living organism’s, not because we see the world as it really is, but only because our ancestors survived and reproduced. He has mathematical equations and computational experiments (evolutionary game theory and genetic algorithms) that support the idea that organisms with perceptual systems “tuned to fitness” always outcompete organisms “tuned to truth”, causing the latter to inevitably go extinct. So if both scenarios are plausible, truth will always lose to fitness. That does make sense to me.
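A minimal toy simulation (not Hoffman’s actual model, just an invented illustration of the flavor of the argument): when fitness is non-monotonic in the underlying resource, an agent that perceives only fitness payoffs never does worse per encounter, and usually does better, than one that perceives the resource veridically and always takes more of it:

```python
# Toy "fitness beats truth" illustration. The fitness function and the
# agents are invented for this sketch; only the qualitative conclusion
# (a fitness-tuned perceiver outscores a truth-tuned one) is the point.
import random

def fitness(resource):
    # Payoff peaks at a middling resource level: too little starves you,
    # and too much (say, overripe food) is also bad.
    return max(0.0, 1.0 - abs(resource - 5) / 5)

def truth_agent(options):
    """Sees resource quantities veridically and takes the larger one."""
    return max(options)

def fitness_agent(options):
    """Sees only the fitness payoff and takes the option with more of it."""
    return max(options, key=fitness)

rng = random.Random(0)  # fixed seed for reproducibility
truth_score = fitness_score = 0.0
for _ in range(10_000):
    options = [rng.uniform(0, 10), rng.uniform(0, 10)]
    truth_score += fitness(truth_agent(options))
    fitness_score += fitness(fitness_agent(options))

# fitness_score exceeds truth_score: whenever the larger resource lies
# past the fitness peak, the truth agent picks the worse option.
```

Note this only demonstrates the narrow claim that fitness-tuned choice dominates truth-tuned choice under a non-monotonic payoff; Hoffman’s full argument (evolutionary game theory over competing perceptual strategies) is much stronger than this sketch.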
Now for the issue I’m having, if interface theory is true (the argument makes sense to me), this would mean that our perceptions don’t provide any means of knowing the causes behind them, such as how we, as conscious agents, make decisions and perceive what we perceive. Thus, when neuroscientists study a brain, for example, and they notice things like sparse neuronal activity, laminar structure, dendritic trees, etc., these can only be correlated to mental activity. They are not actually causing intelligence or consciousness to exist. As Hoffman would say, these are only “icons/symbols” in our evolved perceptual interface. Something ontologically different, not neurons, is causing intelligent behavior to occur.
Assuming I even actually understand Hoffman’s interface theory, I feel like it conflicts completely with our scientific endeavor to reverse engineer the brain for principles of intelligence and conscious experience, right? Therefore, it seems like we must hope it’s not true. I’m still trying to reconcile these two things myself.
Maybe someone in the community could help.
If you want a good presentation of his theory, see this video: https://www.youtube.com/watch?v=dqDP34a-epI
I have a working theory that many of the “mind-boggling” theories are put forward in such a way as to sound more profound than they really are.
This comes up a lot in philosophy, social sciences, quantum mechanics, psychology, cosmology, and art.
I have been unable to determine if this is to impress the coeds or to get tenure or to gain notoriety or some combination of these goals.
I am perfectly comfortable with describing all three of Marr’s levels of description and applying them to a variety of systems.
This applies equally to silicon computers or wetware.
When we veer into “hard problems” - well - see my opening statement.