Bob: " I’m interested to know how the information is embodied in an SDR. What is it about the brain state that generates the ‘redness’, what is it that constitutes the mental content of a mental state?"
Welcome to my world. I am working to elucidate that exact thing.
I am making slow progress but as far as I can see - nobody is further along than I am.
If you can recall a key lawn party scene in “The Graduate,” my whispered word of advice is “grids!”
Yes, Jeff’s goal is to create / recreate / explain intelligence. And an essential part of that process is clarifying what is, and is not, prewired vs. learned through HTM. So I’d say you are still on topic.
But I do think survival mechanisms are behaviours or patterns of activity, modes of response, etc., whereas intelligence partly consists of modelling those, i.e. making models / maps of escape routes, or of responding to a flood of testosterone. (May I indulge and list the 4 Fs: fleeing, freezing, fighting and …f… fff… fornicating!)
But a crucial aspect of intelligence is the way we instantiate the meanings that we intelligently model and manipulate. I’d say that the essential - and completely unknown - aspect of intelligence is the nature of the information which we use to think with. Look at all those computational models of mind. There is no end of AI programs in which semantically relevant symbols are manipulated with the most clever mathematical techniques, e.g. Ogma’s use of Takens’ theorem. But no one seems to consider what it is one is manipulating. Jeff’s goal is to use the brain as a model to make an intelligent system. His approach is based not on a mathematical theory but on how neurons fire. Great. But what are those neurons manipulating? Because it’s not symbols! It’s not blue squares. And it’s not spikes [unless they are in Morse code]. So what is the ‘stuff’, the essence, of the meanings which HTM is manipulating? We need to know that to know how to make an intelligent system. And that, whatever it is, is prewired in us.
So even if I don’t have any answers, at least I can claim that we are still on topic.
I can’t speak for Jeff, but personally, if I were trying to understand how intelligence works with the neocortex as a model, I’d probably not be all that concerned with how something like a particular color, smell, etc. is encoded (considering that such things are handled by older structures of the brain). Those become relevant really only if I need to create a vision or odor encoder, but they shouldn’t have much impact on understanding the “universal intelligence circuit” that seems to be implemented in the neocortex and is apparently common to all modalities and levels of abstraction.
Nope. {After allowing for genetic errors, glaucoma, etc.} we all see the exact same thing when we are shown light at a wavelength of 700 nm. It’s a common mistake to confuse the high-level, more abstract products of HTM [such as models of coffee cups, or one’s political preferences and the associations they arouse] with simple stimuli - colours, sounds, scents… In a project such as seeking to understand intelligence, it’s quite important that we separate the sheep from the goats and strive to maintain clarity of thought in such matters.
Have you seen that Chinese film, Three Kingdoms / Red Cliff?
I’m not as sure as you are. I suspect that if one uses (mathematical) symbols to model data then one is limited to computations, and thus limited to building an AI. But if one can use what brains use - the meaning itself - then one can build something that would actually think: an Artificial General Intelligence. How does one build an AGI? I’m here because I think HTM is the best approach, the application that comes closest to answering that $Tn question.
I simply equate “meaning itself” with “semantics”, and that is the domain of encoders. I think that concept is understood pretty well at this point and is now the domain of the encoder writers. I may be wrong, but IMO new/better encoders are not going to make the cortical circuitry more intelligent, just usable in more diverse applications.
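To make that working definition concrete, here is a minimal sketch in plain Python (the names and numbers are mine, purely illustrative, not from any HTM library): in SDR terms, two representations share semantic meaning to the degree that their active bits overlap.

```python
# Minimal sketch: semantic similarity between SDRs as bit overlap.
# SDRs are modeled here as sets of active bit indices.

def overlap(sdr_a, sdr_b):
    """Count of active bits shared by two SDRs."""
    return len(sdr_a & sdr_b)

# Two hypothetical encodings of similar inputs share many active bits...
red    = {3, 17, 42, 99, 150, 203}
orange = {3, 17, 42, 87, 150, 311}
# ...while an unrelated input shares almost none.
c_note = {8, 64, 120, 256, 400, 512}

print(overlap(red, orange))  # 4 -> semantically close
print(overlap(red, c_note))  # 0 -> semantically unrelated
```

The encoder’s whole job, on this view, is to arrange that semantically similar inputs land on overlapping bit sets; the circuitry downstream only ever sees the bits.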
I am very aware of the attempts to use text or some other symbol as the basis of an AI. None of these have been what I would call successful.
I am also aware of the various black box attempts to explain the brain from a philosophical point of view with introspective things like qualia. I am equally unimpressed with the success from these efforts.
As is often the case, I turn back to the brain to see how it’s doing it. At the highest level, where the cortex communicates with the limbic system, we find grids.
Looking further, I find that Calvin has been there before me and has worked out many of the nitty-gritty details of how these grids can work to produce much of what has been observed in the brain. His terminology and explanations need to be adjusted to match research done since his work, but that is a minor issue - the foundation seems solid. http://williamcalvin.com/bk9/
Thinking about grids? Look at this and see if you don’t recognize the same pattern: http://williamcalvin.com/socns94.html
I am working to unify his work with what has been done in the last 20 years. It’s not quite to the goosebumps level yet, but I am getting really close to fitting all the constraints I know must be met to explain the known facts.
You are free to follow whatever path you think will lead to enlightenment - if this is yet another attempt to explain qualia then so be it. Let me know how it works out for you.
Are you implying that just the neurons representing reddish in the cortical layers differ, or that the input patterns coming from the eyes pertaining to reddish also differ? If the former, then that shouldn’t make a difference in the perception, and hence…
Given that what the cortical circuitry mainly does is learn sequence memory, the details of the patterns being learned should make a difference in the quality of inference and the “intelligence”.
Regarding the discussion about the importance of prewiring: it might not be necessary for intelligence, but in our case we are particularly intelligent in our way because of how information is encoded and distributed throughout the cortical regions, and because of how this information is relayed within the levels of the hierarchy and across the entire brain. The pathways are also important.
Agreed. Keep in mind that there are virtually an infinite number of variations for how one might encode exactly the same semantic meaning in an SDR. Better semantics (whatever the specific chosen bits happen to be) will definitely lead to more intelligent behavior (that is what I meant by “usable in more diverse applications”).
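One way to see the “virtually infinite variations” point in code: any fixed permutation of the bit positions yields a different-looking encoder that preserves every pairwise overlap, so the semantics are untouched. A quick sketch (illustrative, not from any particular library):

```python
# Sketch: a fixed, arbitrary relabeling of bit positions gives a
# "different" encoding with identical semantics, since every pairwise
# overlap survives the relabeling.
import random

N = 1024  # hypothetical SDR width
perm = list(range(N))
random.Random(0).shuffle(perm)  # one fixed permutation of bit positions

def relabel(sdr):
    """Map each active bit through the fixed permutation."""
    return {perm[i] for i in sdr}

a = {3, 17, 42, 99, 150, 203}
b = {3, 17, 42, 87, 150, 311}

# The specific bits change, but the overlap (the semantics) does not.
assert len(a & b) == len(relabel(a) & relabel(b))
```

Since there are N! such permutations, the specific chosen bits are arbitrary; only the overlap structure carries meaning.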
My point, however, is that coming up with better encoders is really a separate process to understand than the one for understanding how the “universal intelligence circuit” deals with the encodings after they have been generated.
I could of course be proven wrong – it may very well turn out that a tight bi-directional integration between the two processes is essential. We will have to wait and see how the theory evolves.
Another newbie question I was asked recently:
Paraphrasing: " You keep going on about the WHAT and WHERE streams. What is all this?"
Going up the progression from small to large …
synapse/dendrite/cell body/axon. (single cell)
clusters of cells and local inhibition. (column)
groups of columns/long range (0.5 mm) reciprocal connections/long range inhibition firing. (single cluster of active grid cells)
Assembly of 3 or more clusters firing. (a pattern of activation on a single map)
Projection of a pattern of activation to a distant map, probably with a reciprocal connection from the distant map. (multi-map spreading pattern of activation)
a brain-wide pattern of activation. (Global Workspace)
In these large-scale patterns, there are definite pathways that are well-defined streams.
Both the auditory and visual sensory input streams are parsed into (at least) two separate paths.
WHAT a thing is gets parsed in the WHAT stream, and the WHERE stream is all about location in space.
Motion is in the WHERE stream, and all semantic information is in the WHAT stream.
I say that both are processed by Calvin Hex/Grid activation patterns. Finding the grid pattern in the spatial part of the entorhinal cortex was just a happy accident because it correlated with the critter wandering around in its cage. I have not seen anyone looking for this in the WHAT stream yet. From the work of Calvin, I fully expect to find the same thing going on in the association areas.
This is a promising paper on how the streams come back together. (WHAT*WHERE stream)
Note the progression of how the maps are populated as learning progresses. I am also VERY interested in the breakout of a separate temporal learning mechanism for the counter-flow from the frontal lobe in this model. It solves some very knotty theoretical problems and fills in some blanks in how the limbic system tames the cortex.
I am not sure about intelligent behavior, but I argue that the quality and type of encoding will also improve the system’s capability to infer (i.e., to be intelligent).
Thank you, everyone, for your answers. I have one more simple question.
AFAIK Numenta believes that we learn mostly by growing new synapses:
“We also discovered why learning in the brain is achieved primarily by forming new synapses. This is a more powerful form of learning than modifying existing connections as practiced in deep learning.” https://www.experfy.com/blog/the-secret-to-strong-ai
On the other hand, STDP is about changing the existing connections:
Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain
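For concreteness, the classic pair-based STDP rule fits in a few lines. A minimal sketch (the amplitudes and time constants below are illustrative placeholders, not fitted values):

```python
# Sketch of the classic pair-based STDP weight update.
import math

A_PLUS, A_MINUS = 0.01, 0.012     # learning amplitudes (assumed values)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms (assumed)

def stdp_dw(dt_ms):
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre. Pre-before-post (dt > 0) potentiates;
    post-before-pre (dt < 0) depresses. Note that either way the rule
    only rescales an *existing* connection; it never creates one.
    """
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

print(stdp_dw(+5.0))  # small potentiation
print(stdp_dw(-5.0))  # small depression
```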
Does it mean that STDP is not the main learning algorithm of the brain? (because it applies only to existing connections)
It takes a long time to grow a new synapse, doesn’t it? Does it mean that I can’t, for example, actually learn a new word quickly?
Other models start by connecting “everything” to “everything” with very weak initial connections and then come back and strengthen those connections as learning proceeds. This is about the same thing as adding a connection during learning in the HTM model.
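A minimal sketch of how HTM-style models typically capture this (the names and constants are mine, purely illustrative; real implementations differ in detail): every potential synapse carries a “permanence” scalar, and crossing a threshold is what “growing a new synapse” amounts to.

```python
# Sketch of HTM-style permanence learning (illustrative, simplified).
# A synapse "exists" only once its permanence crosses a threshold, so
# nudging permanences up and down models growing/removing synapses.

CONNECTED_THRESHOLD = 0.5
INC, DEC = 0.05, 0.03  # illustrative learning increments

def learn(permanences, active_inputs):
    """Reinforce potential synapses to active inputs; decay the rest."""
    for i in permanences:
        if i in active_inputs:
            permanences[i] = min(1.0, permanences[i] + INC)
        else:
            permanences[i] = max(0.0, permanences[i] - DEC)

def connected(permanences):
    """Synapses whose permanence has crossed the threshold."""
    return {i for i, p in permanences.items() if p >= CONNECTED_THRESHOLD}

# Potential pool: everything starts weakly "wired", as described above.
perms = {i: 0.45 for i in range(8)}
for _ in range(3):
    learn(perms, active_inputs={1, 4, 6})

print(connected(perms))  # {1, 4, 6} -- these synapses have "grown"
```

Note that a synapse already sitting just below the threshold can become connected in one or two steps, which is one answer to the “can I learn a new word fast?” worry.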
Going to the question you did not ask: how, then, does HTM model analog values? We decompose the item into micro-features that add up to an analog value. In HTM parlance, we do this with an encoder.
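A minimal sketch of that idea (a toy scalar encoder of my own, not the NuPIC one; all parameters are illustrative): an analog value becomes a block of active micro-feature bits, so nearby values share bits.

```python
# Toy scalar encoder sketch: an analog value maps to a contiguous run
# of active bits ("micro-features"); nearby values overlap.

def encode_scalar(value, v_min=0.0, v_max=100.0, n_bits=120, n_active=21):
    """Return the set of active bit indices for a clamped analog value."""
    value = max(v_min, min(v_max, value))
    span = n_bits - n_active
    start = int(round(span * (value - v_min) / (v_max - v_min)))
    return set(range(start, start + n_active))

a, b, c = encode_scalar(50.0), encode_scalar(52.0), encode_scalar(90.0)
print(len(a & b))  # large overlap: 50 and 52 share most micro-features
print(len(a & c))  # zero overlap: 50 and 90 are semantically distant
```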
I think what you are getting at is the limbic interaction with the neocortex. I’m no neuroscientist, but I have a hunch that the limbic system contains a poor man’s version of the neocortex that processes emotional memory from the hypothalamus. I expect there are emotional SDRs contained in the limbic system which project into the neocortex to form what people call emotional intelligence. This emotional data-about-data is likely what you are describing when you talk about how a colour feels or the warmth of a note.

Language emerges as a foregone conclusion as the need arises to convey information. If you think of the behaviour of more primitive animals as a primitive form of language, and then add in the idea that emotions drive behaviour, then I think it’s clear why emotions are so important. Our neocortex needs them because it evolved with them and initially learns from them, by relay through the limbic system. Of course, it’s just me spitballing here. I mean, I can’t prove it… but whether I’m right or wrong, could someone help me out so I can update my SDRs?