The symbols should be at the same level of representation; I find it difficult to imagine that different levels of abstraction could exist in the same map. I see it more as the map in that area could represent more symbols at the same time. You can keep more than one “chunk” of information in your mind at the same time - this could be the mechanism that makes that possible.
But why do you think that how you feel about something is a different level of abstraction? I think it’s just a different kind of abstraction. Why couldn’t emotional data manufactured in the limbic system be fed into the neocortex as an input stream, just like vision? HTM theory says it doesn’t matter what the data stream is, so emotional patterns could be stored as memories or feature data in exactly the same way as any other input stream.
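To make that idea concrete: in HTM, any scalar signal can be turned into an SDR by a simple scalar encoder, and nothing in the encoding step cares whether the scalar is a pixel intensity or a hypothetical “emotional valence” value. This sketch is illustrative only - the function name, parameters, and the valence signal itself are assumptions, not part of any actual limbic pathway:

```python
def encode_scalar(value, min_val=-1.0, max_val=1.0, n=64, w=8):
    """Map a scalar (here, a hypothetical 'emotional valence' signal)
    onto a fixed-width binary SDR, the same way HTM scalar encoders
    treat any other input stream: w contiguous active bits out of n,
    positioned by the value. Nearby values share bits; distant ones don't."""
    value = max(min_val, min(max_val, value))            # clamp to range
    buckets = n - w + 1                                  # possible start positions
    start = int(round((value - min_val) / (max_val - min_val) * (buckets - 1)))
    return [1 if start <= i < start + w else 0 for i in range(n)]

fear = encode_scalar(-0.8)   # strong negative valence
calm = encode_scalar(0.2)    # mild positive valence
```

Because similar valences produce overlapping SDRs, downstream temporal memory could in principle learn and predict over this stream exactly as it does over vision or audition - which is all the “it doesn’t matter what the stream is” claim requires.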
I was just thinking about it and it seems to me that it could be that emotions don’t really exist? (Just as I believe “thinking” doesn’t either…).
If we’re really honest about it, what passes for thinking is usually going over the “known” over and over again until a new insight “shows up.” It’s not like we do anything physical which directly results in a thought - thoughts just show up.
I think emotions are even more elusive because they are always in the domain of an observer (even if it’s ourselves) - a characterization of physical sensations we have, and of the behavior that results when those physical sensations exert force on us in a particular behavioral direction.
So if emotions are labels we put on behavior resulting from bio-chemical changes which we interpret as emotions - then my question is (assuming what I’m saying is true), why would they (emotions) require any cognitive processes?
Sure, they require cognition in order to “observe” them - but produce them or experience them?
I make this claim because I have been looking at the topology of the connecting fiber bundles.
A map only knows what is projected to it. For example, V1 receives bundles of fibers from the eye, so I don’t expect to see it processing auditory information.
The functions of various areas of the limbic system have been mapped, as have the connections between the limbic system and the cortex. I don’t recall seeing connections between the recognized cortical hubs where grids are formed and the parts of the limbic system thought to form emotional coloring.
These limbic projections target the lower frontal lobe and the temporal pole. These are the areas where, respectively, high-level planning and evaluation of your episodic memories reside.
One thing I would also like to comment on is what appears to be a “collapsing” of scope, or a dismissal of the scale or level at which we are working?
What I mean is this…
From a single input bit to the internal verbalization of a concept or idea, there are probably hundreds or thousands of “layers” of SDRs provoking pathways that result in more SDRs, until distinctions are refined and combined across many senses - later resulting in guttural sounds, then words, then concepts and ideas?
We always talk about cortical processes as if we’re at the very “top” of the assembly, and propose analogies to very complex concepts and abstractions, when those things are probably the result of thousands of combinatorial processes occurring before we arrive at the complex concepts we ascribe to the SDRs we’re speaking of?
For example, we talk about the conceptualization of a “Cup” and its constituent parts, and the SDRs which contribute, through their combination with other feature SDRs, to its formulation as a formal “Cup” concept…
…when really we are at a very, very preliminary “scope” at first, and probably have to deal with things such as what it means to “touch” something, detecting its smoothness and temperature, etc. We probably won’t come to anything that is distinguishable as a “thing” conceptually until HTM Theory is able to formulate thousands of layers of combinations to even arrive at a “word” for something? Maybe?
Edit: My personal opinion is that what we’re actually dealing with is a repeatable cognitive heuristic which will eventually result in an emergent concept after many thousands of prior combinations even wayyyyy before distinctions get characterized in language?
We use analogy to describe complex concepts so that we can “envision” the flow and assembly of units of cognition - but it seems that we forget that we’re not yet at that “scope” yet when we’re actually at a very basic level at the moment?
I just thought it might be useful to point that out?
You are making this too hard. The “hundred step” principle puts a very hard upper limit on how much is actually being done from perception to response. Everything from the formation of a “global workspace” to the initiation of action has to be a fairly short process.
Perhaps very wide - but very short.
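The arithmetic behind the “hundred step” constraint is worth spelling out; the figures below are standard order-of-magnitude estimates, not measurements from this discussion:

```python
# Rough arithmetic behind the "hundred step" constraint:
# a neuron needs on the order of 5 ms to integrate inputs and fire,
# while complex recognition tasks complete in roughly half a second.
neuron_step_ms = 5        # ~ time per serial neural "step" (approximate)
recognition_ms = 500      # ~ human response time for a recognition task
max_serial_steps = recognition_ms // neuron_step_ms
print(max_serial_steps)   # ~100 serial steps - however wide each step may be
```

So whatever the brain is doing between stimulus and response, the serial depth is capped near 100 stages, forcing massive parallelism (“wide but short”).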
But actually I’m not making any statement as to how complex the process is - just that there may be some benefit in being more “attentive” to the rigor with which we talk about the cognitive processes we’re currently working on? Basically that there is a looooong way to go before we even deal with the distinction of “things” because that requires the integration of language - and to me we’re at a much more “preliminary” stage in our development, maybe?
But anyway, I don’t mean to obfuscate the conversation?
From my reading, I find that the neuroscience community knows some things in a continuum from excruciating detail to a “vague fuzzy feeling.”
From fMRI studies, we can tell things like a given word or activity activates a particular region or regions.
From tract studies, we can tell to a high degree what regions are connected to what regions and the relative density of those connections.
Developmental studies have shown how the connections form in the developing animal.
Lesion studies have done much to catalog and localize functions. War injuries, grimly, have extended this knowledge by providing highly detailed focal-lesion case studies.
From microscopic studies, we have some very good ideas about where connections are made in the cortical sheet. For perhaps 20% of the neuron types in this menagerie we have some working theory of what function they provide - many with strong support from “in vivo” studies. These in vivo studies have gained an amazingly powerful tool in light-activation and/or light-emission genetic manipulation (optogenetics).
Psychological studies have done much to give some good “black box” descriptions of the tasks being performed by the brain. Other studies have done much to elaborate the order in which the brain learns and exhibits these behaviors.
Interspecies comparisons add detail on what functions go with what structure and configuration of the structures.
So - how much is “known” depends to a great degree on how much you are willing to dig and integrate. Whether something is a “preliminary” stage or some more advanced stage may depend on your personal journey. Until someone puts it all together “we” may not know how long that journey is.
It never fails to amaze me that once I get some question in my mind regarding neurobiology I look and - lo and behold - someone has been researching it. There it is - laid out in research papers replete with tables, graphs, measurements, and references.
I think that we are in the same place as chemistry was in 1869, when the Russian chemist Dmitri Mendeleev started the development of the periodic table. Once we have the “periodic table of the brain,” everything may fall into a framework that finally makes sense. I predict that we will discover that we really did know the answers - we just did not know how to fit the pieces together.
First off, I just want to say how nice it is to have people with a high level of expertise summarize the current “state of the union” - so for both your contributions and those of other experts in this forum, thank you!
I hope my comments didn’t imply that there aren’t advancements being made or discoveries and accomplishments for which we can be proud? But in reference to this:
…when it comes to being able to take very elemental sensory input and describe thoroughly how particles of cognitive knowledge combine to form concepts, and how language is integrated into that; and how behavior is generated with insights into how meta-cognitive processes are invoked (such as our internal conversation); where in the neocortex this occurs and how elemental knowledge is combined to form complex knowledge and where and how that is happening - I feel (and I always say it is “my” estimation), that we are just beginning our journey.
But yes, I acknowledge that to actually have a handle on what is “known” requires a level of dedicated investigation I am not currently engaged in, nor do I currently have the “acumen” to recognize it even if I came across it!
Nevertheless, I kind of doubt we are at the level where we can use high-level human behavioral and conceptual analogies to analyze the coherence and accuracy of HTM Theory? (i.e., sequential vs. spatial and allocentric vs. egocentric processing are more fundamental than that?)
– which is all I am saying…
Here is a little nugget that may push you along in your journey:
My apologies - I am having a hard time really grappling with the plumbing here, but I very much appreciate the dialogue. I am looking at this from a practical-psychology point of view, and I find your research both fascinating and helpful in forming my thoughts about how to turn this into a theory of how to optimize human learning. I am stuck on this emotion thing because I can see that it is intimately linked to human learning, and I have begun to see some similarities between the general approach to patterning and prediction in the neocortex and patterning and prediction in emotional state.
The following TED talks are well worth a watch in terms of understanding my thinking. I think that perhaps it is difficult to think in terms of emotions because we develop language fairly early on in human development and from that point emotions seem somehow to interfere with logical and analytical thinking, although I think they play a much underrated role in making intuitive leaps and such.
Without getting lost in the plumbing, I guess what I am wondering is whether emotions or emotional state could be patterned in the same way as other data streams to return a predictive emotional state. I ask because I am becoming intensely aware that this seems to be what happens in many of my students and also family members. I believe that emotions are the semantic representations of our interactions with the world before there are words to make these semantic links - in other words, emotions are the way we make meaning out of the world before we develop language as a more precise means to do so.

Consider a zebra escaping death at the hands of a lion. During the event, the zebra is operating on current state compared to last state, and decisions to turn left or right (or whatever else might be an option) are being made lightning fast, with no opportunity to make any kind of meaning out of what’s happening. However, at some point when the zebra reaches relative safety, shouldn’t there be some kind of patterning of the event to help make the correct predictive response (from a behaviour perspective) the next time a lion is encountered? Since the zebra has no word to describe a lion or its surroundings, or anything else to make meaning of the environment, couldn’t a primitive kind of semantic representation of these things be patterned through emotional state at various points during the encounter? Does the zebra feel a kind of exhilaration at having escaped, and would this not attach some kind of semantic meaning to the encounter? Would the zebra feel a kind of fear when next confronted with something that most resembles a lion, which would initiate evasive behaviour? Could the brain create an SDR which is used to compare the current emotional state to the last and to make a predictive emotional response? This would explain a great deal about how and why people fight with one another, especially when the fight seems to make no sense.
I think that emotional response to the environment is patterned and used to make predictions about what our emotional response should be, and the fact that we have not defined emotions in language very effectively means that we are terrible at accurately predicting the emotional state of another - hence misunderstanding, etc. From my own observations, I have seen that a predictive emotional state can actually impair an individual’s ability to decode language, to the point that the words being spoken are in fact irrelevant, even when there can be no ambiguity about their meaning. I see exactly the same thing happen in written communication, when the tone of an email is predictively assumed rather than the words being taken at face value.

As Lisa Barrett points out, emotions are constructed in each individual and are entirely context-based, so the idea that I could accurately predict someone else’s emotional state is a bit of a red herring. Language is also context-based: while we can agree on the meaning of common words because most of us are exposed to them on a regular basis, defining a word from context, or making an inference, is dependent on the level of exposure to language and its patterns (something I was intensely aware of as I tried to wade through some very academic papers on a variety of topics, none of which I am an expert in).

I believe it was Bitking who made the analogy of a really dumb boss with a really smart advisor. I would like to extend the analogy a little further and suggest this is much like the Captain Kirk and Mr. Spock command structure which is so entertaining in Star Trek. We need Captain Kirk to make intuitive and rash decisions when time does not permit a thorough analysis; and yet, if we make it out of this predicament, we will need Mr. Spock to figure out how that happened after the fact and make a prediction about what to do the next time we suspect the Romulans might be lurking.
I think emotions play a pivotal role in human learning because they are the mechanism by which we often decide whether to continue to play with novel information (and thus pattern it) or to avoid it because it might be dangerous or, more often, unpleasant. Any thoughts would be appreciated and I hope my avenue of exploration is not derailing anyone’s thought process.
I don’t see why not. I think emotions contain information, and they are part of the data flow.
I think that you might find this interesting:
I don’t think of emotions as a separate sensory stream like vision or hearing. I think of it more as a “smell” that you learn along with whatever experience you are having, and that flavors that experience. The more profound the experience, the more intense the flavoring and the greater the learning rate of that encounter - for good or bad.
There are good reasons to expect that part of the purpose of the hippocampus is as a buffer for experience to be colored by the outcome of that experience as it is consolidated and transferred to the cortex.
Your learning as you explore your environment builds a vast landscape of emotionally flavored micro-experiences. This is the essence of exploration and play - building this catalog of emotionally tagged experience. This is why we seek novelties - to add to our useful store of knowledge of the world. It’s instinctual and every bit as powerful as other drives such as grooming or seeking shelter.
I am certain that this exploration extends to the social sphere. We are social animals and must learn our place in the leader/follower relationships. I suspect that there is a lot to know about how dating and dancing work in this context. Children can be unspeakably cruel to each other as they sort these things out. All of these experiences are colored with emotion by our innate lizard brain as we learn them.
The “finished” results of your exploration are what you draw upon when you encounter the lion; you use the crystallized sum of all your prior explorations. Assuming you do survive your stressful encounter, whatever you did to survive is stamped hard into your memory - it worked! Such is the fuel of PTSD. If you did not survive, whatever you did was not working and does not need to be remembered.
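The “emotional flavoring raises the learning rate” idea above can be sketched as a toy weight update. This is pure illustration of the poster’s speculation, not any actual HTM or biological mechanism; the function name and the scaling rule are my assumptions:

```python
def update_weight(w, pre, post, emotional_intensity, base_lr=0.01):
    """Hebbian-style update whose learning rate is scaled by the emotional
    'flavor' of the experience - a toy sketch of the idea that intense
    experiences get stamped in harder. All names here are illustrative;
    emotional_intensity >= 0, with 0 meaning a neutral experience."""
    lr = base_lr * (1.0 + emotional_intensity)
    return w + lr * pre * post

# A neutral stroll vs. surviving a lion encounter: same co-activation,
# very different consolidation strength.
neutral = update_weight(0.5, 1.0, 1.0, emotional_intensity=0.0)   # 0.51
trauma  = update_weight(0.5, 1.0, 1.0, emotional_intensity=10.0)  # 0.61
```

One co-activation event under extreme arousal moves the weight as much as eleven neutral ones - a crude but direct reading of “for good or bad,” PTSD included.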
In relation to your example of certain words or ideas freighted with emotion - the tribal experience works to shade these tokens by context. I suspect that at some point this emotional weight overwhelms the meaning of the word to the point where some book definition is meaningless.
I see this a lot on this site. This is the same thing as the “grandmother cell.” What happens if that one cell dies? You would forget whatever that cell knew.
I don’t think that this is how the brain does it. I think that the experience is spread over many SDRs.
I think that the way these things work is to spread a little bit of meaning over a large topographic space. This is sometimes called a “distributed representation.” This takes some getting used to but there are many good reasons to think that this is how the brain does things.
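The practical advantage of spreading meaning over many cells is graceful degradation: kill a fraction of the cells carrying a representation and most of its meaning survives, whereas a grandmother cell is all-or-nothing. A minimal sketch (the 2048/40 dimensions are typical HTM choices, not anything specific to this thread):

```python
import random

random.seed(42)
N, W = 2048, 40                        # typical HTM SDR dimensions
sdr = set(random.sample(range(N), W))  # one memory, spread across 40 cells

# Kill 10% of the active cells at random - the memory degrades gracefully
# instead of vanishing the way a single "grandma cell" would.
survivors = set(random.sample(sorted(sdr), int(W * 0.9)))
overlap = len(sdr & survivors)
print(overlap / W)                     # 0.9 - the representation is mostly intact
```

With a grandmother-cell scheme, losing the one cell means losing 100% of the memory; here, losing 10% of the cells loses roughly 10% of the bits, and overlap-based matching downstream still recognizes the pattern.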
A “classic” exploration of the concept:
Another - perhaps easier to follow:
And yet another:
In my opinion, emotions are labelled onto certain cognitive processes and patterns, and the domain of an emotion expands as the system keeps on observing. The encoding might just be electrochemical processing, but without certain cognitive patterns that define each emotion, the electrochemical staining isn’t of any use. Think about love: there are certain patterns associated with love - certain actions that everyone does for their loved ones. Without the patterns, the emotion won’t evolve, and would probably just be as frustrating as when you cannot express negative emotions such as anger and have to suppress them quickly. Just a speculation, after what I read about action-perception circuits and how the motor cortex is used in creating abstract emotions, cited by @Bitking
Of course we are driven by our emotions - what else do we have to go by? This is the core of psychoanalysis. The growth experience of psychoanalysis is to turn emotional responses that are purely subconscious into emotions that are well connected to, and controllable by, the conscious mind. The saying is “where id was, there ego shall be.” By having a person verbalize and examine their emotions over and over, those emotions move from being a reflex to being a considered and chosen response.
So now I’ve read some Calvin.
I really like the way he writes, and he provides lots of interesting details and stories in each chapter. At the end of the day, however, I’m not convinced of the central thesis about grids.
I’m well aware that my reluctance to accept it may be biased by the fact that such wave-based phenomena, if intelligence turns out to be so inherently based on them, would be inconceivably more difficult to model than simple, almost-amenable-to-sequential layers and areas. Also, let me reiterate that I’m not an expert in any of this, so correct me if I’m misinterpreting something.
To begin with - though this may be something of a personal feeling - after reading “The Cerebral Code” and “How Brains Think” (another nice read, btw), I could not shake the sensation that the darwinian process underlying his model was somehow shoehorned in. Although I was quite intrigued at first by that proposition, by the end of the book his insistence on finding all six darwinian-model ingredients in the brain’s inner workings looked like a hammer in search of a nail - a feeling even conveyed by the choice of the book’s very layout.
Inner-darwinism aside, his reflections on the evolutionary side of the equation were insightful. And I see now why the idea of a neocortex evolving from the need for throwing accuracy matters to you, @bitking. Indeed he seems to have a point here, and the fact that his solution seems to involve his cortical grid concept tickles me.
For this self-reinforcement mechanism to produce wave interference patterns, which he then sparsifies out as “high points,” we would require, in my view:
- First, symmetry between connections to the proximal, “feedforward” part of the neighbouring neurons - which does not seem to be the case, as far as I can tell from my (limited) exposure to cortical-column flow diagrams. Most same-layer lateral input from sensibly close columns seems to involve [edit, sorry] basal dendrites, which according to HTM are best described as producing a modulatory signal, effectively allowing cells that do receive them to prevent cells that don’t from firing, all feedforward being equal.
- Second, we’d need a super-fine regulatory system for it to make any sense (maybe that is what he means by “automatic gain control”? I’m not sure). I mean, cells should fire “whenever” a faint input asks them to, but also should fire “only” as part of the grid when most of their 6 neighbors are also firing?
Also, the 100-step ceiling would seem to be hit quite soon if a pattern carrying semantic value has to:
- settle from unsorted towards captured by an attractor
- propagate along the grid
- be detected as part of a spatiotemporal pattern
I’m also quite uncertain how the maths would work out on the capacity side of the matter. I think I understand SDR statistical properties by now, but how lots of different attractors could share a common lattice I do not intuit well at all.
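For the SDR side of that capacity question, at least, the numbers are easy to check: the count of distinct SDRs is a binomial coefficient, and the chance of a random SDR spuriously overlapping a stored one follows the hypergeometric tail. This says nothing about attractors sharing a lattice - it only quantifies the standard SDR properties mentioned above (2048/40 and a match threshold of 10 are conventional HTM values, chosen here for illustration):

```python
from math import comb

n, w = 2048, 40                 # SDR size and number of active bits
unique = comb(n, w)             # number of distinct SDRs (~1e85)

# Probability that a random SDR overlaps a given stored one in >= theta bits
# (hypergeometric tail: choose b overlapping bits from w, the rest from n - w).
theta = 10
p_match = sum(comb(w, b) * comb(n - w, w - b)
              for b in range(theta, w + 1)) / comb(n, w)
print(f"{unique:.3e} unique codes, false-match probability ~ {p_match:.3e}")
```

The astronomically low false-match rate is what makes sparse distributed codes robust; whether an interference lattice can tap that capacity is exactly the open question.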
Besides, the need for ridges and small passages and such for the diverging copies required for “speciation” left me a little puzzled, as did the long-range messaging operating on similar copies. In other words, my curiosity about the “H” was not really satisfied here.
All in all, thanks for those links, @bitking. As I said, even with such reserves there were still lots and lots of interesting details and things worth thinking about in those books.
Ah - also, something fun occurred to me here while reading the first chapter again:
One technological analogy is the hologram, but the brain seems unlikely to utilize phase information in the same way
or does it ? Grid cell modules and their “overlap” anyone ?