Language capabilities of humans, according to HTM

Not as an answer, but maybe as a reference frame: The Conscience of Color, from Chemistry to Culture – Brain Pickings

Yeah, I should have clarified: I think things like that wind up getting handled at a much higher level of reasoning, and HTM is insufficient toward that end right now.

I believe that reference frames are relevant to all sensory modalities.

I choose to share your belief – but still I wonder, for example, how we can relate the sense of smell to reference frames. Care to come up with an idea for something like a hypothetical circuit? I am REALLY curious here… :slight_smile:

“Reference frames” conjures up location, and as you pointed out, there are parts of the cortex that process data that is not spatially oriented.
Try “context” for the less spatial parts of the cortex.

3 Likes

Don’t get hung up on the idea that grid cells are only useful for encoding spatial adjacency relationships. As described in the latest Brains@Bay meetup, there is strong evidence that the grid cell mechanisms may also be able to encode (or assist with encoding) arbitrary relationships via learned transitions. It is also fairly likely that there exists a similar mechanism that allows us to encode and recall temporal sequence adjacency. The current TM model does this to some extent for immediate sensory successor/predecessor relationships; however, I’m referring to the mechanism that allows us to project actions and behaviors forwards and backwards in time over much larger increments.
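
To make the “learned transitions” idea a bit more concrete, here is a minimal toy sketch (my own illustration, not Numenta code; the class and method names are made up). States are SDRs represented as sets of active bits, relationships are stored as transitions keyed by a state plus an action/displacement, and chaining transitions lets the model project several steps forwards or backwards – the larger-increment lookahead described above.

```python
class TransitionMemory:
    """Toy store of learned transitions between SDR states (hypothetical)."""

    def __init__(self):
        self.forward = {}   # (state SDR, action) -> next state SDR
        self.backward = {}  # (state SDR, action) -> previous state SDR

    def learn(self, state, action, next_state):
        s, n = frozenset(state), frozenset(next_state)
        self.forward[(s, action)] = n
        self.backward[(n, action)] = s

    def project(self, state, actions):
        """Chain learned transitions to look several steps ahead."""
        s = frozenset(state)
        for a in actions:
            s = self.forward.get((s, a))
            if s is None:
                return None  # no learned transition for this step
        return s

    def project_back(self, state, actions):
        """Chain transitions in reverse to look several steps back."""
        s = frozenset(state)
        for a in reversed(actions):
            s = self.backward.get((s, a))
            if s is None:
                return None
        return s

# Toy usage: "rooms" A, B, C represented by small SDRs, linked by moves.
tm = TransitionMemory()
A, B, C = {1, 5, 9}, {2, 6, 10}, {3, 7, 11}
tm.learn(A, "east", B)
tm.learn(B, "east", C)
print(tm.project(A, ["east", "east"]))       # -> frozenset({3, 7, 11})
print(tm.project_back(C, ["east", "east"]))  # -> frozenset({1, 5, 9})
```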

3 Likes

So you mean to say that the idiom is just the human way of representing abstract/high-level thinking that relates to emotions or sensory inputs, such as the feeling of wetness in the air and memories of a wet day?

If so, it sounds awfully complicated for a simple idiom to carry such diverse abstract meanings, yet it is common enough that most humans understand its underlying meaning.

Most of the processes and complex thoughts we have sit higher up in the hierarchy. Isn’t the aim of HTM to meaningfully replicate those processes and use that in pursuit of its AGI goal?

I think the aim of HTM was to try to replicate much of the cortical-column (CC) network, and then use that to get to the other goals. Considering the number of other brain components involved outside the prefrontal cortex, I don’t think HTM’s explanation of CCs alone gets there. That said, I think reproducing a lot of CC functionality will get us close enough, and in getting to that point we’re going to solve a lot of the language issues (because you’ll have to, to get there – the path is paved with words).

1 Like

That seems to be well said …

As an example, think of how MUCH we humans like to use acronyms… When we come across the word “ASCII” during casual reading, do we stop and think about the specifics like “American, Standard, something…”? Probably not. The concrete meanings represented by “A”, “S”, “C”, “I”, “I” seem to all get bypassed, leaving only the abstract meaning of “a character encoding scheme” in the mind.

2 Likes

Alright then, we will see how it progresses. This is such a basic concept that failure to account for it may be one of the biggest arguments against HTM.

This is such a basic concept

I would put language a lot farther down the line towards the “hard” end of the spectrum, if that’s what you were referring to. Honestly, I think it’s hard parts all the way down, lol.

Great subject and questions, of very fundamental importance to higher reasoning. I just want to add one important point to this thread. TBT and its underlying HTM concepts of the neocortex have a more foundational aim: to explain how our neocortex builds a model of the real world. Without a model of the real world, there is neither purpose nor foundation for language. The semantic elements of language require a model of the world as an anchor before they can move on to convey higher-level ideas, which are usually transitive in nature. At its current stage of investigation, TBT is trying to explain how that model of the world is created and how we disambiguate at different levels in order to eliminate uncertainty and create stable perceptions. Once that is understood, we have a long path ahead to understand other aspects of linguistics, but we will probably have found the set of tools our brain employs for such higher-level processing. I personally am quite sure we will also have to learn a lot more from linguistics – not about specific languages, but toward a better understanding of how language acquires its properties, resolves complex communication, and establishes higher-order relationships between concepts. With a solid understanding of TBT models and disambiguation, plus some additional knowledge from the field of linguistics, we will have a clearer path to follow than is apparent at the moment.

4 Likes

Let me also add a few terms that should outline the roadmap towards understanding natural intelligence. First, we have the “REPRESENTATION” problem for knowledge. That is probably the most fundamental and important problem to be resolved in the puzzle of human intelligence, and it is rightly what we are focusing on. Sparsity and frameworks are key elements of representation, and this community has made very important contributions toward both. Then we have the “BINDING” problem. This requires that we understand how multi-sensory (motor-sensory) perception disambiguates and establishes a stable, consistent and multi-contextual model of reality. We are also making great progress on this front with TBT. Next we will have to confront the field of “SEMANTICS”. Semantics can be broken down into at least two components: the “static meanings” of the “physical world” (objects, mountains, people, places, animals), and the “transitive, changing concepts” involving “actions” between these physical objects, or upon them, or changes in state resulting from other actions. We are “causal” in our way of thought, and we convey causality in our language, probably for evolutionary reasons. Once we have explained how semantics are handled in our brain, the remaining elements of intelligence – logic, reasoning, planning, intentionality, empathy, reflection, etc. – will not be as difficult to understand in terms of neuroanatomy. That is my take on this great challenge, which we also call the “hard problem”.

Please also take note that I have avoided the term “consciousness”, for I am now absolutely convinced that this term has varying definitions for different people. It would be senseless to discuss a term that has differing definitions for the parties involved in the discussion. And to complicate matters even further, “consciousness” also seems to have multiple levels of manifestation. In the medical fields we often start with a multi-level model of consciousness: perception, awareness, self-awareness, understanding, empathy, etc. I am convinced we will be able to explain consciousness very well once we agree on its definition, and perhaps on the levels we are referring to.

4 Likes

Our mind’s model of the world is intrinsically hierarchical, and a large portion of it can be highly abstract and composite.

Could this model have come into existence without (or independent of) language?

In other words, our model of the real world might be intrinsically coupled/entangled with “language”.

For example, names of objects (i.e. Symbols or Labels) might be the most basic elements of a language. Their usage naturally enables the mind to perform abstraction and generalization: upon hearing or seeing the name of a known object, the mind can think of the object (e.g. a coffee cup) without having any sensory input about its concrete properties (color, shape, hardness, smoothness, etc.).

That means that in the neocortex, a very limited number of neurons (representing the abstract concept of a coffee cup) can now fire in isolation, without the visual/physical perception of a real coffee cup and without the related sensory-motor circuitry firing.

So the question is: can TBT successfully explore the ways our neocortex builds a realistic model of the real world without involving language modeling, or at least some modeling of the most basic elements of a language?

For example, in the sensory-motor processing of recognizing a coffee cup as an object, what if the Temporal Pooling layer output a label such as “coffee_cup” to represent it, so that later processing could handle content like “1 coffee_cup on the table” or “2 coffee_cup(s) on the table” compositionally, instead of treating them as two independent representations?
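
A minimal sketch of what that could look like, assuming the temporal-pooling layer emits a reasonably stable SDR while an object is being attended. The `LabelMap` class and its methods are hypothetical, not part of any Numenta API: a label is simply the stored SDR with the highest overlap with the current pooled output.

```python
def overlap(a, b):
    """Number of active bits shared by two SDRs (sets of bit indices)."""
    return len(set(a) & set(b))

class LabelMap:
    """Hypothetical mapping from stable pooled SDRs to symbolic labels."""

    def __init__(self, min_overlap=10):
        self.labels = {}              # label string -> stored SDR (set of bits)
        self.min_overlap = min_overlap

    def assign(self, label, pooled_sdr):
        self.labels[label] = set(pooled_sdr)

    def label_for(self, pooled_sdr):
        # Pick the stored SDR with the best overlap, if it is good enough.
        best = max(self.labels.items(),
                   key=lambda kv: overlap(kv[1], pooled_sdr),
                   default=(None, set()))
        return best[0] if overlap(best[1], pooled_sdr) >= self.min_overlap else None

# Usage: once "coffee_cup" names a pooled representation, downstream processing
# can work with the symbol ("2 coffee_cup(s) on the table") instead of raw SDRs.
labels = LabelMap(min_overlap=3)
labels.assign("coffee_cup", {4, 8, 15, 16, 23, 42})
print(labels.label_for({4, 8, 15, 99, 23}))   # -> "coffee_cup"
```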

Of course this immediately involves hierarchy, which is not currently on the Numenta development roadmap. Could/would/should it be on the radar? Just asking out of curiosity.

1 Like

A human without language is not very human.

1 Like

[quote=“JJC, post:19, topic:8668”]
Our mind’s model of the world is intrinsically hierarchical, and a large portion of it can be highly abstract and composite.

Could this model have come into existence without (or independent of) language?

In other words, our model of the real world might be intrinsically coupled/entangled with “language”.

For example, names of objects (i.e. Symbols or Labels) might be the most basic elements of a language. [/quote]

Sequence memory. Recognising objects gives you the nouns but not the verbs and other parts of speech, and it doesn’t give you a means to utter them. Sequence memory gives you the passage of time (for verbs), grammar (word ordering) and utterances (fine coordination of sequences of motor outputs).

2 Likes

It is so well phrased - I wonder how many researchers working in “knowledge representation” have grasped it… oh well.

Thank you sir for the link – it’s a fascinating read for me.

1 Like

After some reading & pondering, I now see the significance – sequence memory seems to be the key to many things, so thanks for the comment.

Among many questions I have regarding sequence memory, the most immediate one is: what are the existing theories on how the brain encodes sequence memories in STM (short-term memory) vs in LTM (long-term memory)?

Obviously STM/LTM are related (LTM has to come from repeated refreshing of STM, even in dreams).

Yet they are different – there are STMs we routinely wipe out (such as where we parked our car yesterday: we don’t want to permanently keep a parking-location history in mind throughout our lifetime), and there are STMs we struggle to turn into LTM (e.g. when studying something, like HTM or ML in general, we wish we could remember everything we have read and understood).

Is sequence memory achieved by simple & straightforward synaptic plasticity, or does it involve some fancy & heavy numerical calculation over high-dimensional embedding vectors?

I suspect the former, but want to educate myself on the state of the art in this area, so any further comment/insight from you will be truly appreciated.
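
To make my suspicion concrete, here is a toy sketch (purely illustrative, not a claim about the actual biology; the class name and parameters are made up) of what “simple synaptic plasticity” could mean for sequence memory: each time pattern B follows pattern A, the A -> B “synapse” is strengthened a little, a weak decay stands in for forgetting, and recall just follows the strongest learned transition – no embedding vectors or heavy numerics involved.

```python
from collections import defaultdict

class HebbianSequenceMemory:
    """Toy sequence memory: Hebbian-style strengthening of transition weights."""

    def __init__(self, increment=0.1, decay=0.01):
        self.weights = defaultdict(float)   # (prev, next) -> strength
        self.increment = increment
        self.decay = decay

    def observe(self, sequence):
        # Weak global decay stands in for forgetting (STM fading away);
        # repeated rehearsal keeps a sequence strong (LTM-like consolidation).
        for key in list(self.weights):
            self.weights[key] = max(0.0, self.weights[key] - self.decay)
        for prev, nxt in zip(sequence, sequence[1:]):
            self.weights[(prev, nxt)] += self.increment

    def recall(self, start, length):
        # Follow the strongest learned transition at each step.
        out, cur = [start], start
        for _ in range(length - 1):
            candidates = [(w, n) for (p, n), w in self.weights.items()
                          if p == cur and w > 0]
            if not candidates:
                break
            _, cur = max(candidates)
            out.append(cur)
        return out

mem = HebbianSequenceMemory()
for _ in range(5):                            # rehearsal strengthens the chain
    mem.observe(["park", "lot", "level", "3"])
print(mem.recall("park", 4))                  # -> ['park', 'lot', 'level', '3']
```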

2 Likes

I have also pondered this basic question for years and have come to the conclusion that in order to understand our cognitive function (especially in terms of hierarchical layers) we always have to look at the evolution of our species and the evolution of our brains (perhaps the evolution of all mammals as well), because it is quite apparent that our brains have added layers and functions over time. It is therefore my opinion that, when it comes to understanding how our brains build a model of the real world, you have to start before we had language. I am quite convinced that Numenta (and cohorts) have begun in the right place, with HTM. Our models of reality have to begin with a set of what systems engineers would call “primitives”. These are foundational perceptions of directly observable, palpable, and audible objects and environments, or settings. Objects like trees, bushes, rocks, mountains and rivers each have a set of common observable features that allow us to identify them (with high generalization accuracy). So our perceptual disambiguation capabilities for such “primitives” almost certainly developed without the need for language at all. But this fundamental model of reality already has a lot of complexity. We can distinguish objects, their states, their relative positions, their motions (as in clouds or water) or physical behaviors (in the case of animals). We can also recognize some causalities, like the effects of gravity and getting wet when it rains. So a “primitive” cognitive model of our surrounding reality must be possible, independent of language. We even have what is usually termed episodic memory (phrased here by you as sequence memory). Up to this level of perception and cognition, we developed without any language.

But I very emphatically agree with the statement that “what makes us human requires language”. There is no doubt in our archeological and genetic records that our human species began to evolve under new conditions of complex social interaction around 300,000 years ago. Agriculture is probably only 20,000 years old and urbanization did not begin until around 7,000 years ago, but social interdependence in small groups with specialization is much older. A period of 300,000 years has most certainly added layers to our cognitive function, and language also evolved in step with our abilities to abstract and semantically cluster the “primitive” foundational models we had already evolved. I am therefore also quite certain that our current “truly human” models of reality include a set of added cognitive functions that extended our model of reality, perhaps hierarchically, perhaps with multiple branches. Concepts we use refer not only to an object and its behavior (like a dog or a horse) but are extended to include its intentions and its preferences or choices. Such ideas require language. They also come from social interaction and self-reflection. Empathy was probably there in some form, as in dogs today, prior to our acquisition of language. But our ability to make finer distinctions of this kind probably came with language. However, TBT suggests alternatives to strict hierarchies. As I wrote in my in-depth review of “A Thousand Brains”, TBT points out possible mechanisms that rely more on set theory and graph theory for associations than on a strict hierarchy. But that is the subject of research to come. A very exciting journey awaits us on this path.

3 Likes

Such elaborate and compelling discourse! I certainly do not have any objections, yet I cannot help wondering, out of curiosity only – a well-trained dog can understand quite a few words (mostly nouns & verbs, rarely adjectives, certainly no pronouns…), so if we consider that “the use of symbols” in a dog’s mind, though at a very rudimentary level… is there any significant difference between the brains of a wild dog that has never had contact with humans and a well-trained service dog?

(Link credit to Bitking)

Ildefonso (the deaf person who grew up without language) cried in overwhelming joy when he learned (in his 20s) that everything around him has a name (through sign language). On the one hand, this obviously supports your opinion that “a ‘primitive’ cognitive model of our surrounding reality must be possible, independent of language”, as Ildefonso must have established such a model before he learned (sign) language in adulthood.

On the other hand, it also implies that language could be an “add-on” to that primitive model, which, while suggesting that it can come “afterward”, can also suggest that “it is a separate module/model… kind of”.

If so, why not develop a primitive model of symbol usage in TBT/HTM theory, as an “add-on”? “Primitive” implies it might be easy to achieve – just the very basic stuff, like naming objects, like being able to see “one coffee cup” vs “two coffee cups”… which requires assigning symbols to recognized objects, representing count numbers (as capably as a 3-year-old?), and distinguishing STM (the number of cups observed) from LTM (the cup as a learned object represented by a specific SDR or group of SDRs)… it might open a door to wonders.

Just thinking out loud.

2 Likes