I have to agree. Language took advantage of the structure of our brains. Information wants to be free and all that.
What about the use of fire, clothes, and cooking? All of these came hundreds of thousands of years before language.
Don't get me wrong - I believe language is very important for the development of ideas, as are written language, the printing press, cities, and the internet. Language also definitely required some structural changes in the brain. I'm only sceptical about its influence on the basic principles of brain organization.
The evidence on the effects on cognitive behavior when no language is learned is fairly clear and well documented. You don't have to go back in time before the common use of language to see this.
Perhaps I worded this poorly before - but you don't have to have all of the mental tricks we attribute to the human brain hardware to make and use tools; you can be very limited and use tools.
It seems that language adds a powerful layer of features on top of these basic abilities.
Perhaps we should agree on this point (even with some differences in understanding where the correct balance lies).
Besides this, what do you think about linear language construction vs. highly nonlinear models of thought?
I suspect that all language is learned motor behavior.
So far the only real work product from the dad's song group is the general agreement on this model (a toy sketch follows the list):
- we are passively exposed to organized sounds. These are processed and learned in the auditory cortex.
- as we learn these sounds, some are colored with emotional values. In the group we describe one of these sounds as the "sexy dad's song." In general, any sound may be recognized as special. For example, sounds related to feeding could be "more food" sounds.
- the creature learns to control its physical hardware to make sounds. The learning is directed to the somatosensory and motor cortex areas.
- at some point, needs like hunger or hormonal drives cause some sound to become a desired sound, and the creature tries to use its control of the sound-production hardware to produce this learned sound. We point to dad's song in our example.
- as indicated earlier, this could be any valuable sound, like social cues or the naming of desired objects.
- fragments of the self-generated sound are recognized as rewarding and drive further efforts to produce more of the learned sound.
- eventually an entire song is produced and recalled when the related internal drive calls for it.
- in human speech we analyze and store sounds with an efficient layered parsing system that naturally supports segmentation and abstraction. This allows additional flexibility of expression in speech production.
- our coincidence detection circuits promote the pairing of objects or emotions or internal need/satisfaction states to sounds.
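To make the model above concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the sound set, the target song, the reward values); it just shows the loop of babbling, recognizing rewarding fragments of a passively learned song, and gradually reproducing the whole thing:

```python
import random
from collections import defaultdict

# Toy version of the dad's-song model above. Everything is invented for
# illustration: a target song learned passively from exposure, a babbling
# creature, and a reward whenever an emitted fragment matches the song.

SOUNDS = list("abcde")          # sounds the creature can produce
TARGET_SONG = list("abcab")     # song already learned passively ("auditory cortex")

# Learned preference for the next sound given the previously emitted one;
# this stands in for the motor-control associations in the model.
prefs = defaultdict(lambda: {s: 1.0 for s in SOUNDS})

def choose(prev):
    """Pick the next sound, weighted by learned preference."""
    weights = prefs[prev]
    r, acc = random.uniform(0, sum(weights.values())), 0.0
    for s, w in weights.items():
        acc += w
        if r <= acc:
            return s
    return SOUNDS[-1]

def reward_fragments(attempt):
    """Reinforce transitions that reproduce two-sound fragments of the song."""
    for i in range(len(attempt) - 1):
        a, b = attempt[i], attempt[i + 1]
        if any(TARGET_SONG[j:j + 2] == [a, b] for j in range(len(TARGET_SONG) - 1)):
            prefs[a][b] += 0.5   # a recognized fragment is rewarding

for _ in range(2000):           # an internal drive triggers repeated attempts
    attempt, prev = [], None
    for _ in range(len(TARGET_SONG)):
        prev = choose(prev)
        attempt.append(prev)
    reward_fragments(attempt)

# After learning, produce the song greedily from the learned preferences.
song, prev = [], None
for _ in range(len(TARGET_SONG)):
    prev = max(prefs[prev], key=prefs[prev].get)
    song.append(prev)
print("reproduced song:", "".join(song))   # usually prints "abcab"
```

Obviously the brain does nothing this crude; the sketch only shows how fragment-level reward can pull a whole learned sequence out of random motor exploration.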
This conversation made me go off and read about Koko (a lowland gorilla) and Kanzi (a bonobo), two primates who seem to have at least a basic understanding of human vocabulary concepts, and perhaps a passing understanding of grammar (complex language sequences/utterances); Koko doesn't seem to use grammar or any particular syntax, while Kanzi's communication seems to show some understanding of syntax.
Both of these primate species use tools in the wild.
The point this communicates to me is that language use, at least at the higher/abstract level, seems to require a certain level of brain development to be fully implemented. Certain noises in the wild might be driven by instinctual urges (that cultural groups of animals semi-standardize within a region), but higher-level communication of abstract thought probably comes later in the development of animal brains.
I just wanted to say that my thoughts are not linear like my sentences - they have much higher dimensionality and are reduced to sequences only for communication purposes. For me, that's the most direct proof that language is only a reduced version of brain activity, limited by the physical characteristics of the channel we use for communication.
See what I said earlier about motor sequences driving internal connections back to the sensory cortex.
These drives don't have to be grammatically correct speech, or even speech at all, but I am certain that the islands of semantic meaning and abstraction are trained up by speech.
That's how one word is represented in the brain (at one moment in the process of understanding it):
It's from this video https://youtu.be/z6-DLGdXtAQ, which has a lot of other interesting details.
What I see here is the use of many abstract properties that are combined to compose the meaning of the word. I also see pre-language origins of those abstract properties.
Is language important for developing this system? Definitely yes, as are new ways of interacting with the surrounding world and new objects and actions in the environment.
Is the system based on language? I just don't see how that could happen evolutionarily.
I side with you that "language" did not spring up as a complete thing from evolution. This is one of the reasons I question Chomsky.
That said - language engages and organizes structures in ways that don't seem to happen without it.
There are many important "human" abilities that do not emerge without language. Many of these are things that researchers have worked to build into neural net simulations. Note that even in the example you provided there are important groundings in the body systems related to the concepts. Language organizes these into a whole system.
I feel like we need some definitions to make this thread more useful.
I propose:
language = high level abstraction of communication that includes syntax/grammar/time/preposition
communication = intentional production of noise/action to convey a meaning (e.g., a cat meowing to get attention)
A prerequisite of "language" is that there is either an emergent or manufactured structure to it, which is generally agreed upon by a group of creatures that use it. Dialects within a language might otherwise be considered "noise" that doesn't wash out the agreed-upon structure. ML research has shown examples of NNs that can use reinforcement learning to develop their own communication structures for given tasks (such as negotiating/bargaining for prices); a toy sketch of that kind of emergent convention appears further down this post.
Communication = simple flashing of messages, which relies upon shared common sense. For example, between species, puffing up of prey against a predator (elk vs. bear) implies that both creatures share a common sense about pain/injury/risk, as well as a common understanding about what avoids that outcome.
Instinctual noise making/flashing = muscular reflexes driven by hormonal brain firings within a species. Members of the same species may share the same common sense of the associated feeling/stimulus that elicits the instinctual noise/flashing. These noises/flashes are likely not understandable outside of the species/group, as there is no overlap in the common sense.
It seems there are two layers at play here: Common sense and structure.
Having a more highly developed neocortex lends itself to possible emergent structures, such as language. Our old lizard brain has a level of common sense that may overlap with other creatures/species. But overlaps in emergent structures are probably not common, even as observed within the same species (human speakers of the same language miscommunicate all the time, as we've all experienced).
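Here's the toy sketch I mentioned above. It is not any particular published setup; the objects, symbols, and update rule are all invented. It just shows two tabular agents that usually converge on a shared signaling convention purely from reward:

```python
import random

# Toy referential game: a "speaker" and a "listener" start with no shared
# mapping between objects and symbols; reward-driven updates let a common
# signaling convention emerge. Purely illustrative, not any published setup.

N_OBJECTS, N_SYMBOLS = 4, 4
speaker = [[1.0] * N_SYMBOLS for _ in range(N_OBJECTS)]   # weight(symbol | object)
listener = [[1.0] * N_OBJECTS for _ in range(N_SYMBOLS)]  # weight(object | symbol)

def sample(weights):
    """Sample an index proportionally to its weight."""
    r, acc = random.uniform(0, sum(weights)), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

for _ in range(20000):
    obj = random.randrange(N_OBJECTS)   # the world presents an object
    sym = sample(speaker[obj])          # speaker emits a symbol
    guess = sample(listener[sym])       # listener interprets the symbol
    if guess == obj:                    # shared reward on successful communication
        speaker[obj][sym] += 1.0
        listener[sym][guess] += 1.0

# Inspect the emergent convention: most-preferred symbol per object.
for obj in range(N_OBJECTS):
    sym = max(range(N_SYMBOLS), key=lambda s: speaker[obj][s])
    back = max(range(N_OBJECTS), key=lambda o: listener[sym][o])
    print(f"object {obj} -> symbol {sym} -> decoded as object {back}")
```

The "structure" here is trivial (one symbol per object, no syntax), which is exactly my point: shared convention can emerge from reinforcement, but the higher-level grammar layer is something more.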
I dunno. Just some thoughts. Feel free to poke holes
edit:
I'd also add that I think our tendency to anthropomorphize and empathize is also an emergent feature of the structure of our neocortex, but as witnessed by psychopathy, is certainly not guaranteed. Something to keep in mind if developing AI based on our brain structure.
(I have my BA in Chinese Language and Literature. I speak Mandarin as well as passable Spanish... I understand bits of other southern Chinese dialects, some French/Italian/German/Scandinavian dialects, etc... studied Arabic for a year. I have a hobby interest in linguistics in general)
So, language is not a kind of communication, right? :-/
Ok, that's one more attempt to explain my point of view on the topic.
Let's say we are building an AGI. We should define what the primary architecture is and what the emergent features will be.
If we start with language, it's a dead end. That's where LSTMs are now - even providing state-of-the-art results with all the enhancements like attention and bidirectional approaches, they don't (and won't) contain a world model, so they can't support the needed level of context and meaning.
On the other hand, if we start with something that supports a structure of the world, we can always reduce it to a linear representation to support a language. That's why this level should be considered the primary model and language an emergent one.
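A trivial sketch of what I mean by "reduce it to a linear representation" (the entities and relations are invented for the example): the structured model is primary, and the word sequence is derived from it, never the other way around.

```python
# Toy illustration: a structured "world model" (a tiny relation graph) can
# always be flattened into a linear, language-like sequence, but the graph
# is primary and the sequence is derived. Entities/relations are invented.

world_model = {
    "cat":   {"is_a": "animal", "chases": "mouse", "located_in": "kitchen"},
    "mouse": {"is_a": "animal", "eats": "cheese"},
}

def linearize(model, entity):
    """Walk one entity's relations and emit them as a flat word sequence."""
    words = []
    for relation, target in model.get(entity, {}).items():
        words += [entity, relation.replace("_", " "), target, "."]
    return " ".join(words)

print(linearize(world_model, "cat"))
# -> cat is a animal . cat chases mouse . cat located in kitchen .
```

Going the other direction - recovering the full structure from the flattened sequence alone - is the hard part, which is why I don't think the sequence can be the foundation.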
Language is my Python to your Assembly Language... it's a higher-level organization of communication with a syntactic structure (formalized or informally agreed upon by a group of mutual users). Without that lower-level component of simple impressions of input/output, we get into the dead end, as there is no world model encoded in it. There's no "sense" in the language, much less a "common sense". At that point, it's a high-level mathematical construct without an awareness of the lower-level operations or their origin. I think that's why LSTM is currently hitting dead ends. On the other hand, we have some attempts to codify that world model by hand, such as Cyc, that get to the point that they're just too fragile.
Where I think we need to aim is to find a way to enable a machine to self-encode world experience, while providing a structure for higher-level abstract learning. Maybe that would give us AGI.
Myself, I think that HTM provides a potentially promising way to accomplish that as we keep experimenting with distal connections taking input in from various senses, returning output to the world, and perhaps having a system come to the conclusion that it can make decisions that affect its current and future state.
Where tools play into that is an agent learning that it can extend itself through the use of other objects to manipulate the world (i.e. "I can't reach that fruit, but by holding this stick I can."). For this purpose, tool-using agents don't need language, just an awareness/connection that a chain reaction of events (I move this stick there) can lead to a desired outcome (fruit dropping from a tree).
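As an entirely invented illustration of that last point, a tiny search over action chains is enough for a languageless agent to discover that grabbing the stick leads to getting the fruit; the states, actions, and effects below are made up for the example:

```python
from collections import deque

# Toy planner: an agent discovers a chain of actions (grab stick -> swing
# stick -> pick up fruit) that leads to a desired outcome, with no
# linguistic machinery at all. States and effects are invented.

ACTIONS = {
    "reach with arm": lambda s: s,  # arm is too short: changes nothing
    "grab stick":     lambda s: s | {"holding_stick"},
    "swing stick":    lambda s: s | {"fruit_on_ground"} if "holding_stick" in s else s,
    "pick up fruit":  lambda s: s | {"has_fruit"} if "fruit_on_ground" in s else s,
}

def plan(start, goal):
    """Breadth-first search over action chains until the goal is in the state."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, chain = queue.popleft()
        if goal in state:
            return chain
        for name, effect in ACTIONS.items():
            nxt = frozenset(effect(set(state)))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, chain + [name]))
    return None

print(plan(set(), "has_fruit"))
# -> ['grab stick', 'swing stick', 'pick up fruit']
```

The agent only needs to represent states and the consequences of its actions; naming or verbalizing any of it is optional.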
Completely agree