Language vs tools

I have to agree. Language took advantage of the structure of our brains. Information wants to be free and all that.

4 Likes

What about using fire, clothes, and cooking? All of these came hundreds of thousands of years before language.

Don’t get me wrong - I believe language is very important for the development of ideas, as are written language, the printing press, cities, and the internet. Language also definitely required some structural changes in the brain. I’m only sceptical about its influence on the basic principles of brain organization.

The evidence on the effects on cognitive behavior when no language is learned is fairly clear and well documented. You don’t have to go back in time before the common use of language to see this.

Perhaps I worded this poorly before - but you don’t have to have all of the mental tricks we attribute to the human brain hardware to make and use tools; you can be very limited and still use tools.

It seems that language adds a powerful layer of features on top of these basic abilities.

Perhaps we should agree on this point (even with some differences in understanding of where the correct balance lies :slight_smile:).

Besides this, what do you think about linear language construction vs. highly nonlinear models of thought?

I suspect that all language is learned motor behavior.

So far the only real work product from the dad’s song group is the general agreement on this model:

  1. We are passively exposed to organized sounds. These are processed and learned in the auditory cortex.
  2. As we learn these sounds, some are colored with emotional values. In the group we describe one of these sounds as the “sexy dad’s song.” In general, any sound may be recognized as special; for example, sounds related to feeding could be “more food” sounds.
  3. The creature learns to control its physical hardware to make sounds. This learning is directed to the somatosensory and motor cortex areas.
  4. At some point, needs - like hunger or hormonal drives - cause some sound to become a desired sound, and the creature tries to use its control of the sound-production hardware to produce this learned sound. We point to the dad’s song in our example.
  5. As indicated earlier, this could be any valuable sound, such as social cues or the naming of desired objects.
  6. Fragments of the self-generated sound are recognized as rewarding and drive further efforts to produce more of the learned sound.
  7. Eventually an entire song is produced and recalled when the related internal drive calls for it (see the toy sketch of steps 4-7 after this list).
  8. In human speech we analyze and store sounds with an efficient layered parsing system that naturally supports segmentation and abstraction. This allows additional flexibility of expression in speech production.
  9. Our coincidence-detection circuits promote the pairing of objects, emotions, or internal need/satisfaction states with sounds.
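As a minimal toy sketch of steps 4-7 (the syllable set, reward rule, and update rule are all invented for illustration, not a claim about the actual neural mechanism): a learner that already holds a stored target “song” babbles, is rewarded whenever a produced fragment matches the memory, and gradually converges on reproducing the whole sequence.

```python
import random

SYLLABLES = ["da", "di", "do", "ma", "mi"]      # hypothetical motor repertoire
TARGET_SONG = ["da", "di", "da", "do"]          # the stored "dad's song" memory

# Preference weights for producing each syllable at each position in the song.
weights = [{s: 1.0 for s in SYLLABLES} for _ in TARGET_SONG]

def babble():
    """Produce an attempt by sampling each position according to current weights."""
    return [random.choices(list(pos.keys()), weights=list(pos.values()))[0]
            for pos in weights]

for _ in range(2000):
    attempt = babble()
    # Step 6: fragments that match the stored memory are rewarding and
    # strengthen the habit of producing them in that position (step 7 emerges).
    for i, (made, wanted) in enumerate(zip(attempt, TARGET_SONG)):
        if made == wanted:
            weights[i][made] += 0.5

print(babble())   # after training this usually reproduces the target song
```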

This conversation made me go off and read about Koko (a lowland gorilla) and Kanzi (a bonobo), two primates who seem to have at least a basic understanding of human vocabulary concepts, and perhaps a passing understanding of grammar (complex language sequences/utterances); Koko doesn’t seem to use grammar or any particular syntax, while Kanzi’s communication seems to show some understanding of syntax.

Both of these primate species use tools in the wild.

The point this communicates to me is that language use, at least at the higher/abstract level, seems to require a certain level of brain development to be fully implemented. Certain noises in the wild might be driven by instinctual urges (that cultural groups of animals semi-standardize within a region), but higher-level communication of abstract thought probably comes later in the development of animal brains.


I just wanted to say that my thoughts are not linear like my sentences - they have much higher dimensionality and are reduced to sequences only for communication purposes. For me it’s the most direct proof that language is only a reduced version of brain activity, limited by the physical characteristics of the channel we use for communication.

See what I said earlier about motor sequences driving internal connections back to the sensory cortex.

These drives don’t have to be grammatically correct speech, or even speech at all, but I am certain that the islands of semantic meaning and abstraction are trained up by speech.

This is how one word is represented in the brain (at one moment in the process of understanding it):


It’s from this video https://youtu.be/z6-DLGdXtAQ which has a lot of other interesting details.

What I see here is many abstract properties being combined (convolved) to compose the meaning of the word. I also see pre-language origins of those abstract properties.
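To make that concrete, here is a minimal sketch, assuming hypothetical grounded properties and made-up numbers: the momentary representation of a word is a weighted blend of many abstract, pre-language feature vectors, with the weights set by the current context.

```python
import numpy as np

# Hypothetical pre-language property vectors (e.g. learned from sensorimotor
# experience long before any word is attached to them). Values are invented.
properties = {
    "graspable": np.array([1.0, 0.2, 0.0, 0.1]),
    "edible":    np.array([0.0, 1.0, 0.3, 0.0]),
    "round":     np.array([0.2, 0.1, 1.0, 0.0]),
    "sweet":     np.array([0.0, 0.6, 0.1, 1.0]),
}

# Context-dependent activation of each property at one moment of understanding
# the word "apple" (again, numbers are purely illustrative).
activation = {"graspable": 0.7, "edible": 0.9, "round": 0.8, "sweet": 0.5}

# The word's momentary meaning is a blend of the active properties.
meaning = sum(a * properties[name] for name, a in activation.items())
print(meaning)
```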

Is language important for developing this system? Definitely yes, as are new ways of interacting with the world around us and new objects and actions in the environment.
Is the system based on language? I just don’t see how that could happen evolutionarily.

2 Likes

I side with you that “language” did not spring up as a complete thing from evolution. This is one of the reasons I question Chomsky.

That said - language engages and organizes structures in ways that don’t seem to happen without it.

There are many important “human” abilities that do not emerge without language. Many of these are things that researchers have worked to build into neural net simulations. Note that even in the example you provided there are important groundings in the body systems related to the concepts. Language organizes these into a whole system.

2 Likes

I feel like we need some definitions to make this thread more useful.

I propose:
language = a high-level abstraction of communication that includes syntax/grammar/time/prepositions
communication = intentional production of noise/action to convey a meaning (e.g. a cat meowing to get attention)

A prerequisite of “language” is that there is either an emergent or manufactured structure to it, which is generally agreed upon by the group of creatures that use it. Dialects within a language might otherwise be considered “noise” that doesn’t wash out the agreed-upon structure. ML research has shown examples of NNs that can use reinforcement learning to develop their own communication structures for given tasks (such as negotiating/bargaining over prices).

Communication = simple flashing of messages, which relies upon shared common sense. For example, between species, puffing up of prey against a predator (elk vs. bear) implies that both creatures share a common sense about pain/injury/risk, as well as a common understanding about what avoids that outcome.

Instinctual noise making/flashing = muscular reflexes driven by hormonal brain firings within a species. Members of that species may share the same common sense of the associated feeling/stimulus that elicits the instinctual noise/flashing. These noises/flashes are likely not understandable outside of the species/group, as there is no overlap in the common sense.

It seems there are two layers at play here: Common sense and structure.

Having a more highly developed neocortex lends itself to possible emergent structures, such as language. Our old lizard brain has a level of common sense that may overlap with other creatures/species. But overlaps in emergent structures are probably not common, even as observed within the same species (human speakers of the same language miscommunicate all the time, as we’ve all experienced).

I dunno. Just some thoughts. Feel free to poke holes :slight_smile:

edit:
I’d also add that I think our tendency to anthropomorphize and empathize is also an emergent feature of the structure of our neocortex, but as witnessed by psychopathy, it is certainly not guaranteed. Something to keep in mind if developing AI based on our brain structure.

(I have my BA in Chinese Language and Literature. I speak Mandarin as well as passable Spanish; I understand bits of other southern Chinese dialects, some French/Italian/German/Scandinavian dialects, etc.; and I studied Arabic for a year. I have a hobby interest in linguistics in general.)

2 Likes

So, language is not a kind of communication, right? :-/

OK, here’s one more attempt to explain my point of view on the topic.

Let’s say we are building an AGI. We should define what the primary architecture is, and what will be emergent features.

If we start with language, it’s a dead end. That’s where the LSTM is now: even while providing state-of-the-art results with all the enhancements like attention and bidirectional processing, it doesn’t (and won’t) contain a world model, so it can’t support the needed level of context and meaning.

On the other hand, if we start with something that supports a structure of the world, we can always reduce it to a linear representation to support a language. That’s why this level should be considered the primary model and language an emergent one.

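As a rough sketch of that reduction (the toy world model and the flattening rule are assumptions for illustration, not a proposed architecture): a structured, graph-like model can always be walked and flattened into a word sequence, and much of its structure is lost in the process.

```python
# A tiny relational world model: (subject, relation, object) triples.
world_model = {
    ("cat", "on", "mat"),
    ("cat", "wants", "food"),
    ("mat", "near", "door"),
}

def linearize(model, topic):
    """Reduce the relations involving one topic to a linear word sequence."""
    words = []
    for subject, relation, obj in sorted(model):
        if topic in (subject, obj):
            words += [subject, relation, obj, ";"]
    return " ".join(words)

print(linearize(world_model, "cat"))
# -> "cat on mat ; cat wants food ;"  (the rest of the graph is simply dropped)
```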
1 Like

Language is my Python to your Assembly Language: it’s a higher-level organization of communication with a syntactic structure (formalized or informally agreed upon by a group of mutual users). Without that lower-level component of simple impressions of input/output, we get into a dead end, as there is no world model encoded in it. There’s no ‘sense’ in the language, much less a ‘common sense’. At that point, it’s a high-level mathematical construct without an awareness of the lower-level operations or their origin. I think that’s why the LSTM is currently hitting dead ends. On the other hand, we have attempts to codify that world model by hand, such as Cyc, which get to the point where they’re just too fragile.

Where I think we need to aim is to find a way to enable a machine to self-encode world experience, while providing a structure for higher-level abstract learning. Maybe that would give us AGI.

Myself, I think that HTM provides a potentially promising way to accomplish that as we keep experimenting with distal connections taking input in from various senses, returning output to the world, and perhaps having a system come to the conclusion that it can make decisions that affect its current and future state.

Where tools play into that is an agent learning that it can extend itself through the use of other objects to manipulate the world (e.g. “I can’t reach that fruit, but by holding this stick I can.”). For this purpose, tool-using agents don’t need language, just an awareness/connection that a chain reaction of events (I move this stick there) can lead to a desired outcome (fruit dropping from a tree). A toy sketch of that kind of language-free action chaining is below.
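Here is that sketch, with invented states and actions: the agent only needs a remembered transition structure and a way to search it for a chain of actions that reaches the desired outcome; no language is involved.

```python
from collections import deque

# State -> {action: next state}, remembered from experience (hard-coded here).
transitions = {
    "hungry, no stick":      {"pick up stick": "hungry, holding stick"},
    "hungry, holding stick": {"poke branch": "fruit on ground"},
    "fruit on ground":       {"eat fruit": "satisfied"},
}

def plan(start, goal):
    """Breadth-first search over remembered transitions for an action chain."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in transitions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

print(plan("hungry, no stick", "satisfied"))
# -> ['pick up stick', 'poke branch', 'eat fruit']
```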

Completely agree