"How Your Brain Organizes Information" video

I am too cheap to spring for a GPT-4 subscription.
I am stuck with GPT-3 or the Bing thing.

I can envision a time in the very near future when a subscription to an AI service will be as common as streaming services are now.

2 Likes

@robf

I learned many things from your posts, and they led me to reassess/update some of my previous beliefs, but I’m still largely of the same opinion as before. I admit I don’t grasp the theories that support your point of view, though, so I’ll just ask the question that’s been on my mind: is there a risk of drawing too many conclusions from some potentially correct philosophical/mathematical/information theories, etc., to describe an “imperfect” biological brain?

Is there research that supports the idea that the brain is doing some of the things you theorized? Like how it keeps as many past sentences as possible in its memory (i.e. language can only be fully understood from its whole corpus) in order to understand language, or some oscillatory property for community finding, etc.? I know you guys are still figuring things out, but I’m only asking since I read some posts opining that it’s not biologically plausible, or something to that effect. I think it was by Bitking in some other thread.

You stated that naming is a creative endeavor and can’t be complete. I agree, as there’s a possibility that a continuum between some two very different things could exist. I’m curious if there’s a continuation to that philosophy, something deeper that builds on top of it, like how it extends beyond the naming of things? Interesting philosophy.

Going back to the brain, I think that as long as it can differentiate things in some satisfactory or pragmatic manner, then all is well. We don’t really need to describe 50 shades of grey in order to catch food or avoid getting killed in the wild. Of course, the brain and its host are screwed if two completely different things exist that it can’t tell apart by looks, where one is harmful and the other is not. Survival of the fittest is a major determinant of brain design.

1 Like

Yeah, that can be very “near”. Actually it was my boss who urged me to start using ChatGPT, and my company managed to pay the subscription for my account. It did help a lot in getting me started with a correct LIF model for the simulation code I wrote recently.
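For anyone curious what that involves, the core of a leaky integrate-and-fire (LIF) neuron is tiny. Here is a minimal sketch in Python; all parameter values are illustrative assumptions, not the actual code from my simulation:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, integrates injected current, and emits a spike when it
# crosses threshold. Parameter values below are illustrative only.
tau_m = 20.0      # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)
i_inj = 2.0       # constant injected current (nA), i.e. 20 mV steady drive
dt = 0.1          # integration time step (ms)

v = v_rest
spike_times = []
for step in range(10000):                      # 1 s of simulated time
    dv = (-(v - v_rest) + r_m * i_inj) / tau_m
    v += dv * dt
    if v >= v_thresh:                          # threshold crossing: spike
        spike_times.append(step * dt)
        v = v_reset                            # hard reset

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")
```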

2 Likes

Wow. It’s really holding on to that fame thing, isn’t it? That’s impressive in a way. It’s got hold of a meaning thread and is maintaining it. Even if it’s cracked! Like a consistent lunatic!

To be fair, I’ve met humans who do as much. Though maybe not as bad as aligning the Dao De Jing with the pursuit of fame!

Well, some of them pretty bad, actually. People do have some cracked ideas.

Interesting also that GPT seems to be bending the whole “fame” thing to fit with genuine themes of “having a good reputation”, being honest… That kind of definition drift would fit about 99% of online debates!

Wow, that’s pretty good too. It’s pulling up an interpretation which is about as good as many academics might. It’s surely just flipped to other writings it’s found on the Internet, but that might not be a bad summary of that (correct!) context. I think by characterizing naming as a limitation it is missing the power of naming as a creative process. But several of the translations @complyue gave earlier also seemed to have this “limitation” interpretation.

That’s just a general summation of the Dao, and not that phrase in particular.

But interesting that it did manage the flip when presented with an absolutely ironclad interpretation.

What about those quotes from Chinese philosophers talking about the importance of fame, though? Now that you’ve revealed your hand, it may not go back and invent people talking about fame to back that up.

Maybe you could prompt it something like: “taking the original ‘fame’ interpretation, what are some examples of philosophers talking about this”.

I was looking forward to those!

ChatGPT helped with the code? Impressive.

2 Likes

Fame is a somewhat misleading interpretation; “face” is huge in Asian cultures.
I can see the path from what it is saying to the connection to respect and duty.

“The I Ching also suggests that having a good reputation can help one to build strong relationships, gain the trust of others, and achieve success in various endeavors. However, it also warns against becoming too attached to one’s reputation or seeking fame for its own sake, as this can lead to arrogance, vanity, and ultimately downfall.”
My fluency with the Chinese language is rusty, as I have not lived there for many years. I spent a total of 6 months in various Chinese speaking cities and was fairly good at one time.

I have noticed that when reading some of the older texts, there was great poetry and many word games in the classic texts. I can see how naming, fame/face, and relationships are a thread that runs through the text, and I would not be at all surprised if this double meaning was intentional. ChatGPT, summarizing the entirety of the document, would be likely to pick up on this if it is so.

By the way, “Dream of the Red Chamber” (Story of the Stone?) and “Outlaws of the Marsh” are great stories!
It was totally worth plowing through them in Chinese when I was learning the language.

2 Likes

Not quite sure what you mean here.

Are you saying I’m suggesting the brain is “imperfect”?

As to whether there is a risk of drawing “wrong” conclusions from any theory, well, that’s always the case. If it weren’t true, there wouldn’t still be so many unsolved problems in the world. We would all have perfect theory and perfect conclusions.

Arguments about what is physically plausible are always dangerous. Typically you change one or two assumptions, and the whole argument collapses.

But as for evidence that the brain stores detail (as distinct from recalling detail, which it usually doesn’t; usually it recalls gist…)

I’ve thought for a long time that evidence from some autistic savant behaviour is indicative that the underlying mechanism is example based.

That would posit that autism is a problem with the ordering mechanism. It would say that most people are able to order the detail of the world to make “sense” of it. The failure in autism, then, would be a failure to make “sense” of detail (by throwing much of it away). But exactly because they were not ordering it to make “sense”, they might continue to have access to the detail.

Examples include:

Stephen Wiltshire draws Rome from memory

There are also things like HSAM (Wikipedia: “Hyperthymesia, also known as hyperthymestic syndrome or highly superior autobiographical memory”). The accounts include why not being able to throw away what is not useful is actually a problem:

“Whenever I see a date flash on the television (or anywhere else for that matter) I automatically go back to that day and remember where I was, what I was doing, what day it fell on and on and on and on and on,” she had written in an email to McGaugh. “It is non-stop, uncontrollable, and totally exhausting.”

For language, Pawley and Syder are a standard early reference:

Andrew Pawley and Frances Hodgetts Syder, “Two puzzles for linguistic theory: nativelike selection and nativelike fluency”

Is that the kind of evidence you mean?

Well, this gets political (making “meaning” political becomes the point of it…) but arguably the most important direction this has taken in philosophy, in the West anyway, is that it has led the modern humanities to reject objective truth, and assert that all “truth” is political! And so we must censor “truth” according to the relative political power of “identities”! You should not be allowed to just say anything you think at all. And that gives us the intellectual basis driving the political and academic landscape of social division we face today. Perhaps you recognize it.

You can trace that earlier to Nietzsche, who I understand argued that loss of belief in central truth (equated with God, historically) would lead to totalitarian belief systems in the 20th century. Which was brutally prescient. Fascism and Marxism were associated with asserting “truth” to suit themselves (and both coming out of the most intellectually “sophisticated” societies.)

I’m struggling to think of a good continuation in philosophy!

Wittgenstein decided all philosophical disputes had been due to confusion about language. So he basically rejected all philosophy before himself. That was relatively benign!

I would say that, lacking an ability to get beyond our own biology, philosophy got stuck. And the solution is to work upwards from the biology. Only by working upwards from the biology can we understand both how the meaning we create is subjective, and also how it can be rooted in objectivity (as a process), which might finally enable us to start finding a basis for agreement as societies again (rather than endless division into “identity”: the wrong interpretation of “name”, as a division, instead of as a creative process)!

No, no. It’s a good thing!

It’s easy to think of subjective meaning as a flaw. But it’s not. It’s a power. As I keep saying, if it didn’t exist then we might imagine some moment when everything was already known. Then creativity would be dead.

This gives us an infinite power of creativity.

It actually allows something to be larger than itself. Just think: two interpretations of one thing… Then you interpret those two things in two ways, and you have four things, and 8, and 16…

If it were not possible for something to be “larger than itself” in this way, would intelligence be possible at all? Could our little brains encompass the universe? Could we be “bigger than ourselves”, so we could be aware of ourselves, and conscious?

Or even think of those autistics, constantly buried under an avalanche of detail.

Walter Freeman the neurobiologist seems to have been on to it. He said of chaos in the brain:

‘The poet William Blake wrote: “If the doors of perception were cleansed every thing would appear to man as it is, infinite.” Such cleansing would not be desirable. Without the protection of the doors of perception - that is, without the self-controlled chaotic activity of the cortex, from which perceptions spring - people and animals would be overwhelmed by eternity.’

http://projects.mindtel.com/vader/users/rikr/GrokBox/rawmater/Physio_Percept.html

No, it’s a good thing, not a flaw.

Did that cover your questions? I can probably expand on most of those themes.

1 Like

Can’t agree with you, Mark.

Yes, you can argue a justification. It’s possible to argue anything. A cloud looks a bit like a rabbit. Whether the true path of justification was because of a surfeit of articles about Chinese consumers loving brand name handbags (at best! More likely just the word 有名 in isolation), or whether ChatGPT really did make a subtle connection between ideas of “face” and “fame”, I know which side I come down on.

The original error wasn’t any kind of sophisticated reasoning AI anyway. It was Google Translate, wasn’t it? Out of context, a totally uneducated person might have translated it the same way. It’s only in the context of knowing the phrase that you know what it’s talking about.

We can’t allow ourselves to rationalize every hallucination produced by these things as some subtle meaning which we need to appreciate. We’ll start rationalizing self-driving cars killing pedestrians as failures of the pedestrian.

Are you now going to tell me that if Google translates “To be or not to be” as “Should I do a Bachelor of Engineering” that is really some subtle connection between the problem of existence and a well-engineered bridge? …Well, if a bridge isn’t well made, should it exist?? Hmm.

Sure, there’s a thread. It’s not a human thread. A human can have new ideas. This is plugging in the wrong words out of context.

The Dao De Jing was not a diatribe about the importance of becoming famous!

Gimme a break.

And in that sentence, it’s not even talking about modesty.

Whatever.

As a strong connectionist, I followed the early translation work at Google and was surprised (as many were) that the network was building an internal dictionary based on word connections to other words.

I see the current transformers as the same basic idea on steroids: word sequences, word clusters, and words that stand for word clusters. Statistical grouping of words.

The connectionist in me sees this as the cooked-down connections converted into graphs and connection weights, with meaning micro-parsed between different clusters of connections. In many ways, the construction of the structure of a language is much like the structure of science itself: strong connections for some basic facts in the center of the structure, with more and more connections piled on top of these basic connections. At the outskirts: half-truths, conjectures, and pure fiction. Poetry and imagination live there.
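To make that “statistical grouping of words” concrete, here is a toy sketch of the basic idea: count which words keep which company, and words used in similar contexts end up with similar context vectors. The corpus and window size here are made-up assumptions for illustration; real models work from billions of tokens:

```python
import numpy as np

# Build co-occurrence counts within a small window, then compare words
# by the company they keep. A crude stand-in for learned embeddings.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the dog chased the cat").split()

vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

window = 2
for i, w in enumerate(corpus):
    lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
    for j in range(lo, hi):
        if i != j:
            counts[idx[w], idx[corpus[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "cat" and "dog" occur in similar contexts, so their vectors are close:
print(cosine(counts[idx["cat"]], counts[idx["dog"]]))  # higher
print(cosine(counts[idx["cat"]], counts[idx["on"]]))   # lower
```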

The grounding I pointed to earlier would be the strong connections based on personal experience, and the grammar learned as part of picking up a language. One big spaghetti ball of connections piled on top of that.

You are working to pick apart the operating system into the resonant short- and long-term dynamic structure, as have Grossberg, Buzsáki, and many others. You have a long way to go to make a valid case for chaos as part of the structure of language. I am watching but am not convinced.

Despite your repeated disparagement of LLM technology, it is just a newborn baby. It makes the same mistakes a child makes. It works on the construction of speech based on the connections I mentioned above, and many of the mistakes are totally understandable based on the inconsistencies of the corpus of text used for training. Humans are basically prone to biases and folk science and folk psychology - I’m not at all surprised that the early versions have this shine through the answers. It will get better.

As has been pointed out, the prestige culture in China permeates much of the available text in electronic form. The classics of Confucius (also known as Kong Qiu or Kongzi, 孔子) and other classic texts are a few books against a sea of more modern text. The more modern interpretation is bound to outweigh the classic meaning of the word in the statistical weighting of the words.

You do realize that as all languages are used, the meanings of words can drift, and old meanings are lost? Reading ancient texts can be difficult for reasons like this. In the religious sphere, a mistranslation between “a young maiden” and “virgin” has sparked some interesting mythology!

You can sit around and throw out names and random mutterings of various philosophers (just some guys to me) if it makes you feel better, but it won’t really make any difference. The LLM tech will get better; it will end up taking over more human “thought-based” jobs, and humans will be reduced to pet status. People will still struggle to maintain a pecking order and whatever power and status over other people they can, as they have always done; it’s buried in our instincts, and we can’t change it. The AI tools will be available to them to do this (see China for the leading edge of this in play now), and nobody will give a tinker’s damn about what you or I think about it.

1 Like

I guess from autism I could say that our brain has the hardware capacity, if rewired correctly, to store vast amounts of episodic detail. But that’s a very low-level and inefficient learning algorithm. Autistic people often also have something like echolalia, which probably suggests an inability to turn episodic memory into semantic memory, or to abstract meaning from examples: 3 Things You Should Know About Echolalia

According to the link above

"Typically developing children tend to begin learning language by first understanding and using single words, and then they gradually string them together to make phrases and sentences.

Children on the autism spectrum often follow a different route. Their first attempts at language may be longer “chunks” of language (phrases or sentences), which they are not able to break down into smaller parts. These chunks are more grammatically complicated than they could put together themselves, and they don’t understand what the individual words mean."

Yes, I recognize it. Most vividly in AAA games. They seem to be very non-inclusive towards the top 10% most handsome/beautiful members of society, hahaha. But yeah, it’s an effective tool for politicians to create multiple truths and divide the population when there is only one truth, or a fuzzy truth that could be described by a single doctrine as its centroid.

I was trying to say that not all the shades of grey in the continuum between white and black need to be taken into account by the brain. We make and learn to recognize as many shades of grey as are needed. We can learn, relearn, forget; learn 100 shades of grey or only a few, as determined by our needs. But at any one time it is all that we need (assuming one is already a seasoned/learned grey watcher), without worrying about drift of meanings.

If the people and politicians go full socialist(?) tilt now, the people can probably make something like communism happen. In other words, communism might be more viable now with current technologies. With blockchain DAOs, and with AI servers nationalized such that each person has a right to control those servers up to a certain degree, the people might still be on top of the politicians, preventing one of the flaws of communism, where power is centralized in only a small group of people. If anyone thinks I’m wrong, don’t get triggered, and please be gentle, as I’m not into politics.

2 Likes

@roboto I’m confused by your analysis of autism.

You don’t seem to be disputing my key point that it might be evidence the brain can store detail.

But you still want to think the brain stores only abstractions. So you defend the idea that the brain stores abstractions by positing two mechanisms of storage now. So you can keep your abstract one. Because you really want it. As well as the detail one. Which is the one for which there is evidence…

It seems simpler to me to analyse the evidence as a single mechanism for storage, and autism as a failure of process over that underlying storage.

The slam-dunk evidence would be evidence that the detail can be reordered after storage. Something like an “ah-ha” phenomenon…? (As if we’d like to have a model that can explain that… Much better to keep abstractions, and leave the “ah-ha” phenomenon unexplained?)

I’m reminded here of some work by a professor in Australia some years ago, who advocated a kind of “thinking cap”: he disordered the brain using magnetic fields and claimed it helped people break out of rigid patterns of thinking. That might be evidence that re-ordering can happen after storage:

Snyder: “My hypothesis is that savants have privileged access to lower level, less-processed information, before it is packaged into holistic concepts and meaningful labels. Due to a failure in top-down inhibition, they can tap into information that exists in all of our brains, but is normally beyond conscious awareness.”

And that would explain why we see such detail productively, in evidence like Pawley and Syder’s for language. For which the mystery of an abstraction which can capture both the observed generativity and the observed detail has never been solved. (Because of contradictions, in the Chomsky history I gave.)

Interesting evidence, again, that autistics are storing detail.

Couldn’t it be that they are both storing detail? It is just that autistic children struggle to abstract that detail, productively, when they speak. (Which would explain the contradictions of abstraction, which prevent explanations for Pawley and Syder’s “Nativelike Fluency and Selection”?)

Isn’t that what I’m saying? Aren’t I just suggesting a mechanism for that?

1 Like

@Bitking You seem to take this side discussion of philosophy very personally. As well as my criticisms of large language models.

I do think philosophy is important as a general framework for thought. I think the movement to subjective thought in Western traditions, and threads in Eastern traditions, is suggestive of issues for technology, as well as elsewhere.

But in the context of technology it is only suggestive. I present it as something to consider when trying to understand our problems with technology. In particular people were struggling with the idea of contradictory thought. (Which @bkaz crystallized momentarily as…

Momentarily. And then everyone went back to their old habits of being “sure” we can find global simplifications again… (I’m not saying the process of re-organizing detail of example into new thought is easy! Only that it is possible! And ultimately necessary to explain new thought when it does arise. Be that ever so rarely!) All the Westerners went back to them, anyway. Interesting dichotomy there.

The philosophy is mostly to provide context, and a sense of how meaning might contradict. For those who are struggling with the idea of subjective truth.

But the technology argument should stand on its own.

As a criticism of technology, I find the idea that I am excessively critical of language models amusing. For the longest time it was me who had to argue for the significance of language models! I was the guy arguing that language models could tell us things about meaning and cognition generally. As I said earlier, I failed to make any headway in this forum… 8(?) years ago, because Jeff Hawkins was convinced that language modeling was not significant to cognition! The moderator didn’t want talk of language models in this forum.

And now I’m the guy denying the significance of language models for cognition!

Eight years elapse. Google trains some language models, and someone stumbles on “attention” (to detail of example…) as important, and suddenly all the status quo guys are the original language model OGs!

Language models are all I’ve ever been working on! I was arguing for distributed meaning representations using text embeddings when AI as a whole was obsessed with Bayesian statistics and expert systems, or cognitive networks like OpenCog (still there), or neural schemas and embodiment like Lakoff. Or Jeff thought that motor feedback might solve the object recognition problem. Now, because part of what I was pushing for has come to pass, by virtue of a little “attention”. To what? To the detail of example in the sequence learning of language models. Because of those 8 years of accidents, suddenly I’m the guy who doesn’t appreciate language models?

“Attention” was a jump step. As was LSTM before that. And so was the moment when hardware (designed with games in mind, not AI; an accident…) suddenly made practical all those 20-year-old algorithms of Hinton’s for connectionism. All jump steps which moved closer to my position.

And now, I’m discussing a possible jump step for language models to come, and that’s me being dismissive of language models, somehow!

Is any suggestion for progress to be taken only as a criticism of the status quo? The future is coming, somehow, of itself, and by criticizing the status quo I’m hiding in philosophy, and just “throw out names and random mutterings”?

I’d rather be discussing how to move forward. But there’s no comment on my technical thread again.

I’m talking about how to make it better.

What’s your technical argument? What did Grossberg and Buzsáki have to say about “resonant short and long-term dynamic structure”?

What evidence do you have for that? What evidence do you have that there are fixed “basic facts in the center of the structure”?

I think we don’t need to store everything in memory if it can be compressed/abstracted. Autistics aren’t good at that.

I saw a TV show presenting multiple studies suggesting that the memory of a normal person is fluid and fragile: easily falsified/contaminated/altered and forgotten. In that show there was an experiment done by a UBC professor to test whether it was possible to fabricate autobiographical memories of the past in someone else. She succeeded in implanting made-up memories in a significant portion of the participants (maybe 30-40%). She first lied to the participants that their close friends/relatives had been interviewed and had told her about a fake event from their past. I don’t remember the whole process, but the prof was asking them questions about the fake events while slowly releasing fake details at the same time. Many of the participants voluntarily, slowly pieced together a whole fake event in the process. They became very convinced about the fake event and then provided further details about their implanted memories to the prof, which their brains made up.

I see. I guess we’re trying to describe the same phenomenon but with different methods in mind.

My debate is nothing personal, as I’m just trying to learn. I think your experiment is interesting; even if, on the off chance, it’s not correct, many things could be learned from it.

1 Like

Well that’s good evidence that recall is constructed, isn’t it?

If events were stored as abstractions, wouldn’t that make them harder to change?

Current NNs are notoriously hard to change. The whole thing needs to be retrained from scratch with the addition of every new bit of information. Figuring out how to change them incrementally is another big research conundrum.
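To see the problem concretely, here is a deliberately minimal sketch on synthetic data (all of it made up for illustration): a tiny logistic-regression “network” is trained on task A, then trained further on a conflicting task B, and its performance on A degrades badly, because plain gradient descent on B overwrites the weights that solved A:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    # Two Gaussian clusters on opposite sides of the origin.
    X = np.vstack([rng.normal(center, 0.5, (100, 2)),
                   rng.normal(-center, 0.5, (100, 2))])
    y = np.array([1] * 100 + [0] * 100)
    return X, y

def train(w, X, y, epochs=200, lr=0.1):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

def accuracy(w, X, y):
    return ((X @ w > 0) == y).mean()

Xa, ya = make_task(np.array([2.0, 2.0]))    # task A
Xb, yb = make_task(np.array([2.0, -2.0]))   # task B, conflicting boundary

w = train(np.zeros(2), Xa, ya)
print("task A after training on A:", accuracy(w, Xa, ya))

w = train(w, Xb, yb)                        # continue training on B only
print("task B after training on B:", accuracy(w, Xb, yb))
print("task A after training on B:", accuracy(w, Xa, ya))  # degraded
```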

For myself, because I see meaning as a constant process, I like to distinguish “recall” from “memory”. The exact words don’t matter. But I like to emphasize that what we can become consciously aware of (recall) might be different from the storage underlying that awareness (memory).

But, call it recall or memory as you like, if subjects were susceptible to having their recollections gradually altered by new details supplied by the experimenters, that to me says that the recall process is ongoing, and susceptible to both the constant addition of new elements and new organizations of existing elements.

By the way, your mention of single-word utterances as the basis of normal speech development, in contrast to the unanalysed chunks more typical of autistic development, motivated me to look up some early childhood learning data. I am just reading this interesting discussion:

'Unanalyzed imitative routines are one form of gestalt processing found in normal children’s language. …

The use of memorized, unanalyzed segments has also been studied for normal subjects. Peters (1977) discussed a gestalt style of language use wherein some early language learners memorize particular multiword phrases that are heard often in specific contexts. She suggested that such phrases may actually be perceived as single units and may subsequently be used somewhat appropriately in similar situational contexts, giving the appearance that the linguistic system is of greater complexity than it actually is. Peters suggested that children who demonstrate such gestalt language may initially have difficulty using pauses and other prosodic cues to segment utterances. As a result, they produce whole utterances rather than one or two words.

Peters’ observations are supported by Clark, who indicated that her sons used copied utterances that “were retained intact for several weeks,” such as “wait for it to cool,” which was said when a hot meal was brought to the table (1974, p. 4), or “Don’t touch that, it’s hot,” which was said as one child pointed to hot tea (1980, p. 10). These utterances were far more grammatically sophisticated than the children’s actual level of linguistic competence; thus Clark referred to the production of such patterns as “performing without competence”’

I also recall evidence that children’s production of the English past tense (one of the early successes of connectionism) is famous for initially producing “irregular” forms correctly, then over-regularizing them, before going back to idiosyncratic irregular forms again. Suggestive, to me, that initially these are stored as unanalysed chunks, which then get over-regularized in a productive abstraction process, before the abstraction process is constrained again. This paper seems to provide a nice discussion of that (and more?):

Ryan P. Blything, Ben Ambridge, and Elena V. M. Lieven, “Children’s Acquisition of the English Past-Tense: Evidence for a Single-Route Account From Novel Verb Production Data”

I also don’t know if it’s worth mentioning again in this context the analysis I referred to before by Peter Howarth, looking at the normal progression of competency in second-language learners, maybe different from children, or maybe related. Howarth found that learners tended to produce what he called “overlaps”, or mixes by analogy between detail sequences. Only later did they start to produce more of what he called “blends”, which were based on what might be called more abstract generalizations.

I’d be interested to hear your suggested methods.

2 Likes

Hmm, loved pets will be given excellent care just for being adorable; unloved pets shall go stray. A pity for all.

I’ve been doing ES (Evolution Strategies, a branch of evolutionary computation alongside Genetic Programming) these past few years, and it has made me think that our world/universe may actually be an optimization process per se, epoch by epoch, mini-batch by mini-batch. Carried out by some higher intelligence we’re not given access to, or awareness of. And human DNA may be expected to solve certain problems in that higher world. So competency has to be the “fitness” measurement up there, and various drives are derived from that goal, to keep us busy every day.
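For anyone unfamiliar, the whole “epoch by epoch” loop of an evolution strategy is only a few lines. A minimal sketch, with an arbitrary toy fitness function standing in for whatever is really being optimized:

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal evolution strategy: perturb a population of candidate "genomes"
# with Gaussian noise each generation, score them with a fitness function,
# and let the best survivors seed the next generation.
def fitness(x):
    return -np.sum((x - 3.0) ** 2)   # toy objective; optimum at all-3s

dim, pop_size, elite, sigma = 5, 50, 10, 0.3
population = rng.normal(0.0, 1.0, (pop_size, dim))

for generation in range(100):        # "epoch by epoch"
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-elite:]]             # selection
    children = parents[rng.integers(0, elite, pop_size)]          # resampling
    population = children + rng.normal(0, sigma, (pop_size, dim)) # mutation

best = population[np.argmax([fitness(ind) for ind in population])]
print("best genome:", np.round(best, 2))   # approaches [3 3 3 3 3]
```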

explain “天地不仁,以万物为刍狗” from 《道德经》

ChatGPT-3.5:

The phrase “天地不仁,以万物为刍狗” is a famous line from Chapter 5 of the Dao De Jing (道德经), an ancient Chinese text that is attributed to the philosopher Laozi (老子).

This phrase can be translated in a few different ways, but a common English rendering is: “Heaven and Earth are not humane, and regard all things as straw dogs.”

The first part of the phrase, “天地不仁” (tiān dì bù rén), means “Heaven and Earth are not humane” or “The universe is indifferent.” This suggests that the natural world does not operate according to human notions of morality or justice.

The second part of the phrase, “以万物为刍狗” (yǐ wàn wù wéi chú gǒu), can be translated as “and regard all things as straw dogs.” In ancient China, straw dogs were used as ritual objects in religious ceremonies. They were treated with great reverence and care during the ceremony, but afterward they were thrown away and treated as ordinary trash. This phrase thus suggests that the natural world treats all things with the same indifference and lack of attachment that humans might treat a ritual object after the ceremony is over.

Taken together, this phrase suggests that the natural world operates according to its own principles, which may be very different from human notions of morality and justice. It also suggests that all things in the world, no matter how important or valuable they may seem to humans, are ultimately treated with the same indifference by the universe.

1 Like

So long as the majority of people are vulnerable to being controlled by sentiment, ideology manipulation will always be an effective technology, though one even softer than the software installed on computers. Chinese governments (both PRC and RoC) are more advanced in this technology after thousands of years of continuous development.

I suppose today’s PRC has concluded that communism/socialism cannot be a real success, but there are similar alternatives, for certain.

And I think language-based reasoning processes allow us to try multiple alternatives at the same time, with the opportunity to somehow choose wisely. Electronic computers, by contrast, can only try one at a time, with most of their computational power wasted on less significant parts of the whole possibility space.

2 Likes

You could read them to find the answer. It is at the center of both researchers’ work.
Grossberg models such as ART and its various further incarnations are about “Adaptive Resonance Theory.” It’s right there in the name.

Buzsáki’s lab has “A search for a neural syntax” emblazoned over the entry portal. His book, Rhythms of the Brain, offers the theme of oscillatory timing as the brain’s fundamental organizer of neuronal information. He touches on the idea that spontaneous neuron activity, far from being mere noise, is the source of our cognitive abilities. His publications are numerous and delve into many aspects of oscillation and brain function.

I have gotten more useful information from reading these researchers than from any of the classic philosophy texts. I have wasted too many hours reading circular arguments and counterarguments, all arrived at by intense navel-gazing. Pick your point of view and there is someone who has written it for the quote; most have as much basis in fact as the assertion that men have more teeth than women (thank you, Aristotle) or similar claptrap.

As far as your repeated snarky slaps at LLMs: yes, LLMs have problems. You can go back through your posts and find many statements that start with the shared assumption that LLMs are flawed and then push forward from that point.

LLMs only model the cortex portion of the brain and the “attention” mechanism is groping toward the way that the cortex searches for content. I expect that adding in the contributions of the subcortex will produce further step changes in function. Adding training sets that include “lived life lessons” will introduce elements of common sense. As I said, early days for LLMs.
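Since “attention” keeps coming up, it is worth noting how small the core mechanism actually is. Here is the textbook scaled dot-product form in NumPy, a sketch of the general idea rather than any particular LLM’s implementation:

```python
import numpy as np

# Scaled dot-product attention: each position (query) scores every
# position (keys), the scores are softmaxed into weights, and the output
# is the weighted mix of the values.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # relevance of each token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V                              # blend values by relevance

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 4))      # three tokens, 4-dim vectors (stand-ins)
out = attention(X, X, X)         # self-attention: tokens attend to each other
print(out.shape)                 # (3, 4)
```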

As the old Russian proverb goes: “The marvel is not that the bear dances well, but that the bear dances at all.”

1 Like
  1. See this post and the referenced paper.
  2. See the grounding in the sensory portions of the brain. (basic facts)
  3. Follow the discussions in the paper regarding development of subsequent semantic information.
  4. Draw said conclusion.

As to the epistemology of science: you yourself, being a student of Western philosophy as you profess to be, are certainly aware of the threads regarding the roles of falsifiability and prediction in the corpus of scientific knowledge. I should not have to explain the implications of this to you.

1 Like

It is nice that Buzsáki is addressing oscillations as fundamental and not just noise. But the question of interest is HOW are they fundamental?

Buzsáki appears to be positing some kind of fixed hierarchy, which is activated by simultaneous (synchronous) spikes from synapses below. So the “syntax” is a fixed (or slowly changing synaptic weight?) hierarchy.

Grossberg, according to Wikipedia on Adaptive Resonance Theory, “postulates that ‘top-down’ expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses.”

“Top down prototypes or templates” are certainly not a fit for contradictory or chaotic structure.

It’s unclear to me what you’re saying by citing them. Just the fact that other people have posited that oscillations in the brain have significance beyond noise?

Here are the comments I made on that paper the first time you posted it to me:

I may not have commented on evidence for abstraction, because I didn’t believe the paper offered any. And I still don’t. The paper is describing how aspects of cognition will be anchored in embodiment (the whole point of which is that it can’t be abstracted; that’s what embodiment MEANS. Embodiment MEANS it can’t be abstracted. I would have thought that was obvious, and was probably too polite to mention it.)

The specific reference to language in the paper (and particularly another one referenced within it) was quite a nice attempt to guess what a syntax (combinatorial) mechanism might be. It aligned very closely with mine. Except for the assumption that grammar could be abstracted. They don’t address that. They certainly don’t present evidence for it. They assume it. I showed in my reply to you at that time that this abstraction is not necessary, and I suggested the opposite. With quite a lot of evidence, including Tom Mitchell’s nice presentation of the complete mixing of combinatorial representations in fMRI evidence. Not addressed in any reply.

I now notice that, when introducing the paper that first time, you referenced Chomsky’s idea of “built-in” language features.

Is this also your “evidence”? A theory. Broadly disputed, and failing these last 70 years.

This is not evidence. It’s a theory. I’ve presented alternative interpretations of Chomsky’s own “evidence” for this theory elsewhere. (Basically, what Chomsky takes to be evidence for “built-in” language features is actually evidence that semantic abstractions for language cannot be learned! So his evidence is more evidence for a rejection of learning than it is evidence for any actual abstractions. Which 70 years of searching have failed to find.)

Not only are Chomsky’s “built-in” language features not “certainly” built around an embodied grounding. They are not built around anything at all. They almost certainly don’t exist. As I’ve argued repeatedly.

But OK. Good to see you don’t have any evidence for abstraction in semantic categories. Not worth labouring the point.

Is this sort of personal innuendo the standard of moderation you seek to bring to this platform?

You seem to be bringing some kind of personal animus to this debate. It makes discussion tedious because objections become entrenched, and actually empty of content.

You are correct, this is not what I should be doing as a moderator.
I look forward to functional models demonstrating your concepts.
Until then, I am out of this thread.

1 Like

It’s more a school of thought that I subscribe to than an actual model. Like how LLMs are getting some things right, and how major advancements in them could possibly lead towards AGI.

That Peter Howarth paper is quite interesting. But I believe those degrees of closeness (i.e. blending vs. overlaps) seem modellable by graph neural networks. Like how far one node or word is from other words, or more specifically the concept of message passing, carried out multiple times for each node/word.
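To sketch what I mean by message passing (a bare-bones, untrained toy, not a full GNN; every number here is an arbitrary assumption): each round, every node’s vector is averaged with its neighbors’ and transformed, so after k rounds a node “feels” words up to k hops away:

```python
import numpy as np

# One node per word; each message-passing round mixes a node's vector
# with its neighbors' and applies a shared (here untrained) weight matrix.
adj = np.array([[0, 1, 1, 0],        # toy 4-node adjacency matrix
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
adj_self = adj + np.eye(4)                         # keep each node's own state
adj_norm = adj_self / adj_self.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))          # initial node (word) features
W = rng.normal(size=(8, 8)) * 0.1    # shared weights (random, untrained)

for _ in range(3):                   # three rounds of message passing
    H = np.tanh(adj_norm @ H @ W)    # aggregate neighbors, transform, squash

print(H.shape)                       # (4, 8): neighborhood-mixed vectors
```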

I think we each have a set of beliefs that we can use to convince ourselves that those phenomena are explainable by them. Though to be frank, Bitking’s beliefs are more rooted in experimental evidence, whereas I still can’t comprehend the core idea that language can’t be abstracted beyond its corpus.

I guess communism is a strong word that carries many negative connotations. Socialism is more like it. With AGI, I think the only way forward is to give people power over those AI servers and to nationalize them once AGI is achieved. As long as power isn’t concentrated in the hands of the few, and the formation of political dynasties can be inhibited, then that takes care of some major headaches. One way to inhibit sentiment-based manipulation would be to introduce a form of national service requirement, where one must undergo certain classes in order to vote on certain things. That introduces the fear that the curriculum itself could be manipulated, among other things. But it is expected that many things need to be sorted out in the name of progress. I mean, technologies improve over time, so should politics, especially with the emergence of AGI and blockchain tech. It’s a ripe time to start thinking about how to disrupt the political scene. The most prominent symbol of democracy is run by a duopoly, and yet they despise and aggressively go after monopolies.

2 Likes