Edelman’s Neural Darwinism came up in this thread because @DanML responded to my arguments - that a failure to recognize contradictory, even chaotic, structure is what has been holding us back in AI - by mentioning similar themes in György Buzsáki’s work:
This may be a digression too far, but one point I thought György Buzsáki was trying to make was the deflation of an ‘object model’ in human vision and thought processing. The fact that ML is approached by programmers generally, I think, makes this problem worse (maybe due to earlier UML and OOP influences). His view appeared closer to Bateson’s “the difference that makes a difference” - which is just splitting the data/world any way that works. This chimes with your view of living with the contrad…
I looked up György Buzsáki and said his work reminded me of Gerald Edelman’s Neural Darwinism.
@Bitking then recommended Calvin’s book on another variant of Neural Darwinism, with many of the same themes of expanding, growing, changing structure, even chaos, which had been the basis of things he had been working on for HTM.

Which Edelman book are you recommending? I looked back in this thread and didn’t see a specific book.
Here’s the post in this thread where @Bitking recommended the book @DrMittlemilk is asking about. It wasn’t initially a recommendation of a book by Edelman, but of one by Calvin:
Rob, I have been posting about this on the forum for years. This may have all been while you “were away” but I think you might be very interested in some of what I have been putting down. First things first - you really REALLY have to read this online book. Darwinism plays a predominant role in computation in this book, and chaos theory is mentioned. Don’t cheat, read the whole thing: Based on the proposals there, I added lateral connections to HTM to implement Calvin tiles well before Numenta…
So basically Neural Darwinism came up in this thread because I am arguing that contradictory structure may be what we are missing in our attempts to code “meaning” in cognition. And that led to some links to similar ideas, notably growing or contradictory, even chaotic, representation in Neural Darwinism.
I agree with the themes of growing structure, and especially chaos, in Neural Darwinism. But I disagree with the specific mechanism it proposes for finding new structure.
I’m arguing in this thread that we don’t need the random variation followed by evolutionary selection of Neural Darwinism. Rather, I think existing “learning” methods, specifically the cause-and-effect generalization of transformers and Large Language Models, are already telling us the structuring mechanism we need. The only thing holding us, and transformers, back is that we still assume the structure to be found is static and can be learned.
I’m arguing that the blow-out in the number of parameters “learned” by transformers actually indicates that they are already generating growing, contradictory, even chaotic, structure. So the only mistake is that we are trying to “learn” all of it at once. Rather, we should focus on its character as a chaotic generator, and generate the structure appropriate to each new prompt/context at run time.
And I’m suggesting we can do that by seeking cause-effect prediction-maximizing structure in prompt-specific resonances over a network of sequences, initially language sequences. This is in contrast to transformers, which look for the same cause-effect prediction-maximizing structures by following prediction-entropy gradients to static minima, and then only select between the potentially contradictory structures they have learned, using a prompt, at run time.
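To make that contrast a little more concrete, here is a minimal toy sketch, in Python, of what I mean by generating prompt-specific structure from a network of sequences at run time. It is only an illustration under my own assumptions: the tiny corpus, the Jaccard overlap score, and the 0.3 threshold are placeholders, not a claim about what a real system would use.

```python
# Toy sketch only, not the proposal itself. It builds a "network of sequences"
# (observed word-to-word transitions) and then, for each new prompt, groups
# vocabulary items that predict the same continuations. The grouping is generated
# fresh per prompt at run time, rather than being a set of static parameters
# learned once. Corpus, overlap score, and threshold are illustrative assumptions.

from collections import defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a mouse",
    "a dog chased a ball",
]

# Network of sequences: for each word, the set of words observed to follow it.
successors = defaultdict(set)
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for prev, nxt in zip(words, words[1:]):
        successors[prev].add(nxt)

def shared_prediction(a, b):
    """Jaccard overlap of what the two words predict (their successor sets)."""
    sa, sb = successors[a], successors[b]
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

def prompt_groupings(prompt, threshold=0.3):
    """For each prompt word, gather the words that 'resonate' with it, i.e. share
    enough of its predictions to stand in for it in this context."""
    return {
        word: {other for other in vocab
               if other != word and shared_prediction(word, other) >= threshold}
        for word in prompt.split()
    }

# e.g. {'the': {'a'}, 'cat': {'dog'}, 'sat': set()}
print(prompt_groupings("the cat sat"))
```

The point of the sketch is only that the groupings are not learned once and stored as parameters; they are regenerated for every prompt from the raw sequence network, so different prompts are free to produce different, even contradictory, groupings.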