Jeff’s recent book motivated me to better understand the evolution of the neocortex. I came across a 2019 chapter in the series Progress in Brain Research, “The origin and evolution of neocortex: From early mammals to modern humans”, by Jon H. Kaas, a Professor of Psychology who started out as an assistant professor of neurophysiology and has “studied the brain for some 40 years”. I will present a few quotes and raise some concerns regarding the Thousand Brains Theory (TBT) hypothesis.
“Comparative studies of the subdivisions of neocortex of mammals indicate the number of cortical areas varies greatly, but a small number of cortical areas, about 20, are consistently found across members of the three major branches of mammalian evolution”
“Early mammals were small and largely nocturnal.” This suggests vision was not as highly developed as other senses while dinosaurs were still around, up until about 65 million years ago.
“it appears that the overall pattern of the variation in neuron packing densities was more even across cortex in early primates and not that much different from other mammals, but increased as anthropoid primates emerged, and increased further with the evolution of apes and humans. This variation is important because it is part of the specializations of cortical areas for different functions.”
“Motor cortex lacks an obvious layer 4 of small neurons, and is specialized for summing information by having large pyramidal neurons with large dendritic arbors. In contrast the small layer 4 neurons of granular prefrontal cortex are ideal for preserving information. We see the starts of these specializations in the neocortex of small strepsirrhine primates, and these beginnings of neuron specializations are greatly enhanced in monkeys, apes and humans.”
“chimps and humans shared the common ancestors 6–8 million years ago”
“Language is a unique human accomplishment that is completely dependent on new features of the human brain. First, the neural mechanisms that mediate language are highly lateralized to the left cerebral hemisphere. The major advantage of such an arrangement is that it avoids the need for massive connections between the two hemispheres, which would be costly in conduction time, energy, and bulk”
“Language appears to depend on sub-networks that were derived from cortical networks for object recognition and action that emerged in early primates, the so-called ventral and dorsal streams of processing for vision that have been joined by auditory and somatosensory components.”
Jeff has claimed that the rapid expansion of the neocortex justifies the idea of a common cortical algorithm that is copied many times over. From TBT, p. 26: “the major expansion of the modern human neocortex relative to our hominid ancestors occurred rapidly in evolutionary time, just a few million years. This is probably not enough time for multiple new complex capabilities to be discovered by evolution, but it is plenty of time for evolution to make more copies of the same thing.”
A few million years is a long time for evolution - enough to get from our common ancestor with chimps to modern humans. The specialization of the language areas is a concrete example of just how fast evolution can change brain structure when driven by functional adaptation.
There are over 200 specialized regions in the human neocortex, and at least 20 of those have been evolving for over 65 million years. If there were a common algorithm that leads to increasing intelligence by replication, it is hard to see why brain size increased rapidly along the human evolutionary line only in the last few million years. This seems to be a strong case for there being different cortical “algorithms”. Large parts of our neocortex still use structures similar to those of mammals from 65 million years ago. I think the concept of “functional shift” makes a lot of sense in fitting the puzzle together: for example, a functional shift in visual processing could give rise to the new functions of language. Support for this would be structural differences between the different regions - and those are observed.
TBT brings other arguments to bear for a common algorithm:
“If I showed you two silicon chips with nearly identical circuit designs, it would be safe to assume that they performed nearly identical functions.” This surprised me, coming from an engineer - nearly identical circuits can implement radically different functions in digital logic. The combinational logic that makes up the bulk of a chip looks much the same everywhere unless you put it under a very powerful microscope and trace each connection. We could draw an analogy with the similarity of connectivity in the neocortex. But the algorithm implemented by the combinational logic of a USB interface is nothing like the function of an interrupt controller. This is an example where the smallest details absolutely matter. And at the level of individual neurons there are not even two identical neurons (at least the transistors in a digital circuit are alike).
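To make the point concrete, here is a toy sketch of my own (nothing from TBT, and a Python simulation standing in for real gates): two three-bit shift registers built from identical flip-flops in an identical topology, differing only in whether the feedback wire passes through a single XOR gate.

```python
# Toy illustration: two tiny sequential circuits built from the same three
# flip-flops. The only structural difference is one feedback wire:
# circuit A feeds bit1 XOR bit0 back in (a linear-feedback shift register),
# circuit B feeds bit0 straight back in (a plain ring/rotate register).
# Under a microscope the layouts would look almost identical.

def step_lfsr(state):
    b2, b1, b0 = state
    return (b1 ^ b0, b2, b1)   # feedback passes through one XOR gate

def step_ring(state):
    b2, b1, b0 = state
    return (b0, b2, b1)        # feedback wire goes straight back

state_a = state_b = (1, 0, 0)
for t in range(8):
    print(t, state_a, state_b)
    state_a, state_b = step_lfsr(state_a), step_ring(state_b)
```

Running it, circuit A walks through all seven non-zero states in a pseudo-random order before repeating, while circuit B just rotates its bits with period 3. One extra gate, radically different behaviour - which is the sense in which the smallest details matter.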
Another argument from TBT is “the function of neocortical regions is not set in stone. For example, in people with congenital blindness, the visual areas of the neocortex do not get useful information from the eyes. These areas may then assume new roles related to hearing or touch.”
This is an interesting point. Given that vision uses about 30% of the neocortex, and that we grow up in an environment intensely geared toward achieving academic success, a fully repurposable cortex would predict that blind people - with roughly 30% more cortex freed for other tasks - should be heavily over-represented among geniuses. They are not. It is not that being blind precludes genius; we know that blind people have enhanced touch and hearing. This argues for a neocortex that has a particular algorithm for sensing, and something very different going on when it comes to other human traits like language ability.
The final argument from TBT supporting the common algorithm: “Finally, there is the argument of extreme flexibility. Humans can do many things for which there was no evolutionary pressure. For example, our brains did not evolve to program computers or make ice cream—both are recent inventions.”
Many other animals can also demonstrate extreme flexibility. It is more impressive to see a dolphin learn to jump through hoops than a human learning to write with a keyboard instead of a pen. The general abilities of humans are very impressive - to humans. I suspect dolphins think we are quite stupid and overly specialized, given our inability to understand the language dolphins use. And much of the brain is highly specialized - for example, the roughly 30% typically used for vision. We can’t just close our eyes and suddenly become 30% better at general human skills like math. That tells the opposite story: parts of the neocortex have more general capabilities and parts are more specialized. Again, this seems to argue against a common algorithm.
I would really like to believe the idea of a common algorithm. It would put general AI just around the corner. But I need more compelling arguments before abandoning other theories of intelligence. Perhaps my main concern is that the reverse-engineering approach assumes the brain is a machine and that, like human-made machines, it is composed of simple parts. But we build machines that way because we are too stupid to engineer complex systems the way evolution does. In this regard, it will be interesting to see where Google gets by using AI to design digital circuits and AI algorithms; my suspicion is that the resulting circuits and algorithms will be impossible for humans to understand and will radically outperform anything a human designer can come up with using compositional thinking.