I just finished reading “A Thousand Brains” by Jeff Hawkins - great book (here is my review)! He identifies cortical columns in the human neocortex as the key component of intelligence: they implement reference frames for forming and storing knowledge.
I wonder now if this is correct: his theory would be falsified if there were an intelligent brain on Earth that lacks these cortical columns. I would argue that certain birds (like corvids or parrots) and octopuses also have advanced cognitive capabilities like tool use and metacognition. From a quick online search it seems they all exhibit some columnar structures as well - though I lack the expertise to judge whether corvids, parrots and octopuses all share a similar cortical columnar brain structure.
In general, one could learn which structural features of brains are truly essential for implementing intelligence by looking for shared structures among intelligent species whose cognition evolved mostly independently.
Cortical columns are the visible manifestation of replicating a single functional neural unit. An ancestor has 100 units; a genetic variation yields a descendant with 200, which is smarter and out-competes the rest. Humans have something like a million units (columns), and voilà: intelligence.
But columns are just the mammal way. Any way will do as long as it manifests as a repeating unit. The evolutionary drivers and the genetics are much the same across species, so if the advantage comes from intelligence, look for gene expression in repeating neural units as the shared solution - reached by very different paths.
There are some recent papers about the emergence of geometric forms within neural networks, especially after grokking. The network can then generalize because the test-time data still falls on the geometric form.
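As a toy illustration of that idea (not reproducing any specific paper): in the grokking studies of modular addition, the learned embeddings reportedly settle onto a circle, where adding angles implements adding residues. The sketch below hard-codes that geometry - the modulus `n` and all names are my own choices - to show why inputs that land on the form generalize to unseen pairs:

```python
import numpy as np

n = 12  # modulus for the toy task; an arbitrary choice


def embed(k):
    """Place residue k on the unit circle - the geometric form
    reported to emerge after grokking modular addition."""
    theta = 2 * np.pi * k / n
    return np.array([np.cos(theta), np.sin(theta)])


def add_on_circle(a, b):
    """Compute (a + b) mod n purely from the geometry:
    multiplying the embeddings as complex numbers adds their angles."""
    za = complex(*embed(a))
    zb = complex(*embed(b))
    theta = np.angle(za * zb) % (2 * np.pi)  # rotation = addition
    return int(round(theta * n / (2 * np.pi))) % n


# Every pair - including ones you could hold out as "test data" -
# is answered correctly, because every input lies on the circle.
assert all(add_on_circle(a, b) == (a + b) % n
           for a in range(n) for b in range(n))
```

In an actual network the circle is discovered by training rather than hard-coded; the point of the sketch is only why a geometric representation, once found, covers the held-out data too.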
From some experiments I did (evolution versus gradient descent), I've come to view deep neural networks as hierarchical associative memory.
Because the memory is hierarchical you can even have factorized geometry. And quite a strong capacity to reason can spontaneously emerge.
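The single-level version of associative memory is easy to sketch; the hierarchical stacking is the speculative part. Below is a minimal Hopfield-style memory (Hebbian outer-product storage, sign-update recall) - the sizes, seed and names are my own assumptions, and it shows only the one-level mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)


def store(patterns):
    """Hebbian outer-product rule; patterns are +/-1 vectors."""
    P = np.array(patterns)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0)  # no self-connections
    return W


def recall(W, cue, steps=10):
    """Iterate the sign update until the state settles on an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s


d = 64
patterns = [rng.choice([-1, 1], size=d) for _ in range(2)]
W = store(patterns)

# Corrupt 6 of 64 bits and recover the clean pattern from the cue.
noisy = patterns[0].copy()
flip = rng.choice(d, size=6, replace=False)
noisy[flip] *= -1

assert np.array_equal(recall(W, noisy), patterns[0])
```

The factorized-geometry idea would then correspond to stacking levels, with higher levels storing patterns over the attractors recovered below - but that stacking is the conjecture, not something this sketch demonstrates.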
Maybe there can be a “Hierarchical Memory is all you Need” paper.
Also the fact I am back on this forum means I didn’t earn a single red cent from AI in the past 6 months.
The cerebellum is the slow learner, acquiring instinct from the fast-learning unit that is the neocortex. They work together, with the hippocampus detecting what counts as a surprise.
Can the neocortex create AGI without a cerebellum? Yes, but it would be dysfunctional and very slow to act, because the various attention-type pools/groupings (dentate gyrus, thalamus, basal ganglia, etc.) would need to iterate to derive what the cerebellum does in cascade much faster. The cerebellum learns the shortcuts during sleep, while the neocortex forms lateral hierarchical constructs.
Can you have AGI with just a cerebellum? No. That's where current AI is: it can't learn integrated hierarchical concept influences.
Can you have AGI with the current iterative LLM-type process? No, because new context-window input is not integrated into the hierarchical memory. Plus, the current approach learns vastly inefficiently in reverse, not forward via the hippocampus route.