So now I’ve read some Calvin.
I really like the way he writes, and he provides lots of interesting details and stories in each chapter. At the end of the day, however, I’m not convinced by the central thesis about grids.
I’m well aware that my reluctance to accept it may be biased by the fact that such wave-based phenomena, if intelligence turns out to be so inherently based on them, would be inconceivably more difficult to model than simple, almost-amenable-to-sequential layers and areas. Also, let me reiterate that I’m not an expert in any of this, so correct me if I’m misinterpreting something.
To begin with - though that may be something of a personal feeling - after reading “The Cerebral Code” and “How Brains Think” (another nice read, btw), I could not shake the sensation that the Darwinian process underlying his model was somehow shoehorned in. Although I was quite intrigued by that proposition at first, by the end of the book his determination to find all six Darwinian-model ingredients in the brain’s inner workings looked like a hammer in search of a nail. A feeling conveyed even by the choice of the book’s very layout.
Inner Darwin aside, his reflections on the evolutionary side of the equation were insightful. And I see now why the idea of a neocortex evolving from a need for throwing accuracy matters to you, @bitking. He does seem to have a point there, and the fact that his solution seems to involve his cortical grid concept tickles my curiosity.
However…
For this self-reinforcement mechanism to end up producing wave interference patterns, which he then sparsifies into “high points”, we would need, in my view:
- First, a symmetry between connections to the proximal, “feedforward” part of the neighbouring neurons. This does not seem to be the case, as far as I can tell from my (limited) exposure to cortical column flow diagrams. Most same-layer lateral input from reasonably close columns seems to involve basal dendrites, which according to HTM are best described as producing a modulatory signal, effectively allowing cells that do have them to prevent cells that don’t from firing, all feedforward input being equal.
- Second, we’d need a super-fine regulatory system for it to make any sense (maybe that is what he means by “automatic gain control”? I’m not sure). I mean, cells should fire “whenever” a faint input asks them to, but also fire “only” as part of the grid, when most of their six neighbours are also firing?
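To make that second tension concrete, here is a toy rule of my own (not Calvin’s model - the lattice coordinates and the threshold of 4 are entirely made up): cells on a hexagonal lattice that keep firing only if enough of their six neighbours fire too. A lone cell activated by a faint input dies out immediately, which is exactly the regulatory tightrope I mean:

```python
# Toy illustration (mine, not Calvin's): hexagonal lattice in axial
# coordinates, where a cell survives a step only if >= threshold of
# its six neighbours are also active (the "grid" constraint).
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours(q, r):
    """The six axial-coordinate neighbours of hex cell (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def step(active, threshold=4):
    """One update: keep only cells with >= threshold active neighbours."""
    return {cell for cell in active
            if sum(n in active for n in neighbours(*cell)) >= threshold}

# A filled hex patch: only the fully surrounded centre survives,
# the ring cells each see too few active neighbours.
patch = {(0, 0)} | set(neighbours(0, 0))
print(step(patch))
# A lone cell (a "faint input") dies out at once.
print(step({(5, 5)}))
```

So under any purely local majority rule, isolated faint inputs and stable grid membership pull in opposite directions - hence my feeling that some very fine gain control would be needed on top.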
Also, the 100-step ceiling would seem to be hit quite soon if a pattern carrying semantic value has to:
- settle from unsorted activity into capture by an attractor
- propagate along the grid
- be detected as part of a spatiotemporal pattern
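To put rough numbers on that worry (my own back-of-the-envelope, not the book’s - the per-step cost is the usual Feldman-style figure, and the three phase costs below are pure guesses):

```python
# Feldman-style "100-step" budget: ~10 ms per neural firing step
# inside a ~1 s behavioural response leaves ~100 serial steps.
step_ms = 10
budget = 1000 // step_ms  # ~100 serial steps

# Hypothetical costs for the three phases above (pure guesses,
# just to show how quickly the budget could be eaten):
settle_into_attractor = 30
propagate_along_grid = 40
detect_spatiotemporal = 20

used = settle_into_attractor + propagate_along_grid + detect_spatiotemporal
print(budget, used, budget - used)
```

Even with guesses that generous, the three phases consume most of the budget before any downstream processing happens.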
I’m also quite uncertain how the maths would work out on the capacity side of the matter. I think I understand SDR statistical properties by now, but how lots of different attractors could share a common lattice I do not intuit well at all.
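For reference, the SDR side I do think I can sketch - this is the standard overlap-set counting as I remember it from the HTM/SDR material (double-check against Ahmad & Hawkins’ papers before trusting the numbers): the chance that two independent random SDRs of n bits with w active bits collide on at least theta bits.

```python
from math import comb

def p_false_match(n, w, theta):
    """Probability that two independent random SDRs (n bits, w active)
    overlap in at least theta bits: count the SDRs whose overlap with
    a fixed SDR is b, summed over b >= theta, over all C(n, w) SDRs."""
    return sum(comb(w, b) * comb(n - w, w - b)
               for b in range(theta, w + 1)) / comb(n, w)

# With HTM-ish numbers, an accidental half-overlap is astronomically rare.
print(p_false_match(1024, 20, 10))
```

So individual SDRs are extremely robust; what I don’t see is how that argument carries over once the “attractors” are spatial wave patterns competing for the same lattice.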
Besides, the need for ridges and small passages and such for the diverging copies required for “speciation” left me a little puzzled, and the long-range messaging operating on similar copies did too, quite a bit. In other words, my curiosity about “H” was not really satisfied here.
All in all, thanks for those links, @bitking. As I said, even with these reservations there were still lots and lots of interesting details and things worth thinking about in those books.
Ah. Also. Something fun occurred to me while reading the first chapter again:
> One technological analogy is the hologram, but the brain seems unlikely to utilize phase information in the same way
…or does it? Grid cell modules and their “overlap”, anyone?
Regards,
Guillaume.