Thanks. That history provides good perspective. However, the link to Hawkins’s critical follow-up presentation wasn’t immediately obvious.
If I correctly understand Hawkins’s talk, the current strategy at Numenta is:
Until the “function” of the macrocolumn is understood “bottom up” (in terms of the neurophysiology), revisiting the hierarchy “top down” (in terms of global function) is likely to be unsound. The apparently minor critique I would make of this is that even while driving research from the neurophysiology, there is nevertheless always a range of plausible global functions under consideration. This is, after all, what Numenta has basically stated about perception: that it is biased, primed by the plausible interpretations, aka “expectation”. The reason I say “apparently minor” is that it is indeed a minor critique: I’m sure Hawkins et al understand this is what they’re doing and, quite reasonably, expect others to as well. However, I say “apparently” because it is always good practice in science to be explicit about the hypotheses being tested: not individually or absolutely (ie: not in the Popperian “falsifiable” sense) but relatively, rank-ordering them in terms of plausibility given the ground-truth observations, which in this case are the neurophysiology.
This requires a review of the literature on macrocolumn function.
My particular interest in “lexicon induction” is that my review of that literature leads me to believe the most promising macrocolumn function involves a lexicon specific to each macrocolumn. Once this lexicon is established, global phenomena such as syntax, grammar and semantics in natural language emerge naturally.
The mystery is how the lexicon is established for each macrocolumn and how the lexicons are factored between macrocolumns.
Of course, I’m not suggesting that Numenta drive their work on the macrocolumn function from this hypothesis because that would be all too Popperian. No, I’m suggesting a more relativistic Plattian “strong inference” approach.
In this regard, I would also strongly suggest taking very seriously “Universal Measures of Intelligence” involving Kolmogorov Complexity when establishing the relative plausibility of unsupervised learning hypotheses. I don’t believe any of the current major efforts in AGI are doing this, although it has been known to be the correct approach since the early 1960s, with the papers by Solomonoff, Kolmogorov, Chaitin et al. These were coincident with Platt’s paper on strong inference.
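Since Kolmogorov complexity itself is uncomputable, in practice this kind of relative ranking of hypotheses is approximated with real compressors. A minimal sketch of the idea, using zlib as a stand-in compressor and the normalized compression distance of Cilibrasi and Vitányi (the “hypotheses” and data here are made-up toy strings, not anything from Numenta’s work):

```python
import zlib

def approx_complexity(x: bytes) -> int:
    # Compressed length is a computable upper bound on Kolmogorov complexity.
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: a computable proxy for the
    # (uncomputable) Kolmogorov-based information distance between x and y.
    cx, cy, cxy = approx_complexity(x), approx_complexity(y), approx_complexity(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two toy "hypotheses", each represented by the data it predicts.
hypotheses = {
    "periodic": b"abcabcabc" * 50,   # predicts a period-3 pattern
    "digits":   b"0123456789" * 45,  # predicts a digit cycle
}
test = b"abcabc" * 40  # observations to be explained

# Rank hypotheses by how little extra description the observations
# require given each one -- smaller distance = more plausible.
ranking = sorted(hypotheses, key=lambda h: ncd(hypotheses[h], test))
print(ranking)  # expect "periodic" to rank first
```

This is only a crude relative-plausibility ranking under the stated toy assumptions, but it illustrates the Solomonoff/MDL intuition: the hypothesis that lets the observations be described most cheaply ranks highest.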
PS: The top of Google’s search results for strong inference and Popper is an execrable paper titled “Fifty years of J. R. Platt’s strong inference.” I won’t dissect it but will simply quote the last, very Popperian, sentence: “It [strong inference – jab] is a message that can benefit anyone who is interested in tackling difficult problems – we must be bold enough to assume that one of our ideas is correct [emphasis – jab], and yet we must have the humility to abandon those ideas that don’t stand up to scrutiny.”