I think the brief quote from Jeff Hawkins about prediction being required for intelligence (and the host's subsequent comment that it is all that is required for intelligence) is a considerable oversimplification and reflects only a small portion of current thinking around HTM theory.
I don't claim to speak for Jeff, but I believe he has long said that it is both prediction and action that lead to intelligent behavior. And if we really want to get down to it, it is the ability to predict the effects of many different actions, and then to choose an appropriate one, that makes intelligent behavior possible. (Another of the hosts adds this caveat later on, but by then they had moved well beyond the aforementioned quote.)
I agree with you @CollinsEM. I am trying not to be HTM-centric, but it seems to me that they tried to sum up this beautiful theory with a single top Jeff quote. Just for fun, note that there are some 15 other quotes from Jeff Hawkins on this site:
No, I'm with you. I have been trying to understand what he's talking about for years now. And while he has earned respect for his standing in the field, I find it highly suspicious that his explanations need to be so complicated.
If you drill down, it is all quite simple at bottom. Friston is applying the principle of least action to inference. He claims that the brain is a dynamical system whose trajectory through state space can be described by a kind of Lagrangian mechanics. The question then becomes: what trajectory minimizes some quantity?
In mechanics, the Lagrangian is the difference between kinetic and potential energy, and the action is the integral of the Lagrangian over time. The path of least action in a mechanical system will have, on average, lower kinetic and higher potential energy than any other possible path.
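In symbols (standard classical mechanics, nothing specific to Friston here):

$$
L = T - V, \qquad S[q] = \int_{t_0}^{t_1} L\big(q(t), \dot{q}(t)\big)\,dt, \qquad \delta S = 0,
$$

where the stationarity condition $\delta S = 0$ is what picks out the physical trajectory $q(t)$ from all the paths the system could conceivably take.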
The analogue to action in an inferential system (like the brain) is what Friston calls “variational free energy”, or the difference between model complexity and model accuracy. The path of least “free energy” will have lower complexity and higher accuracy, on average, than any other. It is related to concepts in algorithmic information theory like Kolmogorov complexity and minimum description length.
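For reference, here is the standard decomposition from the variational-inference literature (my notation, not taken from the episode): with observations $y$, hidden causes $\vartheta$, a generative model $p(y, \vartheta)$, and an approximate posterior $q(\vartheta)$,

$$
F = \underbrace{D_{\mathrm{KL}}\big[\,q(\vartheta)\,\|\,p(\vartheta)\,\big]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q(\vartheta)}\big[\ln p(y \mid \vartheta)\big]}_{\text{accuracy}},
$$

so minimizing $F$ means explaining the data as accurately as possible while departing from prior beliefs as little as possible.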
Note that Lagrangian and Newtonian mechanics describe the same trajectory. The Newtonian view works forward from forces, while the Lagrangian view works backward from constraints. If we are talking about a cannon ball in motion, then either approach will describe the same parabola.
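As a toy illustration (my own sketch, not from the discussion; the launch parameters are arbitrary): stepping Newton's second law forward in time and evaluating the closed-form parabola that the least-action principle also singles out give the same answer, up to integration error.

```python
# Toy check: Newtonian forward integration vs. the closed-form parabola
# (the trajectory that also extremizes the action). Standard library only.

g = 9.81              # gravitational acceleration, m/s^2
vx, vy = 30.0, 40.0   # initial velocity components, m/s (arbitrary choice)
dt = 0.001            # time step for the Newtonian integration, s

# Newtonian view: step forward from initial conditions under gravity.
x, y = 0.0, 0.0
while y >= 0.0:
    x += vx * dt
    y += vy * dt
    vy -= g * dt      # only gravity acts; vx stays constant

# Lagrangian view: the stationary-action path is the familiar parabola
# y(x) = (vy/vx) x - g x^2 / (2 vx^2), which lands at x = 2 vx vy / g.
x_analytic = 2 * 30.0 * 40.0 / g

print(f"Newtonian integration range:      {x:.2f} m")
print(f"Least-action (closed-form) range: {x_analytic:.2f} m")
```

The point is just that the two formalisms pick out the same curve; they differ in how you arrive at it.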
Similarly, it is possible for Friston’s backward-looking, constraint-optimization view of inference to be equivalent to a forward-looking view that navigates the state space step-by-step from initial conditions.
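To make that concrete, here is a minimal sketch (my construction, in the spirit of the toy models used in predictive-coding tutorials, not Friston's actual scheme): gradient descent on a free-energy-like objective is exactly a forward-looking, step-by-step trajectory, and it ends where the backward-looking constraint-optimization view says it must, at the exact posterior mean.

```python
# One hidden cause theta with a Gaussian prior, observed once through
# Gaussian noise. For a point estimate, F(theta) is (up to constants)
# the negative log joint:
#   F = (theta - mu_prior)^2 / (2*var_prior) + (y - theta)^2 / (2*var_obs)

mu_prior, var_prior = 3.0, 1.0   # prior belief about the hidden cause (arbitrary)
y, var_obs = 5.0, 0.5            # one noisy observation of that cause (arbitrary)

def dF(theta):
    """Gradient of the free-energy-like objective at theta."""
    return (theta - mu_prior) / var_prior - (y - theta) / var_obs

theta, lr = 0.0, 0.05
for _ in range(2000):            # forward-looking: descend step by step
    theta -= lr * dF(theta)

# Backward-looking: the constraint-optimization answer in closed form.
precision = 1 / var_prior + 1 / var_obs
theta_exact = (mu_prior / var_prior + y / var_obs) / precision

print(f"gradient descent: {theta:.4f}")
print(f"exact posterior:  {theta_exact:.4f}")
```

Both print roughly 4.3333: the dynamical, initial-conditions view and the variational, constraint view describe the same endpoint.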