Can HTM be thought of as a better-informed optimizer?

I’m going through a TensorFlow tutorial in which they build a simple model. They’ve gotten to the point where they explain the importance of optimizing the W and b variables via gradient descent.
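(For context, the kind of update that tutorial describes can be sketched without TensorFlow at all. This is just my minimal illustration, not the tutorial’s code; the toy data, learning rate, and step count are my own assumptions.)

```python
# Hand-rolled gradient descent fitting y = W*x + b to toy data
# by minimizing mean squared error (MSE).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated from W=2, b=1

W, b = 0.0, 0.0  # the variables being optimized
lr = 0.05        # learning rate (assumed value)
n = len(xs)

for step in range(2000):
    # Gradients of MSE with respect to W and b.
    dW = sum(2 * (W * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (W * x + b - y) for x, y in zip(xs, ys)) / n
    # Blind step downhill: the update uses only the local gradient,
    # not any notion of *why* the error occurred.
    W -= lr * dW
    b -= lr * db

print(W, b)  # converges toward W ≈ 2, b ≈ 1
```

The point of the sketch: each update is driven purely by the local slope of the loss, which is what I meant by the optimizer being “blind.”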

I thought, “Oh, that’s a heuristic, and it’s blind. HTM modifies itself in an intelligent way, where it can see exactly why it modified itself (the particular patterns that were recognized in the particular context it’s in).”

So, is it safe to say that HTM excels because its ‘optimization’ method is ‘better informed’ than the optimization method of neural nets (gradient descent)? Or is my intuition a little naive?


Can you please explain how mathematical optimization could take place in HTMs as it does in ANNs, or in any other way? Since HTM has no variables that function like those in ANNs, how is optimization even relevant here?