I think part of what makes HTM difficult to use and adopt is that it’s hard to get useful information out of it. An HTM network forms its own representation of your data, and doesn’t care about making that representation easy for you to interpret. Which neurons eventually learn which patterns is pretty much random.
So you might ask yourself: if neural activity is more or less random (in terms of which neurons map to which patterns), how does the brain perform complex actions? How does it coordinate neurons to fire in specific patterns?
The solution is reinforcement learning. This is something Numenta seems to be putting off researching (probably because many of the things you’d want to apply it to run into roadblocks without all the sensory-motor inference work they’ve been doing over the past few years). The part of the brain most likely performing this is the Basal Ganglia, and I happen to have written a pretty lengthy set of posts a while back about how well-studied circuitry in the Basal Ganglia likely interacts with HTM to make this work.
Reinforcement Learning, put simply, is training a machine learning algorithm to solve a task by giving it a “reward / punishment” signal. If it’s doing a good job, you give it a “reward” signal that tells it to keep doing what it’s doing. If it’s doing a bad job, you give it a “punishment” signal that makes it less likely to respond that way in the same situation in the future.
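As a toy sketch of what that signal could do at the synapse level in an HTM-style network, here’s a reward-modulated permanence update in Python. Everything here (the function name, the learning rate, updating only the synapses that were active) is my own assumption for illustration, not anything from Numenta’s actual algorithms:

```python
import numpy as np

def reinforce(permanences, active_inputs, reward, lr=0.05):
    """Move the permanences of synapses that were just active up
    (reward > 0) or down (reward < 0); inactive synapses are untouched.

    permanences   : 1-D float array, one value per potential synapse
    active_inputs : boolean array, True where the input bit was on
    reward        : scalar, positive for reward, negative for punishment
    """
    delta = lr * reward * active_inputs.astype(float)
    return np.clip(permanences + delta, 0.0, 1.0)
```

You’d call something like `reinforce(perms, x, reward=+1.0)` after a good outcome and `reinforce(perms, x, reward=-1.0)` after a bad one, so the same update rule handles both halves of the reward / punishment signal.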
For example, if you’re trying to get HTM to learn to predict branches, you need one population of neurons that fires when you expect the branch to be taken, and another that fires when you expect it not to be taken. There may also be a group of neurons that don’t impact the decision at all (this happens in the brain, and likely keeps track of patterns that don’t push the decision either way but are still necessary for good predictions). What you need to do is introduce bias signals, likely through targeted inhibition, to get the outputs you want. Then, to make a decision, you just count the number of neurons firing in each group, as in the sketch below. If more “taken” neurons are firing than “not taken” neurons, you predict that the branch will be taken, and vice versa.
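Here’s a minimal sketch of that decision step, assuming the group memberships are fixed index sets (how neurons end up assigned to each group is exactly what the reward signal has to shape over time):

```python
import numpy as np

def predict_taken(active, taken_idx, not_taken_idx):
    """Decide a branch prediction by counting votes.

    active        : boolean array, True for every neuron firing this step
    taken_idx     : indices of neurons in the "predict taken" group
    not_taken_idx : indices of neurons in the "predict not taken" group

    Neurons in neither index set are the neutral group; they shape the
    network's internal state but get no vote here.
    """
    return active[taken_idx].sum() > active[not_taken_idx].sum()
```

Ties fall through to “not taken” here, which is an arbitrary choice; a real predictor might break ties with a static heuristic instead.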
The way I suggest it in that post would require a second HTM layer (a hierarchy, I guess) for modelling a Striatum. That would probably be necessary to get prefetching working, but branch prediction could probably be done with just a Striatum layer (which is pretty similar to a regular HTM layer, except that the neurons are segregated into three groups that respond to reward signals differently).
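To make the three-group idea concrete, here’s a toy sketch of what such a layer’s learning rule might look like. The “go” / “no-go” split loosely mirrors the two reward-sensitive Striatal populations from those Basal Ganglia posts; the specific activation and update rules below are assumptions I’m making for illustration, not the actual circuitry:

```python
import numpy as np

class StriatumLayer:
    """Toy sketch of a Striatum-like layer: one input drives three
    populations whose synapses react to the reward signal differently.
    The exact rules here are illustrative assumptions."""

    def __init__(self, n_inputs, n_cells, seed=0):
        rng = np.random.default_rng(seed)
        # One permanence matrix per population.
        self.perms = {
            "go":      rng.random((n_cells, n_inputs)),  # strengthened by reward
            "no_go":   rng.random((n_cells, n_inputs)),  # strengthened by punishment
            "neutral": rng.random((n_cells, n_inputs)),  # learns regardless of reward
        }

    def active(self, inputs, group):
        """Crude activation: cells whose overlap with the input beats
        the population median are considered firing."""
        overlap = self.perms[group] @ inputs.astype(float)
        return overlap > np.median(overlap)

    def learn(self, inputs, reward, lr=0.05):
        x = inputs.astype(float)
        # Assumed polarities for how each group treats the reward signal.
        polarity = {"go": reward, "no_go": -reward, "neutral": 1.0}
        for group, perms in self.perms.items():
            firing = self.active(inputs, group)
            perms[firing] += lr * polarity[group] * x
            np.clip(perms, 0.0, 1.0, out=perms)
```

A branch predictor built on this would then count firing “go” cells against “no-go” cells, exactly as in the earlier vote-count sketch.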