Why is HTM Ignored by Google DeepMind?

In two words, deep learning. It’s kind of a taboo topic on this forum, but it shouldn’t be. It works. It’s also not that far from HTM. It really doesn’t deserve all the fear and derision that it gets from the HTM community.

Here’s what I have experienced as the one difference between deep learning and HTM.

It’s not online learning. Deep nets do this quite well by using a bastardized interpretation of hippocampal replay.

It’s actually not one-shot learning either. HTM does this by forming sparse connections. Gradient descent learning has to do it by running a bunch of training iterations per input sample, but both can do it well (arguably backprop does it even better).

It’s not sequence learning, or multiple predictions, temporal hierarchy, efference copy for self predictions, or temporal-memory style context splitting. Recurrent deep networks have been used to do all this stuff.

It’s sparsity. Without sparsity of population activity, sparsity of connections, and independently plastic connection sites on neurons, you will catastrophically forget as a result of your dense gradient descent updates. No matter how big your replay minibatches are, they can’t be big enough to replay your entire life.
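To make the interference argument concrete, here’s a toy numpy sketch (all sizes and step sizes are made up for illustration, not drawn from any HTM or DL codebase). A dense update disturbs the stored response no matter what, while a sparse update, even a much larger one, usually leaves it untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2048, 40                      # 2048 synapses, ~2% of bits active (SDR-like)

def random_sdr():
    v = np.zeros(n)
    v[rng.choice(n, size=k, replace=False)] = 1.0
    return v

stored, new = random_sdr(), random_sdr()   # two sparse patterns; their overlap is tiny
w = rng.normal(scale=0.1, size=n)
before = w @ stored                        # the "memory": response to the stored pattern

# Dense gradient-style update: every weight moves a little (a stand-in for a gradient).
w_dense = w + 0.05 * rng.normal(size=n)

# Sparse HTM-style update: only the synapses active for the new pattern change,
# even though the step size is ten times larger.
w_sparse = w.copy()
w_sparse[new > 0] += 0.5

print("pattern overlap (bits):", int(stored @ new))      # usually 0 or 1
print("dense drift :", abs(w_dense @ stored - before))   # nonzero in general
print("sparse drift:", abs(w_sparse @ stored - before))  # 0.5 * overlap, often exactly 0
```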

So I’m actually a big fan of deep learning, and even learning by gradient descent. But the magic of HTM that hasn’t been fully exploited in mainstream machine learning is the heavy application of sparsity in many forms, which will enable true online one-shot learning for the first time in machines.

6 Likes

I really don’t want Deep Learning to be a taboo topic here. I only want to keep those conversations in #other-topics and not #htm-theory. Please, don’t be afraid to bring up DL topics anywhere you like. But I may move them.

3 Likes

Skipping over 90% of the above, I can add one data point. A few weeks ago I was talking after a meeting with someone from DeepMind about my proposed nucleic acid-based short-term memory for the cell (see off-topic post here), to give an idea of the level of rapport, and went on to describe HTM in my layman’s terms vis-a-vis ideas for living intelligence in another post here. His curiosity was piqued and he asked me to send a link to HTM. Then I remembered - it’s Numenta - and he said “oh” in a way that implied it was not new, and we went on to another topic. I wish I had asked more. So I think it is known, and has been discussed at least.

2 Likes

Are you saying that Deep Learning can be an “online” learner without training?

Does this mean that “labelled training data” in huge amounts isn’t needed in a huge preparatory step? And that once trained, the network can be used for solving problems outside of that trained problem domain, like HTMs can be?

I think this greatly oversimplifies and understates the difference: an HTM can be “reused” for completely different problem domains and can totally “rewrite” itself in real time to handle a different problem?

HTMs don’t require pre-labelled and painstakingly acquired training data - they learn on the problem itself, so to speak…

I don’t ask these questions to be “argumentative” - I’m just restating the advantages that make it worth working out the resource-consumption and size-related processing issues present in HTMs sized large enough to do “real” useful tasks? I think the “magic” of HTMs sets them extremely far apart from DL networks in reality? I just think we need to be patient, and eventually the rough spots will be worked out?

1 Like

Oh and the most crucial issue to me is the fact that DL’s rate of innovation can’t keep pace with HTMs because there is no guarantee that the technology can be continuously extended until general AI is reached. Biological solutions have a roadmap, and as long as the rigor of biological constraint is dutifully kept, there is a full expectation of eventually making it to the end-goal of Strong AI.

This doesn’t mean we shouldn’t use DL, but that there should be more emphasis on grooming academic talent toward the development of HTMs or other biological strategies?

4 Likes

I think there’s actually a lot of overlap between how HTM and DL work, at least if you just look at the spatial pooler.

Compare this explanation of deep learning to this explanation of SDRs. In both cases, neurons are partitioning a possibility space and labeling a particular subspace as the space of recognized patterns. The difference is that DL uses linear algebra and topology while HTM uses combinatorics and binary vectors.
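As a toy illustration of that difference (the thresholds and sizes here are arbitrary), both styles of unit are just carving up an input space and labeling one region “recognized”:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                               # a dense input vector

# DL-style unit: a hyperplane (linear algebra) splits the input space in two;
# "recognized" means the input falls on one side of it.
w, b = rng.normal(size=64), -0.5
dl_recognized = (w @ x + b) > 0

# HTM-style cell: a binary overlap count (combinatorics) against a handful of
# stored synapses; "recognized" means enough active bits line up.
bits = x > 0.9                                   # crude binarization of the input
synapses = np.zeros(64, dtype=bool)
synapses[rng.choice(64, size=8, replace=False)] = True
htm_recognized = (bits & synapses).sum() >= 2

print(dl_recognized, htm_recognized)
```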

Then, take into account the fact that DL is starting to use less and less precision for synapses (DL has been one of the major reasons that AMD and NVIDIA are supporting half-precision floating point, and I think Google’s TPUs might even be using 8-bit values), and the fact that modern DL often uses sparse vectors, and I think it goes to show that HTM and DL have quite a bit in common.

The differences now are that HTM takes low-precision synapses to an extreme, and that HTM has temporal pooling and feedback while DL does not. Sure, DL has backprop, but that’s just what you need to do its topological transforms efficiently. I’m quite sure that you could swap out the learning algorithm for something else if you changed the way it partitions the input space, and still wind up with a very efficient learning algorithm.

That said, there is another big difference between HTM and DL: reinforcement learning. Currently, if you want to train a DL network to do a useful task, it’s very easy. If you want to train HTM to do the same thing, you have very few options. I think what HTM needs is some form of reinforcement learning. There’s plenty of evidence that it occurs in the brain; we just need to find out how it works. I’ve suggested a potential starting place before; it may not be a perfectly biologically accurate method for reinforcement learning in HTM, but I think it would be a good place to start for anyone who wants to try to implement it.

I would be working on this myself, but I’m too busy right now. I’ve read a lot about HTM, and made a few small toy implementations in the past, but I haven’t had time to get into NuPIC at all, so I’m not sure what it would take to do this with the current framework. If no one else is willing to try, I’ll probably get around to it in a few months. My main side project right now is an experimental compiler, so I’ll need some code to test it on anyway.

TL;DR: HTM and DL are very similar when you ignore the temporal pooler, and are mostly solving the same kind of classification problem. The main advantage DL has is reinforcement learning, making it very easy to train DL networks to do useful things. I’ve suggested a biologically-plausible way to implement this in HTM before, but lack the time to try it now. If anyone else wants, feel free to try to implement it, as it’s probably a good place to start.

No, I’m saying a deep network can be trained online, just like HTM is. See Deep Q-Networks for example. The form of experience replay they use may be distasteful if you want biological plausibility, but the fact that it successfully learns online is beyond doubt.

There’s a machine learning paradigm called “unsupervised learning”, in which you don’t need labels on your training data. Autoencoders, generative adversarial networks, self-supervised learning in the form of prediction. These are all ways to do machine learning without labelled training data. And then there’s reinforcement learning, as Charles points out.

A lot of people have demonstrated transfer learning in deep networks already, including myself, in which you use a network that was trained for a particular task, e.g. ImageNet classification, and use it for a new task, like robot navigation, with or without fine-tuning the weights.
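As a sketch of what that looks like in practice, here’s the standard freeze-and-replace-the-head recipe in PyTorch (the model choice and the 4-class head are illustrative, not from any particular experiment):

```python
import torch
import torchvision

# Start from a network trained on ImageNet classification...
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# ...freeze the learned feature extractor...
for p in model.parameters():
    p.requires_grad = False

# ...and attach a fresh head for the new task (4 output classes is an
# arbitrary, hypothetical choice). Optionally unfreeze and fine-tune later.
model.fc = torch.nn.Linear(model.fc.in_features, 4)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```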

And you seem to be suggesting that HTM is somehow better at this? The brain is better at this, sure. HTM in its current state absolutely does not in any way improve over the state of the art in transfer learning. This would be a big deal and there would be Nature papers to read if it did.

See my comments above. These properties are far from unique to HTM. And you greatly overstate how well HTM, in its current state, can actually do them.

Of course. It’s a work in progress. But a developing technology has never been done any favors by overstating its capabilities. That’s how you overhype things and alienate people who take very seriously the kind of claims you’re making.

There is no guarantee that DL can “keep pace” with HTMs, but there is also no guarantee that HTM is a correct theory of the neocortex. Furthermore there’s a reasonable possibility that, by ignoring the constraints of biology, machine learning will far outpace bio-inspired solutions. It would be an act of pure unjustified faith to claim otherwise. Obviously I think a lot of inspiration can be found in the brain. But it’s unquestionably not the only way to solve intelligence, because there exist an infinite number of equivalent algorithms to solve any particular problem (proof left as an exercise for the reader).

I’ve heard from the HTM community a lot of criticism of machine learning, and of deep learning in particular. I’m all for scientific criticism, but unfortunately the specific criticism tends to betray a shallow understanding of the field and the technology. I do recommend going out and implementing solutions to real problems to evaluate the merits of these different technologies. The hype over any particular idea can safely be ignored in favor of real performance on real problems.

After all this I feel the need to reiterate my commitment to HTM. I think forming sparse connections on independently plastic connection sites to sparse population activity patterns is going to be the key. So HTM is going in the right direction. But claiming anything more grandiose at this time is just marketing, and therefore can and should be ignored.

2 Likes

I’m not a trained data scientist, and you have given me some specific areas in which I can do my own research, thank you! :slight_smile:

I’m not sure what “experience replay” is in any depth, but a cursory survey of the topic doesn’t quite imply the same autonomy as biological network plasticity? https://arxiv.org/abs/1511.05952

I’ve heard of supervised learning, but my take on it is that it is very much dependent on situational assumptions within specific boundaries? And that “online” as meant by classical learners is a bit different?

I haven’t made any claims that haven’t been made by Jeff or representatives from Numenta (that’s where I got them from :slight_smile:) ? If my statements are exaggerated to mean perfect performance when I say HTMs can learn a different problem paradigm just by being presented with the data - then yes - but of course I never implied that the technology was complete and perfect in its performance?

If this were true, then we would be a lot further along in the 70 years we’ve been up to this, and there would have been no AI winter? Of course, there is the possibility that we can “invent” intelligence, but the evidence so far suggests that development is very slow if not stuttering in its expansion.

I’m not an anti-DL person, but I definitely think that HTM technology, not DL, is the underdog in terms of public sentiment. Personally I have simply been repeating what I have heard are the shortcomings of DL (and only to point out the advantages of HTM research). I definitely think that the academic environment acts like weeds killing off anything that opposes it, and that DL hasn’t received any bullying from HTM advocates - it is predominantly the other way around.

I wish that I had investigated DL and classic NNs before I had heard about HTMs, because it is difficult to devote any serious time to a curriculum that to me represents the past - it’s just hard to generate the necessary enthusiasm to learn technologies that don’t, in my mind, represent the optimum or ideal. I have been within a hair’s breadth of signing up for some courses - and maybe I will, so that I can get up to speed on the current state - we’ll see.

Again, I phrase my statements in question form (if you look at the original sentences you quoted) because I acknowledge that I am not a data scientist and want to learn with more granularity what the differences actually are - but I did take a cursory glance at the networks you mentioned, and there are differences which imply more manual intervention (I think) than what is talked about in an HTM context?

Anyway we’re all here because we are advocates of the potential of HTMs, true.

I’m not sure what you mean, but experience replay works as follows. First a bit of background.

In both the brain, and in deep networks, learning is not immediate. HTM abstracts the forming and strengthening of synapses into a single step process, but it’s actually a multiple-step chain of biochemical interactions that requires many repetitions of the pattern you want to learn. So each time a neuron sees a pattern, its connections are only updated by a small amount, and many repetitions are required to fully solidify the pattern detector. Deep networks are trained in an analogous way, where each presentation of a pattern only updates the synapses a small amount.

So to actually learn anything, you need to replay your experiences many times. In the brain, this is believed to be done in the hippocampus, which is capable of very fast one-shot learning, but does not store patterns for long timescales. As you sleep, and at other relevant times, the hippocampus replays your experiences and gradually consolidates those patterns more permanently in parts of the brain such as the neocortex.

In deep networks, this is done by keeping a buffer of past experiences during training time, and randomly sampling from this buffer for each training update. This is done for two reasons. First, learning is achieved through small changes, so you need to see each experience multiple times to sufficiently adjust your connections. Second, in order to avoid biasing your network toward only recent experiences and forgetting the past (“catastrophic forgetting”), you need to sample evenly across your past experiences.

That’s experience replay. It’s not “part” of the network and it’s not used at evaluation time, it’s just a training-time mechanism for evenly sampling the possible things you can learn.
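For the curious, the mechanism boils down to something like this (class and parameter names are mine, not from any particular DQN implementation):

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay, as described above (sizes are illustrative)."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences fall off the end

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform sampling breaks temporal correlations and spreads each
        # small gradient update evenly over past experience.
        return random.sample(self.buffer, batch_size)

# Training loop shape: interact, store, then learn from a random minibatch.
#   buffer.add(s, a, r, s2, done)
#   batch = buffer.sample()
#   ...one small gradient step on `batch`...
```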

Do you mean unsupervised learning? In supervised learning, you are mapping inputs to known outputs, like classifying digits. In unsupervised learning, you are discovering structure in the input data, and encoding it with useful representations, like an SDR encoder or the spatial pooler. And online learning just means you’re processing an ongoing input stream of indefinite length, and updating your model as you go.

Machine learning is capable of generalizing remarkably well, but it always depends on the training data you give it. There’s no magic algorithm that can learn things it wasn’t shown. The same is true of HTM, and the brain.

I strongly disagree with this. Deep learning as we see it today was mostly invented around 1980. The problem was not with engineering approaches to AI; we just didn’t have fast computers and tons of training data back then. And before anyone complains about the huge amount of training data required, take a second to think about how much training data humans have. There are about 30 million seconds in a year, and if we assume we process input at 10Hz or more, that’s hundreds of millions of training examples per year - and humans don’t become useful until after more than a decade of training.

If you think development in AI is very slow right now, then I’m not sure what to tell you. Look at a newspaper? And almost none of the researchers driving this development are neuroscientists; many don’t know or care much at all about the brain. I think the brain is a useful source of inspiration. But it is absolutely not certain that it’s the only way, or even necessarily the best way, to make working AI systems.

That kind of thing in academia can seem like bullying. We call it rational criticism, and it’s the reason science works. It provides an incentive to prove your claims beyond a doubt, and that’s a healthy goal for all developing technologies. No one can argue with you when your technology is beating all the competition. Until then, more work is needed!

I highly recommend doing that. I’ve obtained dozens of useful insights in the process of understanding how deep learning works. Check out the universal encoder I posted a while ago. It’s written in TensorFlow which is powerful enough to build any deep network you want, and my code should be easy to understand.

It’s very hard for me to understand how the very cutting edge of AI technology, beating every benchmark it’s been tried on, solving problems computer scientists wouldn’t have dreamed were possible to solve even ten years ago, can represent the past. I can only assume that sentiment is coming from inexperience. Definitely have a look at the technology. Play around with it. And take what you learn and use it to improve HTM!

1 Like

Yes, I mistyped that. I meant to say “unsupervised”. But I’m not talking about a “magic” algorithm, just that the network will start to learn changes in the input data and then start predicting them. Learning a new problem with HTMs doesn’t depend on a magic-bullet training set that anticipates all future problem domains they might encounter; they merely start learning the current data being introduced. What in DL land can learn new data without a properly prepared training set? This is the way HTMs work - maybe not perfectly, and maybe not accomplishing much given the processing constraints, but they “constantly” learn. Is there a DL circuit that does that? This is the major advantage that Jeff Hawkins talks about with HTMs, isn’t it? I did do some reading on the DQNs you referred to, and some others, but I haven’t used these of course.

I don’t have the background to say definitively what the differences are (other than the obvious structural differences), but I also don’t absolutely trust that there is parity between the capabilities of DL tech and HTMs - I’m sure Jeff Hawkins wouldn’t be propounding HTMs as an advancement on the current state of ML technology if there were no advantage in general applicability and online learning.

But I am grateful that I can talk to Siri, and other NLP advancements, but you won’t get me in a self driving car in human traffic in the near future, I can tell you that! :stuck_out_tongue:

This is the essence of all online learning algorithms, DL and HTM included. The training set is the continuously arriving input stream. When the statistics of the input stream change, the model adjusts to fit the new patterns. DQN is one example of learning online*, and if the task domain changes (let’s say you change how rewards are delivered) then the model will adapt to that. In fact, this is the concept behind a technique in RL called “reward shaping”, where you give the network an easier task to solve at first in order to learn useful preliminary behaviors, and then progressively increase the difficulty of the tasks in order to coax it into learning more complex skills.
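For concreteness, a hedged sketch of the reward-shaping idea (the environment, distance measure, and thresholds are hypothetical, not from any published setup):

```python
def shaped_reward(distance_to_goal, stage):
    """Curriculum of rewards: dense and easy early, sparse and strict later."""
    if stage == 0:
        # Early stage: dense reward for any progress toward the goal at all.
        return -distance_to_goal
    # Later stage: sparse reward, only actually reaching the goal counts.
    return 1.0 if distance_to_goal < 0.1 else 0.0
```

The online learner simply adapts to the new reward statistics as they arrive; nothing about the network itself changes between stages.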

So HTM has no monopoly on adapting to a changing world. It does it in a much more biologically appealing way, certainly, and as a result it has the potential to explain the brain, which most deep networks do not. But the capability to adapt is not beyond mainstream machine learning. As I mentioned above however, HTM has a silver bullet that should in principle help it adapt more quickly to a changing world, which is the sparsity of its learning updates. So each HTM synaptic update is less likely to interfere with the previous things you’ve learned than would a dense update in a standard deep network, and you can be more aggressive with your updates as a result, therefore learning faster.

I really do view the benefits of sparsity as the sole performance advantage of HTM. But that advantage is beyond massive, so it is certainly worth exploiting.

(*DQN is a bit of a special case because of experience replay: when the task changes the replay buffer will get stale, full of data about the world as it used to be, rather than it is now. A better example would be A3C, the new-ish hotness in reinforcement learning, which does on-policy updates using the present experience instead of a replay buffer, in a much more “online” way.)

1 Like

Really enjoying this discussion, and very useful to me as a relative newcomer to HTM. Thank you!

I just wanted to pick up on something that @Charles_Rosenbauer mentioned in an earlier post:

Does anyone have any thoughts about this, or how we might approach a solution?

… that is, in addition to this post by @Charles_Rosenbauer which deals with some of the biological aspects.

This is one area I have been exploring recently, and there are others on the forum who are actively working on it as well. To me this is the obvious next function to understand after sensory-motor integration (or rather part of it, IMO), so I wouldn’t be surprised if Numenta begins tackling it themselves in the not-too-distant future. You can follow my project on this thread. Lately I’ve gotten a little side-tracked writing my own implementation of semantic folding to try to understand that concept better, but I’ll be getting back to RL soon.

2 Likes

“A significant component of the DQN training algorithm is a mechanism called experience replay [5]. Transitions experienced from interacting with the environment are stored in the experience replay memory. These transitions are then uniformly sampled from to train on in an offline manner. From a theoretical standpoint this breaks the strong temporal correlations that would affect learning online.” - Torch Dueling Deep Q-Networks

That does not appear to be online.

“In both the brain, and in deep networks, learning is not immediate. HTM abstracts the forming and strengthening of synapses into a single step process, but it’s actually a multiple-step chain of biochemical interactions that requires many repetitions of the pattern you want to learn. So each time a neuron sees a pattern, its connections are only updated by a small amount, and many repetitions are required to fully solidify the pattern detector. Deep networks are trained in an analogous way, where each presentation of a pattern only updates the synapses a small amount.”

Within seconds, brain tissue experiences structural changes that can affect performance. Even without a hippocampus, short-term memory works and learning can occur; it is just quickly forgotten and not permanent. The need for the hippocampus probably has to do with metaplasticity rules in the brain. If the higher areas have neurons active over longer periods of time compared to areas lower in the hierarchy, they must have different requirements for making changes to permanence. Metaplasticity can supposedly solve catastrophic forgetting and drastically improve the memory capacity of a neural system.

As regards online learning, the brain can learn while being active in the environment, and can quickly - even within seconds - change and adapt to novel information without being taken offline; that is, it can experience changes, even drastic ones, without interrupting waking activity. True, in some cases it takes time to improve performance, but in simpler cases even drastic improvement is possible in seconds.

“There are 30 million seconds in a year, and if we assume we process input at 10Hz or more, that’s hundreds of millions of training examples per year, and humans don’t become useful until after more than a decade of training.”

Those are not millions of unique labelled data points; a baby may spend most hours asleep, and the few hours awake it may spend looking at one or two toys, perhaps even a blank wall. The number of unique voices and sentences can be quite limited. Yet in a few years it will eclipse most anything, and some can even do advanced mathematics and multiple languages.

1 Like

As far as I know, @Paul_Lamb and I have been working on it for quite some time among the HTM forum. You can just click on his name and check out the discussions he is mainly involved in :slight_smile: Other than that, there are the works of Otahal [1] and Gomez [2], which involve coupling HTM with reinforcement learning. So we are hopefully getting there :slight_smile:

In 10 days I will present my MS thesis, titled “Hierarchical Temporal Memory Based Autonomous Agent For Partially Observable Video Game Environments”: a real-time, online HTM architecture combined with TD(Lambda) and guided by research on computational models of the basal ganglia (a minimal TD(Lambda) sketch appears after the references below). It can solve simple navigation tasks in a 3D video game environment, with some actual results, via its visual sensor. I will for sure share it here when it is presented.

[1] https://dspace.cvut.cz/bitstream/handle/10467/21143/F3-DP-2014-Otahal-Marek-prace.pdf
[2] http://studentnet.cs.manchester.ac.uk/resources/library/3rd-year-projects/2016/antonio.sanchezgomez.pdf
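For readers who haven’t met TD(Lambda) before, here is a minimal tabular sketch of the update my architecture builds on (state counts and constants here are made up; the thesis version is integrated with HTM rather than a table):

```python
import numpy as np

n_states, alpha, gamma, lam = 100, 0.1, 0.95, 0.9
V = np.zeros(n_states)   # value estimate per state
e = np.zeros(n_states)   # eligibility trace per state

def td_lambda_step(V, e, s, r, s_next):
    """One fully online update: credit flows back along recently visited states."""
    delta = r + gamma * V[s_next] - V[s]   # TD error for this transition
    e *= gamma * lam                       # decay all traces (in place)
    e[s] += 1.0                            # mark the current state as eligible
    V += alpha * delta * e                 # nudge every eligible state's value

# Example call after observing one transition (state 3, reward 1.0, next state 7):
#   td_lambda_step(V, e, s=3, r=1.0, s_next=7)
```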

5 Likes

I can’t wait to see it!

3 Likes

It works, right? I mean, many people above have emphasized more practical reasons, but even if you neglect the pragmatics, HTM is capable of results even at this stage. When compared to cutting-edge machine learning techniques, HTM is supposed to perform worse on a single problem but also supposed to perform with considerable accuracy over multiple types of streaming data, inference types, and problems without any parameter tweaks to its overall model - which I believe ML cannot do; HTM can, even at this point. Again, not hating on ML, even though I’d like to, because it gets marketed as AI. I felt as though you are not being objective and fair about this.
I also feel as though some noteworthy properties of HTM are being added one at a time to ML algorithms, and they do work there. Not saying that those ideas are taken from HTM, but that those principles are already working in HTM.
Apologies if it seems rude. I think it deserves clarification.

I see a lot of people saying that HTM doesn’t work, so would anyone care to explain why DARPA built their own version in their 2015 budget, including plans in 2016 for dedicated hardware on their neural chips? Maybe we are getting the civilian version :sweat_smile:

“Develop a hierarchical temporal memory (HTM) algorithm including new data representations, low precision and ability to adapt and scale.”

1 Like

I think the confusion is with the use of the term “works”.

A metaphor: if your goal was the commercial transport of more than 50 passengers by airliner, classical ML techniques would be like propeller airplanes, and HTM would be a jet engine in the development stages (not yet shipped with commercial airliners). Everybody “knows” the potential of jets, but more research and development are needed to make them viable for everyday application.

In HTM’s case, a lot more development is needed, but its development is accelerating rapidly, its potential is long-range, and it is expected to deliver on its promise.

The term “works” is not really a valid one here. HTM “works”, but not yet to the level of its potential. But it’s that GREAT potential that thrusts it into comparison with techniques that are more applicable at the present time.

It’s like someone has the idea of a “wheel” and immediately you can see the (conceptual) improvement over “skids”, but then someone immediately asks if it works! Well, you first have to build the thing before you can judge that! :slight_smile: HTM is more “developed” than that, but the same mechanism applies.

To me it seems there are two differences between neural networks and HTM: the learning rule and the data representation. Neural networks trained by backpropagation require many small changes to the weights, because we do not know how changes to the neighboring weights will impact the weight we want to change; we only know the sign of the needed change. In HTM we know the needed change.
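A minimal sketch of that contrast (the constants are illustrative; NuPIC’s actual defaults differ):

```python
# Backprop-style rule: the gradient tells us the downhill direction, but the
# right magnitude depends on every other weight, so we take many small steps.
def gradient_update(w, grad, lr=0.01):
    return w - lr * grad

# HTM-style rule: the needed change is known locally, so it is applied
# directly as a permanence increment or decrement, clipped to [0, 1].
def permanence_update(perm, synapse_active, inc=0.10, dec=0.05):
    delta = inc if synapse_active else -dec
    return min(1.0, max(0.0, perm + delta))
```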

The representation in HTM is doubly sparse and less shared; in neural networks it is more inter-meshed, and it is not clear to me how sparse.