Is HTM a simple neural network?

I explained the HTM algorithm to my students in class, and they said that these formulas describe a simple neural network. I didn't know how to answer them. Please help me convince them that this algorithm is very advanced and a rival to Deep Learning.


You can call a lot of things a neural network. DL is made of neural networks. Brains are neural networks. HTMs are neural networks. “Simple” is relative.

I suggest you refer them to our papers, BAMI, or HTM School videos on Spatial Pooling and Temporal Memory.


I’d say watch Matt’s awesome HTM School videos. There’s so much great material there with rich visualizations – you can cut together pieces of different episodes and that will probably instill the intuition as well as anything.


Mathematically, the algorithm is fairly simple. Architecturally, it's really complicated, because HTM keeps evolving.


Which formulas?

Did you mean artificial neural networks used in mainstream Machine Learning?

Did you get an explanation for their opinion? For example, does "simple" here mean that HTM can be derived from neural networks (e.g. mathematically, or algorithmically)?


Yes, exactly. They say this algorithm is mathematically very simple and quite similar to simple neural networks (the mathematics is not complicated), so it does not need to be implemented in Python with the NuPIC library. They believe they can easily implement this simple HTM mathematics in Matlab. How should I convince them to come to Python?

They believe that because HTM's mathematics is simple, the algorithm itself must be simple. When I say this is one of the best machine learning algorithms, they want me to prove it to them. Can you give me some reasons? They say these math formulas are easy to implement in Matlab, so why should we use the NuPIC library and Python?
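To give the students something concrete, it may help to show that the "simple math" really is a few lines, and then point out everything the full algorithm adds on top. Below is a minimal sketch (my own illustration, not NuPIC's actual code; the sizes and sparsity are illustrative) of the Spatial Pooler's core step, overlap scoring followed by k-winners-take-all, in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_columns, n_active = 100, 50, 5

# Each column's connected synapses to the input space (binary matrix).
connected = rng.random((n_columns, n_inputs)) < 0.3

def spatial_pool(input_bits):
    """One Spatial Pooler step: overlap scores, then k-winners-take-all."""
    overlaps = connected @ input_bits           # active inputs each column sees
    winners = np.argsort(overlaps)[-n_active:]  # the k best-matching columns
    sdr = np.zeros(n_columns, dtype=int)
    sdr[winners] = 1                            # sparse output representation
    return sdr

x = (rng.random(n_inputs) < 0.1).astype(int)    # a sparse binary input
sdr = spatial_pool(x)
```

The per-step math is indeed just a dot product and a top-k selection, and yes, it could be written in Matlab too. The point the thread keeps making is that the complexity lives elsewhere: learning rules, boosting, topology, temporal memory, and years of engineering in the existing libraries.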

Thanks, but can you give me some reasons here?


Mathematically, artificial neural networks are also simple: at their core, partial derivatives are used to calculate errors for backpropagation. HTM does not do any explicit error calculation the way neural networks do; instead, it performs biologically plausible operations to learn/encode/pass/predict/detect input information. The biological constraints here are by far among the strictest out there. How cool is that.
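The contrast in the paragraph above can be sketched in a few lines. This is a toy illustration (all numbers and rates are made up, not from any HTM paper): a gradient-style update needs an explicit error term, while an HTM-style update only nudges synapse permanences up or down based on input activity:

```python
import numpy as np

# Gradient-style learning (mainstream ANN): real-valued weights moved
# along an explicit error signal (delta rule).
w = np.array([0.5, -0.2])
x = np.array([1.0, 0.5])
target, lr = 1.0, 0.1
y = w @ x                        # prediction
w = w + lr * (target - y) * x    # update proportional to the error

# HTM-style learning: no error term at all. Permanences of synapses to
# active inputs are nudged up, those to inactive inputs nudged down.
perm = np.array([0.15, 0.25, 0.35])
active = np.array([True, False, True])
inc, dec = 0.05, 0.02
perm = np.clip(perm + np.where(active, inc, -dec), 0.0, 1.0)
```

Neither update is mathematically deep on its own; the difference is *what* gets updated and *why*, which is exactly the biological-plausibility argument above.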

I recommend watching the videos mentioned above, and after that re-assess whether you still want to convince your students.

HTM, I think, is an acquired taste: like beer, it's bitter at first, but once you acquire the magic of every first gulp, you get hooked. In HTM's case, when you start to realize that what makes it run are biologically plausible algorithms, it opens a new door where you can potentially source your next machine learning algorithms. Not only that, it also gives you a great opportunity to learn about our brains, which most people don't care about. At least for me this is true. Also search this forum and see what ordinary people are talking about, especially in the Neuroscience parts. You'll be amazed. If you're not, then it might not be your thing.

Forget your students; learn HTM yourself if you're really interested. :sweat_smile:

Thank you. I have another question. The HTM algorithm itself is complex and complete, but it has a simple mathematical framework. Does this indicate a weakness of the algorithm? Is it OK to implement such algorithms? As far as I know, every piece of code is written based on mathematical equations, so since the mathematical equations of HTM are not perfect, its implementation is not very reliable. Is that right?

It doesn't matter. You can't use logistic regression to approximate a complex, hyper-dimensional latent space, yet logistic regression remains one of the most used learning algorithms. HTM doesn't have to be perfect. It just has to do its job well.

Calculus and Linear Algebra are gazing at you from the dark corners. :stuck_out_tongue_winking_eye:

Thank you for your comment. Can you please tell me how they could implement HTM and write Python code without a mathematical framework, using only brain function?
Unfortunately, I still don't understand the HTM Python code correctly.

As a library author, I can tell you that implementing HTM from scratch is not an easy task; see my thread Hierarchical Temporal Memory Agent in standard Reinforcement Learning Environment, especially sections 3 and 4, for what I've gone through to build a working HTM implementation. There are two routes you can take: a) go the OO style, build classes for synapses and neurons, then combine them to build HTM; or b) build a computational framework for HTM (this is what I did), composed mostly of sparse matrices and tensors, then implement HTM on top of these concepts.

Option a) does not require a computational framework of HTM to build, while b) does. I'd also like to clarify that a mathematical framework is not a computational framework. A mathematical framework refers to a system one can use to infer properties of the subject, while a computational framework refers to a system one uses to guide the implementation of a certain algorithm.
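To make the two routes concrete, here is a toy sketch (my own illustration; these class names and the 0.5 threshold are assumptions, not Etaler's or HTM.core's actual API) computing the same connected-synapse overlap both ways:

```python
import numpy as np

# Route (a): object-oriented style. Synapses and segments are objects;
# HTM is built by combining them.
class Synapse:
    def __init__(self, presynaptic_cell, permanence):
        self.presynaptic_cell = presynaptic_cell
        self.permanence = permanence

class Segment:
    def __init__(self, synapses):
        self.synapses = synapses

    def overlap(self, active_cells, connected_threshold=0.5):
        # Count connected synapses whose presynaptic cell is active.
        return sum(1 for s in self.synapses
                   if s.permanence >= connected_threshold
                   and s.presynaptic_cell in active_cells)

seg = Segment([Synapse(0, 0.6), Synapse(1, 0.4), Synapse(2, 0.7)])
oo_overlap = seg.overlap(active_cells={0, 2})

# Route (b): computational-framework style. The same quantity becomes
# one vectorized operation over permanence and activity tensors.
permanences = np.array([0.6, 0.4, 0.7])   # synapse i -> presynaptic cell i
active = np.array([1, 0, 1])              # cells 0 and 2 are active
tensor_overlap = int(((permanences >= 0.5) & (active == 1)).sum())
```

Route (a) is easier to read and map onto the theory; route (b) is what makes a fast implementation possible, at the cost of first designing the tensor formulation.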

TL;DR: Implementing HTM from scratch is hard. Years of programming experience, among other skills, are needed to produce one. You should try to use implementations built by others, like HTM.core or Etaler. Both frameworks have their documentation and tutorials online, for HTM.core and for Etaler.


Thank you. I understand now.
That was interesting. Is the code of "Hierarchical Temporal Memory Agent in standard Reinforcement Learning Environment" a different variation of HTM.core, or is it just a different implementation?

The two code bases are completely different. HTM.core is a fork of NuPIC.core, and NuPIC.core was Numenta's official implementation (no longer developed), while Etaler is a completely from-scratch implementation.

To be clear, both HTM.core and Etaler are implementations of HTM (the theory and algorithms), but they were developed by different people and do not share a single line of code.

Edit: The code for my thread, Hierarchical Temporal Memory Agent in standard Reinforcement Learning Environment, uses Etaler as the base library to do HTM stuff.


thanks a lot :pray:

Not at all. Math is fundamentally just a language to describe something. So please don't judge an algorithm simply by its mathematical description.

What do you mean by not perfect?

Wrong. There is no such thing as a correct or incorrect implementation, because implementations are custom and intentional. If you are talking about correctness in terms of its physical analog (biology), then correct/incorrect are not the right terms. I would say HTM is a simulation of some algorithms/operations in the neocortex.

@shiva What are you trying to achieve here? I strongly recommend watching the HTM videos before you answer your students.


Maybe the best approach is to say what NN and HTM are and are not?

I have a quick comparison here: bbHTM

1. NN is mostly classification
2. NN is sometimes used as a lookup in RL
3. NN cannot use sparse vectors (one-hot vectors are not sparse)
4. NN output is at most ~10s of classes

1. HTM is temporal
2. HTM uses sparse vectors
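The sparse-vector distinction in the lists above can be shown in a few lines. This is a toy sketch (sizes and the ~2% sparsity are illustrative choices, roughly in the range HTM literature uses, not taken from any specific implementation):

```python
import numpy as np

n = 2048
rng = np.random.default_rng(0)

# One-hot: exactly one active bit. Any two distinct one-hot vectors
# share zero active bits, so overlap carries no similarity information.
one_hot = np.zeros(n, dtype=int)
one_hot[7] = 1

# SDR: ~2% of bits active. Meaning is distributed, and similarity
# between representations shows up as overlapping active bits.
sdr = np.zeros(n, dtype=int)
sdr[rng.choice(n, size=40, replace=False)] = 1

sparsity = sdr.sum() / n    # roughly 0.02
```

This is why "one-hot are not sparse" in the sense HTM means: one-hot codes are maximally sparse in bit count but have no overlap structure to carry semantics.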

Basic characteristics of this NN approach are :

1. Batch learning
2. Backpropagation
3. SGD-like fine tuning
4. Real valued data
5. Dense representation
6. Static - no concept of time
7. Computation
8. Learns by connection weights

The brain on the other hand does nothing like that, but :

1. Online learning
2. Binary data
3. Sparse representation
4. Dynamic - time is integral part
5. Pattern Memorization
6. Learns by existence or non-existence of connections
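Point 6 above is the sharpest difference, and it fits in a tiny sketch (the 0.5 threshold and 0.05 increment are illustrative values I chose, not canonical constants): a synapse either exists as a connection or it doesn't, and learning moves permanences across that threshold rather than fine-tuning a real-valued weight in the output.

```python
import numpy as np

threshold = 0.5

# Two permanence values on either side of the connected threshold.
perms = np.array([0.48, 0.52])
connected_before = perms >= threshold   # only the second synapse "exists"

# One reinforcement step: both synapses see active input, permanences rise.
perms = perms + 0.05
connected_after = perms >= threshold    # now both connections exist

# Note the binary jump: the first synapse did not get "a bit stronger"
# in its effect; it went from non-existent to existent as a connection.
```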


Thank you for your comment.