Trying to implement HTM theory using Julia

I am new to the HTM community. I think it's awesome, and I would like to build an implementation that is as close to the biological model as possible, from the ground up. I am still thinking about how to code individual aspects and what data structures and abstractions to use to carry out the algorithms and individual processes. Can you please give me some guidelines or suggestions?

The choice of language is Julia, owing to its variety of data structures, its performance and memory optimisations, and also because this would be a great way to learn Julia.

I don’t want to take a look at the NuPIC repo and try to translate the code to Julia. That’s not what I am really after.
Instead I would like to grasp a few starting points and then build up by trial and error, while learning and experimenting with the overall theory. For example, making it easy to add reinforcement learning abilities while building the core implementation.

Much appreciated.


I also learned HTM by working through my own implementation and asking the community questions when I reached points that I didn't understand (versus jumping to NuPIC for the answers immediately). I have to say that I am glad I went this route, because I feel much more comfortable applying and adapting the core concepts from HTM (versus simply understanding how to use it for more traditional problems).

My strategy was to first watch every video by Numenta that I could find (some of Jeff’s talks in particular I watched several times), before I ever wrote a line of code. I then spent some weeks mulling the theory around in my head and figuring out where I knew I had obvious gaps in my understanding. I then began posting questions on the forum to help fill the gaps in my understanding.

Once I felt that I had a pretty good grasp of the theory, I began writing my own implementation (without looking at the whitepaper just yet). My goal was to get stumped and identify remaining gaps in my understanding. I posted questions as needed to get help when I was stuck.

Once I had a simple working implementation of SP and TM, I then dove into the whitepaper to see where my implementation differed from the official theory. This highlighted more granular gaps in my understanding of the theory, and I also posted questions related to those when I needed to understand why Numenta went with a different approach on a particular point than I did.

It really wasn’t until I reached this point, where I (more or less) had a firm understanding of the theory, that I began diving into the NuPIC code. This is actually the point where I am now in the process. I have also explored a lot of tangential theories and because I wrote my own implementation of HTM it makes it pretty easy to test them out.


Thank you for replying @Paul_Lamb
Our approaches are very similar, though I am still in the very early stages.
I will begin the implementation soon along with @nivedita
Are there any key points you would like to share which are specifically related to coding the implementation?
Any hints to avoiding dead ends and memory hogs?
Any sort of advice is much appreciated.


Sounds interesting, and building it from the ground up is definitely a great way to learn HTM in all its depths.

A few suggestions:

  • You’ve posted this as a private message to a couple of people. If you want to get more answers, or even someone to join your effort, post it publicly here as a forum post.

  • Take a look at the organization; there are quite a few forks in different languages. I even think there’s some work in Julia!

  • As Julia is, as I understand it, a highly parallel language, and you are starting from scratch, there is one thing I find interesting that is hardly doable in the current repos: HTM as a fully parallel, async system (each column designed as a standalone unit, and the units then simply executed).
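The per-column async idea could be prototyped along these lines. This is a toy sketch in Python with asyncio for concreteness; in Julia the natural analogues would be `Task`s and `Channel`s. Everything here is made up for illustration: the connectivity rule `bit % NUM_COLUMNS == col_id` is a stand-in for real synapses, and the sizes are tiny.

```python
import asyncio

NUM_COLUMNS = 8  # toy size

async def column(col_id, inbox, results):
    # Each column runs as a standalone unit: it waits for an input
    # SDR, computes its own overlap, and reports back.
    while True:
        sdr = await inbox.get()
        if sdr is None:          # shutdown signal
            break
        # Toy connectivity: a column "sees" an input bit if
        # bit % NUM_COLUMNS == col_id (a placeholder for synapses).
        overlap = sum(1 for bit in sdr if bit % NUM_COLUMNS == col_id)
        await results.put((col_id, overlap))

async def main():
    results = asyncio.Queue()
    inboxes = [asyncio.Queue() for _ in range(NUM_COLUMNS)]
    tasks = [asyncio.create_task(column(i, inboxes[i], results))
             for i in range(NUM_COLUMNS)]

    sdr = [3, 11, 19, 42]        # an input SDR as active-bit indexes
    for q in inboxes:
        await q.put(sdr)
    overlaps = dict([await results.get() for _ in range(NUM_COLUMNS)])

    for q in inboxes:            # shut the columns down
        await q.put(None)
    await asyncio.gather(*tasks)
    return overlaps

overlaps = asyncio.run(main())
```

The point is the shape of the design, not the details: each column owns its state and reacts to messages, so columns can in principle run on separate threads or machines.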


Thank you @breznak
I have posted a query about the implementation aspect, but it does make more sense to post this entire question on the forum. I’ll do it soon.
I would like to ask you all for advice here as well. Any input is much appreciated.

Sure, here are a couple of things that come to mind:

  1. Encode SDRs as an array of indexes to the “1” bits (versus a large boolean array)

  2. When scoring columns, iterate over the active cells in the input space to score their connected columns (versus iterating over every column to score them based on their connected active cells in the input space).

  3. For depolarizing cells in TM, transmit the activity while activating the cells at time T (versus iterating over all cells in active columns at time T+1).
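Points 1 and 2 above can be sketched roughly as follows (in Python with NumPy for brevity; the same idea carries over directly to Julia). The sizes and the `connected_columns` mapping are made-up placeholders, not anything from NuPIC:

```python
import numpy as np

# Hypothetical sizes for illustration.
NUM_INPUTS = 1024
NUM_COLUMNS = 2048

# Point 1: an SDR is just the indexes of its "1" bits.
active_inputs = np.array([3, 97, 512, 900])

# Placeholder connectivity: for each input bit, the columns that
# have a connected synapse to it (40 random columns per input here).
rng = np.random.default_rng(42)
connected_columns = [rng.choice(NUM_COLUMNS, size=40, replace=False)
                     for _ in range(NUM_INPUTS)]

# Point 2: iterate over the few active inputs and push their
# contribution to the columns they connect to, instead of looping
# over every column and pulling from its inputs.
overlap = np.zeros(NUM_COLUMNS, dtype=np.int32)
for i in active_inputs:
    overlap[connected_columns[i]] += 1

# Winner columns: the k columns with the highest overlap.
k = 40
winners = np.argpartition(overlap, -k)[-k:]
```

The win is that the work scales with the number of *active* bits (tens) rather than the total number of columns (thousands). Point 3 is the same push-versus-pull idea applied in time: when a cell activates at T, immediately propagate its activity to the segments it synapses onto, rather than having every cell poll its inputs at T+1.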


Thanks @Paul_Lamb, will use some of those insights for re-work in the community nupic.cpp!
I definitely want 1, the sparse arrays.

As the author of HTM in another language, did you use a variant of the “Connections” class? My question is about using it in the SP: Implement SpatialPooler with Connections backend

Thank you.
Will certainly make use of those suggestions. I was aware of 1, since Jeff Hawkins mentioned it in one of his talks. 2 and 3 are certainly interesting. Will try those first.

I had a similar idea for the implementation of SP and TM. But I had arrived at the same conclusion by thinking backwards: every TM columnar layer is essentially an SP, and only a single SP unit (minicolumn) will be selected from the TM column as active after selecting the winning “cells”.

For some potential dead-ends, read through this thread if you haven’t already. I proposed some potential optimizations, and there was some very good discussion on what the side effects would be.


@abshej Why did you send this as a private message to everyone instead of just using the forum? If you don’t mind, I’m just going to make this a public forum post in #htm-hackers.


My bad, I had a different thought process behind doing so.

I don’t mind at all. Please do so.

As far as getting started, it may help to know the theory level behind the code first.

The Numenta site has an excellent resource page. If you have not seen it look here:

I got a lot out of the BAMI living book there:

Coding is an implementation level of the theory.

What do I mean by “level?” See the link to Marr’s three levels in this post:


Thank you.
I have read some parts of BaMI, the HTM white paper, and also some neuroscience research papers. Although, frankly, I didn’t finish reading any of them properly. I’ll go through them again. Thanks for the reference to the post. Will surely look into it.

A month in the laboratory can often save an hour in the library.
F. H. Westheimer


Please ignore this. It is outdated.


What is the go-to version? BaMI?


There are multiple ways to encode SDRs, each with pros and cons.

A list of indexes has the smallest memory usage and is faster to iterate through, but determining if a certain bit in the SDR is active is time consuming. Even if you pre-sort the array and do a bisecting search, it takes O(log(N)) time per query, where N is the number of active bits in the SDR.

Dense boolean arrays use more memory and are slower to iterate through, but determining if a certain bit in the SDR is active takes a constant amount of time. This operation is as fast as your system’s RAM and does not change if you increase the number of bits in the SDR. Dense boolean arrays can also be quickly reshaped and sliced up in all sorts of ways.

Converting between these two representations takes a linear amount of time with respect to the number of neurons, and the memory usage of a dense boolean array is 1 byte per cell.

An example of where both representations are useful is when the HTM learns: it iterates through the dendrite segments which should learn (an operation best suited for a list of indexes), and for each pre-synaptic input to a learning segment the HTM needs to know if that input was active (an operation best suited for a dense boolean array).
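The trade-off above can be demonstrated in a few lines (sketched in Python with NumPy; the equivalents in Julia would be a plain index `Vector` and a `BitVector` or `Vector{Bool}`). The SDR size and contents are arbitrary examples:

```python
import numpy as np

NUM_CELLS = 2048  # arbitrary SDR size for illustration

# Sparse form: sorted indexes of the active bits.
sparse = np.array([7, 100, 641, 1900])

# Dense form: one boolean per cell (1 byte each in NumPy).
dense = np.zeros(NUM_CELLS, dtype=bool)
dense[sparse] = True

# Membership test: O(log N) on the sparse form via binary search...
idx = np.searchsorted(sparse, 641)
is_active_sparse = idx < len(sparse) and sparse[idx] == 641

# ...but O(1) on the dense form.
is_active_dense = bool(dense[641])

# Converting back is linear in the total number of cells.
back_to_sparse = np.flatnonzero(dense)
```

In practice an implementation often keeps both: the sparse form to drive iteration over active elements, and the dense form to answer "was this pre-synaptic cell active?" queries during learning.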
