Another HTM test implementation


I decided to write a very basic implementation of HTM as an exercise to help me understand it better. I went with JavaScript for this first round. For reference, I put it up at:

You have controls for changing some basic parameters, then upon clicking Begin, you are presented with a few piano keys and a visual representation of the layer in action (red is active, yellow is predictive). Click the piano keys or type their equivalents (C, D, E, F, and G) to feed sequences into the system.

I apologize in advance for some incorrect terminology (I'll update it after I do a little more study and memorization). I don't have it on GitHub, so if anyone wants to look at the source, just use "view source" in the browser (fair warning: the code is pretty ugly).

One thing that became apparent rather quickly is that memory usage is a big factor. I'm thinking I could save some space by representing SDRs as dense arrays of indices (8 bits or so each) pointing to the 1 bits. Or I could use even fewer bits by storing relative indices (basically stepping from one index to the next). Of course, I'll need to switch to native code before I can try that.
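To make the two encodings concrete, here is a minimal sketch of what I have in mind. The function names (`toIndices`, `toDeltas`, `fromDeltas`) are just illustrative, not from the demo code:

```javascript
// Store only the positions of the 1 bits instead of the full bit array.
function toIndices(bits) {
  const indices = [];
  for (let i = 0; i < bits.length; i++) {
    if (bits[i] === 1) indices.push(i);
  }
  return indices;
}

// Delta-encode: store each active index relative to the previous one,
// so the stored numbers stay small and could fit in fewer bits.
function toDeltas(indices) {
  const deltas = [];
  let prev = 0;
  for (const idx of indices) {
    deltas.push(idx - prev);
    prev = idx;
  }
  return deltas;
}

// Recover the absolute indices from the delta encoding.
function fromDeltas(deltas) {
  const indices = [];
  let acc = 0;
  for (const d of deltas) {
    acc += d;
    indices.push(acc);
  }
  return indices;
}
```

For a sparse bit array like `[0,1,0,0,1,1,0,0,0,1]`, `toIndices` gives `[1, 4, 5, 9]` and `toDeltas` gives `[1, 3, 1, 4]`; because SDRs are sparse, the index list is much shorter than the bit array, and the deltas are small enough for narrow integer types.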

This first implementation isn't as accurate as I would like (but not terrible). I'm thinking this is likely due to the small number of mini-columns possible without crashing the browser (tweaking some of the other properties would probably improve things as well). Could also be some logic bugs, of course. Another thing that is missing is any positional logic (connections to the input, as well as lateral connections between cells, are initially chosen at random), so that no doubt has an impact on how it behaves.

I’ll probably tinker around with it some more to see if I can improve it before starting on a native implementation.


I love it!

Yes, that is what we do. You can also subsample SDRs in some cases.

Very likely. From our experience, you should have an SP with at least 2048 bits. (And yes, it did crash my browser when I tried that, even with only 8 cells per column.)

What exactly do you mean by native implementation?


Sorry, I am an Android developer, so "native" is probably not a generally known term. It basically means compiled to run on the machine's native architecture (for example, C++ compiled into x86 or ARM binaries), versus an interpreted language like Java.


I updated this test implementation to make it more memory efficient (so 1024 columns no longer crashes the browser). I also made the implementation better match the correct process as my own understanding has improved. I think there are still some areas that I need to work on, though. I’ll probably continue to refine this example as I learn more through HTM School and conversations here on the forum.

  1. I noticed that if I use one constant for permanence changes (the same value for both increases and decreases), the system is not very good at remembering (I get a lot of bursting even on simple sequences). Making the rate of increase higher than the rate of decrease seems to solve that problem. I'm thinking this problem stems from a gap in my understanding of how segments are supposed to be used. Right now I have designed it so that if any single connection on a segment is above the connection threshold, the cell goes into the predictive state. This is essentially equivalent to a system with no segments at all, where cells are directly connected to each other. This is the main reason I am sure there is a gap in my understanding, which is leading to unexpected behaviors like this one.

  2. I often see the same one or two columns bursting cycle after cycle on simple sequences, even after many iterations. I think this is due to the 32 "max new synapses per step" being spent on other cells and never getting to the ones that need it, but I'll have to dig in to verify that's what's happening. I probably need a better system (a ranking, or perhaps just more randomness) for deciding when and where to establish new segments and synapses.

Another concept I experimented with in this test is reversing the direction of the spatial pooling process to allow it to be used to predict the next input (versus maintaining averages tables). It seems to work well for simple sequences, but still needs some work. I’m thinking this concept could be utilized for deconstructing actions, if those actions are encoded along with the features. This could be used in a system that tries to repeat actions that would lead to a predicted outcome, if the inputs involved in those actions are being fed through a spatial pooling system.


Hello @Paul_Lamb, I'll just think aloud with you on your remarks. When speaking about synapses, try to point out whether you are talking about distal or proximal synapses.

If we are talking about proximal dendrites (spatial pooling synapses), this is evidenced in NuPIC's default initialization parameters: permanence increase is generally much higher than decrease. On distal dendrites (temporal memory synapses), this also provides better learning in my problem domain, but if I am not wrong, NuPIC's default values for increase and decrease are the same for these synapses.

The idea of temporal memory is that a cell should be able to recognize subactivations inside the activations of a sequence and predict the next step. If you make it like you described, a single cell may be treated as a subactivation. I would speculate that this would undermine stability because it wouldn't be noise tolerant enough (subsampling multiple cells provides this) and may cause constant changes in synapses.

This may be related to not having a proper distal activation threshold, as I pointed out above, because the HTM would try to learn about small differences, leading to oscillations (the same synapses connecting and disconnecting constantly) even for simple sequences.

I am not sure if you are interested in neuroscience but what you are describing is carried out by layers 5 and 6 in conjunction. Basically, layer 5 is responsible for producing the necessary behavior that would lead the system to a particular/chosen predicted activation by activating the necessary motor neurons through learning by association between layer 5 cells and motor cells. Layer 6 modulates these actions using the information provided from higher regions and thalamus.


Unfortunately, I am only a computer programmer and have virtually zero knowledge when it comes to biology. While I do visually recognize the difference between proximal and distal from various pictures of neurons around the forum here, I do not recognize the difference between them in implementation. So it will probably be better if I draw a picture to depict what I have implemented, rather than trying to explain it with only the terms (which I am likely still using incorrectly):

The grey arrows in the above depiction indicate the input side of the cells versus the output side.

I am using the same permanence increases and decreases for connections between inputs and columns as I am for connections between cells.

[quote=“sunguralikaan, post:5, topic:878”]
The idea of temporal memory is that a cell should be able to recognize subactivations inside the activations of a sequence and predict the next step. If you make it like you described, a single cell may be treated as a subactivation. I would speculate that this would undermine stability because it wouldn’t be noise tolerant enough (subsampling multiple cells provide this) and may cause constant changes in synapses.[/quote]
Could you describe the correct process for how input connections should influence the predictive state of a cell? I think this is the source of my problem, but I’m not clear on how it is supposed to work.

Basically, the gap in my understanding is this: if any active connection above the permanence threshold puts a cell into the predictive state, then this (one segment with two synapses):

would be functionally identical to this (two segments with one synapse each):

In which case one could throw out the implementation of “segments” entirely and instead implement direct cell-to-cell connections. Another way to word this conclusion is “max segments per cell = 128, max synapses per segment = 128” would be equivalent to “max segments per cell = 1, max synapses per segment = 16384” and “max segments per cell = 16384, max synapses per segment = 1”… Assuming of course that I am using these terms correctly.


To clarify my question, I of course know that part of the answer is that a cell should not become predictive unless the number of active connections is above some threshold (which can probably be configurable), rather than becoming predictive whenever any single connection is active. My question is more about the process of selecting and organizing the segments and which cells they are connected with.

For example, one way this could be done is when a cell becomes active when it wasn’t previously predictive, it could generate a new segment which connects to several of the previously active cells. Then if that segment later sees some threshold number of those connections active, it would put the cell into predictive state (and strengthen its active connections, and weaken the inactive ones). With this strategy, each segment would only have connections representing the cell’s position in a range of very similar sequences or contexts. I’m not sure this is the correct strategy though, so would like some feedback on how it should be done.
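To make sure I'm describing the strategy clearly, here is a minimal sketch of it in code. All names and the threshold value are hypothetical, and the sampling here naively takes the first few previously active cells (a real implementation would presumably sample randomly):

```javascript
// Minimum number of active synapses a single segment needs before it
// puts its cell into the predictive state (illustrative value).
const ACTIVATION_THRESHOLD = 2;

// When a cell becomes active without having been predicted, grow a new
// segment that samples some of the previously active cells.
function growSegment(cell, prevActiveCells, sampleSize) {
  const segment = { synapses: prevActiveCells.slice(0, sampleSize) };
  cell.segments.push(segment);
  return segment;
}

// A cell becomes predictive if ANY single segment has enough active
// synapses; the threshold applies per segment, not across all of them.
function isPredictive(cell, activeCells) {
  const active = new Set(activeCells);
  return cell.segments.some(seg =>
    seg.synapses.filter(c => active.has(c)).length >= ACTIVATION_THRESHOLD
  );
}
```

So a cell whose only segment sampled cells 1, 2, and 3 would become predictive when cells 1 and 2 are active, but not when only cell 1 is (one active synapse is below the threshold).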


That picture is actually spot on :slight_smile: Only the labels need extension.

Distal Segments

The segments of the cells which control the predictive states. So these are segments that you drew below the cell on the top left side. There may be multiple distal segments. These segments are located at distal dendrites. That is why I kind of interchange segments with dendrites from time to time.

Proximal Segments

The segment which controls the active state of the column. There is only one proximal segment per column, which is actually referred to as the proximal dendrite. Spatial pooling is the name of the functional process that involves the proximal dendrite.

Empirically, you set higher permanence increase values than decrease values for the proximal dendrite synapses. NuPIC does this, and I am on the same page. As you have observed, if those two values are set the same, it becomes harder for a column to specialize on specific patterns while the patterns are changing constantly.

The increase and decrease permanence values for the segments of the cells (distal segments) can be set equal; NuPIC does this. Or the increase value can be higher, just like for proximal dendrites; I do this.

If we had a single distal segment per cell, you would be right. But what if we want a single cell to be receptive to different patterns? A cell can be predictive for different patterns involving different subsets of neurons. If all connections were in the same segment (or there were no segments at all), how would you identify different patterns? Cells that belong to pattern A would stimulate the same segment as cells that belong to pattern B. So, for example, an insufficient number of neurons (below the activation threshold) from both patterns combined could still put the cell into the predictive state. I hope that explains the need for multiple distal segments. As you know, the proximal dendrite does not have segments.
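The false-positive scenario can be shown with a small sketch contrasting per-segment thresholds against a single pooled set of synapses (all names and values here are illustrative):

```javascript
// Active synapses needed on one segment to predict (illustrative value).
const THRESHOLD = 3;

// Per-segment check: the cell predicts only if some single segment
// crosses the threshold on its own.
function predictiveSegmented(segments, activeCells) {
  const active = new Set(activeCells);
  return segments.some(seg =>
    seg.filter(c => active.has(c)).length >= THRESHOLD);
}

// Pooled check: all synapses lumped together, as if there were no
// segments at all.
function predictivePooled(segments, activeCells) {
  const active = new Set(activeCells);
  return segments.flat().filter(c => active.has(c)).length >= THRESHOLD;
}

// Pattern A sampled on one segment, pattern B on another.
const segments = [[1, 2, 3, 4], [10, 11, 12, 13]];

// Two cells from each pattern: neither pattern is really present, but
// the pooled version still predicts -- a false positive.
const mixed = [1, 2, 10, 11];
```

With `mixed` active, `predictiveSegmented` returns `false` (each segment sees only 2 of its 4 synapses) while `predictivePooled` returns `true` (4 active synapses total), which is exactly the confusion that separate segments prevent.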

Reading this made me think that you have not read the official algorithm in depth. If you haven't yet, try going through this guide, which explains all the algorithm steps of HTM and would fill in a lot of gaps.

If I did not misunderstand, what you describe is exactly how Numenta creates new distal segments on cells, and it is called temporal memory. That is why I recommended an in-depth read.

This link contains a fine collection of videos here and there.

Hope this helps.


Thanks, what I have written so far is based almost entirely on what I picked up from various videos of presentations and introductions to HTM (and then improved on after watching the HTM School videos and having discussions here on the forum). You are right, though: it is probably time to dig further into Numenta's implementation to fill the remaining gaps. That may sound obvious, but I tend to learn new concepts better by trying to implement them myself, working through the inevitable problems, and asking questions of the experts (versus always starting with the answer, which I find doesn't stick as well).


Haha, I just noticed this tooltip on one of the badges in my profile:

That was definitely the case when I was crashing the browser with version 1 implementation :blush:


I wrote a new view to help with debugging the temporal memory process, and thought it might be useful to share.

A quick note – I haven’t finished implementing Numenta’s TM process yet, so functionally it is still the same broken implementation from before. I just finished this UI to help me visualize it, so fixing the problems comes next :slight_smile:

Anyway, I thought something like this might be helpful for explaining some basic elements of the temporal memory process to someone new to HTM (like myself) without the added complexities of the spatial pooler or trying to visualize a 2048 X 32 grid.

The idea is that if you are dealing with a small, pre-defined set of semantically dissimilar inputs, and you do not use boosting in the spatial pooling phase, then you know ahead of time that there will be a specific set of columns active for each input. Because of sparsity, those columns represent a small subset of the total 2048 columns. Therefore, we can eliminate the spatial pooling phase, and draw a UI that shows only the columns that will ever become active. This gives a simpler view for visualizing the system.
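A hypothetical sketch of this shortcut: since each key in a small, dissimilar input set would activate its own fixed set of columns, we can just assign non-overlapping column blocks up front and only draw those. The constants here are stand-ins, not the demo's actual values:

```javascript
// Stand-in for the small number of columns each input activates
// (the ~2% sparsity of a 2048-column SP, scaled way down).
const COLUMNS_PER_INPUT = 4;
const KEYS = ['C', 'D', 'E', 'F', 'G'];

// Assign each key its own non-overlapping block of column indices,
// replacing the spatial pooling step entirely.
const columnsForKey = {};
KEYS.forEach((key, i) => {
  const start = i * COLUMNS_PER_INPUT;
  columnsForKey[key] = Array.from(
    { length: COLUMNS_PER_INPUT }, (_, j) => start + j);
});

// The union of the blocks is the full set of columns the UI ever
// needs to show -- 20 here, instead of 2048.
const visibleColumns = KEYS.flatMap(k => columnsForKey[k]);
```

This only works because the inputs are semantically dissimilar and boosting is off; with overlapping or shifting column sets, the real SP output would have to be drawn.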


I like it… I'm stealing your visualization idea. I'm also working on an HTM implementation to familiarize myself with the basic principles, much like yourself, but in Python with OpenGL for the visualization. Looking at ~2000 columns' worth of nothingness isn't easy on the eyes!


Oh, I forgot to mention, you can mouse over any colored cell to reveal lines indicating the connections between active and predictive cells. If no lines appear, it means there are no relevant connections.

If you mouse over a predictive cell, the blue lines point only to cells which are connected to the currently active dendrite segment (i.e. it doesn’t show lines for all segments). For the temporal memory process, the active segments are really the interesting ones. This can be used to indicate how well the input matches what the cell is predicting, for example, and reveal unexpected connections.

You can also mouse over an active cell, and the red lines point to all other cells which have dendrites connected with the active cell. This is less interesting, but for debugging purposes might be useful to track down issues where certain cells are not in the predictive state as expected.



Wow! That is really cool! I like it a lot!

I did notice that some active cells (red: which weren’t bursting) had no predictive lines being drawn to them? I’m sure since this is a work in progress, you are working on this but I thought I’d mention it. Very cool idea to use piano keys as input!


Yep, that is a product of an issue in my current implementation of TM. When the segment subsampling is set to 50%, rather than each new segment connecting with a random 50% of the previously active cells, they always connect to the first 50% in the array. The result is that some cells end up over-sampled and others not sampled at all. That is actually one of the problems I might not have identified very easily without the new visualization.
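One way to fix that bias is to take a uniformly random sample instead of the first half of the array, e.g. with a partial Fisher-Yates shuffle. A sketch (the function name is illustrative):

```javascript
// Return `count` cells drawn uniformly at random from `cells`,
// without replacement, using a partial Fisher-Yates shuffle.
function sampleCells(cells, count) {
  const pool = cells.slice(); // copy, so the caller's array is untouched
  const n = Math.min(count, pool.length);
  for (let i = 0; i < n; i++) {
    // Pick a random element from the not-yet-chosen tail and swap it
    // into position i.
    const j = i + Math.floor(Math.random() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, n);
}
```

Every previously active cell then has an equal chance of being sampled by a new segment, so none are systematically starved.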


I was going to implement the line connection visualization as well. Glad to see it works well in action.


I have begun refactoring this HTM implementation and demos, and pushing the cleaned up source code up to GitHub as I finish it (I called it HTM.js for now). There is still a quirk with the temporal memory logic that I’ll need to work out at some point (something is impacting sparsity for certain types of sequences… need to dig into the problem some more).

I’ve also set up links to the demos as I finish them. I’ve got the HTM Piano demo up there so far, and will have the TM-visualization version up next. Then will do my object recognition test after that.

I have also begun a new HTM implementation in Golang for my main project, but I'll continue to update this JavaScript version for a while, since it is nice for throwing together quick prototypes that are easy to share and get feedback on.


Very cool demos!


I transferred repo to the HTM Community group on GitHub.