Unusual prediction behaviour in htm.js

First of all, I hope it’s appropriate to post this here - let me know if it should go elsewhere.

I’m using htm.js in a hobby project I’m working on, and seeing something I think might be a bug. I noticed the behaviour in my implementation, but eventually found I can reproduce it in the piano demo.

Notice that if you click a single piano key over and over, a prediction of “(none)” appears at a gradually increasing interval. This occurs for any consistent repeating pattern.

Through the browser debugger, I can see that for the single layer in the example, the distalInput reaches 20 predictiveCells after a few notes, stays there for a couple of notes, then goes back to empty. Then it leaps to 40 predictiveCells, but falls back to 20 and then to empty. Each time, it resets to a count 20 higher than the previous peak and decays in batches of 20 until none remain. This is why the “(none)” prediction interval gradually increases.

I’ve played around with a bunch of the layer properties that looked relevant, but it doesn’t seem to impact this behaviour.

@Paul_Lamb have you ever encountered this?

I’ll go for round 2 of debugging after I’ve had some sleep, but I’d be grateful for any input in the meantime.


Sure I’ll take a look and see if there is a bug.

Ah, yes, you have encountered the “repeating inputs” problem with HTM. What happens is that it is not learning a sequence of “A repeating”; instead it is learning A, A', A'', A''', A'''', A''''', … When it reaches the end of the long sequence that it has learned (A''''' in this example), there is no prediction (because nothing has happened after A''''' in this context yet), so it bursts. This burst places A'', A''', A'''', and A''''' into predictive state, so the next input predicts A''', A'''', A'''''. The next predicts A'''', A''''', then finally only A''''', and then the next burst. Rinse and repeat. Each time through, the sequence of A's gets a little longer.
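To make the cycle above concrete, here is a toy model (not htm.js internals, just the mechanism described in this post): each burst teaches one new context A', A'', … and puts every later known context into predictive state, which then gets consumed one step at a time until the next burst. Multiply the counts by the cells per context to get numbers like the 20/40 seen in the demo.

```javascript
// Toy simulation of the "repeating inputs" problem: repeated input A is
// learned as an ever-longer high-order chain A', A'', A''', ...
function simulateRepeatingInput(steps) {
  let learnedLength = 1; // how many contexts A', A'', ... have been learned
  let predicted = 0;     // contexts currently in predictive state
  const log = [];
  for (let t = 0; t < steps; t++) {
    if (predicted === 0) {
      // No prediction: the columns burst, one new context is learned,
      // and all known later contexts become predictive at once.
      learnedLength += 1;
      predicted = learnedLength - 1;
      log.push("(none) -> burst, now predicting " + predicted + " contexts");
    } else {
      // A matched prediction consumes one context from the chain.
      predicted -= 1;
      log.push("predicted, " + predicted + " contexts left");
    }
  }
  return log;
}

simulateRepeatingInput(10).forEach(line => console.log(line));
```

Running this shows bursts at steps 0, 2, 5, 9, …: the gap between “(none)” outputs grows by one each cycle, matching the gradually increasing interval observed in the piano demo.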


I noticed this as well. It doesn’t matter whether you hit a single key or a recurring pattern; you will notice the bursting happening gradually, which I think is expected.


I’m working on a strategy to solve this problem using a temporal pooling layer. A while back I talked about the basic strategy here. Sorry for the slow progress on this (I have lots of competing interests right now LOL)


@jimmyw You’ve found the main weakness of the TM. More thoughts, including the piano demo:


Uh oh. Just realized that is still an old version which has a bug (that is why it stabilizes eventually). I’ll update the demos with latest code when I get home this evening.


Thanks for the discussion (and the live-streamed badge presentation!). I’m surprised I’ve never seen this before but it totally makes sense.

I guess there must be something to mitigate this effect in the anomaly detection tools like Grok, or maybe it’s less of an issue in a noisier signal? When I look at the cpu_utilization sample in HTM Studio, there don’t seem to be any false positives of this nature.


Yes, it is called BackTrackingTM. You can find it alongside the newer and more biologically accurate TemporalMemory algorithms. Most scalar anomaly detection models use it.


You also have the option to use the connect-to-all method (grow synapses from a predictive cell to all previously active cells, not just the selected active cell). It forces the TM to learn A-A-A instead of A-A'-A'' unless the sequence itself suggests otherwise. It is not implemented in htm.js/NuPIC, though.
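A minimal sketch of the connect-to-all idea described above, using hypothetical data structures (this is not htm.js or NuPIC code, since the post notes it isn’t implemented there):

```javascript
// Connect-to-all learning rule sketch: grow distal synapses from a
// learning cell to ALL previously active cells, rather than only the
// winner cells chosen per previously active column. This biases a
// repeated input toward reusing the same cells (A-A-A) instead of
// spawning new high-order contexts (A-A'-A'').
function growSynapses(learningCell, prevActiveCells, prevWinnerCells, connectToAll) {
  const targets = connectToAll ? prevActiveCells : prevWinnerCells;
  for (const presynaptic of targets) {
    // 0.21 is an arbitrary example initial permanence.
    learningCell.distalSynapses.push({ presynaptic, permanence: 0.21 });
  }
}
```

With `connectToAll` off, this reduces to the standard TM behavior of connecting only to the chosen winner cells.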


Updated. You might need a hard refresh if your browser cached the old .js files.

My intuition is that this by itself could lead to ambiguity (for example, the C in ABCD vs XBCY). You would need to add something to counter this. One idea would be to tweak the learning algorithm so that if more than one cell in the same minicolumn is predicted, you degrade all but the one most strongly connected to the previous activity (I haven’t had a chance to test this, though).
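The tie-breaking tweak suggested above could look something like this sketch (untested idea from the post, hypothetical structures and function names):

```javascript
// If several cells in one minicolumn are predictive, keep only the one
// most strongly connected to the previous activity and degrade the rest.
// `overlapWithPrevActivity` is a hypothetical scoring callback returning
// how well a cell's distal segments match the previous active cells.
function resolvePredictionTie(minicolumnCells, overlapWithPrevActivity) {
  const predictive = minicolumnCells.filter(c => c.predictive);
  if (predictive.length <= 1) return; // nothing to resolve

  let best = predictive[0];
  for (const cell of predictive) {
    if (overlapWithPrevActivity(cell) > overlapWithPrevActivity(best)) {
      best = cell;
    }
  }
  for (const cell of predictive) {
    if (cell !== best) cell.predictive = false; // degrade weaker predictions
  }
}
```

The intent is to keep the high-order disambiguation (the C in ABCD vs XBCY) while preventing a repeated input from accumulating an ever-growing set of predictive contexts.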

Many thanks, Paul. Could I trouble you to update it at https://github.com/htm-community/htm.js ? (that’s where I’ve sourced it from)


Yeah… It takes more runs to learn the difference between two sequences. But it will learn it eventually.


It is updated there (the bug I mentioned was fixed in this commit). I had just forgotten to deploy the latest updates to that demo URL Matt posted above after pushing them.


Tom aNube here,

So my understanding of this thread is that there is a problem with pattern chaining, or in other words “high-order predictive learning”.

Since I’m new to this forum, could someone get me started with the basics? I’m looking for a URL pointing to documentation (and Java code, if you happen to know where it is) that describes the algorithm(s) for how the specific “one cell active per column” cells are determined. See the diagram at https://numenta.com/neuroscience-research/sequence-learning/.



Hi Tom, welcome to the forums!

A good place to start is here: Read this first

If Java is your thing, take a look at HTM.java: https://github.com/numenta/htm.java


11 posts were merged into an existing topic: Exploring the “Repeating Inputs” problem