TemporalMemory running very slowly after long training

sequence-memory

#1

I found that TemporalMemory runs slower and slower during training, eventually becoming so slow that it is impractical to use. For example, after extensive training it takes 10 seconds to run 1000 inferences, at least 100 times slower than initially.

Is there a solution?


#2

This is something you’re going to need to experiment with, but you could trim segments occasionally. Just randomly remove some percentage of segments and see how it affects prediction performance vs CPU performance.


#3

@rhyolight I have gone through the internals of both NuPIC and NuPIC.core. Trimming seems to be available only for SpatialPooler, not for TemporalMemory. Have I overlooked anything?


#4

In my experiments I don’t have any problem with runtime even over very long training, but performance drops if the maximum number of synapses per segment is too big. I observed this problem in my grid cell modules when increasing the number of active synapses during learning: the frame rate decreased from 1000 Hz to 2 Hz for 6 GC modules doing object classification.


#5

I have gone through the internals of both NuPIC and NuPIC.core. Trimming seems to be available only for SpatialPooler, not for TemporalMemory. Have I overlooked anything?

Check out the community-developed nupic.cpp (which is nupic.core, actively developed and with some improvements).

Coincidentally, we’re just now working on a PR for the SpatialPooler that uses Connections for speed, and part of it is moving trimming into Connections.


#6

I’d be interested in hearing what the bottleneck was. Perhaps, at the code level, there will be a target for reasonable parallelisation, if it makes sense. I’m very slowly moving towards a Pony implementation. If anyone wants to join, you’re welcome.


#7

I found that I can replicate the problem faster by sending random SDRs into a TM, since random SDRs cause a lot of bursting. My guess is that there are too many useless distal connections in the TM (permanences around 0, etc.).
Trimming segments seems to be a possible solution, and parallelizing the computation will definitely help!


#8

A behavior-based asynchronous actor-model language?

https://tutorial.ponylang.io/types/actors.html

Dmitry, I’m in! The language sounds as good as or better than Visual Basic, which I still program in due to its fast asynchronous object-based structure.

Can you recommend an editor (and whatever else is required), and the easiest way from there to get Pony running your GitHub code?


#9

@Gary_Gaulin Very glad to hear! Visual Studio Code has very good syntax support. The resulting binaries are C++ compatible. On Windows one can simply debug Pony binaries with Visual Studio, though the Visual C++ SDK must be installed. On other platforms it’s even easier.


#10

I have been studying. With this link everyone is ready to start coding right away!

https://playground.ponylang.io/

For an experiment we now only need simple sample code, like your encoder example, to copy from a reply you post in this forum into the Pony Playground.

One of the biggest problems in open source communities like this one is the difficulty of getting started, caused by requiring multiple downloads of programming-related packages that, after enough updates, often no longer work together properly. What is needed is an implementation of HTM that is as easy as possible to paste into a playground like that one.


#11

Let’s continue the Pony discussion in a separate thread: Concurrent HTM experiment in Pony: htm.pony


#12

Yes, the problem is likely that there are too many distal segments / synapses. There is no trimming going on as new patterns are learned.

There is a way to counter this, but it is deep inside the NuPIC code. What you would do is adjust the logic within the TM so that any time new distal synapses are created, the same number of synapses is randomly deleted from the global pool of distal connections (once the pool exceeds some maximum number of synapses).

This sounds easy, but once you go looking in Cells4.cpp for how to actually do it, you see that it will take some serious testing to ensure you don’t break functionality.