TemporalMemory running very slow after long training

I’m not aware that one exists in NuPIC.core’s TemporalMemory.

That sounds like a good idea. Thanks!

I know it existed in the numenta/nupic (Python) repository, in the old temporal pooler (the TP class). TP is not there anymore; was it renamed, or completely removed? Its C++ counterpart Cells4 (it was the same) still exists and does have applyGlobalDecay() (but I’m not sure whether it would work, or ever did).

I have not run your example yet, but from the description of the problem it sounds like your TM is going to have 100% anomaly, between the random input data and the fact that there isn’t enough activity to trigger a dendritic prediction.

(64 inputs × 10% sparsity ≈ 6 active mini-columns. If I remember right, NuPIC’s TM defaults to an activationThreshold of 13, so 6 active mini-columns can never put enough active synapses on a single segment to reach threshold, and every step will burst.)

Bursting mini-columns is not the fastest code path. I read through the source code for bursting, and it notes several known performance issues; I’m sure that with a profiler I could find a few more. For example, I see several calls to vector::erase(), which are slow and can often be avoided (a sketch of the usual workaround follows the list below).

  • Method destroyMinPermanenceSynapses has a note saying:
    // Find cells one at a time. This is slow, but this code rarely runs, and it
    // needs to work around floating point differences between environments.
  • Method growSynapses has a note saying:
    // It's possible to optimize this, swapping candidates to the end as
    // they're used. But this is awkward to mimic in other
    // implementations, especially because it requires iterating over
    // the existing synapses in a particular order.
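
For illustration, here is a minimal sketch (not NuPIC code; the names are mine) of the standard swap-and-pop workaround: when element order doesn’t matter, it removes a middle element in O(1) instead of the O(n) shift that vector::erase performs.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative only, not NuPIC code. vector::erase on a middle element
// shifts every later element left, costing O(n) per erase. If element
// order doesn't matter, overwrite the victim with the last element and
// pop the back instead, which is O(1).
template <typename T>
void swapAndPop(std::vector<T> &v, std::size_t i) {
  if (i + 1 != v.size())
    v[i] = std::move(v.back()); // replace victim with the last element
  v.pop_back();                 // shrink by one; nothing shifts
}
```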

If I remember correctly, Cells4 is the non-biologically-plausible TemporalPooler, while TemporalMemory is the biologically plausible one.

[marty@zack algorithms]$ ls TemporalMemory.hpp 
TemporalMemory.hpp
[marty@zack algorithms]$ cat TemporalMemory.hpp | grep decay #nothing
[marty@zack algorithms]$ cat TemporalMemory.hpp | grep Decay #nothing

:neutral_face: Sorry about the misdirection about Cells4.cpp, that was advice I gave someone who was trying to do the same thing with the old “Backtracking TM”. For the new TM you guys are on the right track.

No prob.
Thinking about it, synapse/segment decay would be nice to have in the TM.
Although I was a bit unclear too: decay unlearns a synapse when it is not used, in addition to the current unlearning when it mismatches.

The new thing would be the pruning/reuse of old synapses and segments.

The way I have implemented this is by tagging each synapse with its last-used timestep; then I only need to apply the decay when a synapse is next used (and can remove it at that time if it reaches zero permanence as well). This saves you some processing compared to iterating over every synapse to apply the decay.
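
A minimal sketch of the idea (the names and types here are illustrative, not from NuPIC):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative sketch; names and types are mine, not NuPIC's.
// Each synapse remembers the timestep at which it was last used, and
// the accumulated decay is applied lazily the next time it is used.
struct Synapse {
  float permanence;
  std::uint64_t lastUsed; // timestep of the synapse's last activation
};

// Apply the decay owed since lastUsed. Returns false if the synapse
// has decayed to zero permanence and should be destroyed.
bool touchSynapse(Synapse &syn, std::uint64_t now, float decayPerStep) {
  const float owed = decayPerStep * static_cast<float>(now - syn.lastUsed);
  syn.permanence = std::max(0.0f, syn.permanence - owed);
  syn.lastUsed = now;
  return syn.permanence > 0.0f;
}
```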

I only need to apply the decay when a synapse is next used (and can remove it at that time

Yes; on the other hand, a synapse that is created and then never used again is never visited, so it never decays away, and that is likely the culprit observed here.

I’m wondering whether decay could be a separate thread that visits synapses and lowers them, ordered by the timestamp as you say.

I’m not sure about NuPIC specifically, but my initial interpretation of extraneous synapses causing the TM to run slower over time (versus just consuming more memory) is that the algorithm is iterating over all synapses somewhere.

HTM can be implemented without doing this. Instead of sampling from the receiving segments, you can push updates via pointers on the transmitting axons. Due to sparsity, this is a significant optimization. This strategy can be applied to both the SP and the TM.
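
A minimal sketch of the idea, assuming a map from each transmitting cell to the segments it synapses onto (the names are illustrative, not an actual NuPIC API):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative sketch; names are mine. Index synapses by their
// transmitting (presynaptic) cell so that each timestep only the
// synapses of the few active cells are visited at all.
using CellIdx    = std::uint32_t;
using SegmentIdx = std::uint32_t;

struct ForwardIndex {
  // presynaptic cell -> segments it has a connected synapse onto
  std::unordered_map<CellIdx, std::vector<SegmentIdx>> segmentsForCell;

  // Accumulate per-segment overlap counts from the active cells only.
  // 'overlaps' must be pre-sized to the total number of segments.
  void computeOverlaps(const std::vector<CellIdx> &activeCells,
                       std::vector<std::uint32_t> &overlaps) const {
    for (CellIdx cell : activeCells) {
      const auto it = segmentsForCell.find(cell);
      if (it == segmentsForCell.end())
        continue;
      for (SegmentIdx seg : it->second)
        ++overlaps[seg]; // one more active connected synapse on 'seg'
    }
  }
};
```

With typical HTM sparsity (a few percent of cells active), this touches only a tiny fraction of the synapses that a per-segment scan would.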


… Then you can do LTD as biology does: if a cell fires but a connected postsynaptic cell doesn’t, you apply a tiny decrement to the permanence of the synapse. That way some unused synapses will go away progressively. LTD, at least from a biological perspective (i.e. activation of calcium-dependent phosphatases), is not, I think, being considered in NuPIC.

The “time-stamp” based approach can be similar to biological synaptic pruning (which is a different thing: it involves microglia and is tied to “early” stages of life). Pruning is a “key” process across the whole nervous system (including the central nervous system). Pruning individual synapses seems like a memory “hog”; if you instead tag and prune unused distal segments (i.e. those that haven’t produced a valid prediction in a very long time), I think it could be easier (see the sketch below).
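
A rough sketch of that segment-level variant, assuming each distal segment records the timestep of its last valid prediction (illustrative names only):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch; names are mine. Segments that have not produced
// a valid prediction for more than 'maxIdle' timesteps are destroyed
// in a single erase-remove pass.
struct Segment {
  std::uint64_t lastPredictive; // timestep of the last correct prediction
  // ... synapse data would live here ...
};

void pruneStaleSegments(std::vector<Segment> &segments, std::uint64_t now,
                        std::uint64_t maxIdle) {
  segments.erase(std::remove_if(segments.begin(), segments.end(),
                                [&](const Segment &s) {
                                  return now - s.lastPredictive > maxIdle;
                                }),
                 segments.end());
}
```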

Or you can take this a step further and have a nightly maintenance phase where you shrink all synapses!
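
Something like this minimal sketch (illustrative, not from NuPIC), run rarely and off the hot path:

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch; names are mine. A rare maintenance pass that
// lowers every permanence a little and compacts away dead synapses.
struct Synapse {
  float permanence;
};

void maintenancePass(std::vector<Synapse> &synapses, float decay) {
  std::size_t keep = 0;
  for (const Synapse &s : synapses) {
    const float p = s.permanence - decay;
    if (p > 0.0f)
      synapses[keep++] = Synapse{p}; // survivor: write it back compacted
  }
  synapses.resize(keep); // everything past 'keep' decayed to zero
}
```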


So what does it mean to reduce all memories at a constant rate?
If memories fall away at some constant rate, you have a sliding window that essentially goes to zero after some passage of time. There is a notable exception in your teen years, where you imprint on your culture: the last of the plastic learning phases.

This concept is explored here:

Understanding the reminiscence bump: A systematic review

https://www.researchgate.net/publication/329581259_Understanding_the_reminiscence_bump_A_systematic_review

And a “fluffier” exploration of the same concept:

The problem is that at “boot” time, the input flow will modify how the temporal memory perceives the input data (via changes in proximal synapses). Therefore, you will never again see many “early” learned sequences; they are there using resources for nothing. I guess biology faces the same problem and solves it via synaptic pruning after the “initial” learning is done.

IMHO, all those psychophysical papers seem a bit “dangerous”. They sit above the “consciousness” level in the hierarchy (and many unknown mechanisms can be involved). At the lowest levels of the hierarchy, the core algorithm should be the same.


Agreed. Measuring the “total” system performance does not tell you what is happening at the lower levels. It does provide a “black box” that gives you limits that must be met overall.

For example: timing tests that measure the response from presentation of a cue to taking some action put a hard limit on how fast processing must be happening. Combined with known neuron firing rates (these have been measured many times), this leads to the “100 step” rule: whatever the brain is doing must take fewer than 100 steps from input to output. (Roughly: each synaptic stage takes on the order of 5-10 ms, and recognition-to-action responses happen within about half a second, which leaves room for only about 100 serial steps.) We don’t know what those steps might be, but we know that it can’t be some long iterative process. This rules out whole classes of possible processing algorithms.

The same thing applies to these memory tests: we know that, whatever mechanism the brain uses for memory, there are some definite memory time frames and forgetting rates. Theories that do not match up to this will have to explain why they differ.


It reminded me of a talk. It is not very thorough, but I believe it is related, and it suggests that perhaps this is not exactly a problem.
What is a Thought? How the Brain Creates New Ideas | Henning Beck | TEDxHHL

OK, a good example (for my reasoning :grinning:). The assumption that “processing” time is an equivalent metric of computational power might be misleading. What if the brain is making extensive use of forward prediction to do that? You can cross many layers of the hierarchy really fast, but that still doesn’t explain how the cortex is able to perform “accurate” forward predictions.

In my opinion, the key is in the learning process (i.e. how it is done). That learning will progressively build the layout for your “total” system performance, in time (across lifespan experience) and in space (across the hierarchy). Black-box observations tell you little about how the details are done. I think it is much better to start from below.

I can’t argue that working from “the bottom up” is a bad approach; my own work starts with the biology and builds from that point. This is one of the reasons I am invested in the basic HTM model: much of what it does is in close alignment with how I think the biology works.

I will offer that blind adherence to a strict top-down or bottom-up stance limits your navigation of the problem search space. As you move from the known to the unknown, each step adds a degree of uncertainty. At some point the uncertainty builds until you really don’t know anything. Having some “goal” helps constrain the search space for faster convergence on a solution.

Each method should inform the other to aid in faster understanding.

BTW: the “Deep Predictive Learning: A Comprehensive Model of Three Visual Streams” paper postulates that this is exactly what is going on in the cortex/thalamus streams. The “forward” stream from the senses going up the hierarchy interacts with the guidance from the “reverse” pathways, including the hypothalamus/thalamus/forebrain/cortex, as a feedback or training signal. This is NOT the classic ANN back-propagation, but instead more plausible local error processing. I highly recommend this paper to anyone thinking about system-level on-line learning. Not an easy read, but well worth the effort.


In NuPIC the TM does what you’re talking about, by using the Connections class. The Connections class keeps a list of input axons and where all of their synapses go, and it efficiently distributes each active input by iterating through just the synapses that the input connects to.

The SP does not use the Connections class, instead using sparse matrices. In the community fork I’ve changed the SP to use the Connections class and have also optimized the Connections class itself.


I agree. You need both. But start from the bottom :smile:

From there you need “high-level” observations/intuitions to constrain your infinite search space. Using the top as the only drive, you end up with DNN v2.0.

Using the bottom as the only drive, you end up with the Human Brain Project (BTW: I suggest you take a look at Markram’s Simulation Neuroscience course on edX; it is incredible work… but I don’t know whether they will succeed following only that route).

Thanks for the paper. I read parts of it some time ago. Without investing too much time in it, my two cents is that it is way too complex (and too specific). It should be simpler, and valid not only for vision but for any part of the cortex. I like S.M. Sherman’s papers more :wink:
