I am wondering: is there any stochastic (random) component in the HTM learning algorithm, besides the initialization (which is driven by a seed)?
So if I run 2 HTM models with the same seed on the same data, are all the permanences, synapses, activations, etc. the same? Is this also true for the SDR classifier, and for the anomaly likelihood / log scores?
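To make the question concrete, here is a toy sketch (not NuPIC code; the `ToySeededModel` class is hypothetical) of the property being asked about: if every random choice, from permanence initialization to winner-cell tie-breaking, is drawn from one seeded RNG, then two runs with the same seed on the same data end up bit-identical.

```python
# Toy illustration only -- not the NuPIC implementation. All randomness flows
# through a single seeded RNG, so two same-seed runs produce identical state.
import numpy as np

class ToySeededModel:
    """Hypothetical stand-in for an HTM model."""
    def __init__(self, n_columns=64, cells_per_column=4, seed=42):
        self.rng = np.random.RandomState(seed)            # single source of randomness
        self.permanences = self.rng.rand(n_columns, cells_per_column)
        self.cells_per_column = cells_per_column

    def step(self, active_columns, learn=True):
        # Pick a "winner" cell per active column; ties broken by the seeded RNG.
        winners = []
        for col in active_columns:
            noise = self.rng.rand(self.cells_per_column) * 1e-6
            winners.append((col, int(np.argmax(self.permanences[col] + noise))))
        if learn:
            for col, cell in winners:
                self.permanences[col, cell] = min(1.0, self.permanences[col, cell] + 0.1)
        return winners

data = [np.array([1, 5, 9]), np.array([2, 6, 10]), np.array([1, 5, 9])]

m1, m2 = ToySeededModel(seed=42), ToySeededModel(seed=42)
out1 = [m1.step(x) for x in data]
out2 = [m2.step(x) for x in data]

print(out1 == out2)                                       # True: same winner cells
print(np.array_equal(m1.permanences, m2.permanences))     # True: same permanences
```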
I can’t answer this for NuPIC specifically, but in general the HTM algorithms can certainly be written that way. The trick, of course, would be managing any processes designed to run asynchronously; otherwise the execution environment itself would introduce nondeterminism.
It depends on how noisy the steps in between are, as well as on some of the configuration params, such as the activation threshold, the max new synapse count, and the learning/forgetting rates.
If the 20 steps are pure noise (RNG output, etc.), then regardless of the settings it will just keep creating new branches off the original sequence, trying to learn a pattern that doesn’t exist. At the other extreme, if the noise is low and the activation threshold is not set too tightly, the noise may have virtually no impact at all, thanks to the high noise tolerance of SDRs, and the one sequence would be learned.
If the activation threshold and learning/forgetting rates are kept low, then in many cases any given noise pattern will not be encountered frequently enough to outweigh the predictability of the non-noise steps, and the model should converge onto the correct sequence.
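The noise-tolerance side of this is easy to see numerically. A small numpy sketch: corrupt some fraction of an SDR’s active bits and check whether the overlap with the original still clears the activation threshold (sizes and threshold below are illustrative, not recommended parameter values).

```python
# Sketch of SDR noise tolerance: overlap vs. activation threshold as active
# bits are corrupted. Dimensions and threshold are illustrative only.
import numpy as np

rng = np.random.RandomState(0)
n_bits, n_active = 2048, 40            # typical-looking SDR dimensions (assumed)
activation_threshold = 13              # illustrative threshold

original = rng.choice(n_bits, size=n_active, replace=False)

def noisy_copy(sdr, flip_fraction):
    """Replace a fraction of the active bits with random other bits."""
    n_flip = int(round(flip_fraction * len(sdr)))
    keep = rng.choice(sdr, size=len(sdr) - n_flip, replace=False)
    others = np.setdiff1d(np.arange(n_bits), sdr)
    return np.concatenate([keep, rng.choice(others, size=n_flip, replace=False)])

for frac in (0.0, 0.25, 0.5, 0.9):
    noisy = noisy_copy(original, frac)
    overlap = len(np.intersect1d(original, noisy))
    print(f"noise={frac:.2f}  overlap={overlap:2d}  "
          f"recognized={overlap >= activation_threshold}")
```

With low or moderate noise the overlap still clears the threshold, so the learned sequence is recognized; only heavy corruption drops it below.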
I believe the permanence values, anomaly scores and anomaly likelihoods should be the same run to run, if using the same data and hyperparameter values. I think the difference is which exact cells are chosen as winners when TM bursting happens in a column. So the SDRs should basically look different but act the same.
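One way to make “look different but act the same” concrete: if two runs burst the same columns but pick different winner cells within them, the cell-level SDRs differ while the column-level SDRs match. A tiny numpy sketch (the cell indices below are invented for illustration):

```python
# Two hypothetical runs that activate different cells inside the same columns.
# Mapping cells back to columns (cell // cells_per_column) shows the agreement.
import numpy as np

cells_per_column = 32
active_cells_run1 = np.array([65, 1287, 4099, 60001])   # hypothetical winner cells
active_cells_run2 = np.array([70, 1290, 4100, 60020])   # different cells, same columns

cols1 = np.unique(active_cells_run1 // cells_per_column)
cols2 = np.unique(active_cells_run2 // cells_per_column)

print("cell-level match:  ", np.array_equal(active_cells_run1, active_cells_run2))  # False
print("column-level match:", np.array_equal(cols1, cols2))                          # True
```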
I imagine the SDR classifier’s behavior should be the same, if given the same TM states and input encoding buckets. I know there’s a traditional ANN involved there too, which may introduce some variation; I’m not sure about that…
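For what it’s worth, as I understand it that “ANN” is essentially a single-layer softmax model over bucket indices, and if its weights start from a fixed value (e.g. zeros) it adds no randomness of its own. A minimal sketch of that kind of classifier (an illustration, not NuPIC’s SDRClassifier code):

```python
# Minimal single-layer softmax classifier over sparse binary input. With
# zero-initialized weights and identical inputs, its outputs are identical
# run to run. Illustration only -- not the NuPIC SDRClassifier implementation.
import numpy as np

class TinySoftmaxClassifier:
    def __init__(self, n_input_bits, n_buckets, lr=0.1):
        self.w = np.zeros((n_buckets, n_input_bits))      # no random init -> deterministic
        self.lr = lr

    def infer(self, active_bits):
        scores = self.w[:, active_bits].sum(axis=1)
        e = np.exp(scores - scores.max())
        return e / e.sum()                                # softmax over buckets

    def learn(self, active_bits, target_bucket):
        probs = self.infer(active_bits)
        error = -probs
        error[target_bucket] += 1.0                       # one-hot target minus prediction
        self.w[:, active_bits] += self.lr * error[:, None]

def run(input_seed=0):
    rng = np.random.RandomState(input_seed)               # same "TM states" each run
    clf = TinySoftmaxClassifier(n_input_bits=2048, n_buckets=10)
    for _ in range(100):
        active = rng.choice(2048, size=40, replace=False)
        clf.learn(active, target_bucket=int(rng.randint(10)))
    return clf.infer(np.arange(40))                       # probe with a fixed input

print(np.array_equal(run(), run()))                       # True: identical distributions
```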