Do initial permanences have to be part of a normal distribution?

From BaMI:

Each input is represented by a synapse and assigned a random permanence value. The random permanence values are chosen with two criteria. First, the values are chosen to be in a small range around connectedPerm, the minimum permanence value at which a synapse is considered “connected”. This enables potential synapses to become connected (or disconnected) after a small number of training iterations. Second, …

For example, if the permanence threshold is 0.2, then initial permanences would be drawn from the range 0.1–0.3. But do those initial permanences also have to follow a normal distribution? That is, with most of them close to 0.2 and far fewer near 0.1 and 0.3?


In my experience, HTM is a pretty robust learning algorithm, so the initial values do not matter too much.

Even in the extreme case of all synapses starting disconnected (permanences all zero), the k-Winners-Take-All step will still select some columns to activate (effectively at random). And as long as there is an observable pattern in the input, the synapses of those winners that overlap the active input bits will have their permanences increased.
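The point above can be sketched in a few lines: even starting from all-zero permanences, a simple Hebbian-style update (reinforce synapses on active input bits, decay the rest) makes exactly the pattern's synapses cross the connected threshold. The constants and helper below are a toy illustration, not NuPIC code.

```python
CONNECTED_PERM, INC, DEC = 0.2, 0.05, 0.02

def learn(perms, active_input, steps=20):
    """Hebbian-style update: reinforce synapses on active bits, decay the rest."""
    for _ in range(steps):
        perms = [min(1.0, p + INC) if active else max(0.0, p - DEC)
                 for p, active in zip(perms, active_input)]
    return perms

# Worst-case initialization: every permanence starts at zero.
perms = [0.0] * 8
pattern = [True, False, True, False, True, False, False, False]
perms = learn(perms, pattern)
connected = [p >= CONNECTED_PERM for p in perms]
print(connected == pattern)  # True: only the pattern's synapses became connected
```

After a handful of iterations the connected set matches the input pattern regardless of where the permanences started, which is why the initial distribution has so little effect.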

In contrast to almost all other machine learning tools, in HTM the initial distribution of the permanence values does not matter much, because the learning algorithm quickly pushes them in the right direction.
