@ddigiorg This is awesome! Thanks for sharing.
- Dendritic Segment Activation Threshold (θ): 15
I suppose this is the number of active synapses per segment (connected synapses, i.e. permanence above the connection threshold, whose presynaptic cell is active) needed for the segment itself to become active, correct?
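To make sure I'm reading this right, here's how I'd sketch the check. Everything here is my own naming, and the `CONNECTED_PERM` value of 0.5 is just an assumed connection threshold, not taken from your parameters:

```python
CONNECTED_PERM = 0.5        # assumed connection threshold (not from your list)
ACTIVATION_THRESHOLD = 15   # the theta you gave

def segment_active(synapses, active_cells):
    """synapses: list of (presynaptic_cell, permanence) pairs.
    A synapse counts as active only if it is connected AND its
    presynaptic cell is currently active."""
    n_active = sum(
        1 for presyn_cell, permanence in synapses
        if permanence >= CONNECTED_PERM and presyn_cell in active_cells
    )
    return n_active >= ACTIVATION_THRESHOLD
```

Is that the semantics you intended, or does theta count connected synapses regardless of presynaptic activity?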
- Initial Synaptic Permanence: 0.21
Here my understanding was that the initial permanence would be a random value chosen around the connection threshold (connectedPerm), not a fixed constant. Citing the SP whitepaper:
> Prior to receiving any inputs, the code is initialized by computing a list of initial potential synapses for each column. This consists of a random set of inputs selected from the input space. Each input is represented by a synapse and assigned a random permanence value. The random permanence values are chosen with two criteria. First, the values are chosen to be in a small range around connectedPerm (the minimum permanence value at which a synapse is considered "connected").
The passage above refers to proximal synapse initialization, but I assume distal synapses would be initialized the same way.
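In other words, treating your 0.21 as connectedPerm, my reading of the whitepaper would be something like the sketch below. The `spread` value is my own guess, not from the whitepaper:

```python
import random

CONNECTED_PERM = 0.21  # treating the given 0.21 as the connection threshold

def initial_permanence(spread=0.1):
    """Random permanence in a small range around connectedPerm,
    per the SP whitepaper's initialization description."""
    return random.uniform(CONNECTED_PERM - spread, CONNECTED_PERM + spread)
```

So is your 0.21 the connection threshold itself, or literally the fixed value every new synapse starts at?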
- Synaptic Permanence Increment and Decrement: +/- 0.1
- Synaptic Permanence Decrement for Predicted Inactive Segments: 0.01
Why are two different increment/decrement values needed? I didn't see that distinction in the whitepaper.
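My best guess at the intent, which I'd like confirmed: the large +/- 0.1 applies when a segment's cell was correctly active, and the small 0.01 is a mild punishment for segments that predicted activity that never came. A sketch of that reading (all names and the clamping are my assumptions):

```python
PERM_INC = 0.1        # active synapses on a correctly active segment
PERM_DEC = 0.1        # inactive synapses on that same segment
PREDICTED_DEC = 0.01  # segment predicted, but its cell stayed inactive

def adapt_segment(synapses, prev_active_cells, correctly_active):
    """synapses: dict mapping presynaptic cell -> permanence."""
    for cell, perm in synapses.items():
        if correctly_active:
            # reinforce synapses that contributed, weaken the rest
            delta = PERM_INC if cell in prev_active_cells else -PERM_DEC
        else:
            # failed prediction: only mildly punish contributing synapses
            delta = -PREDICTED_DEC if cell in prev_active_cells else 0.0
        synapses[cell] = min(1.0, max(0.0, perm + delta))
```

Is that roughly what the second decrement is for?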
- Maximum Number of Segments per Cell: 128
- Maximum Number of Synapses per Segment: 128
If these are the maximum values, what are the values you start with?
Also, I'd expect these values to be tied to the overall number of cells. For example, if the region has 2048 * 32 = 65,536 cells, the maximum of 128 synapses per segment works out to roughly 0.2% of the total cell count. The reason I'm stressing this is that in an implementation it arguably shouldn't be a separate parameter at all; it could be derived automatically from the column and cell counts.
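Concretely, I'd picture deriving it like this. The 0.2% fraction is just my back-of-the-envelope reading of 128 / 65,536 (which is actually about 0.195%), so the derived value lands near, but not exactly at, 128:

```python
def max_synapses_per_segment(num_columns=2048, cells_per_column=32,
                             fraction=0.002):
    """Derive the per-segment synapse cap from region size instead of
    setting it independently. fraction=0.002 is my rounded guess."""
    total_cells = num_columns * cells_per_column
    return round(total_cells * fraction)
```

With the defaults this gives 131, in the same ballpark as 128; using the exact ratio 128 / 65536 as `fraction` reproduces 128. Was 128 chosen this way, or is it just a convenient power of two?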
Side Question: Why are permanence values floats from 0.0 to 1.0? Why not an int8 from 0 to 100?
With int8 you would always have exactly 100 discrete steps, whereas with floats the number of steps is effectively unbounded, depending on how small you set the increment (learning rate).
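Something like the following is what I have in mind; an int8 permanence takes 1 byte per synapse instead of 4 or 8 for a float, at the cost of fixed 1% resolution. (The widening to int16 before adding is just to avoid int8 overflow during the update; all values here are illustrative.)

```python
import numpy as np

PERM_MAX = 100   # corresponds to 1.0
PERM_INC = 10    # corresponds to a 0.1 increment

# three example synapse permanences stored as int8 (0..100)
perms = np.array([15, 50, 95], dtype=np.int8)

# reinforce: widen to int16, add, clip back into range, narrow to int8
perms = np.clip(perms.astype(np.int16) + PERM_INC, 0, PERM_MAX).astype(np.int8)
```

Is there a reason (e.g. very small learning rates, or averaging during boosting) that floats are preferred in practice?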