In our Pattern Pooler, we do not track duty cycles to identify over-active and under-active neurons. In fact, we are not interested in these properties at all, which is why we do not use boosting.
To be honest, we could never get boosting to work while keeping the network stable. The active neurons would thrash back and forth as boosting compensated for duty cycle. Calibrating the boosting hyperparameters turned out to be a huge headache, and the optimal values were always specific to the particular network being studied. Thus, we did away with boosting completely.
In its place, we now allow neurons to be over-active and under-active as they please. Common patterns create a positive feedback loop and their mapping becomes “locked in”. The learning percentage parameter just ensures that this process doesn’t happen too quickly and allows the pattern mapping to stabilize.
Neurons that don’t have strong mappings will be quickly allocated when new patterns emerge. These new patterns will not affect the previously learned patterns represented by the “locked in” neurons.
This approach not only makes the pooled representations stable, but also allows new patterns to be learned continually without affecting the old ones.
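As a rough illustration of this lock-in behavior, here is a minimal winner-take-all sketch. This is not the BrainBlocks API; all names, sizes, and the learning-percentage parameter are hypothetical. It shows a repeated pattern locking onto a fixed set of neurons, after which a novel pattern is allocated to spare neurons without disturbing the earlier mapping:

```python
# Hypothetical WTA pooling sketch (not the BrainBlocks API).
import random

random.seed(0)

NUM_NEURONS = 32
INPUT_SIZE = 16
NUM_WINNERS = 4
LEARN_PCT = 0.25  # fraction of a winner's mismatched synapses updated per step

# Each neuron holds a binary receptive field over the input bits.
weights = [[random.random() < 0.5 for _ in range(INPUT_SIZE)]
           for _ in range(NUM_NEURONS)]

def pool(pattern, learn=True):
    """Score neurons by overlap with the pattern, pick the top
    NUM_WINNERS, and (optionally) move a fraction of each winner's
    mismatched synapses toward the pattern."""
    scores = [(sum(w and p for w, p in zip(weights[n], pattern)), n)
              for n in range(NUM_NEURONS)]
    winners = [n for _, n in sorted(scores, reverse=True)[:NUM_WINNERS]]
    if learn:
        for n in winners:
            mismatched = [i for i in range(INPUT_SIZE)
                          if weights[n][i] != pattern[i]]
            random.shuffle(mismatched)
            for i in mismatched[:max(1, int(LEARN_PCT * len(mismatched)))]:
                weights[n][i] = pattern[i]
    return sorted(winners)

pattern_a = [i < 8 for i in range(INPUT_SIZE)]
pattern_b = [i >= 8 for i in range(INPUT_SIZE)]

# Repeated presentation locks pattern A onto a fixed set of winners.
for _ in range(20):
    winners_a = pool(pattern_a)

# A novel pattern is allocated different neurons; A's winners now match
# A exactly, so they score zero on B and are never stolen.
for _ in range(20):
    winners_b = pool(pattern_b)
```

After both training runs, presenting either pattern with learning disabled returns its original winner set, which is the stability property described above.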
Admittedly, we still need to do research and experiments on the actual value of doing pooling at all. We've found that doing sequence learning directly on encoded inputs is often sufficient for applications. We need more empirical studies of how pooling affects the representation and what its benefits are.
Furthermore, our approach to pooling is very different from HTM spatial pooling, which is designed to recover from lesions and to maximize the spatial distribution of a representation. We don't really care about these things in BrainBlocks.
The feature we do care about is being able to build a representation for a new pattern. The WTA algorithm selects the best-fitting neurons and reinforces them. The learning rate ensures that a new pattern won't steal all the neurons from a very similar pattern, and lets the other patterns "fight back" so that their pooled outputs remain sufficiently dissimilar.
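To make the "fight back" dynamic concrete, here is a hypothetical single learning step (the function names and the learning-percentage value are ours, not the BrainBlocks API). A neuron fully locked onto one pattern is won by a similar pattern, but because only a fraction of its mismatched synapses move per step, its mapping to the original pattern is barely disturbed and the original pattern can win it back on its next presentation:

```python
# Hypothetical single WTA learning step (not the BrainBlocks API).
import random

random.seed(1)

LEARN_PCT = 0.25
INPUT_SIZE = 16

def reinforce(weights, pattern):
    """Move a fraction (LEARN_PCT) of a winning neuron's mismatched
    synapses toward the pattern -- one WTA reinforcement step."""
    mismatched = [i for i in range(INPUT_SIZE) if weights[i] != pattern[i]]
    random.shuffle(mismatched)
    for i in mismatched[:max(1, int(LEARN_PCT * len(mismatched)))]:
        weights[i] = pattern[i]

def overlap(weights, pattern):
    """Count input bits where the receptive field and pattern are both active."""
    return sum(w and p for w, p in zip(weights, pattern))

pattern_a = [i < 8 for i in range(INPUT_SIZE)]
# A similar pattern sharing 6 of pattern A's 8 active bits.
pattern_a2 = [(i < 6) or (8 <= i < 10) for i in range(INPUT_SIZE)]

# A neuron fully locked onto pattern A (perfect overlap of 8).
neuron = list(pattern_a)

# One presentation of the similar pattern flips at most one of the four
# mismatched synapses, so the neuron stays near-perfectly tuned to A.
reinforce(neuron, pattern_a2)
```

Because each step moves only `LEARN_PCT` of the mismatched synapses, a similar pattern cannot fully convert a locked neuron in one shot; if pattern A keeps being presented, it restores its synapses and keeps its pooled output distinct.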