I’m developing the spatial pooling algorithm for learning image recognition. The learning process needs 3 to 4 iterations to reach a stable state (meaning the chosen columns are the same across iterations). However, after around 50 iterations, the boosting factor kicks in and the set of active columns becomes unstable. My question is: why do we need boosting when we know it hurts the stability of the result? With boosting factors enabled, we will never reach a stable learning state, if I understand correctly.
Without boosting, you might learn a few specific patterns very well and always recognize those, but fail to encode many other spatial patterns, because the few active minicolumns have already been overloaded learning those few patterns really well. When you enable boosting, you learn more patterns less well, but overall performance tends to be much better because you can learn so many more spatial features. Be sure to watch the HTM School episode on boosting if you haven’t yet.
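To make the trade-off concrete, here is a minimal sketch of an HTM-style boosting rule: columns that have been active less often than the target density get a boost factor above 1 (making them more competitive in the spatial pooler's overlap comparison), while over-active columns are suppressed. The function name and parameters are illustrative, not the exact NuPIC API.

```python
import numpy as np

def update_boost_factors(active_duty_cycles, target_density, boost_strength):
    """Sketch of the boosting rule: boost = exp(strength * (target - dutyCycle)).

    Columns activating less often than target_density get factors > 1;
    columns activating more often get factors < 1.  (Illustrative names,
    not the literal NuPIC implementation.)
    """
    return np.exp(boost_strength * (target_density - active_duty_cycles))

# Fraction of recent iterations each column was active:
duty = np.array([0.0, 0.02, 0.10])
factors = update_boost_factors(duty, target_density=0.02, boost_strength=10.0)
# The never-active column is boosted, the on-target column is left alone,
# and the over-active column is suppressed.
```

This is why stability suffers: as soon as a column dominates, its duty cycle rises, its boost factor drops below 1, and previously losing columns get a chance to activate and learn other patterns.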