So, I’m working on more SP visualizations to compare how learning improves the spatial representations it produces. In this example, the same noisy encoding is passed into two different spatial poolers. One is random (learning is turned off), and the other is learning over time.
When there is no boost factor (maxBoost=1.0), it looks great and shows very nicely how learning improves the SP’s output. But when I turn boosting on (maxBoost=2.0), it is pretty obvious that the SP’s output representation changes drastically.
Now, I understand the point of boosting, but I don’t understand all the internals yet, so maybe this comment is off-base. But how can this be right? If the main goal of the SP is to maintain the overlap properties of the input space, and boosting changes the output representation so drastically, how can it still be doing a good job?
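To make the question concrete, here's a toy sketch of why boosting can flip the output SDR. This is not the actual NuPIC implementation — the boost update rule and inhibition step are simplified to a plain k-winners selection, and the overlap values and boost assignments are made up — but it shows the mechanism: boost factors multiply each column's overlap before the winners are chosen, so underused columns with a high boost can displace columns that would otherwise win on raw overlap alone.

```python
import numpy as np

def k_winners(overlaps, boosts, k):
    """Pick the k columns with the highest boosted overlap (toy inhibition)."""
    boosted = overlaps * boosts
    return set(np.argsort(boosted)[-k:])

n_cols, k = 50, 10
overlaps = np.arange(n_cols, dtype=float)  # column i has raw overlap i

# maxBoost = 1.0: every boost factor stays at 1, winners follow raw overlap.
no_boost = np.ones(n_cols)

# maxBoost = 2.0: pretend columns 30-39 were underused and got boosted to 2x.
boosts = np.ones(n_cols)
boosts[30:40] = 2.0

winners_plain = k_winners(overlaps, no_boost, k)    # columns 40-49 win
winners_boosted = k_winners(overlaps, boosts, k)    # columns 30-39 win instead
print(sorted(winners_plain))
print(sorted(winners_boosted))
print(len(winners_plain & winners_boosted))  # 0 — the SDRs share no columns
```

In this contrived case the boosted SDR shares nothing with the unboosted one, which is the drastic change I'm seeing — the question is whether the real SP's boosting preserves the input-space overlap properties despite this.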