I have noticed the same issue as @nluu. As @sheiser1 explained, this is all fine and correct in theory. However, in real life, boosting seems to be problematic.
For example, the SP learns a pattern very quickly, in just a few steps. However, once boosting becomes active, the SP briefly "forgets" all learned patterns and starts learning again.
At some later point (in some learning iteration) the SP learns the patterns again and remains stable, until boosting becomes active again.
In real-world applications, this means the following:
The application learns patterns.
Boosting becomes active and the SP forgets the learned patterns.
To recap: boosting is biologically cool, but how is it helpful in real-world applications?
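To make the forgetting cycle concrete, here is a minimal toy sketch (not the actual NuPIC SP; the column counts, duty cycles, `target`, and `strength` values are made up for illustration) using the standard exponential boost-factor update. A column that has learned an input keeps winning until boosting kicks in; then a dormant column's boosted overlap overtakes it and steals the input:

```python
import math

def boost_factors(duty_cycles, target_density, strength):
    """Exponential boosting (NuPIC-style): columns firing below
    the target density get a boost factor > 1, frequent winners
    get a factor < 1."""
    return [math.exp((target_density - d) * strength) for d in duty_cycles]

# Two columns competing for one input: column 0 has learned it
# (high overlap, high duty cycle); column 1 is dormant.
overlaps = [20.0, 8.0]
duty_cycles = [0.30, 0.0]   # column 1 never wins
target = 0.10               # hypothetical target density
strength = 10.0             # hypothetical boostStrength

factors = boost_factors(duty_cycles, target, strength)
boosted = [o * f for o, f in zip(overlaps, factors)]
winner = max(range(len(boosted)), key=lambda i: boosted[i])
print(winner)  # the dormant column wins, i.e. the SP "forgets"
```

Despite column 0's much larger raw overlap, the boost factor for the starved column 1 (exp(1) ≈ 2.72 versus exp(-2) ≈ 0.14) flips the competition, which is exactly the destabilization described above.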
We are currently discussing log boosting, its plausibility, and its impact on MNIST. @vpuente shared great biological detail on why boosting is used, but probably only in the early stages.
Could someone please try to replicate @rhyolight's results with log boosting? The formula is simple.
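The exact formula isn't reproduced in this thread, so purely as an illustration (my own assumption, not necessarily @rhyolight's version), here is one way a log-based boost could look next to the standard exponential one; the `TARGET` and `STRENGTH` constants are hypothetical:

```python
import math

TARGET = 0.10     # hypothetical target duty cycle
STRENGTH = 10.0   # hypothetical boost strength

def exp_boost(duty):
    # Standard NuPIC-style exponential boosting for comparison.
    return math.exp((TARGET - duty) * STRENGTH)

def log_boost(duty, eps=1e-6):
    # Illustrative log variant: boost grows with the log of how
    # far below target the column is, and is floored at 1 so
    # active columns are never penalized below their raw overlap.
    return max(1.0, 1.0 + math.log((TARGET + eps) / (duty + eps)))

for duty in (0.0, 0.01, 0.10, 0.30):
    print(duty, round(exp_boost(duty), 3), round(log_boost(duty), 3))
```

A column exactly at the target gets a log boost of 1.0, and columns above the target are left alone, which is one way to make boosting less disruptive to already-learned columns.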
@mrcslws Thanks for sharing this. It dates back ages, to a problem someone at NASA raised: an SP with boosting, fed a "dummy" input sequence, will produce periodic bursts of boosting activity, which is unwanted.
Here by "inputs" you mean 25 actually different input patterns, not individual bits, right?
Basically, the issue is with an over-provisioned SP.
I’d like to make this into a test.
My takeaway: I understand what boosting should do and why it sometimes helps; on the other hand, I don't clearly see how it is done biologically. And we have the edge cases here where boosting hurts.
What do you think of my proposal to remove boosting as it is now and replace the functionality with synaptic death? Unused synapses are removed (which solves the over-provisioned SP issue above), and "weak" columns are avoided by simply killing them off. We would then need a mechanism to add a new column when all others are busy (well established, connected) and we still lack precision.
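A minimal sketch of what I mean by synaptic death, assuming a simple "last used" bookkeeping scheme (the `Synapse` class, `DEATH_AGE` threshold, and field names are all hypothetical, not NuPIC API):

```python
# Hypothetical age threshold: a synapse that has not overlapped an
# active input bit for this many iterations is considered dead.
DEATH_AGE = 1000

class Synapse:
    def __init__(self, input_bit, permanence):
        self.input_bit = input_bit
        self.permanence = permanence
        self.last_used = 0  # iteration of last overlap with active input

def mark_usage(synapses, active_input_bits, current_iteration):
    """Record that synapses overlapping the active input were used."""
    for s in synapses:
        if s.input_bit in active_input_bits:
            s.last_used = current_iteration

def prune_dead_synapses(synapses, current_iteration):
    """Synaptic death: drop synapses unused for DEATH_AGE iterations."""
    return [s for s in synapses
            if current_iteration - s.last_used < DEATH_AGE]

# Usage: one synapse keeps being used, the other starves and dies.
syns = [Synapse(0, 0.5), Synapse(1, 0.5)]
mark_usage(syns, {0}, current_iteration=600)
syns = prune_dead_synapses(syns, current_iteration=1500)
print([s.input_bit for s in syns])  # only the used synapse survives
```

The column-growth side of the proposal (adding a fresh column when all existing ones are well established but precision is still lacking) would need a separate trigger, e.g. a persistent gap between SP output and the desired sparsity, which this sketch does not cover.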