Why is maxBoost always 2.0?

Hi, I’ve talked about this before on Gitter. I’ve run swarming on a good deal of test data, and the swarm-generated model_params.py file always assigns maxBoost the value 2.0. When that model_params.py file is fed into the OPF and run on the same test data, the prediction and anomaly results are consistently worse than when I manually reassign maxBoost to 1.0. Any idea why that might be? And could someone please explain what maxBoost does? I’ve struggled to find any documentation for that parameter.
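For reference, here is roughly how I override the swarm’s value before building the OPF model (a sketch of my workflow; the dictionary path follows the standard swarm output, so adjust it if your model_params.py differs):

```python
from nupic.frameworks.opf.modelfactory import ModelFactory

import model_params  # the file the swarm generated

params = model_params.MODEL_PARAMS
# The swarm wrote 2.0 here; overriding to 1.0 gives me better results.
params["modelParams"]["spParams"]["maxBoost"] = 1.0
model = ModelFactory.create(params)
```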

maxBoost controls “boosting”, an artificial process that props up columns which are becoming irrelevant, in order to keep column usage distributed. The process is also broken; you can find issues I’ve filed about it on GitHub.

Setting it to 1 effectively disables boosting (the computation is newValue = columnValue * boostFactor, where 1 <= boostFactor <= maxBoost). Your swarming result is strange; many common configurations (e.g., the NAB benchmarks) run with boosting disabled.
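To make the arithmetic concrete, a minimal sketch (the function name is mine, not NuPIC’s):

```python
def boosted_overlap(column_overlap, boost_factor, max_boost=2.0):
    """Scale a column's raw overlap by its boost factor,
    which must lie in [1.0, maxBoost]."""
    assert 1.0 <= boost_factor <= max_boost
    return column_overlap * boost_factor

# With maxBoost = 1.0 the only legal boost factor is 1.0, so the
# boosted overlap equals the raw overlap and boosting is a no-op.
print(boosted_overlap(12, 1.0, max_boost=1.0))  # -> 12.0
```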

OT: boosting should be either fixed or deprecated.

Ah, ok, thank you very much. Glad to hear that the problem is not some setup error on my part.

Boosting is used in the Spatial Pooler (SP) to increase the activity of underused columns; the SP tries to make use of all of its columns. Each column’s boost value is dynamically determined by how often the column is active relative to its neighbors, and falls between 1.0 and maxBoost. This boost value is multiplied by the column’s overlap with the input.

There are two boosting mechanisms in SP learning: (1) if a column doesn’t become active often enough, its boost value is increased, and (2) if a column’s connected synapses rarely overlap well with the input, its synapse permanences are increased. Both of these help columns learn connections to the input space, increasing the overlap for inactive columns.
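A rough sketch of both mechanisms, loosely following the SP pseudocode; the function and variable names are illustrative, not the real NuPIC API:

```python
def update_boost_factor(active_duty_cycle, min_duty_cycle, max_boost):
    """(1) Columns that are rarely active get a boost factor > 1,
    rising linearly toward max_boost as activity approaches zero."""
    if active_duty_cycle >= min_duty_cycle:
        return 1.0
    return max_boost - (max_boost - 1.0) * (active_duty_cycle / min_duty_cycle)


def boost_permanences(permanences, overlap_duty_cycle, min_duty_cycle,
                      increment=0.01):
    """(2) Columns whose overlap with the input is chronically low
    get all their synapse permanences nudged upward."""
    if overlap_duty_cycle < min_duty_cycle:
        return [p + increment for p in permanences]
    return permanences
```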

maxBoost is usually set to 1.0. Scenarios with complex inputs, like vision, may call for higher values.

Hope this helps!

So this still demands an answer… why does swarming result in the wrong boosting value? Is there anything we should do about it?

Before we decide to “do something about it” we should determine if this is indeed true and if this is a problem.

maxBoost is described in the Spatial Pooler pseudocode [1]. As @alavin mentions at the end of his post, boosting is, by definition, only useful for really long and complex data sources where you need to optimally allocate the SP columns. For most simple streaming tasks it can be set to 1.0.

To my knowledge, swarming does not explore different values of boosting; it just uses a default of 2.0. I think that default should be 1.0. There is an existing NuPIC ticket [2] to improve the various swarm parameters, and this should be one of them.

[1] http://numenta.com/assets/pdf/biological-and-machine-intelligence/0.4/BaMI-Spatial-Pooler.pdf
[2] https://github.com/numenta/nupic/issues/2829

Thanks Subutai… I just created this subtask:

And updated:

This would be a super easy newbie issue for someone to contribute…

Maybe. They need to be guided as to what the actual parameter settings should be and then they need to test it thoroughly on various datasets. I can try updating the issue with specific suggested ranges, but I won’t be able to do it right away.

Hi all,

I was experimenting with boosting over the last few days and really wonder how it can be useful in real-life scenarios. In fact, when boosting happens, the SP briefly forgets its learned state for a number of cycles.
Here you can see two examples: the first uses maxBoost = 10.0 and the second maxBoost = 1.0. It is obvious that higher boosting makes the SP more unstable than a lower value. But even maxBoost = 1.0 causes the SP to oscillate: it completely changes the set of active columns for a few cycles and then returns to the previous state.

The figure shows continuous training of the SP on an encoded digit ‘3’ (scalar encoder) for 25,000 cycles. The y axis shows the percent overlap between the active columns at cycles t and t-1; the x axis shows the cycle. All peaks (i.e., points where y < 60) are cycles where the SP enters an unstable state and changes the majority of the active columns learned in previous cycles.
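For reference, the y-axis metric is computed roughly like this (an illustrative sketch, not my exact script):

```python
def overlap_percent(prev_active, curr_active):
    """Percent of currently active columns that were also active
    in the previous cycle (the y axis of the figure)."""
    curr = set(curr_active)
    if not curr:
        return 100.0
    return 100.0 * len(set(prev_active) & curr) / len(curr)

# Example: 38 of 40 active columns unchanged between cycles -> 95%.
print(overlap_percent(range(40), range(2, 42)))  # -> 95.0
```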

I understand the concept and idea behind boosting, but changing the learned state (entering an unstable state) is, in my opinion, a no-go in a real-world scenario. Even the suggested image recognition with large data sets is not really a useful option: whatever the size of the data set, the SP will always enter an unstable state under these conditions.
I would love to see some comments on this and to learn how others deal with this issue.

Thanks

You should try different maxBoost values. I seem to remember scenarios where a value even as low as 0.2 was all you needed.