Apologies for the delayed reply! Yes, I had seen the material on boosting.
It was my understanding (and please correct me if I'm wrong) that boosting was developed as a top-down response to the problem of sparsity and regulating over-active cells. By that I mean, some columns were observed to activate too frequently, or not frequently enough, and some mechanism was required to achieve homeostatic regulation. Boosting was proposed and developed as a solution to that issue, but it lacks the bottom-up fundamentals that are desirable in a model that seeks to mimic its biological counterpart.
It is my understanding that the boost level is a per-column modifier, applied regardless of which SDR happens to be active at the time (in other words, it ignores surrounding context). Is it correct to say the boost level is a single scalar multiplier per minicolumn, rather than a function of inhibition from connections? If my understanding is wrong, you can probably skip the rest of the post :). If, however, this is how boosting is currently implemented, it doesn't sound biologically plausible for a column to be regulated up or down in this global/absolute sense, regardless of which other columns are active. I suspect such a model would have too many unwanted side effects on previously learned data.
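To make sure we're talking about the same thing, here's a minimal sketch of boosting as I understand it from the spatial pooler descriptions. All names and parameters here are illustrative, not NuPIC's actual API; the key point is that each column's boost is computed only from that column's own activity history:

```python
import numpy as np

rng = np.random.default_rng(0)

num_columns = 2048
sparsity = 0.02                      # target fraction of winning columns
boost_strength = 3.0                 # illustrative value
active_duty = np.zeros(num_columns)  # running average of how often each column wins

def update_boost(active_duty, target=sparsity, strength=boost_strength):
    # One scalar multiplier per column, derived solely from that column's
    # duty cycle -- no dependence on which SDR is currently active.
    return np.exp(strength * (target - active_duty))

def select_active(overlaps, boost, k=int(num_columns * sparsity)):
    # Boost is applied to raw overlaps before the top-k (inhibition) step,
    # so an under-active column wins more easily in *every* context.
    boosted = overlaps * boost
    return np.argsort(boosted)[-k:]
```

Note that `update_boost` never looks at the current input or at the other columns' activity, which is exactly the context-blindness I'm questioning below.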
To give an example by way of conjecture: when we learn a second language, say an English speaker learning to hear and understand a tonal language, there is some level of linguistic overlap (SDR overlap). The learner has to learn to recognize those new tones, which will share column activity with their native English representations. If those shared columns were simply down-regulated, with no regard for context, whenever they were deemed over-active, that sounds like something that would degrade the ability to understand English: as you down-regulate some of those shared columns, the English SDR representations necessarily decay. In practice, people learning a second language appear to suffer no loss of ability to understand their native tongue; there doesn't seem to be any such decay at all. Perhaps there is so much redundancy that the effect can't be observed, but that seems unlikely to me.
Contrast that with modulation through inhibition. You learn a new tonal language, and the new SDR for the tones conditionally (contextually) suppresses those over-active shared columns. Remove the tones, and you're back to English with no loss of fidelity. Making the boost level not a fixed scalar, but a function of the proximal (active) columns, means a column can be selectively suppressed without deleterious side effects on previously learned data.
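As a very rough sketch of what I mean (purely hypothetical, not anything from NuPIC; `W_inhib` is an illustrative lateral-inhibition weight matrix), the suppression would be a function of which columns are active, so removing the context removes the suppression:

```python
import numpy as np

num_columns = 2048
rng = np.random.default_rng(1)

# Hypothetical learned lateral-inhibition weights: W_inhib[i, j] is how
# strongly active column j suppresses column i. Random here for illustration.
W_inhib = np.abs(rng.normal(0.0, 0.01, (num_columns, num_columns)))
np.fill_diagonal(W_inhib, 0.0)

def contextual_overlaps(overlaps, active_context):
    # Suppression depends on the currently active set (e.g. the tonal SDR).
    # With an empty context, overlaps pass through unchanged -- the original
    # (English) representation is untouched.
    suppression = W_inhib[:, active_context].sum(axis=1)
    return overlaps - suppression
```

The contrast with the scalar scheme is that nothing here is permanently written into the column itself; the down-regulation only exists while the suppressing context is active.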
When I have more time (which could be the far future at this rate, but I hope to get there!), I might try to demonstrate this empirically. I would create a metric for how much previously learned SDRs had decayed (essentially been forgotten) under boosting, then see whether switching to an inhibition-modulating model makes any difference.
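The metric I have in mind is simple in principle: record each input's SDR before the new learning, re-present the same input afterwards, and measure how much of the original active set was lost. A rough sketch (the function name and exact definition are just my own proposal):

```python
def sdr_decay(before, after):
    """Fraction of a previously learned SDR that was lost.

    before, after: iterables of active column indices for the same input,
    recorded before and after learning new data.
    Returns 0.0 if the representation is perfectly retained,
    1.0 if it was entirely forgotten.
    """
    before, after = set(before), set(after)
    if not before:
        return 0.0
    retained = len(before & after) / len(before)
    return 1.0 - retained
```

Averaging this over a held set of "native language" inputs, first under scalar boosting and then under an inhibition-modulating scheme, should show whether the predicted decay actually occurs.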