So, one of the problems I ran into while messing with NLP in HTM is that although Temporal Memory predicts all possible futures, it doesn't give me any sense of probability.
For example, this is the SDR that TM predicts after being fed a few letters. (Each number indicates how many bits are on in a given category.)
{ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
16., 0., 0., 0., 0., 0., 16., 0., 0., 0., 0., 0., 0., 0., 0.}
As you can see, in the predicted SDR either all bits are off or all 16 bits are on. This property of TM removes my ability to estimate how likely the next character is. It would be nice if TM told me "category 15 is possible, but I'm not that sure," instead of "category 15 is above the threshold, so I guess it will be present."
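To make the problem concrete: any attempt to read a distribution off that thresholded SDR collapses to a uniform split over the predicted categories, with no ranking between them.

```python
import numpy as np

# The predicted SDR from above: thresholded, so every predicted category
# shows all 16 bits on, regardless of how confident TM actually was.
predicted = np.zeros(30)
predicted[15] = 16
predicted[21] = 16

# Normalizing the bit counts gives 0.5 / 0.5: both categories look
# equally likely, even if one was barely above the threshold.
probs = predicted / predicted.sum()
```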
And so I propose a solution. I think a mechanism like global inhibition in the SP can help with this problem. By allowing only the top-scoring neurons to activate, TM could express probability by turning off the neurons it is less confident about.
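A minimal sketch of the idea, assuming we can read out a per-cell predictive score before thresholding (e.g. the count of active synapses on each cell's best matching segment; the exact quantity and the example numbers are assumptions, not something TM exposes today):

```python
import numpy as np

n_categories = 30
cells_per_category = 16

# Hypothetical per-cell predictive scores *before* thresholding. Standard
# TM would binarize these; here we keep the raw values.
scores = np.zeros((n_categories, cells_per_category))
scores[15] = np.linspace(10.0, 14.0, cells_per_category)  # strongly predicted
scores[21] = np.linspace(3.0, 9.0, cells_per_category)    # weakly predicted

# Global inhibition: only the k highest-scoring cells in the whole layer
# stay predictive, analogous to the SP's top-k winner selection.
k = 20
flat = scores.flatten()
threshold = np.sort(flat)[-k]
predictive = flat >= threshold

# Count surviving cells per category and normalize into a confidence score.
per_category = predictive.reshape(n_categories, cells_per_category).sum(axis=1)
probs = per_category / per_category.sum()
# Category 15 keeps all 16 of its cells, category 21 only its 4 strongest,
# so the readout ranks them instead of treating both as equally certain.
```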
Well… is global inhibition in TM even biologically plausible? What else could such a mechanism imply?