Hello! Over the last few weeks I've been diving into TM, and at this point I think I have a working TM that is slightly different from (more precisely, slightly simpler than) the original TM. I've run a lot of tests, and I'd say my variant of TM works reasonably well: it can distinguish different contexts and can learn one-shot.
But I can't solve a few problems:
- When I use boosting, my HTM's predictions are very bad.
- When I don't use boosting, I naturally end up with only a few active columns.
A small note on terminology: in this post, when I say "boost" I mean the boosting used to pick winner columns. I always use overlap-boosting for columns that haven't overlapped with the input for a long time. Sorry for any confusion.
One of my tests is a stream of random numbers: 500 numbers in the 80-90 range, then 500 numbers in the 20-30 range, cycled.
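As a sketch, that test stream can be generated like this (the function names are just for illustration):

```javascript
// Uniform random integer in [min, max], inclusive.
function randInt(min, max) {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Infinite cycled stream: 500 values in [80, 90], then 500 in [20, 30].
function* testStream() {
  while (true) {
    for (let i = 0; i < 500; i++) yield randInt(80, 90);
    for (let i = 0; i < 500; i++) yield randInt(20, 30);
  }
}

// Take 1200 values: one full cycle plus the wrap back into the high range.
const stream = testStream();
const values = Array.from({ length: 1200 }, () => stream.next().value);
```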
For example, here is the graph for non-boosted columns. It's not great (I think because my implementation of the Random Scalar Encoder is terrible), but it has two peaks at the moments when the number range changes, which is good!
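For context, here is a minimal sketch of the general shape of such an encoder. This is only an illustration modeled on the random distributed scalar encoder idea (adjacent buckets share `w - 1` bits), not my actual implementation; all parameters here are placeholders:

```javascript
// Minimal random scalar encoder sketch. Assumes non-negative inputs.
// n: total bit count, w: active bits per encoding, resolution: bucket width.
class RandomScalarEncoder {
  constructor(n = 144, w = 11, resolution = 1) {
    this.n = n;
    this.w = w;
    this.resolution = resolution;
    this.bits = []; // one randomly chosen bit per bucket step, grown lazily
  }

  encode(value) {
    const bucket = Math.floor(value / this.resolution);
    // Grow the bit list until positions bucket .. bucket + w - 1 exist.
    while (this.bits.length < bucket + this.w) {
      let bit;
      do {
        bit = Math.floor(Math.random() * this.n);
        // Re-draw if the bit repeats within the last w - 1 positions, so
        // every window of w consecutive positions holds distinct bits.
      } while (this.bits.slice(1 - this.w).includes(bit));
      this.bits.push(bit);
    }
    // Adjacent buckets share w - 1 of these positions, so nearby
    // scalars get heavily overlapping encodings.
    return this.bits.slice(bucket, bucket + this.w);
  }
}
```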
And here is the boosted version. It doesn't work at all.
For boosting I use a formula I found on this forum:

`Math.log(activity) / Math.log(0.08)`
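Here `activity` is assumed to be the column's active duty cycle in [0, 1], so the factor equals 1 at the target duty cycle of 0.08, rises above 1 for under-active columns, and falls below 1 for over-active ones. A small sketch of the formula in use (the zero guard and the clamp are my own additions, not part of the forum formula):

```javascript
// Boost factor from the forum formula, with two practical guards:
// - activity = 0 would give log(0) = -Infinity, so cap it at maxBoost;
// - clamp the result so one starved column can't dominate every round.
function boostFactor(activity, target = 0.08, maxBoost = 10) {
  if (activity <= 0) return maxBoost; // never-active column: maximum boost
  const b = Math.log(activity) / Math.log(target);
  return Math.min(Math.max(b, 1 / maxBoost), maxBoost);
}
```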
My first question: how should I boost columns so that they are used more efficiently, while keeping predictions stable?
One more problem with my TM is the LEARNING_TRESHOLD parameter. Like the original TM, my TM has this parameter too, and... how do I determine its value? After many experiments with my small HTM (a 12x12 region, 4 cells per column, at most 5 active columns) I set it to 3: if a segment contains >= 3 active cells, it can be connected to the new currently active cell when the column bursts.
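As a sketch, the check I mean looks roughly like this (the names and data shapes are illustrative, not my actual code):

```javascript
const LEARNING_TRESHOLD = 3; // minimum matching synapses for a segment to learn

// segment: array of presynaptic cell ids on one distal segment;
// prevActiveCells: Set of cell ids that were active in the previous step.
// When a column bursts, a segment passing this check is eligible to be
// extended with a connection to the newly chosen active cell.
function isMatchingSegment(segment, prevActiveCells) {
  let active = 0;
  for (const cell of segment) {
    if (prevActiveCells.has(cell)) active++;
  }
  return active >= LEARNING_TRESHOLD;
}
```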
My second question: is it possible to calculate the LEARNING_TRESHOLD parameter dynamically, for any HTM region size and any number of cells per column?