Hi, I have an implementation of HTM in which I got a mean accuracy of 62% for Temporal Memory predictions. The Spatial Pooler has a mean performance of 74%. Because I don't have access to a PC with more than 8GB of RAM, my tests use 512 columns with 8 cells per column. I see that your implementation uses 2048 columns with 32 cells per column. Is it possible to get much better results for the Temporal Memory by increasing the number of columns/cells?
I think that's probably large enough, but it depends on how complex the data is. Are you using sequence resets? I didn't know about those when I made an HTM program, and I got low accuracy, maybe for that reason. If you clear the sequence context at the start of each sequence (something like the sketch below), it might improve prediction accuracy.
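Roughly what I mean, as a Python sketch; `TemporalMemory`, `compute()`, and `reset()` here are just placeholders for whatever your own implementation calls these things, not any particular library's API:

```python
# Sketch only: TemporalMemory, compute() and reset() are placeholder names
# standing in for your own classes/methods.
tm = TemporalMemory(num_columns=512, cells_per_column=8)

for sequence in training_sequences:   # each sequence is a list of column SDRs
    tm.reset()                        # clear active/predictive state so the previous
                                      # sequence's context can't leak into this one
    for active_columns in sequence:
        tm.compute(active_columns, learn=True)
```

Without the reset, the TM tries to predict the first element of each sequence from the tail of the previous one, which drags the accuracy number down.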
You can save a lot of RAM by storing the indices of the active bits, rather than storing lists of 0s and 1s, as in the example below. Do you start with a fixed pool of dendritic segments, or create them as needed?
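For example, with made-up sizes (2048 columns × 32 cells, roughly 2% of cells active), the difference looks like this:

```python
import numpy as np

n_cells = 2048 * 32                      # 65,536 cells total
n_active = int(n_cells * 0.02)           # ~1,310 active cells (made-up sparsity)

# Dense representation: one byte per cell, almost all zeros.
dense = np.zeros(n_cells, dtype=np.uint8)
print(dense.nbytes)                      # 65536 bytes

# Sparse representation: just the indices of the active cells.
rng = np.random.default_rng(0)
active_idx = np.sort(rng.choice(n_cells, n_active, replace=False)).astype(np.uint32)
print(active_idx.nbytes)                 # 5240 bytes

# Overlap between two sparse SDRs is just an array intersection.
other_idx = np.sort(rng.choice(n_cells, n_active, replace=False)).astype(np.uint32)
overlap = np.intersect1d(active_idx, other_idx).size
```

The same idea applies to synapses on a segment: store the presynaptic cell indices and their permanences rather than a full row of a connectivity matrix.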
I found it hard to get the spatial pooler stable, especially with simple data. It might improve prediction accuracy if you stop training the spatial pooler after a little while, perhaps before the temporal memory starts learning.
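As a sketch of what I mean (again with placeholder class names and a made-up cutoff you'd tune for your data), freezing the SP before the TM starts learning could look like:

```python
# Sketch only: SpatialPooler / TemporalMemory stand in for your own classes,
# and SP_TRAIN_STEPS is a made-up cutoff.
SP_TRAIN_STEPS = 5000

sp = SpatialPooler()    # however your SP is constructed
tm = TemporalMemory()   # likewise

for step, encoded_input in enumerate(input_stream):
    sp_learning = step < SP_TRAIN_STEPS                       # train SP only at the start
    active_columns = sp.compute(encoded_input, learn=sp_learning)
    tm.compute(active_columns, learn=not sp_learning)         # TM learns once SP is frozen
```

That way the TM isn't trying to learn sequences over column representations that are still shifting.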