Question about Precision of Temporal Memory

Hi, I am building an implementation of HTM. I am doing some testing with data sets and, while I am getting good (?) Recall (88.7%) and Prediction Error (17.86%), Precision is really low (< 9%). I read in your papers (correct me if I am wrong) that synapse growth leads to low Precision. Should I worry about these extremely low Precision results?

Can someone give a hint?

It may help if you described your data and what exactly you’re trying to do. Is it some kind of classification? I think it may help to have a tangible sense of what Precision means precisely.


I use the Precision metric (tp / (tp + fp)) to assess the reliability of Temporal Memory in my HTM implementation.
As true positives and false positives I count the cells that were correctly/incorrectly predicted to be active (1) in the current timestep.
Some data sets I use for testing are Hot Gym, art_load_balancer_spikes, and speed_7578 (they are included in Numenta's git). In all tests I had the same issue: few False Negatives (high Recall), many False Positives (low Precision).
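For what it's worth, here is a minimal sketch (not my actual code, just an illustration of the metric as described) of computing cell-level Precision and Recall by comparing the set of cells predicted at t−1 against the set of cells that became active at t:

```python
# Illustrative sketch: cell-level precision/recall for Temporal Memory.
# Both arguments are sets of cell indices.

def precision_recall(predicted_cells, active_cells):
    tp = len(predicted_cells & active_cells)   # predicted and became active
    fp = len(predicted_cells - active_cells)   # predicted but did not activate
    fn = len(active_cells - predicted_cells)   # active but was not predicted
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example of the symptom in this thread: lots of spurious predictions
# give high recall but low precision.
predicted = set(range(100))   # 100 cells predicted
active = set(range(10))       # only 10 actually became active
p, r = precision_recall(predicted, active)
# p == 0.1, r == 1.0
```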

So you mean these false positives are happening at the cell level, right? Many cells are becoming predictive that do not then become active in the next time step?

In general I think this is natural to HTM, because it makes numerous predictions depending on the noise levels in the data. This high-false-positive issue can also be affected by certain parameters, such as the synaptic increment and decrement values and the activation thresholds. If the activation thresholds are lower, it doesn't take as high a synaptic permanence value to put cells into the predictive state. You could raise that activation threshold value for one thing and see what effect it has. You could also check the hotgym anomaly example within NuPIC for the parameter values there, which they say generally work well across many data sets.


Did you mean that “It doesn’t take as many active synapses to put cells into the predictive state.”? Or are you implying a correlation between synapse permanence values and activation thresholds?


I meant that each individual synapse is more easily formed with a lower required permanence for formation. I realize that 'activation threshold' may actually refer to the number of active cells on a dendrite segment needed to activate that segment and make the cell predictive, so this parameter would also be relevant to the number of false positives. With stricter requirements for synaptic permanence and dendritic activation, fewer cells will become predictive, fewer predictions will be made, and there will be fewer false positives.
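To make the two thresholds concrete, here is a hedged sketch of how they gate predictions. The parameter names (connected_threshold, activation_threshold) are illustrative, not tied to any particular HTM implementation:

```python
# Illustrative sketch: a segment is "active" only if it has at least
# activation_threshold synapses that are both connected (permanence at
# or above connected_threshold) and attached to currently active cells.
# A cell with an active segment becomes predictive.

def segment_is_active(synapses, active_cells,
                      connected_threshold=0.5, activation_threshold=13):
    """synapses: list of (presynaptic_cell, permanence) pairs."""
    n_active = sum(1 for cell, perm in synapses
                   if perm >= connected_threshold and cell in active_cells)
    return n_active >= activation_threshold
```

Raising either threshold makes fewer segments fire, so fewer cells turn predictive and fewer false positives are produced, at the cost of potentially missing some true positives.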

I see, so you were referring to connection permanence rather than the activation threshold. You are saying that if you increase the connection permanence threshold or decrease the synaptic increment, synapses would need more iterations over the same sample to become connected, which may result in fewer predictive cells and fewer false positives in general. As you said, increasing the actual segment activation threshold would also lead to fewer FPs, which would be my first suggestion. It makes sense now.

Yes exactly, thanks for the clarification.