Strange behaviour of predictions for decoder/encoder parameters

Hi, I have built an implementation of HTM. I got pretty good results at the cell level for the hotgym data set, but when I use the same parameters (n, w, and the total number of buckets) for both the encoder and the decoder, my predictions are bad.
When I change the parameter values in the decoder (decrease n and w) and increase the total number of buckets (this one is used as an input to the SDR classifier) to get the target bucket for my predictions, the results are much better: I'm getting an RMSE close to 11 for a 5k-second data set.
Is this normal? I couldn't find a common approach to the decoding process in NuPIC, and I'm a little confused.

Are you building an HTM implementation that is supposed to be the same as the NuPIC implementation, or have you made modifications? If you intended it to be the same, then there may be a bug in your implementation. If you made some changes, it may be fine.

Ultimately, algorithm debugging requires you to follow the representations through each stage of the algorithm to make sure it is working as you expect.
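As a concrete starting point, a minimal sketch of that kind of stage-by-stage check (all names and index values here are hypothetical, not NuPIC APIs):

```python
def sparsity(active_bits, n):
    """Fraction of bits active in an SDR of total size n."""
    return len(active_bits) / n

def overlap(sdr_a, sdr_b):
    """Number of active bits shared by two SDRs (given as sets of bit indices)."""
    return len(set(sdr_a) & set(sdr_b))

# Hypothetical outputs for one timestep, just to show the idea:
encoder_out = {12, 13, 14, 15, 16}   # encoder active bits
sp_out = {101, 340, 777}             # spatial pooler active columns
tm_predictive = {101, 340, 912}      # columns with predictive cells

print("encoder sparsity:", sparsity(encoder_out, 400))
print("SP active vs. TM predictive overlap:", overlap(sp_out, tm_predictive))
```

Logging sparsity and overlap like this at every stage usually shows quickly where the representations stop behaving as expected.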

I followed your algorithms to build my implementation. The only changes are that I wrote it in a different programming language and used a bit more linear algebra for some parts of the algorithm. To decode the predicted scalar value I used this formula:

predicted_val = (targetBucket * Raw_values_Range / NumberOfBuckets) + Raw_MIN_value
where targetBucket comes from the classifier's prediction (softmax function), with the Temporal Memory's output as input.
Is this the formula NuPIC uses?
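For reference, here is a minimal sketch of that decode formula as code (variable names follow the post; note it returns the lower edge of the target bucket, so the estimate is biased low by up to one bucket width unless you add half a width). As far as I know, NuPIC's SDRClassifier instead keeps a running record of the actual values seen for each bucket and returns those, rather than computing a midpoint from the encoder range, but someone more familiar with the code should confirm that.

```python
def decode_bucket(target_bucket, raw_min, raw_max, num_buckets):
    """Map a predicted bucket index back to a scalar value.

    This is the formula from the post: bucket 0 decodes to raw_min,
    and each step adds one bucket width.
    """
    bucket_width = (raw_max - raw_min) / num_buckets
    return raw_min + target_bucket * bucket_width

# Example: 100 buckets over [0, 50); bucket 40 decodes to 20.0
print(decode_bucket(40, 0.0, 50.0, 100))
```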