I have my min and max values as the min and max of the actual scalars. Is this right, or should they be the min and max of the differences instead? Thanks,
I think the best way to go is to do the differencing in preprocessing, then feed the differenced values into NuPIC, treating them like any other numeric input. This allows you to use the Scalar Encoder, which the classifier is set up to work with.
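If it helps, here's a rough sketch of what I mean (plain Python, the values and variable names are just placeholders). Since the diffs become the field you encode, I'd expect the encoder's min/max to come from the range of the diffs rather than the raw scalars:

```python
# Minimal sketch: first-order differencing as a preprocessing step.
# `raw_values` stands in for whatever scalar stream you're modeling.
raw_values = [210.0, 215.5, 214.0, 220.3, 219.1]

# diff[i] = raw[i+1] - raw[i]; note the diffs can be negative.
diffs = [b - a for a, b in zip(raw_values, raw_values[1:])]

# If the diffs are what you encode, the encoder's minval/maxval would
# presumably need to cover the range of the diffs, not the raw values:
min_diff, max_diff = min(diffs), max(diffs)

print(diffs)               # roughly [5.5, -1.5, 6.3, -1.2]
print(min_diff, max_diff)  # roughly -1.5 6.3

# Each diff is then fed into NuPIC like any other numeric field.
```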
Thank you sheiser1… Actually I tried this approach a while back and it gave me an RMSE of 300. This might be because of the negative (decreasing) and positive (increasing) diffs I got, but I'm not sure. Also, even though I want to model the differences, I need the output of the model to be the actual value instead of the diff. How would we do that? Maybe with a post-processing module that keeps track of the original value and adds the model's output to it?
This seems like the way to go to me. Take the forecasted differences from the model and add them to the current raw values (the unprocessed input values that aren't differenced). The current raw value + predicted diff = predicted next raw value.
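Something like this, just as a sketch (the class and values are made up, not part of NuPIC):

```python
# Sketch of the post-processing step: keep the latest raw value around and
# add the model's predicted diff back onto it to recover the forecast.
class DiffReconstructor:
    def __init__(self, first_raw_value):
        self.last_raw = first_raw_value

    def update(self, new_raw_value):
        """Call each step with the latest raw (undifferenced) input."""
        self.last_raw = new_raw_value

    def predicted_raw(self, predicted_diff):
        """Current raw value + predicted diff = predicted next raw value."""
        return self.last_raw + predicted_diff

# Usage: after feeding the diff into the model and getting a predicted diff
# back from the classifier, reconstruct the forecast in the original units.
recon = DiffReconstructor(first_raw_value=210.0)
recon.update(215.5)                  # latest raw input seen so far
forecast = recon.predicted_raw(4.8)  # 4.8 = model's predicted diff (placeholder)
print(forecast)                      # 220.3
```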