Evaluation Metric [MAPE]

Hello All,

I’m confused about the MAPE formula used in the following paper by Numenta:
“Continuous Online Sequence Learning with an Unsupervised Neural Network Model”

There is no reference for the formula, and it does not look to me like it reflects the “mean absolute percentage error”. Furthermore, the MAPE that I found in other published papers is as follows:

MAPE = (100 / n) * Σ_{t=1}^{n} |A_t − F_t| / |A_t|

A_t is the actual value, F_t is the forecast value, and n denotes the number of samples presented to the network.
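For concreteness, here is a minimal sketch of that standard MAPE definition in Python (this is the textbook formula above, not necessarily the variant used in the Numenta paper; note it is undefined when any actual value is zero):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent.

    Standard definition: (100 / n) * sum(|A_t - F_t| / |A_t|).
    Undefined if any actual value is zero.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Each forecast is off by exactly 10% of the actual value:
print(mape([100, 200, 400], [110, 180, 440]))  # -> 10.0
```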

Can anyone advise on this, please?

Abdullah

So I asked @subutai about this, and he said it is defined in the Appendix (page 2500):

A.3 Evaluation of Model Performance in the Continuous Sequence Learning Task.

Thanks for taking the time to ask Subutai. However, I have read the paper several times, and it does not clearly justify the choice of this definition.

In the paper, they mention, “First, we considered the mean absolute percentage error (MAPE) metric, an error metric that is less sensitive to outliers than root mean squared error”, but I’m not asking about the RMSE here.

Thanks,

Abdullah

In my experiments I’ve used a relative error metric to evaluate TM predictions: MASE (mean absolute scaled error).
Essentially, it compares the prediction error of the investigated model with that of a “naive predictor” that simply carries the current value forward as the prediction for the next timestep.

Take a look at the favorable properties in the Wikipedia link. Quoting:

Interpretability: The mean absolute scaled error can be easily interpreted, as values greater than one indicate that in-sample one-step forecasts from the naïve method perform better than the forecast values under consideration.
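To make the scaling concrete, here is a minimal sketch of MASE in Python (my own illustration, assuming the non-seasonal definition where the denominator is the in-sample MAE of the naive one-step predictor F_t = A_{t−1}):

```python
import numpy as np

def mase(actual, forecast):
    """Mean absolute scaled error.

    Scales the forecast's MAE by the MAE of the naive predictor
    that forecasts each value as the previous actual value.
    Values below 1 mean the forecast beats the naive predictor.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    mae = np.mean(np.abs(actual - forecast))
    naive_mae = np.mean(np.abs(np.diff(actual)))  # |A_t - A_{t-1}|
    return mae / naive_mae

# The naive predictor errs by 1.0 per step on this series;
# a forecast off by 0.5 per step scores 0.5 (better than naive):
print(mase([1, 2, 3, 4], [1.5, 2.5, 3.5, 4.5]))  # -> 0.5
```

Because the error is scaled rather than expressed as a percentage, MASE stays well defined even when the series contains zeros, which is one reason I prefer it over MAPE for TM evaluation.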

Thanks Oblynx for the suggestion.