I am comparing LSTM models that use different data reduction techniques against the detector results published by NAB. Since my focus is saving computing cost, I would like to know in more detail the process that was followed to obtain the published results for detectors such as numenta, randomCutForest, and skyline, and in particular whether any pre-processing was applied to the data before running these detectors — for example, bucket encoding as a form of quantization for HTM, or time aggregation. If I'm not mistaken, I could not find any specific information about this in the NAB whitepaper. Is there a record somewhere of the process behind these NAB results?