Thanks. I guess I’ll run it first and then plot the results in MS Excel or something similar.
Just another question that is more philosophical, really:
From what I understand, if I want to run the anomaly detection on different data that has different columns in the CSV and “behaves” differently, I should run it through the swarm again to generate the correct parameters YAML file. That seems a bit similar to what “conventional” machine learning techniques do: run a batch process every once in a while to choose the best algorithm for the data, and then use the chosen algorithm to process the data in near real time. According to the criticism that Jeff Hawkins made in one of his talks and in his book, our brain doesn’t work like that. So does that mean the swarming process is a temporary thing whose need you hope to remove in future releases?
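To make concrete what I mean by the “conventional” pattern, here is a toy sketch in plain Python (not NuPIC code, and the detector is just an illustrative 3-sigma rule I made up): an occasional batch step that picks model parameters from historical data, and a cheap near-realtime step that scores each new record with those fixed parameters.

```python
import statistics

def batch_select_params(history):
    """Batch phase: pick detector parameters from historical data.
    Rerun only when the data starts to "behave" differently."""
    return {
        "mean": statistics.mean(history),
        "stdev": statistics.pstdev(history),
        "k": 3.0,  # illustrative 3-sigma threshold
    }

def score_record(value, params):
    """Near-realtime phase: flag a record as anomalous using the
    fixed parameters chosen by the batch phase."""
    if params["stdev"] == 0:
        return False
    return abs(value - params["mean"]) > params["k"] * params["stdev"]

# Batch step over historical data, then streaming scoring per record.
history = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3]
params = batch_select_params(history)
print(score_record(10.1, params))  # typical value -> False
print(score_record(25.0, params))  # obvious outlier -> True
```

The analogy to swarming, as I understand it, is that the batch phase here plays the role of the swarm producing the parameters YAML file.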