As suggested by rhyolight in another post, I made a copy of this file, renamed it nupic-site.xml, and updated the server password.
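For reference, here is roughly what my nupic-site.xml contains (password redacted; I copied the property names from nupic-default.xml, so please double-check them against your own copy):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>nupic.cluster.database.host</name>
    <value>localhost</value>
  </property>
  <property>
    <name>nupic.cluster.database.user</name>
    <value>root</value>
  </property>
  <property>
    <name>nupic.cluster.database.passwd</name>
    <value>MY_PASSWORD_HERE</value>
  </property>
</configuration>
```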
When I run swarm.py it shows: OperationalError: (2003, 'Can\'t connect to MySQL server on \'localhost\' ((1045, u"Access denied for user \'root\'@\'localhost\' (using password: NO)"))')
I tested the password and I am able to log in to the MySQL server through the command line.
Does anyone know what’s going on?
I know it’s not suggested to run swarming, but I kind of need to modify the HotGym example for my project. If this isn’t solvable, does anyone have a suggestion on how to bypass swarming and get HotGym prediction and anomaly detection running?
Once the NuPIC model has been initialized, it can receive data in an online fashion, row by row. Importantly, the data file must be compatible with the model_params: if the model expects columns ‘x’ and ‘y’ from the params but receives data with columns ‘a’ and ‘b’, it won’t work.
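Here’s a plain-Python sketch of that field check (not the NuPIC API, just the idea; the field names below are made up):

```python
def fields_match(expected_fields, row):
    """Return True if the row supplies every field the model was built for."""
    return set(expected_fields) <= set(row)

# Fields the model_params were generated for (hypothetical names):
expected = {"timestamp", "consumption"}

good_row = {"timestamp": "2015-07-02 00:00", "consumption": 5.3}
bad_row = {"timestamp": "2015-07-02 00:00", "kw_energy": 5.3}

print(fields_match(expected, good_row))  # True
print(fields_match(expected, bad_row))   # False -> feeding this row won't work
```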
The other big thing I’ve found is that the encoding parameters within the model_params should match the data. If you’re using a simple scalar encoder with min/max = 0/100 and the actual data only ranges from 0 to 1, the outputs of the model will be mostly meaningless because the parameters are mis-scoped.
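To see why, here’s a toy sketch of the bucketing a scalar encoder does (simplified, not NuPIC’s actual encoder code):

```python
def bucket_index(value, minval, maxval, n_buckets):
    """Map a scalar into one of n_buckets evenly sized buckets, the way a
    simple scalar encoder does before it sets any bits."""
    value = max(minval, min(maxval, value))  # clip to the encoder's range
    fraction = (value - minval) / float(maxval - minval)
    return min(int(fraction * n_buckets), n_buckets - 1)

data = [0.05, 0.25, 0.5, 0.75]  # data that actually lives in 0-1

# Encoder scoped for 0-100: every value lands in the same bucket,
# so the inputs are indistinguishable to the model.
print([bucket_index(v, 0, 100, 50) for v in data])  # [0, 0, 0, 0]

# Encoder rescoped to the real 0-1 range: values become distinguishable.
print([bucket_index(v, 0, 1, 50) for v in data])    # [2, 12, 25, 37]
```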
For anomaly detection the model_params will be slightly different. The key difference is that the inferenceType must be TemporalAnomaly instead of MultiStep or anything else.
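Sketched as a model_params fragment (structure matching the HotGym examples as I remember them; everything elided stays as in the prediction params):

```python
MODEL_PARAMS = {
    "model": "CLA",  # called "HTMPrediction" in newer NuPIC releases
    "version": 1,
    "modelParams": {
        # The key change: TemporalAnomaly instead of TemporalMultiStep etc.
        "inferenceType": "TemporalAnomaly",
        # sensorParams, spParams, tmParams, clParams stay as in the
        # HotGym prediction example.
    },
}
```

With that set, each result from model.run() should carry the anomaly score in result.inferences['anomalyScore'] (again from memory; check the anomaly example in the repo).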
I have already gone through the HotGym example in the repository; it helps a lot.
If I were to use the exact same model from the example on another dataset that has a pattern similar to the hot gym power consumption data, should it work okay? Or should I make manual adjustments to the model_params?
I’m sorry for the very basic questions. I’m an undergrad rushing for my final project and kind of overwhelmed by HTM. Thanks again for your help!
I would definitely adapt the model params to the other data set. If the distributions aren’t very similar you could get junk. You could just take the 5th and 95th percentiles of the metric and use those for the encoder min/max values, or something simple like that.
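Something like this (a nearest-rank percentile sketch on made-up numbers; with real data you’d just pass in your metric column):

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of numbers (simple sketch)."""
    ordered = sorted(values)
    idx = int(round(p / 100.0 * (len(ordered) - 1)))
    return ordered[idx]

# Hypothetical metric values from the new data set:
metric = [12, 3, 47, 8, 5, 90, 14, 6, 22, 10,
          9, 11, 4, 7, 13, 16, 18, 2, 15, 20]

encoder_min = percentile(metric, 5)   # ignores the lowest outliers
encoder_max = percentile(metric, 95)  # ignores the highest outliers (90 here)
print(encoder_min, encoder_max)
```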
Since HTM models learn online, once the new model object is created it will be continuously updated with each new row from your data set.