Problem Running HTMEngine Skeleton App

Also, please start from an empty skeleton DB and apply database migrations from this PR: https://github.com/htm-community/skeleton-htmengine-app/pull/14

Hi,
I followed the steps in the link below and was able to perform the database migration successfully:
https://github.com/htm-community/skeleton-htmengine-app/issues/13

However, I do not get an anomaly score after running "python create_cpu_percent_model.py". After running the script, I get the result below:

meharu@meharu-X3V3:~/skeleton-htmengine-app$ python create_cpu_percent_model.py
Model 422a972c3b8641e492b26aebe6624477 created...
meharu@meharu-X3V3:~$ mysql -u root skeleton --execute="select * from metric_data order by rowid desc limit 5"
+----------------------------------+-------+---------------------+--------------+-------------------+---------------+---------------+-----------------------------+
| uid                              | rowid | timestamp           | metric_value | raw_anomaly_score | anomaly_score | display_value | multi_step_best_predictions |
+----------------------------------+-------+---------------------+--------------+-------------------+---------------+---------------+-----------------------------+
| 422a972c3b8641e492b26aebe6624477 |    10 | 2017-01-03 08:41:58 |         29.4 |              NULL |          NULL |          NULL | NULL                        |
| 422a972c3b8641e492b26aebe6624477 |     9 | 2017-01-03 08:41:53 |         30.4 |              NULL |          NULL |          NULL | NULL                        |
| 422a972c3b8641e492b26aebe6624477 |     8 | 2017-01-03 08:41:48 |         32.9 |              NULL |          NULL |          NULL | NULL                        |
| 422a972c3b8641e492b26aebe6624477 |     7 | 2017-01-03 08:41:43 |           29 |              NULL |          NULL |          NULL | NULL                        |
| 422a972c3b8641e492b26aebe6624477 |     6 | 2017-01-03 08:41:37 |           32 |              NULL |          NULL |          NULL | NULL                        |
+----------------------------------+-------+---------------------+--------------+-------------------+---------------+---------------+-----------------------------+
meharu@meharu-X3V3:~$ mysql -u root skeleton --execute="select uid, name, description, status from metric where name = 'cpu_percent'"
+----------------------------------+-------------+---------------------------+--------+
| uid                              | name        | description               | status |
+----------------------------------+-------------+---------------------------+--------+
| 422a972c3b8641e492b26aebe6624477 | cpu_percent | Custom metric cpu_percent |      2 |
+----------------------------------+-------------+---------------------------+--------+

supervisor log:

 

Anomaly_service:

2017-01-03 08:28:41,700 - htmengine.anomaly(10381) - INFO - VER=0.0.0, SERVICE=ANOMALY  {TAG:ANOM.START} argv=['/home/meharu/numenta-apps/htmengine/htmengine/runtime/anomaly_service.py']
2017-01-03 08:28:41,731 - nta.utils.amqp.synchronous_amqp_client(10381) - INFO - Created consumer=Consumer(tag='channel-1-1', queue='skeleton.mswapper.results'); queue='skeleton.mswapper.results', noLocal=False, noAck=False, exclusive=False
2017-01-03 08:34:58,384 - htmengine.anomaly(4308) - INFO - VER=0.0.0, SERVICE=ANOMALY  {TAG:ANOM.START} argv=['/home/meharu/numenta-apps/htmengine/htmengine/runtime/anomaly_service.py']
2017-01-03 08:34:58,390 - nta.utils.amqp.synchronous_amqp_client(4308) - INFO - Created consumer=Consumer(tag='channel-1-1', queue='skeleton.mswapper.results'); queue='skeleton.mswapper.results', noLocal=False, noAck=False, exclusive=False

metric_listener:

2017-01-03 08:28:41,552 - __main__(10380) - INFO - VER=0.0.0  Starting with host=0.0.0.0, port=2003, protocol=plain, transport=tcp
2017-01-03 08:29:46,345 - __main__(13437) - INFO - VER=0.0.0  Starting with host=0.0.0.0, port=2003, protocol=plain, transport=tcp
2017-01-03 08:34:58,109 - __main__(4307) - INFO - VER=0.0.0  Starting with host=0.0.0.0, port=2003, protocol=plain, transport=tcp

metric_storer:

2017-01-03 08:41:18,673 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:23,699 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:28,717 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:33,748 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:38,765 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:43,791 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:48,808 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:53,836 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.
2017-01-03 08:41:58,855 - __main__(4309) - INFO - VER=0.0.0  Processing 1 records for 1 models from 1 batches.

model_scheduler:

72c3b8641e492b26aebe6624477, stopPend=False, stopReq=False, modelFailed=False, exitStatus=None, modelRunner=ModelRunnerProxy<model=422a972c3b8641e492b26aebe6624477, pid=17684, returnCode=None>>
2017-01-03 08:55:14,467 - htmengine.model_swapper.slot_agent(4310) - ERROR - VER=0.0.0  SlotAgent<slotID=6, modelID=046214085e5a417b970038435382e922>: {TAG:SWAP.SA.MODEL.STOP.DONE} modelState=_CurrentModelState<modelID=046214085e5a417b970038435382e922, stopPend=True, stopReq=False, modelFailed=False, exitStatus=-11, modelRunner=ModelRunnerProxy<model=046214085e5a417b970038435382e922, pid=17664, returnCode=-11>>
2017-01-03 08:55:14,477 - htmengine.model_swapper.slot_agent(4310) - INFO - VER=0.0.0  SlotAgent<slotID=6, modelID=046214085e5a417b970038435382e922>: {TAG:SWAP.SA.MODEL.STARTED} modelState=_CurrentModelState<modelID=046214085e5a417b970038435382e922, stopPend=False, stopReq=False, modelFailed=False, exitStatus=None, modelRunner=ModelRunnerProxy<model=046214085e5a417b970038435382e922, pid=17697, returnCode=None>>

Appreciate your help on this. Thanks

Any solution for that ?

The same problem persists with htmengine-traffic-tutorial. Clearing the database and re-running the migration does not clear the TypeError('1.0 is not JSON serializable',) error.

Thank you

All the people responsible for that codebase are fully engaged with research or build-improvement tasks at Numenta. They may wish to help in their free time, but Numenta is not paying them to. I am the only employee Numenta pays to work on open source and answer forum questions. Others help out (a lot!), but that’s not part of their job description. And while fostering this community and our open source surface area is my primary role, it is not my only role.

Now that I’m done making excuses for not helping, I’ll try to help you out! :wink: I’ve been updating documentation around here, and numenta-apps is on my list. As part of those doc updates, I’ll take a look and make an attempt to get everything running; if I run into this error, I will post here. I still have some more work to do on the NuPIC API docs, the next HTM School episode, and editing a video with Jeff. Hopefully I can start next week. :grimacing:

I suspect this has something to do with attempting to serialize a numpy float16/float32. Consider this example from a Python REPL:

>>> import numpy
>>> import json
>>> json.dumps(numpy.float(1.0))
'1.0'
>>> json.dumps(numpy.float32(1.0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 243, in dumps
    return _default_encoder.encode(obj)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 1.0 is not JSON serializable
>>> json.dumps(numpy.float16(1.0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 243, in dumps
    return _default_encoder.encode(obj)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 1.0 is not JSON serializable
>>> json.dumps(numpy.float64(1.0))
'1.0'

I think you can fix the problem by casting to numpy.float64 or the builtin float before serializing. Wherever the exception is being raised, the value is probably a numpy.float16 or numpy.float32.
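To make the casting idea concrete without requiring numpy, here is a small sketch: json.dumps accepts a default= fallback, and passing float there coerces any float-like scalar (which is how numpy.float16/float32 behave) before encoding. The Float32Like class below is a hypothetical stand-in for a numpy scalar, used only so the sketch is self-contained.

```python
import json

# Hypothetical stand-in for a numpy.float32 scalar (assumption: used here
# so the example runs without numpy; numpy scalars likewise coerce via
# float()).
class Float32Like:
    def __init__(self, value):
        self.value = value

    def __float__(self):
        return self.value

# Without help, json.dumps rejects types it does not recognize:
try:
    json.dumps(Float32Like(1.0))
except TypeError as err:
    print("without default:", err)

# default= gives the encoder a fallback for unknown types; float() handles
# numpy float16/float32 (and this stand-in) just fine.
print(json.dumps(Float32Like(1.0), default=float))
```

This sidesteps the per-call-site cast, at the cost of touching every json.dumps call; the in-place float() cast discussed below the traceback is the more targeted fix.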

1 Like

Look in htmengine/htmengine/model_swapper/model_swapper_interface.py:

Try changing the line self.anomalyScore = anomalyScore to self.anomalyScore = float(anomalyScore) and please post whether it helped; I think it should. If it does, a numenta-apps PR that fixes this would be most welcome.

2 Likes

Austin, Vitaly, thanks for commenting. The float() type conversion in htmengine/htmengine/model_swapper/model_swapper_interface.py worked, and the “is not JSON serializable” error is now gone.

@eyal.cohen, I submitted pull request https://github.com/numenta/numenta-apps/pull/920 to fix this.

@eyal.cohen
UPDATE: the cast needs to be conditional in order to handle the error-result scenario. See the updated pull request https://github.com/numenta/numenta-apps/pull/920/files. This PR has now been merged, so you can pick up the fix the next time you sync to master.
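For readers hitting the same issue, the conditional cast can be sketched roughly like this. The helper name and the None check are assumptions for illustration, not the literal code from the merged PR: the point is that in the error-result scenario anomalyScore is not a plain number, so an unconditional float() would itself raise.

```python
def coerce_anomaly_score(anomalyScore):
    """Hypothetical helper mirroring the conditional cast idea.

    numpy.float16/float32 scores are coerced to the builtin float so that
    json.dumps accepts them; a None error placeholder is passed through
    untouched, since float(None) would raise TypeError.
    """
    if anomalyScore is None:
        return None
    return float(anomalyScore)

print(coerce_anomaly_score(1.0))   # a builtin float, safe for json.dumps
print(coerce_anomaly_score(None))  # error result left as-is
```

Consult the PR linked above for the exact change that was merged.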

1 Like