Model writeToFile error: Exceeded message traversal limit. See capnp::ReaderOptions

Hi,
I am getting “Exceeded message traversal limit. See capnp::ReaderOptions.” while writing the model. Has anyone faced a similar issue?

Here is the error log

Traceback (most recent call last):
    model.writeToFile(f,True)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/serializable.py", line 116, in writeToFile
    self.write(proto)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/frameworks/opf/htm_prediction_model.py", line 1331, in write
    self._netInfo.net.write(proto.network)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/bindings/engine_internal.py", line 1294, in write
    reader = NetworkProto.from_bytes(self._writeAsCapnpPyBytes()) # copy
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/bindings/engine_internal.py", line 1310, in _writeAsCapnpPyBytes
    return _engine_internal.Network__writeAsCapnpPyBytes(self)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/bindings/engine_internal.py", line 2974, in writePyRegion
    getattr(region, methodName)(builderProto)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/bindings/regions/PyRegion.py", line 347, in write
    self.writeToProto(regionImpl)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/regions/sdr_classifier_region.py", line 340, in writeToProto
    self._sdrClassifier.write(proto.sdrClassifier)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/bindings/algorithms.py", line 3121, in write
    pyBuilder.from_dict(reader.to_dict())  # copy
  File "capnp/lib/capnp.pyx", line 1109, in capnp.lib.capnp._DynamicStructReader.to_dict (capnp/lib/capnp.cpp:24297)
  File "capnp/lib/capnp.pyx", line 900, in capnp.lib.capnp._to_dict (capnp/lib/capnp.cpp:20017)
  File "capnp/lib/capnp.pyx", line 864, in capnp.lib.capnp._to_dict (capnp/lib/capnp.cpp:19215)
  File "capnp/lib/capnp.pyx", line 900, in capnp.lib.capnp._to_dict (capnp/lib/capnp.cpp:20003)
  File "capnp/lib/capnp.pyx", line 1042, in capnp.lib.capnp._DynamicStructReader._get (capnp/lib/capnp.cpp:22686)
  File "capnp/lib/capnp.pyx", line 1043, in capnp.lib.capnp._DynamicStructReader._get (capnp/lib/capnp.cpp:22632)
capnp.lib.capnp.KjException: src/capnp/arena.c++:106: failed: Exceeded message traversal limit.  See capnp::ReaderOptions.
stack: 0x106ac2e34 0x106ac7af5 0x106a29f00 0x106a599a8 0x106a61d27 0x106a8f101 0x106a957b3 0x10693b2ba 0x10696f5ae 0x10691a1cf 0x1069320ca 0x106932623 0x10693206d 0x106971056 0x1000c6d4c 0x1000c6ba8

What object are you trying to serialize? Can you show us your code?

I have the same experience.

Using model.writeToFile() or model.writeToCheckpoint() does not work for me.

However, using model.write() with Python's pickle() does work.

This is using standard OPF code.

Are you guys trying to write the OPF model object? Because that won’t work. That model does not have a writeToFile function, and the writeToCheckpoint function is the older way (I usually just call model.save("path")).

To use capnp, you have to serialize more primitive objects. See the serialization guide.
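For example, here is a minimal sketch of serializing a single algorithm instance through the capnp-backed Serializable interface (the class and constructor arguments are just illustrative; adapt them to whichever algorithm you are actually using):

from nupic.algorithms.temporal_memory import TemporalMemory

tm = TemporalMemory(columnDimensions=(2048,))

# writeToFile() goes through the capnp write path shown in the traceback above
with open("tm.bin", "wb") as f:
    tm.writeToFile(f)

# Later, rebuild the instance from the file
with open("tm.bin", "rb") as f:
    tm2 = TemporalMemory.readFromFile(f)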

Hi Matt (@rhyolight),

I am using the code below; the same code works fine for some items but fails for others.

import datetime
import time

from pymongo import MongoClient
from nupic.frameworks.opf.model_factory import ModelFactory
from nupic.algorithms import anomaly_likelihood

# db: MongoDB database handle with the stored model params and metric data
# (defined elsewhere in the original script)
data = db.numenta_models.find({})

monitoring = MongoClient("local", 27017)
monitoringdb = monitoring.monitoring

for model_param in data:

    model = ModelFactory.create(model_param["model"])
    model.enableLearning()
    model.enableInference({'predictedField': 'value'})
    metricName = model_param["_id"]
    anomalyLikelihood = anomaly_likelihood.AnomalyLikelihood()

    cursor = db.data.find({"metricName": metricName}).sort("timestamp", 1)
    for document in cursor:

        modelInput = dict(zip(
            ["value", "timestamp"],
            [document["metricValue"],
             datetime.datetime.fromtimestamp(float(document["timestamp"]))]))
        print modelInput
        print float(modelInput["value"])
        print modelInput["timestamp"]

        result = model.run(modelInput)
        anomalyScore = result.inferences['anomalyScore']
        actualValue = result.rawInput['value']
        timestamp1 = result.rawInput['timestamp']
        timestamp2 = int(timestamp1.strftime("%s"))
        print anomalyScore
        likelihood = anomalyLikelihood.anomalyProbability(
            modelInput["timestamp"], anomalyScore, modelInput["value"]
        )
        print likelihood

        millis = int(round(time.time() * 1000))

    with open("/Users/abc/Documents/" + model_param["_id"], "w+") as f:
        model.writeToFile(f, True)

@rhyolight

Yes, I am trying to save the OPF model directly. I use model.save(), which mostly works (except for the anomaly likelihood state, of course).

However, it only works for small models. For anything larger than about n=800 on the input encoders, the host machine runs out of memory and segfaults, failing to save properly and corrupting the saved pickle file.

Is there example code for decomposing the OPF network, serializing the parts individually, and then reloading the network again by deserializing them? That would be very helpful, save me a lot of time, and let me use larger models without my machine crashing.

@wip_user For now, use model.save(), which is the old way.

No, but I can see that being useful. We would need to identify the network the OPF creates and find which components need serialization. We probably need some of @scott’s help with this again.

It’s hard to say how to handle this without understanding exactly what is leading to the problem. You can certainly serialize components separately. It may be tricky with the Network API, since regions and networks hold two-way references, but you could pull the algorithms out and save them separately.

This is what I’ve done in the past, but I never had to reconstruct a network with them. I assume we can just re-instantiate algorithm instances from disk and attach them to network components?

Yeah, you should be able to manually set the algorithm instance that a region is using. So just create a new network, deserialize the algorithms, and insert the deserialized versions into each region (see the sketch below). The risk is that a region may keep some additional state that needs to stay in sync with the algorithm, but this should be rare.
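A rough sketch of that idea, with heavy caveats: the region names ("SP", "Classifier"), the private attributes holding the algorithm instances (_sfdr, _sdrClassifier, the latter visible in the traceback above), and modelParams (the same model params dict used to create the original model) are HTMPredictionModel internals and placeholders, not a documented API, so treat this purely as an illustration:

from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.sdr_classifier import SDRClassifier

# The traceback shows the model's network at model._netInfo.net; grab the
# algorithm instances out of its regions (assumed region/attribute names).
network = model._netInfo.net
sp = network.regions["SP"].getSelf()._sfdr                    # SpatialPooler
clf = network.regions["Classifier"].getSelf()._sdrClassifier  # SDR classifier

# Serialize each algorithm separately with the capnp-backed interface
with open("sp.bin", "wb") as f:
    sp.writeToFile(f)
with open("classifier.bin", "wb") as f:
    clf.writeToFile(f)

# To restore: build a fresh model from the same params, then swap the
# deserialized instances back into the corresponding regions.
newModel = ModelFactory.create(modelParams)
newNetwork = newModel._netInfo.net
with open("sp.bin", "rb") as f:
    newNetwork.regions["SP"].getSelf()._sfdr = SpatialPooler.readFromFile(f)
with open("classifier.bin", "rb") as f:
    newNetwork.regions["Classifier"].getSelf()._sdrClassifier = \
        SDRClassifier.readFromFile(f)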


The gain is that the new approach is way faster and more flexible. Seriously, I replaced REDIS for HTM state storage in the HTM School visualization server with it. The old serialization forced you to use the OPF, which is honestly old and faded IMHO. I’m hoping someone builds a better interface over the Network API for a specific area of interest.

If you are serializing your NuPIC algorithms, congratulations you are an expert user! :confetti_ball:

Every case at this point is going to be different. If you figure something out that works, please share it with everyone.


On another note, it seems like “Exceeded message traversal limit. See capnp::ReaderOptions” is a generic error, likely thrown on writes? @wip_user Did you get this error while saving or loading an object?
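(For reference, the traversal limit is a read-side safety limit in Cap'n Proto, roughly 8 million words / 64 MB by default. When you read a capnp message yourself with pycapnp you can raise it, something like the sketch below; the schema file and struct name are placeholders. In the traceback above, though, the read happens inside NuPIC's own write path, so the option cannot simply be passed through from user code.)

import capnp

# Load a schema; the .capnp path and struct name are placeholders.
my_schema = capnp.load("my_model.capnp")

with open("serialized.bin", "rb") as f:
    # Raise the limit above the ~8M-word default so large messages can be read
    msg = my_schema.MyModelProto.from_bytes(
        f.read(), traversal_limit_in_words=2 ** 30)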

Hi Matt,
I am facing an issue with writing to a file (model.writeToFile(f, True)); it happens with a few models.

Where model is what type of object?

model = ModelFactory.create(model_param["model"])

That is just not going to work. You will need to use the old save() function as I mentioned above.
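In rough terms, that older save/load path looks like this (the checkpoint directory is just a placeholder path):

# save() writes a checkpoint directory using the older, pickle-based path
model.save("/tmp/model_checkpoint")

# ...which can later be restored with the matching OPF factory call
from nupic.frameworks.opf.model_factory import ModelFactory
model = ModelFactory.loadFromCheckpoint("/tmp/model_checkpoint")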

Please tell us if there is any way we can improve the serialization guide.

Thanks Matt


You can serialize the HTMPredictionModel through the OPF like this, I believe:

proto = model.getSchema()  # returns the correct capnproto schema for the builder
builder = proto.new_message()
print "Proto type is {}".format(type(proto))
model.write(builder)
_bytes = builder.to_bytes_packed()
with open('serialized', 'wb') as f:  # binary mode, since these are packed bytes
    f.write(_bytes)

One model serialized for me (at 20 samples) was 2.9 MB, however, which seems strange to me.
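For completeness, a hedged sketch of reading that file back in, assuming the matching from_bytes_packed()/read() pair works as expected (not tested here):

from nupic.frameworks.opf.htm_prediction_model import HTMPredictionModel

# Read the packed bytes back and rebuild the model from the proto
with open('serialized', 'rb') as f:
    proto = HTMPredictionModel.getSchema().from_bytes_packed(f.read())
model = HTMPredictionModel.read(proto)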