Hi! I’m new to NuPIC. I want to serialize my SPRegion and TMRegion. What is the best way to do that? I tried to use this code:
# Saving SpatialPooler
sp_builder = PyRegionProto.new_message()
sp_region.write(sp_builder)  # assuming the region exposes write(proto)
with open(save_path + 'sp.bin', 'w+b') as sp:
    sp_builder.write(sp)

# Saving TemporalMemory
tm_builder = PyRegionProto.new_message()
tm_region.write(tm_builder)
with open(save_path + 'tm.bin', 'w+b') as tm:
    tm_builder.write(tm)

# Loading SpatialPooler
with open(load_path + 'sp.bin', 'rb') as sp:
    sp_builder = PyRegionProto.read(sp, traversal_limit_in_words=2**61)

# Loading TemporalMemory
with open(load_path + 'tm.bin', 'rb') as tm:
    tm_builder = PyRegionProto.read(tm, traversal_limit_in_words=2**61)
But it doesn't seem to work.
Please elaborate — are there errors? It looks like you're trying to write to a stream but expecting a file. Have you read the serialization guide?
I'm currently trying to serialize my network for better testing and generality. It uses the Network API and has 8 layers: next to the Spatial Pooler and Temporal Memory, it includes components from the htmresearch repository (ExtendedTemporalMemory, UnionPooler) and regions that I modified or created myself.
I saw in another thread that currently the only way to serialize the network is to go through all the algorithms in each region, save them separately, and load them again when recreating the network.
Is there any sample code of serializing with the network API? Or hints from other people that attempted this?
How far is the New Serialization Plan realized?
If you're using Python and trying to connect different regions (perhaps running as separate scripts/threads), you may be interested in the Pyro4 library, which takes care of serialization in the background. It also lets you distribute your app across different machines fairly easily.
That way, you can write your classes as normal, expose them to the Pyro framework, and then call them like normal in your code.
Re-read what you wrote… (leaving the above there, but I realize I was answering the wrong question). At least from testing, it looks like what you're doing so far is the most reliable way to save your network. One approach you might take is to write a saver/loader class that you pass your layers into, and which you receive your loaded layers from.
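A minimal sketch of what I mean (the class and method names are my own invention, and pickle just stands in for whatever per-region write/read call your regions actually support):

```python
import os
import pickle
import tempfile

class NetworkCheckpoint:
    """Sketch of a saver/loader: each layer goes to its own file and is
    loaded back by name when the network is rebuilt."""

    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def save_layer(self, name, layer):
        # pickle stands in for the real per-region serialization call
        with open(os.path.join(self.directory, name + '.bin'), 'wb') as f:
            pickle.dump(layer, f)

    def load_layer(self, name):
        with open(os.path.join(self.directory, name + '.bin'), 'rb') as f:
            return pickle.load(f)

# usage sketch, with a plain dict standing in for a layer
ckpt = NetworkCheckpoint(tempfile.mkdtemp())
ckpt.save_layer('sp', {'boostStrength': 1.0})
restored = ckpt.load_layer('sp')
```

The point is just to keep the save/load bookkeeping in one place instead of scattered through your network-building code.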
One idea from Pyro4, though, is to use the "serpent" serializer, which seems to work well across different versions of Python.
Yes, NuPIC switched to (py-)capnp for serialization, but it doesn't seem to be implemented very consistently yet.
There are read and write methods for most objects and Proto files, e.g. in htmresearch-core, but I could not find any example that serializes a whole network of different components and loads it again using capnp.
I will experiment to see if I can get it working, but any example/code for a whole network would be a great help.
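For anyone following along, the read/write pairing I mean has roughly this shape. This is a runnable stand-in, not real NuPIC code: DummyProto imitates the tiny slice of the pycapnp builder API that gets used, since I can't assume capnp is installed here.

```python
import io
import json

class DummyProto:
    """Imitates the small slice of a pycapnp builder used below."""
    def __init__(self, state=None):
        self.state = state if state is not None else {}

    def write(self, stream):
        # real capnp builders also expose write(file)
        stream.write(json.dumps(self.state).encode())

    @classmethod
    def read(cls, stream):
        return cls(json.loads(stream.read().decode()))

class DemoAlgorithm:
    """Stand-in algorithm following the write(proto)/read(proto) pairing."""
    def __init__(self, permanence=0.5):
        self.permanence = permanence

    def write(self, proto):
        proto.state['permanence'] = self.permanence

    @classmethod
    def read(cls, proto):
        return cls(proto.state['permanence'])

# save: algorithm -> proto builder -> file/stream
buf = io.BytesIO()
algo = DemoAlgorithm(0.21)
proto = DummyProto()
algo.write(proto)
proto.write(buf)

# load: file/stream -> proto reader -> algorithm
buf.seek(0)
restored = DemoAlgorithm.read(DummyProto.read(buf))
```

Serializing a whole network then means repeating this per algorithm in each region, which is exactly the bookkeeping that has no ready-made example yet.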
For the Community Fork we are removing capnp serialization entirely and relying only on YAML. This might be a little slower, but it reduces code complexity a lot. It should be able to handle multiple layers of Networks. Not sure when this will be ready, however, because real work keeps getting in the way.
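To illustrate the trade-off, the text-based round trip looks roughly like this. (Sketch only: the dict below is a made-up toy state, and I'm using the stdlib json module in place of YAML so the snippet stays dependency-free — the dump/load shape is the same.)

```python
import json

# hypothetical per-region parameters; the real fork serializes actual
# network state, not this toy dict
state = {'regions': {'sp': {'boostStrength': 2.0},
                     'tm': {'cellsPerColumn': 32}}}

text = json.dumps(state, indent=2)   # human-readable text, like the YAML output
restored = json.loads(text)
```

Slower than a binary capnp dump, but much easier to inspect and maintain.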
Yes, that might be the better option, but I have now serialized almost the whole network with capnp in NuPIC.
Only some experimental components are missing.
Is there a way to serialize the SparseMatrixConnections that are used e.g. in experimental components like apical_tiebreak_temporal_memory with pycapnp?
I found a Proto file for ApicalTieBreakProto in htmresearch experimental, but it seems that on the C++ side serialization for SparseMatrix is not implemented yet. Is there an easy workaround?
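One workaround I can think of (untested against htmresearch, and I'm not assuming any real SparseMatrix accessor names): pull the nonzero entries out on the Python side, store those triples yourself, and refill a fresh matrix on load. The shape of it, with a plain dict standing in for the matrix:

```python
# {(row, col): value} dict stands in for the C++ SparseMatrix, whose
# real accessors are not assumed here
def to_triples(matrix):
    """Flatten a sparse matrix into (row, col, value) triples for storage."""
    return [(r, c, v) for (r, c), v in sorted(matrix.items())]

def from_triples(triples):
    """Rebuild the matrix from stored triples on load."""
    return {(r, c): v for r, c, v in triples}

m = {(0, 3): 0.5, (2, 1): 0.8}
saved = to_triples(m)      # a plain list, easy to store with capnp or anything else
restored = from_triples(saved)
```

It sidesteps the missing C++ read/write, at the cost of walking the whole matrix on every save.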
If it helps, here is what we test. Tell us if there is another object that needs adding to this list.
Oh great, I should have seen that list before choosing supported components.
If there are any plans to add serialization support to the SparseMatrixConnections class (a wrapper for the C++ SparseMatrix), that would help a lot.
Other than that, I managed to serialize all my regions in Python, but algorithms built on top of these base classes cannot be serialized without going deep into the C++ codebase and the internal algorithms.
Of the NuPIC components, I wrote serialization for the temporal pooler region with the union temporal pooler; I will try to test it and share it on GitHub.
There are no plans. But how would it help? I thought we had all the components necessary to save an entire HTM network. Maybe @mrcslws or @scott know what you’re talking about.
The apicalDependentTemporalMemory components from htmresearch are used for temporal memory with apical and distal connections; they are easy to customize and have additional features. However, they cannot be serialized without read/write being implemented for the C++ matrix component. So it would help for building structures on top of these base component interfaces.
Or maybe they know whether it is easy to switch to the ordinary Connections component that is used in the ETM. I am not sure why the C++ SparseMatrix (in nupic.core) was created and used in the first place, or how much it differs, but the SparseMatrix-based components are much more recent.