I think the problem is only a language barrier. I will read the code more carefully.
I see that in your first example you computed 20,000 rows of input and saved both sp and tm.
In your 2nd example you want to load the sp and tm instances you saved in the first script, but I do not see the code that actually loads them from the file system. Just like there is a writeToFile function on each, there is also a readFromFile function to get them back into memory. Also see the serialization guide. I hope that helps you!
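For example, the whole round trip looks something like this (the file names and import paths here are just my guesses from your earlier code, so adjust them to match yours):

from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory

# First script: after computing the 20,000 rows, write both instances out.
with open("out_sp.tmp", "wb") as f1:
    sp.writeToFile(f1)
with open("out_tm.tmp", "wb") as f2:
    tm.writeToFile(f2)

# Second script: read them back before calling compute again.
with open("out_sp.tmp", "rb") as f1:
    sp = SpatialPooler.readFromFile(f1)
with open("out_tm.tmp", "rb") as f2:
    tm = TemporalMemory.readFromFile(f2)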
So you are saying that when you run the 2nd script, nothing prints to the screen and there are no errors? If so, I think you should investigate using a debugging tool to find out where you are snagged.
  File "experiment_load.py", line 76, in <module>
    sp2 = SpatialPooler.readFromFile(f1)
  File "/home/japanes/calc/venv/local/lib/python2.7/site-packages/nupic/serializable.py", line 94, in readFromFile
    proto = schema.read_packed(f)
  File "capnp/lib/capnp.pyx", line 2962, in capnp.lib.capnp._StructModule.read_packed (capnp/lib/capnp.cpp:61515)
  File "capnp/lib/capnp.pyx", line 3554, in capnp.lib.capnp._PackedFdMessageReader.__init__ (capnp/lib/capnp.cpp:69069)
capnp.lib.capnp.KjException: capnp/serialize.c++:197: failed: expected totalWords <= options.traversalLimitInWords; Message is too large. To increase the limit on the receiving end, see capnp::ReaderOptions.
stack: 0x7f9e8dbb297b 0x7f9e8dbb2a1c 0x7f9e8daa4f87 0x4b669c 0x7f9e8da95d28 0x4b0c93 0x4c9f9f 0x4c2705 0x4ca088 0x4c2705 0x4c24a9 0x4f19ef 0x4ec372 0x4eaaf1 0x49e208 0x7f9ea5866830 0x49da59
Maybe I have a problem with capnp.
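The error message points at capnp::ReaderOptions, so maybe I can work around it by reading the message with a bigger traversal limit and handing it to the class myself, something like this (this is only my guess based on what readFromFile does in serializable.py, and I am not 100% sure of the keyword name in pycapnp):

# Instead of sp2 = SpatialPooler.readFromFile(f1):
schema = SpatialPooler.getSchema()
with open("out_sp.tmp", "rb") as f1:
    # raise the default traversal limit
    proto = schema.read_packed(f1, traversal_limit_in_words=2**61)
    sp2 = SpatialPooler.read(proto)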
Thanks for your help!
I don’t know about capnp, but I notice that your n value looks really high. I think n is usually around 10-12x the w; I’ve never seen anything like 1000x. That kind of jump means there will be no overlap between values, and you may as well use a category encoder. I think having a big n like this makes a lot more compute work for the SP too. Do you maybe have 2625 Spatial Pooler columns in mind?
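Just for comparison, a scalar encoder is normally set up with proportions more like this (the numbers below are made up for illustration, not taken from your data):

from nupic.encoders.scalar import ScalarEncoder

# n is roughly 10-12x the w here
enc = ScalarEncoder(w=21, n=250, minval=0.0, maxval=100.0, forced=True)

# For a non-periodic ScalarEncoder the resolution works out to roughly
# (maxval - minval) / (n - w), so forcing a very fine resolution over a
# wide value range is exactly what inflates n.
print (100.0 - 0.0) / (250 - 21)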
This may have nothing to do with the error though.
Yeah, I agree with you that the n value looks really high, but experimentally I found that this value is not bad. I want my resolution to be 0.1; that is why I chose this value.
My spParams here:
Do you have already-saved models with a different value of n? Are those the ones you are trying to load from disk with a new n value? I can’t see all your code, so it is hard to tell what is wrong. It obviously has something to do with reading a model from disk and loading it into memory. Can you create new SPs with this n or is this only a problem when trying to load from disk?
It only happens when loading. I use the same value when saving and when loading. I didn’t change anything in the code we discussed some weeks ago; that code is at the top of this topic. I can repeat my code.
The last place we left this, your code was hanging up somewhere, and you were going to find out where by using a debugger like pdb or using print statements. Were you able to find out where the process was hanging?
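For example, you could run the whole script under the debugger with python -m pdb experiment_load.py, or drop a breakpoint right before the line you suspect (file name taken from your traceback):

import pdb

with open("out_sp.tmp", "rb") as f1:
    pdb.set_trace()  # execution pauses here; 'n' steps one line, 'p sp2' prints a variable
    sp2 = SpatialPooler.readFromFile(f1)

print "load finished"  # if this never prints, readFromFile is where it hangs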
I opened the out_sp.tmp file I created in a hex editor and saw that it is not empty. Using print I got the column_count back, and it is the same value I saved. Now I don’t have any more ideas about what to check. Maybe you can advise me what to check next.
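Roughly what I checked, simplified (the file name is the one from my script):

import os

print os.path.getsize("out_sp.tmp")  # the file is not empty

with open("out_sp.tmp", "rb") as f1:
    sp2 = SpatialPooler.readFromFile(f1)

print sp2.getNumColumns()  # same column count as the SP I saved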
OK, I found my problem: I didn’t save my SDR classifiers. Now I do the next step:
with open("out_classifier.tmp", "wb") as f3:
    classifier.writeToFile(f3)
with open("out_classifier1.tmp", "wb") as f4:
    classifier1.writeToFile(f4)
with open("out_classifier2.tmp", "wb") as f5:
    classifier2.writeToFile(f5)
I loaded my SDR classifiers in another script:
with open("out_classifier.tmp", "rb") as f2:
    classifier4 = SDRClassifier.readFromFile(f2)
with open("out_classifier1.tmp", "rb") as f3:
    classifier5 = SDRClassifier.readFromFile(f3)
with open("out_classifier2.tmp", "rb") as f4:
    classifier6 = SDRClassifier.readFromFile(f4)
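As a quick sanity check after loading, I also print a couple of the classifier parameters and compare them with the values I used when saving (I think the steps and alpha attributes exist on SDRClassifier, but I only use them for eyeballing, so treat this as an assumption):

# these should match the constructor arguments from the first script
print classifier4.steps, classifier4.alpha
print classifier5.steps, classifier5.alpha
print classifier6.steps, classifier6.alpha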