Problem running newly saved / loaded SP & TM

I think the problem is only a language barrier. I will read the code better :wink:

I see your first example is where you computed 20,000 rows of input and saved both sp and tm. :+1:

I see your 2nd example is where you want to load the sp and tm instances you saved in the first script, but I do not see the code that actually loads them from the file system. Just like there is a writeToFile function on each, there is also a readFromFile function to get them back into memory. Also see the serialization guide. I hope that helps you!

I did that before this function:

with open("out_sp.tmp", "rb") as f1:
    sp2 = SpatialPooler.readFromFile(f1)
with open("out_tm.tmp", "rb") as f2:
    tm2 = TemporalMemory.readFromFile(f2)
classifier = SDRClassifier(steps=[1], alpha=0.5050, verbosity=0)
classifier1 = SDRClassifier(steps=[1], alpha=0.5050, verbosity=0)
classifier2 = SDRClassifier(steps=[1], alpha=0.5050, verbosity=0)

So you are saying that when you run the 2nd script, nothing prints to the screen and there are no errors? If so, I think you should investigate using a debugging tool to find out where you are snagged.

Yes, I will try to investigate it. But maybe you can advise me what I should check first. I will improve my English skills :slight_smile:

I suggest you find a python debugger like pdb, or else keep adding print statements until you find out where the process got hung up.
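A minimal sketch of the print-statement approach (the commented `readFromFile` calls stand in for the loading code from this thread; `pdb` is Python's standard debugger):

```python
import pdb  # Python's standard interactive debugger

progress = []  # record of how far the script got

def checkpoint(label):
    """Print and record a progress marker; the last label printed
    before the hang tells you which stage got stuck."""
    progress.append(label)
    print(label)

# Bracket each suspect stage with checkpoints:
checkpoint("loading SP...")
# sp2 = SpatialPooler.readFromFile(f1)
checkpoint("SP loaded, loading TM...")
# tm2 = TemporalMemory.readFromFile(f2)
checkpoint("TM loaded")

# Alternatively, drop into the debugger just before the suspect call
# and step through with "n" / "s", inspecting variables with "p <name>":
# pdb.set_trace()
```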

Sorry, I want to talk about my question once again. I want to increase n in my ScalarEncoder:

baselineEncoder = ScalarEncoder(name="baseline", w=21, n=2625, minval=51, maxval=75, forced=True)

flowEncoder = ScalarEncoder(name="flow", w=15, n=1050, minval=0, maxval=6, forced=True)
encodingWidth = (eventEncoder.getWidth() + flowEncoder.getWidth() + baselineEncoder.getWidth())

And I got the following errors:

File "", line 76, in
sp2 = SpatialPooler.readFromFile(f1)
File "/home/japanes/calc/venv/local/lib/python2.7/site-packages/nupic/", line 94, in readFromFile
proto = schema.read_packed(f)
File "capnp/lib/capnp.pyx", line 2962, in capnp.lib.capnp._StructModule.read_packed (capnp/lib/capnp.cpp:61515)
File "capnp/lib/capnp.pyx", line 3554, in capnp.lib.capnp._PackedFdMessageReader.init (capnp/lib/capnp.cpp:69069)
capnp.lib.capnp.KjException: capnp/serialize.c++:197: failed: expected totalWords <= options.traversalLimitInWords; Message is too large. To increase the limit on the receiving end, see capnp::ReaderOptions.
stack: 0x7f9e8dbb297b 0x7f9e8dbb2a1c 0x7f9e8daa4f87 0x4b669c 0x7f9e8da95d28 0x4b0c93 0x4c9f9f 0x4c2705 0x4ca088 0x4c2705 0x4c24a9 0x4f19ef 0x4ec372 0x4eaaf1 0x49e208 0x7f9ea5866830 0x49da59

Maybe I have problems with capnp.
Thanks for your help !
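For what it's worth, this KjException is capnp's built-in guard against oversized messages: reads are capped at `traversalLimitInWords` (8M 8-byte words, about 64 MB, by default), and a Spatial Pooler serialized with a very large n can exceed that. A minimal sketch of the workaround, assuming pycapnp's `traversal_limit_in_words` keyword (the `read_packed` call inside NuPIC's `readFromFile` would need to pass it through):

```python
# Sketch only: raise capnp's traversal limit at the read call.
# 2**61 words effectively disables the cap; pycapnp exposes the
# ReaderOptions field as the traversal_limit_in_words keyword.
BIG_LIMIT_WORDS = 2**61

def read_packed_unlimited(struct_module, f, limit=BIG_LIMIT_WORDS):
    """Read a packed capnp message with a raised traversal limit.

    struct_module is the capnp schema module NuPIC reads with
    (the `schema` in `proto = schema.read_packed(f)` above).
    """
    return struct_module.read_packed(f, traversal_limit_in_words=limit)
```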

I don’t know about capnp but I notice that n value looks really high. I think n is usually like 10-12x the w; here it is over 100x. That kind of jump means there’ll be no overlap between values and you may as well use a category encoder. I think having a big n like this makes a lot more compute work for the SP too. Do you maybe have 2625 Spatial Pooler columns in mind?

This may have nothing to do with the error though.

I don’t see this in your code example.
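To put numbers on it, here is a quick sanity check, assuming the usual non-periodic ScalarEncoder relationship resolution = (maxval - minval) / (n - w) (worth verifying against your NuPIC version):

```python
# Baseline encoder values from the thread: w=21, range 51..75.
w = 21
minval, maxval = 51, 75

def resolution(n):
    """Approximate scalar resolution for a non-periodic encoder."""
    return (maxval - minval) / float(n - w)

def n_for_resolution(target):
    """Smallest n that achieves the target resolution."""
    return w + int(round((maxval - minval) / target))

res_actual = resolution(2625)      # the n from the thread: ~0.0092
n_needed = n_for_resolution(0.1)   # a 0.1 resolution needs only n = 261
```

So if the goal is a 0.1 resolution, n = 2625 is roughly ten times finer than needed.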

Yeah, I agree with you, that n value looks really high. But experimentally I found that this value is not bad. I want my resolution to be 0.1; that is why I chose this value.
My spParams here:

inputWidth: 0
columnCount: 2048
spVerbosity: 0
spatialImp: cpp
globalInhibition: 1
localAreaDensity: -1.0
numActiveColumnsPerInhArea: 40
seed: 1956
potentialPct: 0.85
synPermConnected: 0.1
synPermActiveInc: 0.05
synPermInactiveDec: 0.000501
boostStrength: 3.0

with open("out_sp.tmp", "rb") as f1:
    sp2 = SpatialPooler.readFromFile(f1)
with open("out_tm.tmp", "rb") as f2:
    tm2 = TemporalMemory.readFromFile(f2)

Do you have already-saved models with a different value of n? Are those the ones you are trying to load from disk with a new n value? I can’t see all your code, so it is hard to tell what is wrong. It obviously has something to do with reading a model from disk and loading it into memory. Can you create new SPs with this n or is this only a problem when trying to load from disk?

It is only from loading. I have the same value when saving and loading. I didn’t change anything in the code, which we discussed some weeks ago. That code is at the top of this topic. I can repeat my code.

So it must have to do with how the ScalarEncoder is configured to serialize with capnp. But this looks right, doesn’t it @scott?

I’m not sure what is wrong.

And I didn’t find a solution to my previous problem, where I had no inferences after loading my model.

The last place we left this, your code was hanging up somewhere, and you were going to find out where by using a debugger like pdb or using print statements. Were you able to find out where the process was hanging?

I took a hex editor and checked the out_sp.tmp file I created, and I saw that the file is not empty. Using print statements I got information about column_count, and I got the same value that I saved. Now I don’t have any ideas about what to check. Maybe you can advise me what I can check.

Ok, I found my problem. I didn’t save my SDR classifiers. Now I do the next step:

with open("out_classifier.tmp", "wb") as f3:
    classifier.writeToFile(f3)
with open("out_classifier1.tmp", "wb") as f4:
    classifier1.writeToFile(f4)
with open("out_classifier2.tmp", "wb") as f5:
    classifier2.writeToFile(f5)

**I loaded my SDR classifiers in another script:**

with open("out_classifier.tmp", "rb") as f2:
    classifier4 = SDRClassifier.readFromFile(f2)
with open("out_classifier1.tmp", "rb") as f3:
    classifier5 = SDRClassifier.readFromFile(f3)
with open("out_classifier2.tmp", "rb") as f4:
    classifier6 = SDRClassifier.readFromFile(f4)

But I didn’t get predictions. What did I do wrong?

Thanks a lot!

What do you mean? You don’t get a model result object from the compute function?

I always got 0. But finally I resolved all the questions about saving. Thank you very much for your help. Thanks for spending your time.

I need more info, please. What is 0? What line of code returns what before saving, vs what after saving and running again?