KjException when serializing

Ok, so something you’re doing in your script that is not in my script is breaking serialization. To figure it out, you need to start updating my script a little bit at a time, changing it to be more like your use-case. Where does it break?

Sorry for my delay. I tried to add my code to your script. Almost everything works, but I hit a problem: I can save the SP, but I get errors when I try to write the TM.
Below is my code:

    import csv
    from time import time

    import numpy
    import yaml

    from nupic.algorithms.sdr_classifier_factory import SDRClassifierFactory
    from nupic.algorithms.spatial_pooler import SpatialPooler
    from nupic.algorithms.temporal_memory import TemporalMemory
    from nupic.encoders import ScalarEncoder

    PARAMS_PATH = "model_iot.yaml"

    with open(PARAMS_PATH, "r") as f:
        modelParams = yaml.safe_load(f)["modelParams"]
        # enParams = modelParams["sensorParams"]["encoders"]
        spParams = modelParams["spParams"]
        tmParams = modelParams["tmParams"]


    eventEncoder = ScalarEncoder(name="event", w=7, n=14, minval=0, maxval=1, forced=True)
    eventEncoder1 = ScalarEncoder(name="event1", w=7, n=14, minval=0, maxval=1, forced=True)
    eventEncoder7 = ScalarEncoder(name="event7", w=7, n=14, minval=0, maxval=1, forced=True)
    eventEncoder2 = ScalarEncoder(name="event2", w=7, n=14, minval=0, maxval=1, forced=True)

    baselineEncoder = ScalarEncoder(name="baseline", w=21, n=377, minval=49, maxval=62, forced=True)


    flowEncoder = ScalarEncoder(name="flow", w=13, n=169, minval=0, maxval=13, forced=True)
    encodingWidth = (eventEncoder.getWidth() + flowEncoder.getWidth() + baselineEncoder.getWidth()
                     + eventEncoder1.getWidth() + flowEncoder.getWidth() + baselineEncoder.getWidth()
                     + eventEncoder2.getWidth() + flowEncoder.getWidth() + baselineEncoder.getWidth())
    sp = SpatialPooler(
        inputDimensions=(encodingWidth,),
        columnDimensions=(spParams["columnCount"],),
        potentialPct=spParams["potentialPct"],
        potentialRadius=encodingWidth,
        globalInhibition=spParams["globalInhibition"],
        localAreaDensity=spParams["localAreaDensity"],
        numActiveColumnsPerInhArea=spParams["numActiveColumnsPerInhArea"],
        synPermInactiveDec=spParams["synPermInactiveDec"],
        synPermActiveInc=spParams["synPermActiveInc"],
        synPermConnected=spParams["synPermConnected"],
        boostStrength=spParams["boostStrength"],
        seed=spParams["seed"],
        wrapAround=True
    )
    tm = TemporalMemory(
        columnDimensions=(tmParams["columnCount"],),
        cellsPerColumn=tmParams["cellsPerColumn"],
        activationThreshold=tmParams["activationThreshold"],
        initialPermanence=tmParams["initialPerm"],
        connectedPermanence=spParams["synPermConnected"],
        minThreshold=tmParams["minThreshold"],
        maxNewSynapseCount=tmParams["newSynapseCount"],
        permanenceIncrement=tmParams["permanenceInc"],
        permanenceDecrement=tmParams["permanenceDec"],
        predictedSegmentDecrement=tmParams["predictedSegmentDecrement"],
        maxSegmentsPerCell=tmParams["maxSegmentsPerCell"],
        maxSynapsesPerSegment=tmParams["maxSynapsesPerSegment"],
        seed=tmParams["seed"]
    )
    classifier = SDRClassifierFactory.create()
    classifier1 = SDRClassifierFactory.create()
    classifier7 = SDRClassifierFactory.create()
    classifier2 = SDRClassifierFactory.create()
    def testWriteSp(numRecords):
      learning_time = time()
      with open("test3.csv", "r") as fin:
        reader = csv.reader(fin)
        headers = reader.next()
        # Skip the two extra header rows (field types and special flags).
        reader.next()
        reader.next()

        for count, record in enumerate(reader):
          print "Count",count
          if count >= numRecords: break

          # Convert the date string into a Python date object (not used here).
          # dateString = datetime.datetime.strptime(record[0], "%m/%d/%y %H:%M")
          # Convert the data value strings into floats.
          event_value = float(record[2])    # device 1
          event_value_3 = float(record[4])  # device 3
          event_value_2 = float(record[3])  # device 2
          # event_value_7 = float(record[8])  # device 7
          baseline_all = float(record[10])
          pres_data = float(record[11])
          flow_value = float(record[0])
          # To encode, we need to provide zero-filled numpy arrays for the encoders
          # to populate.
          eventBits = numpy.zeros(eventEncoder.getWidth())
          eventBits_2 = numpy.zeros(eventEncoder2.getWidth())
          eventBits_3 = numpy.zeros(eventEncoder1.getWidth())


          baseline_Bits = numpy.zeros(baselineEncoder.getWidth())
          flowBits = numpy.zeros(flowEncoder.getWidth())


          # Now we call the encoders to create bit representations for each value.
          eventEncoder.encodeIntoArray(event_value, eventBits)
          eventEncoder1.encodeIntoArray(event_value_3, eventBits_3)
          eventEncoder2.encodeIntoArray(event_value_2, eventBits_2)
          baselineEncoder.encodeIntoArray(baseline_all, baseline_Bits)
          flowEncoder.encodeIntoArray(flow_value, flowBits)

          # Concatenate all these encodings into one large encoding for Spatial
          # Pooling.
          encoding = numpy.concatenate(
            [eventBits, flowBits, baseline_Bits,
             eventBits_2, flowBits, baseline_Bits,
             eventBits_3, flowBits, baseline_Bits]
          )
          # Create an array to represent active columns, all initially zero. This
          # will be populated by the compute method below. It must have the same
          # dimensions as the Spatial Pooler.
          activeColumns = numpy.zeros(spParams["columnCount"])
          # Execute Spatial Pooling algorithm over input space.

          sp.compute(encoding, True, activeColumns)

          activeColumnIndices = numpy.nonzero(activeColumns)[0]


          # Execute Temporal Memory algorithm over active mini-columns.
          tm.compute(activeColumnIndices, learn=False)

          activeCells = tm.getActiveCells()
          # Get the bucket info for this input value for classification.

          bucketIdx = eventEncoder.getBucketIndices(event_value)[0]
          bucketIdx_2 = eventEncoder2.getBucketIndices(event_value_2)[0]
          bucketIdx_3 = eventEncoder1.getBucketIndices(event_value_3)[0]
          # Run classifier to translate active cells back to scalar value.
          classifierResult = classifier.compute(
            recordNum=count+20000,
            patternNZ=activeCells,
            classification={
              "bucketIdx": bucketIdx,
              "actValue": event_value
            },
            learn=False,
            infer=True
          )
          classifierResult1 = classifier1.compute(
            recordNum=count,
            patternNZ=activeCells,
            classification={
              "bucketIdx": bucketIdx_3,
              "actValue": event_value_3
            },
            learn=True,
            infer=False
          )

          classifierResult2 = classifier2.compute(
            recordNum=count,
            patternNZ=activeCells,
            classification={
              "bucketIdx": bucketIdx_2,
              "actValue": event_value_2
            },
            learn=True,
            infer=False
          )
          learning_time_end = time()
          print "Time",(learning_time_end-learning_time)
      with open("out_sp.tmp", "wb") as f1:
        sp.writeToFile(f1)
      with open("out_tm.tmp", "wb") as f2:
        tm.writeToFile(f2)
      learning_time_end = time()
      print "Time", (learning_time_end - learning_time)
    if __name__ == "__main__":
      testWriteSp(2)

And I get an error like this:

Traceback (most recent call last):
  File "save.py", line 173, in <module>
    testWriteSp(2)
  File "save.py", line 169, in testWriteSp
    tm.writeToFile(f2)
  File "/home/japanes/calc/venv/local/lib/python2.7/site-packages/nupic/serializable.py", line 113, in writeToFile
    proto = schema.new_message()
AttributeError: 'NoneType' object has no attribute 'new_message'

Can you help me?

Thanks a lot.

That is strange, seems like it should work. Can you try writing the tm to a file as soon as you create it, before you compute anything? Does it still throw the same error?

Yes, I got the same error. I wrote the file before the for loop:

    def testWriteSp(numRecords):
      learning_time = time()
      with open("test3.csv", "r") as fin:
        reader = csv.reader(fin)
        headers = reader.next()
        reader.next()
        reader.next()
        with open("out_tm.tmp", "wb") as f2:
            tm.writeToFile(f2)
        for count, record in enumerate(reader):
          print "Count",count
          if count >= numRecords: break

And got this error:

Traceback (most recent call last):
  File "save.py", line 173, in <module>
    testWriteSp(2)
  File "save.py", line 73, in testWriteSp
    tm.writeToFile(f2)
  File "/home/japanes/calc/venv/local/lib/python2.7/site-packages/nupic/serializable.py", line 113, in writeToFile
    proto = schema.new_message()
AttributeError: 'NoneType' object has no attribute 'new_message'

Thanks for your help. Do you have any ideas?

This script creates a SP and TM and saves them to file. Does it run without error for you?

    from nupic.algorithms.spatial_pooler import SpatialPooler
    from nupic.algorithms.temporal_memory import TemporalMemory

    def testWriteSpTm():
      sp = SpatialPooler(
        inputDimensions=(400,),
        columnDimensions=(1024,),
        wrapAround=True
      )
      tm = TemporalMemory()
      with open("out_sp.tmp", "wb") as f1:
        sp.writeToFile(f1)
      with open("out_tm.tmp", "wb") as f2:
        tm.writeToFile(f2)


    if __name__ == "__main__":
      testWriteSpTm()

  with open("out_sp.tmp", "wb") as f1:
    sp.writeToFile(f1)
  with open("out_tm.tmp", "wb") as f2:
    sp.writeToFile(f2)

In the fourth line, do you mean tm.writeToFile(f2)?
I will try it.

If I copy your code as written, it runs all right. But I get the error as soon as I change it to tm.writeToFile(f2).

Thanks a lot.

Yes, thanks, I fixed it in the example. It runs fine for me.

I don’t quite understand you. Are you saying that if you run my code it works without error? Or does the script give an error? If so, which one?

I got the same error with your script.

Assuming this is the error:

Traceback (most recent call last):
  File "save.py", line 173, in <module>
    testWriteSp(2)
  File "save.py", line 73, in testWriteSp
    tm.writeToFile(f2)
  File "/home/japanes/calc/venv/local/lib/python2.7/site-packages/nupic/serializable.py", line 113, in writeToFile
    proto = schema.new_message()
AttributeError: 'NoneType' object has no attribute 'new_message'

I see some other reports when searching for “new_message” in the forum. I’m not sure what is happening here. @scott do you?

No, I don’t know. If we could reproduce it in a Docker container, then I could try debugging it.

Maybe I haven’t updated the dependencies needed for serialization?

This is something I’d like to fix, but I just don’t know how. The script above works for me. If we could find other people who also have problems using the script that would help identify what part of your environment is problematic.

I’m not sure what you mean by updating dependencies for using serialization.
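One thing worth checking, though (just a guess, not a confirmed fix): NuPIC’s Cap’n Proto serialization relies on the pycapnp package, and the traceback suggests getSchema() returned None before new_message() was called. A minimal sanity check, assuming the same TemporalMemory class used earlier in this thread:

    # Hedged sanity check: if pycapnp is missing or broken, the Cap'n Proto
    # schemas cannot load, and a None schema would produce exactly this
    # AttributeError inside writeToFile().
    try:
      import capnp
      print "pycapnp imported OK"
    except ImportError as e:
      print "pycapnp is not available:", e

    from nupic.algorithms.temporal_memory import TemporalMemory
    print "TM schema:", TemporalMemory.getSchema()

If that last line prints None, the serialization schemas are not loading in your environment.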

OK. I will look for the error in my environment.

Thanks for your help. I fixed the errors we discussed above in this thread, and I updated my OS. Nevertheless, now I have another problem: when I use my trained model, inference is very bad. It looks like the model somehow loses most of the trained information (I use the writeToFile and readFromFile functions). What could be the reason for this result?
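
One way to narrow this down (a sketch under assumptions, not a confirmed diagnosis): test whether the SP/TM round-trip losslessly on their own, separate from the rest of your pipeline. Note also that the script above only serializes the SP and TM; the SDRClassifier instances are created fresh on every run, so whatever mapping they learned is never saved by writeToFile. A minimal round-trip check, using the same TemporalMemory API as earlier in this thread:

    from nupic.algorithms.temporal_memory import TemporalMemory

    tm1 = TemporalMemory()
    # Feed a few patterns so the TM has learned state worth comparing.
    for _ in xrange(10):
      tm1.compute([0, 1, 2, 3], learn=True)

    with open("out_tm.tmp", "wb") as f:
      tm1.writeToFile(f)
    with open("out_tm.tmp", "rb") as f:
      tm2 = TemporalMemory.readFromFile(f)

    # If serialization is lossless, both instances should activate the
    # same cells for the same input columns.
    tm1.compute([0, 1, 2, 3], learn=False)
    tm2.compute([0, 1, 2, 3], learn=False)
    print sorted(tm1.getActiveCells()) == sorted(tm2.getActiveCells())

If this prints True but the full model still infers poorly after reloading, the loss is probably in state that is not being serialized (for example, the classifiers) rather than in the SP/TM files themselves.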