While reading each row, I call model.resetSequenceStates() whenever I encounter a row whose reset flag is ‘1’. By doing so, I wanted to prevent the HTM from learning the transition from the end of the previous sequence to the beginning of the next one (from -6.8 to 0 in the example above).
However, I found that the HTM still learns this unwanted transition and predicts 0 whenever -6.8 appears, as in the figure below (around position 216).
So I’m wondering whether resetSequenceStates() behaves differently from what I expected.
Is there any way to avoid this and properly handle multiple sequences separated by reset flags? Any help would be appreciated.
I’ve attached only a few lines here.
The code is based on one of the “Gym” OPF example codes.
What do you mean by “manual”…?
Is there any other way for the model to recognize resets automatically inside the OPF?
Thank you.
import datetime

import numpy as np
import unicodecsv

with open(_INPUT_FILE_PATH, "rb") as fin:
    reader = unicodecsv.reader(fin, encoding='utf-8')
    headers = reader.next()            # header row 1: field names
    typeDef = np.array(reader.next())  # header row 2: field types
    headerInfo = dict(zip(headers, typeDef))
    reader.next()                      # header row 3: special flags (skipped)
    for i, record in enumerate(reader, start=1):
        modelInput = dict(zip(headers, record))
        ....
        # Timestamp
        modelInput[u'Modified_Time'] = datetime.datetime.strptime(
            modelInput[u'Modified_Time'], "%Y-%m-%d %H:%M")
        actualValue = modelInput[_PREDFIELD]
        timestamp = modelInput[u'Modified_Time']
        # Here is what I did: reset before running the first row of a new sequence.
        # Note: values read from the CSV are strings, so compare against u'1'
        # (comparing to the integer 1 never matches).
        if modelInput[u'Reset'] == u'1':
            model.resetSequenceStates()
        result = model.run(modelInput)
        result = shifter.shift(result)
        .....
Yes, there is an experiment runner framework, but I’ve never used it directly. You need a CSV input file to run swarms over datasets because swarming itself uses the experiment runner framework.
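For what it’s worth, the CSV that swarming (and, as far as I know, FileRecordStream and the experiment runner) reads uses NuPIC’s three-row header format, where the third row carries special flags and an “R” flag marks the reset column. Here is a rough sketch of writing such a file; the field names, timestamps, and filename are made up (only -6.8 and 0 come from your description):

# Sketch of NuPIC's standard file format (three header rows). Field names are
# placeholders; the "R" flag in the third row is what marks the reset column.
import csv

with open("my_dataset.csv", "wb") as fout:
    writer = csv.writer(fout)
    writer.writerow(["Modified_Time", "Reset", "value"])  # row 1: field names
    writer.writerow(["datetime", "int", "float"])         # row 2: field types
    writer.writerow(["T", "R", ""])                       # row 3: special flags (T=timestamp, R=reset)
    writer.writerow(["2014-01-01 00:00", "0", "-6.8"])    # last row of one sequence
    writer.writerow(["2014-01-01 00:15", "1", "0"])       # Reset=1 starts a new sequence

My understanding is that when the experiment runner consumes a file like this, it applies the resets for you, which may be the “automatic” route you’re after.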
Regarding your code… have you tried moving the reset call to the end of the loop after running the model?
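To illustrate what I mean, here is a minimal sketch of that loop with the reset moved after model.run() (it reuses the modelInput / model / shifter names from your snippet and assumes the same field conversions happen where the ellipsis is); whether the reset belongs before or after the run depends on whether your flag marks the first row of the new sequence or the last row of the old one:

# Sketch only: the same loop as above, but resetting after the model has run.
for i, record in enumerate(reader, start=1):
    modelInput = dict(zip(headers, record))
    # ... same timestamp / field conversions as in the original snippet ...
    result = model.run(modelInput)
    result = shifter.shift(result)
    # Reset once the current row has been fed, so no transition is learned
    # from this row into the next one.
    if modelInput[u'Reset'] == u'1':
        model.resetSequenceStates()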
I have tried using a bool, but I found that it doesn’t actually matter as long as I call resetSequenceStates() manually (with additional code like the attached).
Is there any mechanism by which the model itself recognizes the reset flags in data coming into the OPF?
I’ll also try calling resetSequenceStates() after running the model.
I know that the reset flags are recognized properly in the swarming process.
But I don’t understand what the experiment runner framework is. Could you point me to relevant example code or documentation?