Feeding HTM in batch mode (as opposed to line by line)


Is there a way to feed the HTM machinery the whole table in advance, as opposed to feeding it line by line in a ‘for row in csvReader’ loop?
In my experiment it takes about 10 ms per iteration on a 32 GB Linux machine, which makes a 3,000-row file run for a good half a minute (the loop is essentially the sketch below).
Will running in batch mode make any performance difference?
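
For reference, the loop I am timing is essentially the pattern below (a minimal sketch: the `model` argument stands for an already-created HTM model, e.g. from NuPIC's ModelFactory, and the field names and timestamp format are illustrative, not my exact script):

```python
import csv
import time
from datetime import datetime

def feed_rows(model, csv_path):
    """Feed a CSV to an HTM model one record at a time and report throughput.

    `model` is assumed to expose a per-record run() method (as NuPIC OPF
    models do); the field names and timestamp format are illustrative.
    """
    with open(csv_path) as f:
        reader = csv.DictReader(f)
        start = time.time()
        rows = 0
        for row in reader:
            model.run({
                "timestamp": datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S"),
                "lat": float(row["lat"]),
                "long": float(row["long"]),
            })
            rows += 1
    elapsed = time.time() - start
    print("%d rows in %.1f s (%.1f ms/row)" % (rows, elapsed, 1000.0 * elapsed / rows))
```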


No, there is no feature like the one you describe. HTM systems process temporal data streams step by step, not in batch. Also, 10 ms per iteration seems like a good speed for an HTM system. How big is the structure? How many columns and cells per column? How big is the input space?


Thanks Matt.
I was trying out sets of 5,000-10,000 tuples (timestamp, lat, long).
I made sure the timestamps are evenly spaced, and the lat/long values are actually modified a little to compensate for the fact that 1 degree of latitude is bigger than 1 degree of longitude (roughly the adjustment sketched below).
Grok didn’t work well at all, so I tried it with my own encoder.
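
The adjustment I mean is roughly the usual equirectangular scaling, something like this (a minimal sketch; the reference latitude and the kilometre constant are approximations, not my exact encoder code):

```python
import math

def to_local_xy(lat_deg, lon_deg, ref_lat_deg):
    """Convert lat/long degrees into roughly isotropic kilometre coordinates.

    One degree of latitude is about 111.32 km everywhere, while one degree of
    longitude shrinks by cos(latitude), so scaling longitude by cos(ref_lat)
    makes the two axes comparable before they are fed to the encoder.
    """
    km_per_deg_lat = 111.32
    y_km = lat_deg * km_per_deg_lat
    x_km = lon_deg * km_per_deg_lat * math.cos(math.radians(ref_lat_deg))
    return x_km, y_km
```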


You do not need to make the timestamps evenly spaced in the future, as long as the datetime is encoded in the input.
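
For example, with the classic NuPIC DateEncoder each record carries its own time-of-day and day-of-week context, so the spacing of the timestamps does not have to be regular (a minimal sketch; the import path and parameter values are illustrative and may differ between versions):

```python
from datetime import datetime
from nupic.encoders.date import DateEncoder

# Illustrative parameters: 21-bit encodings for time of day and day of week.
# Tune these to the data; they are example values, not a recommendation.
encoder = DateEncoder(timeOfDay=(21, 1), dayOfWeek=21, name="timestamp")

sdr = encoder.encode(datetime(2014, 6, 1, 14, 30))
print(sdr.sum(), "active bits out of", sdr.size)
```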