Hi All,
I just wanted to verify that I’m using the Algorithms API correctly to test the TM. I’m trying to make a 1:1, bare-bones comparison with my own TM implementation, so I’m feeding in the following simple categorical sequence, repeated 3000 times:

```python
seq = ['A', 'B', 'C', 'D', 'X', 'B', 'C', 'Y', 'A', 'E', 'F', 'D', 'X', 'E', 'F', 'Y']
```
I’m bypassing the SP to simplify the comparison, which I figured shouldn’t really matter since the inputs are categories with no overlap. So I’m feeding the active bits from the CategoryEncoder directly into the TM, in place of the SP’s winning columns. Does this approach make sense?
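Concretely, the wiring I mean looks something like the sketch below (the category list, `w=21`, and the one-column-per-encoder-bit TM sizing are illustrative placeholders, not my exact settings):

```python
import numpy

from nupic.encoders.category import CategoryEncoder
from nupic.algorithms.temporal_memory import TemporalMemory

categories = ["A", "B", "C", "D", "E", "F", "X", "Y"]
encoder = CategoryEncoder(w=21, categoryList=categories, forced=True)

# One TM column per encoder bit, so each category's bits map 1:1 onto columns.
tm = TemporalMemory(columnDimensions=(encoder.getWidth(),))

# Treat the encoder's active bits as if they were the SP's winning columns.
encoding = encoder.encode("A")
activeColumns = sorted(numpy.nonzero(encoding)[0])
tm.compute(activeColumns, learn=True)
```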
What’s confusing me is that the NuPIC TM seems to take a very long time for the Anomaly Scores and the Number of Predictions Made to converge to 0 and 1, respectively. This 16-letter sequence (with no noise) doesn’t seem to fully converge even after as many as 20,000 repeats, whereas my implementation gets there after just ~400. Of course I suspect there’s something off in my code, since a 16-letter sequence without noise should only take around 1600 repeats to converge, right?
Here is my code. I’m using only functions imported from NuPIC, as shown in the Algorithms API Quickstart (except those for generating the data). If anyone would take a quick glance, I’d be really grateful once again. Any mistakes you find or intuitions you have are most welcome!
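In outline, the measurement loop looks roughly like this (a sketch continuing the setup above; the way I reduce predicted columns to a "number of predictions made" is simplified here to predicted-column count divided by `w`):

```python
# Continuing from the setup sketch above (numpy, encoder, tm already defined).
w = 21  # bits per category, the same value passed to CategoryEncoder

seq = ['A', 'B', 'C', 'D', 'X', 'B', 'C', 'Y',
       'A', 'E', 'F', 'D', 'X', 'E', 'F', 'Y']

anomalyScores = []
numPredictions = []
prevPredictedColumns = set()

for repeat in range(3000):
    for letter in seq:
        activeColumns = set(numpy.nonzero(encoder.encode(letter))[0])
        tm.compute(sorted(activeColumns), learn=True)

        # Anomaly score: fraction of currently-active columns that were
        # not predicted at the previous step (0.0 = fully predicted).
        score = len(activeColumns - prevPredictedColumns) / float(len(activeColumns))
        anomalyScores.append(score)

        # Columns predicted for the *next* step, from the TM's predictive cells.
        prevPredictedColumns = set(tm.columnForCell(c)
                                   for c in tm.getPredictiveCells())

        # "Number of predictions made": roughly how many categories the
        # predicted columns cover (1.0 once exactly one category is predicted).
        numPredictions.append(len(prevPredictedColumns) / float(w))
```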
Here are the result plots showing the Anomaly Scores and Precisions (number of predictions made) for NuPIC and for my TM. In both, the Anomaly Scores trend toward 0 and the Precisions toward 1, with the deviations becoming steadily sparser, though the NuPIC TM never quite seems to get all the way there.
NuPIC TM:
My TM: