Interpretation vs Representation

If “representation” is one side of the coin, I would suggest that “interpretation” is the other. But I haven’t seen as much modeling work done on the interpretation problem as has been spent on the representation problem.

I guess it’s treated as an application concern rather than an engineering concern? E.g. the “Hot Gym Example” performs interpretation in the form of anomaly scores, but it stops there. That’s a far cry from explaining the daily decision making we do with our brain/mind.

There are expert systems, BI (business intelligence) systems, etc. that run on computers to aid decision making, but so far none of them takes enough inspiration from the way the brain does it.

Please share anything I should look at regarding systematic or innovative ways of getting SDRs interpreted.


I think even the representation part doesn’t have that much work done on it. We are still hand-coding the input SDRs for SPs in HTM; unless I’ve missed something, HTM doesn’t “learn the representation” yet.
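To make the hand-coding point concrete, here is a minimal sketch (all names and parameters are hypothetical, not from any HTM library) of the kind of designed-by-hand scalar encoder that typically feeds a Spatial Pooler. The mapping from value to active bits is fixed by the programmer, not learned:

```python
import numpy as np

def scalar_encoder(value, min_val=0.0, max_val=100.0, size=128, active_bits=16):
    """Hand-coded scalar encoder: maps a value to a contiguous run of
    active bits. The representation is designed, not learned."""
    sdr = np.zeros(size, dtype=np.uint8)
    # clamp the value into range, then scale to a window start index
    frac = (min(max(value, min_val), max_val) - min_val) / (max_val - min_val)
    start = int(round(frac * (size - active_bits)))
    sdr[start:start + active_bits] = 1
    return sdr

a = scalar_encoder(10.0)
b = scalar_encoder(11.0)
c = scalar_encoder(90.0)
overlap = lambda x, y: int(np.dot(x, y))
# nearby values share many active bits; distant values share almost none
```

The only "semantics" here (similar inputs give overlapping SDRs) come from the engineer's choice of window mapping, which is exactly the part one might hope the system would learn for itself.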


In the short run, I think a simple ML regressor could also predict the expected scalar value, which is a bit more than just an anomaly score.
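A toy sketch of that idea (everything here is hypothetical and NumPy-only, not any HTM API): treat the SDR bits as features and fit a least-squares linear regressor that decodes a scalar back out of an SDR, which already gives you a predicted value rather than only an anomaly score:

```python
import numpy as np

def encode(value, size=128, active_bits=16, min_val=0.0, max_val=100.0):
    """Same hypothetical hand-coded scalar encoder: a sliding window of ones."""
    sdr = np.zeros(size)
    frac = (min(max(value, min_val), max_val) - min_val) / (max_val - min_val)
    start = int(round(frac * (size - active_bits)))
    sdr[start:start + active_bits] = 1.0
    return sdr

rng = np.random.default_rng(0)
values = rng.uniform(0, 100, 500)
X = np.stack([encode(v) for v in values])  # rows = SDRs, columns = bits
y = values  # in a real system, y would be the value the SDR should predict

# least-squares linear regressor: learns a weight per SDR bit
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = float(encode(42.0) @ w)  # decoded scalar for an unseen input
```

In practice the SDR fed to the regressor would come from the SP/TM's predictive cells rather than straight from the encoder, but the decoding step is the same shape of problem.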

In the long/general run, I guess there should be a way to define, represent, and account for meaning, besides “information”, in AI/ML algorithms.