'Cognitive Computing' using HTM

A lot of marketing terms have been thrown around recently in Artificial Intelligence, one of them being 'Cognitive Computing', which implies that the computing works on principles similar to those of the cortex. As far as we're aware, HTM is the only thing that comes anywhere close to that. Perhaps HTM could do that term a bit of justice.

The basis of computing is the transformation of input data to output data. Within HTM that means the transformation of an input SDR to an output SDR, or in the case of the spatial pooler, a dense representation to an SDR. After much exposure to the domain, once many patterns have been learned and predicted, there are many 'functions' that can transform SDRs from one state to another. In other words, there are many learned temporal transformations: given pattern A, pattern B will be predicted. That single-step transformation could also be applied to patterns similar to A, i.e. novel patterns that have not been learned yet.
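To make that concrete, here is a minimal sketch in plain Python of what I mean by a learned SDR-to-SDR transformation. This is not NuPIC or any real HTM library, and every name in it (`Sdr`, `TransitionMemory`, etc.) is made up for illustration; it just stores one-step transitions and recalls the best-overlapping learned pattern for a novel input:

```python
# Minimal sketch (NOT a real HTM implementation): an SDR is modelled
# as a frozenset of active bit indices, and one-step temporal
# transitions are stored as a lookup from a learned input SDR to the
# SDR it predicts next.

Sdr = frozenset  # hypothetical alias: an SDR as a set of active bits

class TransitionMemory:
    def __init__(self):
        self.transitions: dict[Sdr, Sdr] = {}

    def learn(self, a: Sdr, b: Sdr) -> None:
        """Learn that pattern A is followed by pattern B."""
        self.transitions[a] = b

    def predict(self, pattern: Sdr) -> Sdr:
        """Predict the next SDR even for a novel input, by recalling
        the transition whose stored key overlaps the input the most."""
        best = max(self.transitions, key=lambda k: len(k & pattern))
        return self.transitions[best]

tm = TransitionMemory()
tm.learn(Sdr({1, 2, 3}), Sdr({4, 5, 6}))  # A -> B
print(tm.predict(Sdr({1, 2, 9})))          # novel input near A still predicts B
```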

For a simple illustration: if the HTM learned the temporal patterns of a falling ball, then with learning off, if a triangle were fed forward instead of a ball, the HTM would still predict a falling triangle (provided the triangle is at a similar scale, so the SDRs overlap). If a variety of different falling shapes were learned, then eventually any novel shape could be fed in and an approximate prediction of its fall would occur. This can work because it's very likely that a novel shape will overlap with some union of previously learned shapes/patterns; it will share various similarities with different patterns. A novel input SDR will produce a novel output SDR.
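Here is the overlap argument in toy form. The SDRs below are entirely made up, but they show how a novel shape's active bits can be mostly covered by the union of previously learned shapes:

```python
# Toy illustration of the overlap argument: a novel shape's SDR will
# usually share active bits with the union of learned shapes, so the
# learned transitions still partially apply. All values are invented.

ball     = frozenset({3, 7, 12, 20, 31})   # learned pattern
square   = frozenset({3, 9, 12, 25, 31})   # learned pattern
triangle = frozenset({3, 7, 9, 25, 40})    # novel shape, never learned

learned_union = ball | square
overlap = triangle & learned_union
print(f"novel/learned overlap: {len(overlap)} of {len(triangle)} bits")
# -> 4 of 5 bits: the novel triangle is mostly "covered" by what was
#    already learned, so its predicted fall can borrow from those
#    learned transitions.
```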

Other similar simple examples of transformation could include shapes that expand in scale, rotate on an axis, squeeze, skew, etc. But these transformations could apply to anything the encoders throw at it. Maybe the transformation of encoded numbers could be learned, e.g. given an input union SDR of two numbers, the HTM could eventually learn to output an invariant SDR representation of their product. Essentially the HTM has learned a function: a computational function.
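As a very rough sketch of what that might look like (the encoder and every function name here are hypothetical, not any real Numenta API), training could associate the union SDR of two encoded numbers with the SDR of their product:

```python
# Hedged sketch of "HTM as a learned function": encode two numbers,
# form their union SDR as the input, and associate it with the SDR
# of their product. encode() is a toy scalar encoder, invented here.

def encode(value: int, n_bits: int = 400, w: int = 21) -> frozenset:
    """Toy scalar encoder: a contiguous block of w active bits whose
    position is proportional to the value."""
    start = min(value * 3, n_bits - w)  # arbitrary scaling for the toy
    return frozenset(range(start, start + w))

def learn_multiplication(memory: dict, a: int, b: int) -> None:
    """Associate the union SDR of a and b with the SDR of a * b."""
    memory[encode(a) | encode(b)] = encode(a * b)

memory = {}
for a in range(1, 10):
    for b in range(1, 10):
        learn_multiplication(memory, a, b)

# After training, a union SDR close to a learned one should recall an
# SDR near the product's representation, using overlap matching as in
# the transition sketch above rather than exact dictionary lookup.
```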

The interesting thing here is that the learning is passive. The HTM is just doing what it does best: modelling the domain online. There is no error score to compare against, and therefore no local optima to suffer from.

This is just a theory; however, I wish to try to prove it in code. Has anyone tried anything similar to this?
