HTM and Reversibility

There are two threads on the forum at the moment that are about image processing.
In both cases, the OP is asking about going from SDRs back to the features that made up the SDR.

To the best of my knowledge, the generation of an SDR is a one-way process; it is not a reversible process.

With fully connected networks, you can't get back from a given SDR to the factors it is using to recognize that we have seen this sequence before.

HTM gurus - am I missing something?

Would adding topology help in at least narrowing the input to a region?


If I understood correctly, @rhyolight just mentioned that it could be done here:

If you have topology enabled, then the SP breaks up minicolumn RFs into local chunks. In this case, if you have separated your input features to match the SP topology, you might be able to do some decoding. But AFAIK we have not tested this.
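To make the quoted idea concrete: with topology, each minicolumn's connected synapses are confined to a local patch of the input, so OR-ing the receptive fields of the active columns should at least localize the region of input that produced the SDR. A minimal sketch (the function and data-structure names here are illustrative, not any real SP API):

```python
def decode_active_columns(active_columns, connected_synapses):
    """Approximate the original input by OR-ing the connected
    receptive fields of the currently active columns.

    connected_synapses: dict mapping column index -> set of input
    bit indices that column is connected to (hypothetical layout).
    """
    recovered = set()
    for col in active_columns:
        recovered |= connected_synapses[col]
    return recovered
```

With topology enabled, each `connected_synapses[col]` set is spatially local, so the union narrows the input down to a region rather than scattering guesses across the whole input space.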

Important point.


Something like LIME could be used, except it would have to be modified to work temporally.

Another hypothetical option, unfortunately.


In my demos for HTM.js, I configured it to grow apical connections from cells in the input space (prior to SP) back to the active cells in the TM layer (after TM). The learning function for these apical connections is identical to the TM algorithm. What this does is produce predictions in the input space (these may be unions in the case of ambiguity). These predicted SDRs are encoded in the same format of whatever encoder you are using to encode the original data.
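The mechanism described above can be sketched in a few lines. This is a simplified, hypothetical Python rendering of the idea (the real HTM.js demos use the full TM learning rules, with multiple segments and permanences; here each input cell keeps a single flat segment for brevity):

```python
class InputDecoder:
    """Sketch: cells in the input space grow 'apical' connections
    back to active TM cells, so TM activity can later predict an
    SDR in the input space. Names and structure are illustrative."""

    def __init__(self, num_input_bits, activation_threshold=3):
        # One segment per input cell, kept as a set of TM cell indices.
        self.segments = [set() for _ in range(num_input_bits)]
        self.threshold = activation_threshold

    def learn(self, input_sdr, tm_active_cells):
        # Each active input bit grows synapses to the TM cells that
        # were active in the same timestep.
        for bit in input_sdr:
            self.segments[bit].update(tm_active_cells)

    def predict(self, tm_active_cells):
        # Input bits whose segment sufficiently overlaps current TM
        # activity become predictive; unions appear under ambiguity.
        tm = set(tm_active_cells)
        return [bit for bit, seg in enumerate(self.segments)
                if len(tm & seg) >= self.threshold]
```

After training, `predict()` returns an SDR in the same format as the original encoder output, which is what makes the decode step in the next paragraph possible.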

If you were to combine this strategy with an encoder that has both an "encode" and a "decode" function, then you could reverse the process. This of course may not be possible for all encoders (such as word SDRs from semantic folding, for example), but it may be applicable in certain cases.


BTW, Cortical.io has the ability to decode word SDRs (via the "terms" API), so that probably wasn't the best example of where this strategy wouldn't apply. I'm sure there are other encoders where decoding is not an option, though.

Also, in case anyone is planning to test this strategy, keep in mind that it relies on a cell being able to grow multiple distinct segments to connect it with many different contexts. It wouldn’t work with an HTM implementation that only models a single segment per cell (such as Etaler).


It depends on perspective. In the SP's case, a feature can be reduced to an active bit in the input space. That bit can be traced back from an active column, down through its dendrites and active synapses, assuming introspection is allowed at the end of an SP iteration. So a high-level feature (in ML terms, e.g. age or length) corresponds to a set of bits (low-level features to the SP), but that set can only be partially recovered with respect to the high-level feature, because the SP operates directly on bits and only indirectly on high-level features (sorry for the lack of terms).
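To illustrate the "partial recovery" point: once you have traced a set of input bits back from the active columns, you can attribute them to high-level fields if you know which bit range each encoder field occupies. A sketch, assuming a hypothetical fixed bit layout per field:

```python
def attribute_bits_to_fields(recovered_bits, field_ranges):
    """Map recovered input bits back to high-level fields.

    field_ranges: dict of field name -> range of bit indices,
    i.e. the (assumed known) layout of the concatenated encoding.
    """
    return {name: sorted(b for b in recovered_bits if b in rng)
            for name, rng in field_ranges.items()}
```

The result shows which fields the traced bits fall into, but typically only a fraction of each field's original bits come back, which is exactly the partial recovery described above.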

Not an HTM guru by the way, just sharing my perspective.