Automatic encoder

Good day, has anyone experimented with a method for automatically encoding HTM inputs? I would like to know about it and, if possible, use it rather than reinvent the wheel.

AFAIK, Spatial Pooler inputs are produced by hand-coded encoders (e.g. scalar, distributed, categorical, etc.). Has there been any attempt, or success, in automating the part of encoding where the semantics are actually learned?


An automatic anything encoder sounds like what an AGI would like to have.

PS
It probably doesn’t count, but a basic random-projection NN makes a decent encoder. New type of input? Add another random projector.
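Something like this is what I mean (a rough sketch of my own, names and parameters are arbitrary): project the input through a fixed random matrix and keep the top-k responses as the active bits.

```python
# Minimal sketch: encode a dense input vector as a sparse binary SDR
# via a fixed random projection + top-k winner-take-all.
import numpy as np

class RandomProjectionEncoder:
    def __init__(self, input_dim, sdr_size=2000, active_bits=100, seed=42):
        rng = np.random.default_rng(seed)
        # Fixed random projection matrix; never trained.
        self.proj = rng.standard_normal((sdr_size, input_dim))
        self.active_bits = active_bits

    def encode(self, x):
        scores = self.proj @ np.asarray(x, dtype=float)
        sdr = np.zeros(len(scores), dtype=np.uint8)
        sdr[np.argsort(scores)[-self.active_bits:]] = 1  # keep the k largest responses
        return sdr

# New type of input? Just instantiate another projector for it.
enc = RandomProjectionEncoder(input_dim=64)
print(enc.encode(np.random.rand(64)).sum())  # -> 100 active bits
```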

And there is a quite decent (and well-studied) equivalent for arbitrary time series too - an echo state network, or a liquid state machine for spiking networks. Anyone who hasn’t heard of these should check out reservoir computing in general.
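The reservoir idea for a scalar time series roughly looks like this (an untuned sketch, all sizes and scaling factors are arbitrary): a fixed random recurrent network expands each step into a rich state vector, which can then be sparsified into an SDR-like code.

```python
# Rough sketch of an echo state reservoir feeding a top-k binarization.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, k_active = 500, 25

W_in = rng.uniform(-1, 1, size=n_reservoir)              # input weights (fixed, random)
W = rng.standard_normal((n_reservoir, n_reservoir))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()            # scale spectral radius below 1

def run_reservoir(series):
    x = np.zeros(n_reservoir)
    codes = []
    for u in series:
        x = np.tanh(W_in * u + W @ x)                    # reservoir state update
        sdr = np.zeros(n_reservoir, dtype=np.uint8)
        sdr[np.argsort(x)[-k_active:]] = 1               # top-k binarization into an SDR
        codes.append(sdr)
    return codes

codes = run_reservoir(np.sin(np.linspace(0, 10, 200)))
print(len(codes), codes[0].sum())                        # 200 SDRs, 25 active bits each
```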


Agreed, and I didn’t mean “automatic anything,” by the way. I was thinking that, for example, instead of a hand-written encoder for a scalar value, a model/algorithm could be used to learn a good scalar encoding that aligns (not strictly) with the HTM SDR encoding criteria.

While writing this, I thought of an autoencoder. Do you think an autoencoder could be trained specifically for a particular domain, so that it encodes inputs semantically, similar to how a hand-written HTM input encoder would? I’ve never tried this, and I’d like to know if someone has already implemented it or something similar.
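To make the question concrete, something like this is what I have in mind (an untested sketch, assuming PyTorch; the model, sizes and the top-k binarization are just placeholders): train a small autoencoder on domain data, then binarize its bottleneck with a top-k rule to get a sparse code.

```python
# Sketch: autoencoder with a bottleneck that gets binarized into an SDR-like code.
import torch
import torch.nn as nn

class SDRAutoencoder(nn.Module):
    def __init__(self, input_dim=64, code_dim=256, active_bits=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))
        self.active_bits = active_bits

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def encode_sdr(self, x):
        code = self.encoder(x)
        idx = code.topk(self.active_bits, dim=-1).indices
        sdr = torch.zeros_like(code)
        return sdr.scatter_(-1, idx, 1.0)                # sparse binary code

model = SDRAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.rand(512, 64)                               # stand-in for domain data
for _ in range(100):                                     # plain reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    opt.step()

print(model.encode_sdr(data[:1]).sum().item())           # -> 16 active bits
```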

Nope, I will have a look, thanks.


I’m not sure training would produce a “good” one. Optimal representations of the same value might differ across contexts/problem types.

Random projections are related to FlyHash encoding.
A while ago I ran some tests on MNIST: the same classifier (HTM Core’s included) applied to the randomly projected SDRs got results very close to a SpatialPooler encoding of the same size and sparsity (e.g. a 100/2000-bit SDR).
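For reference, the FlyHash idea roughly looks like this (my own simplified sketch, parameters arbitrary, not the exact setup from those tests): expand the input into a much higher dimension with a sparse binary random matrix, then keep only the top-k most activated units.

```python
# FlyHash-style sketch: sparse binary random expansion + winner-take-all.
import numpy as np

def make_flyhash(input_dim, sdr_size=2000, fan_in=10, seed=1):
    rng = np.random.default_rng(seed)
    # Each output unit samples a few random input positions (sparse binary projection).
    proj = np.zeros((sdr_size, input_dim), dtype=np.uint8)
    for row in proj:
        row[rng.choice(input_dim, size=fan_in, replace=False)] = 1
    return proj

def flyhash_encode(x, proj, active_bits=100):
    scores = proj @ np.asarray(x, dtype=float)
    sdr = np.zeros(proj.shape[0], dtype=np.uint8)
    sdr[np.argsort(scores)[-active_bits:]] = 1           # winner-take-all
    return sdr

proj = make_flyhash(input_dim=784)                       # e.g. a flattened MNIST digit
sdr = flyhash_encode(np.random.rand(784), proj)          # 100/2000-bit SDR
print(sdr.sum())
```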

For numerical scalars, I tested my self-designed CycleEncoder and VarCycleEncoder - they’re mentioned here on the forum if you search for them… Numenta’s RDSE is pretty… randomized too.

But they all have parameters to set (even if only a few), so none of them are automatic.
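To show the kind of parameters I mean, here is a simplified sketch of the randomized scalar-bucket idea (not my CycleEncoder code and not Numenta’s actual RDSE implementation): size, active bits and resolution all still have to be chosen by hand.

```python
# Simplified randomized scalar encoder: adjacent buckets share all but one bit,
# so nearby values overlap heavily while distant ones barely overlap.
import numpy as np

class SimpleRandomScalarEncoder:
    def __init__(self, size=400, active_bits=21, resolution=0.5, seed=7):
        rng = np.random.default_rng(seed)
        self.bit_pool = rng.permutation(size)   # fixed random ordering of bit indices
        self.size, self.w, self.resolution = size, active_bits, resolution

    def encode(self, value):
        bucket = int(round(value / self.resolution))
        # Bucket i uses w consecutive entries of the random bit pool (wrapping around),
        # so bucket i and bucket i+1 share w-1 active bits.
        idx = [self.bit_pool[(bucket + j) % self.size] for j in range(self.w)]
        sdr = np.zeros(self.size, dtype=np.uint8)
        sdr[idx] = 1
        return sdr

enc = SimpleRandomScalarEncoder()
a, b, c = enc.encode(10.0), enc.encode(10.5), enc.encode(50.0)
print((a & b).sum(), (a & c).sum())   # neighbouring values overlap, distant ones don't
```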

The main requirement for SDR encoding is that two similar values should have a large overlap and dissimilar ones a smaller overlap. That’s it.
You can either:

  • use an algorithm that produces different encodings and evolutionarily selects the encoding parameters that give the best results, or
  • think it through and handcraft/program/train a proper encoder.

An autoencoder falls somewhere between the two above and might give good results within the “realm” of its training dataset, but it is not universal either.

PS: regarding the “good overlap” above - what counts as good is problem specific, or even task specific. Dynamically changing the encoding of e.g. a weight value within a data stream might be tricky.
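A quick way to sanity-check that property for any candidate encoder (a throwaway sketch; the contiguous-block scalar encoder here is just a stand-in, plug in whatever encoder you want to test):

```python
# Check that similar values share many bits and dissimilar ones share few.
import numpy as np

def block_scalar_encode(value, min_v=0.0, max_v=100.0, size=400, w=21):
    # Classic scalar-encoder idea: a block of w consecutive active bits whose
    # position slides with the value.
    start = int((np.clip(value, min_v, max_v) - min_v) / (max_v - min_v) * (size - w))
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[start:start + w] = 1
    return sdr

def overlap(a, b):
    return int((a & b).sum())

base = block_scalar_encode(50.0)
for v in (50.5, 52.0, 55.0, 90.0):
    print(v, overlap(base, block_scalar_encode(v)))   # overlap shrinks with distance
```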
