How to encode images and other visual data for an HTM system

Please find the link to my GitHub repository, with a working example of encoding visual data (images) as input to an HTM system. I was able to successfully encode handwritten digits and feed them as input to the Spatial Pooler (SP) using an autoencoder, and finally train the SP so that related images produce overlapping SDRs.
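To give a rough idea of that last step, here is a minimal sketch (not the repo's actual code) of ranking images by SDR overlap, assuming each SP output is stored as a 1-D array of active column indices:

```python
import numpy as np

def sdr_overlap(sdr_a, sdr_b):
    """Overlap score: number of active bits shared by two SDRs,
    each given as a 1-D array of active column indices."""
    return len(np.intersect1d(sdr_a, sdr_b))

def most_similar(query_sdr, sdr_library):
    """Rank stored image SDRs by overlap with the query image's SDR.
    sdr_library is a hypothetical dict mapping image ids to SP outputs."""
    scores = [(name, sdr_overlap(query_sdr, sdr))
              for name, sdr in sdr_library.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```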


Great work!
I’m sure taking a DL latent vector and turning it into a binary vector to feed to HTM has been discussed/practiced on this forum before.
(Like this one: Proof of concept: Trainable universal encoder architecture)
And I’m doing something similar at the moment as well.
I’ve actually added a k-winner layer to the bottleneck of the network, and also used a loss function to enforce sparsity over time, to make it get along with HTM better.
I’ve used this tactic to encode English words to binary vectors:
https://colab.research.google.com/drive/1341rA9fQFwCcUUgKcTnZVcyA9UuqmBHY
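In case the notebook is hard to skim, here is a minimal sketch of what a k-winner bottleneck could look like in Keras (this is not the notebook's exact code; the layer, k=20, and layer sizes are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

class KWinners(layers.Layer):
    """Keep only the k largest activations per sample, zeroing the rest,
    so the bottleneck code behaves like a fixed-sparsity SDR."""
    def __init__(self, k, **kwargs):
        super().__init__(**kwargs)
        self.k = k

    def call(self, x):
        # The k-th largest value in each sample acts as the threshold.
        kth = tf.math.top_k(x, k=self.k).values[:, -1:]
        return tf.where(x >= kth, x, tf.zeros_like(x))

# Toy autoencoder with a 256-unit bottleneck and 20 winners (~8% sparsity)
inputs = tf.keras.Input(shape=(784,))
h = layers.Dense(256, activation="relu")(inputs)
code = KWinners(k=20)(h)
h2 = layers.Dense(256, activation="relu")(code)
outputs = layers.Dense(784, activation="sigmoid")(h2)
model = tf.keras.Model(inputs, outputs)
```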

I was thinking the same thing with respect to NLP: with a transformer architecture we would be able to encode an entire language corpus (latent representation) as SP SDRs or TP SDRs. I came across this on GitHub: https://github.com/alexyalunin/transformer-autoencoder.


Very interesting! NLP is definitely one of the underexplored potentials that HTM could handle.
I wonder what an HTM system can do with properly encoded NLP data. :smiley:

Hi, could you provide some textual explanation of your work? It’s not easy to get the idea directly from the code.

I’m also slightly interested in this topic right now, so I’d appreciate it! Thanks in advance :slight_smile:

You can think of the first part as a stock-standard autoencoder (https://blog.keras.io/building-autoencoders-in-keras.html). The next part encodes the output of the autoencoder with an RDSE, and finally feeds the result to the SP.
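A minimal sketch of that second and third stage, using the community htm.core bindings (the latent size, RDSE parameters, and SP dimensions here are assumptions, not the repo's actual settings):

```python
import numpy as np
from htm.bindings.sdr import SDR
from htm.bindings.encoders import RDSE, RDSE_Parameters
from htm.bindings.algorithms import SpatialPooler

# 1) Autoencoder bottleneck output for one image (stand-in random floats)
latent = np.random.rand(16).astype(np.float32)

# 2) Encode each latent component with an RDSE, then concatenate the
#    per-component SDRs into one input SDR.
params = RDSE_Parameters()
params.size = 200          # bits per latent component
params.sparsity = 0.05     # ~10 active bits per component
params.resolution = 0.01
rdse = RDSE(params)

encoding = SDR(params.size * len(latent))
encoding.concatenate([rdse.encode(float(v)) for v in latent])

# 3) Feed the combined encoding through the Spatial Pooler
sp = SpatialPooler(inputDimensions=encoding.dimensions,
                   columnDimensions=[1024],
                   globalInhibition=True,
                   localAreaDensity=0.02)
active = SDR(sp.getColumnDimensions())
sp.compute(encoding, True, active)   # learn=True
print("active SP columns:", active.sparse)
```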
