I’m new to HTM. I saw Siraj’s new video explaining how HTM works this morning. After finding some resources detailing how HTM works, it seems that HTM has trouble dealing with dense data (HTM uses SDRs).
So I wrote a special fully-connected layer that receives dense data as input (ordinary tensors used in deep learning) and trains (and operates) itself the same way a Spatial Pooling layer in HTM does: by comparing inputs to its weights and updating the weights according to the activation state of each input.
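A minimal sketch of what such a layer might look like, assuming a k-winners-take-all competition and a Hebbian-style update (the class name, parameters, and exact update rule here are my own assumptions, not the poster's actual code):

```python
import numpy as np

class DenseSP:
    """Hypothetical 'dense Spatial Pooler' layer: accepts a dense input
    vector, activates the k columns whose weights best match it, and
    moves the winners' weights toward the input (Hebbian-style)."""

    def __init__(self, n_in, n_cols, sparsity=0.05, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_cols, n_in))      # one weight vector per column
        self.k = max(1, int(n_cols * sparsity))  # number of winning columns
        self.lr = lr

    def forward(self, x, learn=True):
        overlap = self.w @ x                         # compare input to each column's weights
        winners = np.argsort(overlap)[-self.k:]      # k columns with highest overlap
        if learn:
            # Pull the winners' weights toward the input, so connections to
            # active inputs are reinforced and connections to inactive ones decay.
            self.w[winners] += self.lr * (x - self.w[winners])
        out = np.zeros(self.w.shape[0])
        out[winners] = 1.0                           # sparse binary output (an SDR)
        return out
```

Because the output is a fixed-sparsity binary vector, it can be fed directly to a downstream fully-connected classifier, which matches the experiment described here.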
To test whether my layer works, I connected it to a fully-connected layer to produce a classifier (the FC layer is trained using backpropagation). After training, my classifier classifies MNIST digits correctly around 40% of the time.
Is this idea of doing HTM using dense data worth anything? Does anyone have any ideas on how I could improve my design?
Here is the code. It is implemented using my own deep learning library (available here).
Hi Marty! Thanks for joining our community. You have some misconceptions about HTM. I’ll try to direct you to some resources that can help you understand better.
SDRs are very important. HTM is a theory of intelligence in the brain. SDRs represent neuronal connectivity, and this activity is sparse in the brain. The sparsity is important. For more info, see this video.
We are operating in very high dimensional spaces. When representations are sparse, they are easier to compare with each other to derive similarity. Sparsity is a requirement of intelligence, as we understand it in the brain. Making it dense will break everything.
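To illustrate why sparse representations are easy to compare: similarity between two SDRs is just their overlap (the count of shared active bits), and two unrelated SDRs almost never overlap by chance. A toy example with made-up dimensions (2048 bits, 40 active, i.e. ~2% sparsity):

```python
import numpy as np

def random_sdr(n, k, rng):
    """A random binary vector with exactly k of n bits active."""
    sdr = np.zeros(n, dtype=np.int64)
    sdr[rng.choice(n, size=k, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return int(np.sum(a & b))

rng = np.random.default_rng(0)
a = random_sdr(2048, 40, rng)
b = random_sdr(2048, 40, rng)

print(overlap(a, a))  # 40: identical SDRs share all active bits
print(overlap(a, b))  # near 0: unrelated random SDRs barely collide
```

With only 40 of 2048 bits active, the expected chance overlap between two random SDRs is under one bit, so any substantial overlap is a strong signal of genuine similarity. A dense representation loses exactly this property.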
Honestly, I don’t think it will work at all. I think density breaks HTM. But I’ve been wrong before.
Anyway, thanks for the work you put into this (although the gist you gave is broken).
So you are encoding images into SDRs in order to feed them into the Spatial Pooler? If so, that makes sense. You’ve just created an encoder. I think I understand what you’ve done, and a lot of other people have done the same with the SP. You have to understand the temporal aspect of HTM as well. Data changing over time, and the ability to memorize sequences of complex spatial data in a way that generalizes… that’s the power.
I started out by trying to solve the problem of encoding complex data into SDRs (as mentioned in ep. 6). So I decided I’d implement an SP that accepts dense inputs so no encoder is needed. The SP will deal with how to encode the data by itself.
I was planning to stack DSPs (Dense SPs) together, and they should behave like normal SPs. Theoretically, I could then replace all layers in HTM with dense versions of them and it should still work, and also encode spike timing as a bonus.
But currently I’m just making a PoC to prove that DSPs can work, and I made my DSP output SDRs because I can’t get the DSP to converge enough for the following FC layer to do anything useful.
Are similar methods used to encode images? I didn’t find anything when searching on Google.
I think maybe you’re thinking of creating an autoencoder, which has been talked about before here?
So will a “Dense SP” output a dense representation? If so, no TM will work on it. Also I don’t understand what you mean by stacking. Data -> encoding -> SP -> TM -> classification. There is no stacking in this model until you start talking about layers and columns.
At this point I’m pretty sure you’re talking about some type of autoencoder that can convert any dense representation into a sparse one, before sending data to the SP. That’s not a “Dense SP”, it is just an encoder. Do you understand why I say that?