Application of HTM in today’s ML frameworks


#41

Ah, I see there was a misunderstanding at the time. I’m sorry.
Was he saying SP makes sparse connections to the layer after?
I thought it was from the layer before (the input layer for the SP layer).
If not, I’m even more confused. :confused:
EDIT: Wouldn’t the connections turn out to be at least as dense as the layer after anyway? I don’t think I get it.


#42

Please be patient. My job is to explain it all to you, but I have to understand it first. :slight_smile:


#43

The funny thing is that DL is actually moving toward more “binary” activation networks, and it seems to have started working lately: https://arxiv.org/abs/1812.11800v2


#44

Thanks!

I’m trying to, at least. Note the “not exactly sure what I’m doing” comment. I tried to approximate sparsity enforcement with small convolutions, but I have another idea I want to try that’s more like a competitive attractor network. Think: gravity bringing things together while electromagnetism (or dark energy) keeps them apart.

I don’t think I can do a global k-winners activation with local algorithms, but I should be able to enforce a max.
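For what it’s worth, a per-block max can be enforced with purely local operations. A toy NumPy sketch (my own illustration of the idea, not the poster’s actual approach — the function name and block size are made up):

```python
import numpy as np

def local_max(acts, block=4):
    """Keep only the largest activation within each local block of units.

    A purely local stand-in for a global k-winners rule: each block needs
    no information from outside itself, and overall sparsity is bounded
    at one active unit per block.
    """
    acts = np.asarray(acts, dtype=float)
    out = np.zeros_like(acts)
    for start in range(0, acts.size, block):
        end = min(start + block, acts.size)
        i = start + int(np.argmax(acts[start:end]))  # local winner
        out[i] = acts[i]
    return out

x = np.array([0.1, 0.9, 0.3, 0.2, 0.5, 0.4, 0.8, 0.7])
print(local_max(x, block=4))  # at most one nonzero value per block of 4
```

The trade-off versus global k-winners: sparsity is only bounded per block, and exactly one unit per block always fires even if its activation is weak.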


#45

We are working on a complete paper with code (weeks not months), you’re going to like it. I would love to see what sparse activations look like running when compared to dense activations. This is going to be a fun year. :nerd_face:


#46

Looking forward to it! I’ll try to get those sparse activations working right then.


#47

There is a lot of confusion in this thread, so I am working hard to run these new models and understand them before we release this paper. I’ll also have a video coming soon that further explains the model setup in the paper. So stay tuned!

In the meantime, keep in mind these 3 things we can use from Spatial Pooling to enforce sparsity in a neural network:

  • potential pools to enforce weight sparsity
  • k-winners to represent a global minicolumn competition (inhibition)
  • boosting to enforce homeostasis in the layer (must compute active duty cycles for this)
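A rough NumPy sketch of those three mechanisms applied to one layer (parameter values, the boosting formula, and the duty-cycle placeholder are my own illustration, not the paper’s setup):

```python
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations, zero the rest (global competition)."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]  # indices of the k largest values
    out[top] = x[top]
    return out

rng = np.random.default_rng(0)

# 1. Potential pools: each output unit may only connect to a random
#    subset of the inputs, enforcing weight sparsity.
n_in, n_out, pool_frac = 16, 8, 0.5
pool_mask = rng.random((n_out, n_in)) < pool_frac
weights = rng.standard_normal((n_out, n_in)) * pool_mask

x = rng.random(n_in)
overlap = weights @ x

# 3. Boosting: scale overlaps up for units whose recent duty cycle is
#    below the target activity level, before the competition runs.
duty_cycle = rng.random(n_out)   # placeholder for a running average
target = 2 / n_out               # desired fraction of active units
boost = np.exp(5.0 * (target - duty_cycle))

# 2. k-winners: only the top-k boosted overlaps stay active.
active = k_winners(overlap * boost, k=2)
print(np.count_nonzero(active))  # prints 2
```

In a real layer the duty cycle would be an exponential moving average of each unit’s activity across batches, so chronically silent units get boosted back into the competition.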

#48

@rhyolight, what do you mean by noise tolerance? What kind of noise? Is this paper about image classification?


#49

Sparsity seems to help quite a bit with additive noise, whether random or structured. Yes, images. I’ll show some examples once the paper is out.


#50

Cool, I just submitted a paper on NN noise tolerance to IJCNN, so I’d be curious to look at your results. Where did you submit it to?