Application of HTM in today’s ML frameworks

Ah, I see there was a misunderstanding at the time. I’m sorry.
Was he saying the SP makes sparse connections to the layer after?
I thought it was from the layer before (the input layer for the SP layer).
If not, I’m even more confused. :confused:
EDIT: Wouldn’t the connections turn out to be at least as dense as the layer after anyway? I don’t think I get it.

Please be patient. My job is to explain it all to you, but I have to understand it first. :slight_smile:


The funny thing is that DL is actually moving toward more “binary” activation networks, and it seems to have started working lately: https://arxiv.org/abs/1812.11800v2
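For context, here is a generic sketch of the binary-activation idea (my own illustration in PyTorch, not the linked paper’s exact method): the forward pass uses a hard 0/1 activation, and the backward pass uses a straight-through estimator so gradients can still flow.

```python
import torch

class BinaryAct(torch.autograd.Function):
    """Hard 0/1 activation with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()  # binarize the activation

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass the gradient only where the
        # input is in [-1, 1], zero it elsewhere.
        return grad_output * (x.abs() <= 1).float()

# Usage: y = BinaryAct.apply(some_tensor)
```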

Thanks!

I’m trying to, at least. Note the “not exactly sure what I’m doing” comment. I tried to approximate sparsity enforcement with small convolutions, but I have another idea I want to try that’s more like a competitive attractor network. Think: gravity bringing things together while electromagnetism (or dark energy) keeps them apart.

I don’t think I can do a global k-winners activation with local algorithms, though I should be able to enforce a max.
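For what it’s worth, here is a rough sketch (assuming PyTorch; my own framing of the “enforce a max” idea, not the poster’s actual code) of a purely local winner-take-all: suppress every activation that is not the maximum within its own spatial neighborhood, with no global top-k needed.

```python
import torch
import torch.nn.functional as F

def local_max_mask(x, kernel_size=3):
    """Keep only activations that are the maximum of their local window."""
    # x: (batch, channels, height, width) activation map.
    pooled = F.max_pool2d(x, kernel_size, stride=1, padding=kernel_size // 2)
    # Zero everything except local maxima: a local form of winner-take-all.
    return x * (x == pooled).float()
```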


We are working on a complete paper with code (weeks, not months); you’re going to like it. I would love to see what sparse activations look like running, compared to dense activations. This is going to be a fun year. :nerd_face:


Looking forward to it! I’ll try to get those sparse activations working right by then.


There is a lot of confusion in this thread, so I am working hard to run these new models and understand them before we release this paper. A video to support the paper is coming soon and will further explain its model setup. So stay tuned!

In the meantime, keep in mind these three mechanisms we can borrow from Spatial Pooling to enforce sparsity in a neural network (a rough sketch of all three follows the list):

  • potential pools to enforce weight sparsity
  • k-winners to represent a global minicolumn competition (inhibition)
  • boosting to enforce homeostasis in the layer (must compute active duty cycles for this)
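
Here is a minimal sketch of how those three mechanisms might look in a modern framework (assuming PyTorch; the class names, hyperparameters, and boost formula details are my own illustration, not Numenta’s released API). A fixed random weight mask plays the role of the potential pool, a global top-k implements the minicolumn competition, and an exponential moving average of activity drives the boost factors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Linear):
    """Linear layer whose weights are restricted to a fixed 'potential pool'."""

    def __init__(self, in_features, out_features, weight_sparsity=0.5):
        super().__init__(in_features, out_features)
        # Each unit may only ever connect to a random subset of its inputs.
        mask = (torch.rand(out_features, in_features) < weight_sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.linear(x, self.weight * self.mask, self.bias)

class KWinnersBoost(nn.Module):
    """Global k-winners activation with duty-cycle boosting (homeostasis)."""

    def __init__(self, n_units, k, boost_strength=1.0, duty_alpha=0.01):
        super().__init__()
        self.k = k
        self.boost_strength = boost_strength
        self.duty_alpha = duty_alpha
        self.register_buffer("duty_cycle", torch.zeros(n_units))

    def forward(self, x):
        # Boosting: units that have been active less often than the target
        # duty cycle (k / n) get their scores scaled up before the competition.
        target = self.k / x.shape[1]
        boost = torch.exp(self.boost_strength * (target - self.duty_cycle))
        scores = x * boost
        # Global competition (inhibition): keep the k largest boosted scores
        # per sample, zero everything else.
        idx = scores.topk(self.k, dim=1).indices
        out = torch.zeros_like(x).scatter(1, idx, x.gather(1, idx))
        if self.training:
            # Active duty cycles: exponential moving average of how often
            # each unit wins the competition.
            active = (out != 0).float().mean(dim=0)
            self.duty_cycle.mul_(1 - self.duty_alpha).add_(self.duty_alpha * active)
        return out
```

A layer would then be something like `SparseLinear(784, 128)` followed by `KWinnersBoost(128, k=20)` in place of a dense linear plus ReLU.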

@rhyolight, what do you mean by noise tolerance? What kind of noise? Is this paper about image classification?

Sparsity seems to help quite a bit with additive noise, whether random or structured. Yes, images. I’ll show some examples once the paper is out.
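As a concrete illustration of the kind of corruption being described (my own sketch, assuming PyTorch tensors with pixel values in [0, 1]; not the paper’s exact protocol):

```python
import torch

def add_random_noise(images, noise_level=0.1):
    """Replace a random fraction of pixel values with uniform noise."""
    # images: any float tensor scaled to [0, 1].
    mask = torch.rand_like(images) < noise_level
    return torch.where(mask, torch.rand_like(images), images)
```

Robustness can then be tracked as, e.g., classification accuracy while `noise_level` grows.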


Cool, I just submitted a paper on NN noise tolerance to IJCNN, so I’d be curious to look at your results. Where did you submit it to?


The paper is out:
