Deep neural networks need boosting

I’ve been going back to the basics of neural networks before I try some harder ideas, and I added some debugging tools to a simple fully connected autoencoder trained on 16x16 renderings of all of Unicode (16x16 image -> 1000 neurons -> 16x16 image that should match the original).
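For concreteness, the architecture is roughly this (a minimal sketch, not the exact code I ran; the ReLU and other training details are assumptions):

```python
import torch
import torch.nn as nn

class GlyphAutoencoder(nn.Module):
    """Sketch: 16x16 glyph -> 1000 hidden units -> 16x16 reconstruction."""
    def __init__(self, hidden=1000):
        super().__init__()
        self.encoder = nn.Linear(16 * 16, hidden)
        self.decoder = nn.Linear(hidden, 16 * 16)

    def forward(self, x):
        # x: (batch, 16, 16) or (batch, 256); flatten, encode, decode
        flat = x.reshape(x.shape[0], -1)
        code = torch.relu(self.encoder(flat))
        return self.decoder(code).reshape(x.shape)
```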

I applied k-winners at 2% to mimic global inhibition, but it wasn’t quite enough. The bottom-right window held a good sparse representation and gave similar performance to the non-sparse representation (possibly slightly better), but the same neurons were active nearly all the time. Boosting is needed if I want a neural network to learn more than a few mostly-different things.
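By “k-winners at 2%” I mean something along these lines (a minimal sketch, not the exact code I used):

```python
import torch

def k_winners(x, density=0.02):
    """Keep the top `density` fraction of units in each sample and zero the rest.
    A rough stand-in for global inhibition (no boosting yet)."""
    k = max(1, int(round(density * x.shape[1])))
    _, top_idx = x.topk(k, dim=1)
    mask = torch.zeros_like(x).scatter_(1, top_idx, 1.0)
    return x * mask
```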

Are there any libraries for boosting in PyTorch, or should I try to make something myself?


First of all, outstanding work! It tells us we are on the right track if you are trying to do this the same way. See the k-winners PyTorch module in htmresearch that @lscheinkman recently committed; it contains boosting logic. We will eventually isolate this into an installable PyTorch module, but in the meantime you can probably figure out how to use the code yourself.


Thanks! I looked through the k-winners module. I like that it includes boosting, but I think the boosting should be separate from the k-winners function.

I think it should be:

activations -> boosting -> k-winners

rather than:

activations -> boosting&k-winners

That way, the backward() function can be used to determine which boosted neurons won or lost.

Boosting and k-winners might be intertwined enough that a combined boosting&k-winners version is worth having, but I personally like keeping things as separate as possible. I think I’ll try separating them out; a rough sketch of what I mean is below.
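Roughly this (the parameter names and the duty-cycle smoothing constant are my own placeholders; the exponential rule follows the HTM spatial pooler’s boosting):

```python
import torch
import torch.nn as nn

class Boost(nn.Module):
    """Standalone boosting step: scale activations by per-unit boost factors
    derived from duty cycles, before a separate k-winners step."""
    def __init__(self, n_units, target_density=0.02, boost_strength=1.0, alpha=0.01):
        super().__init__()
        self.target_density = target_density
        self.boost_strength = boost_strength
        self.alpha = alpha  # smoothing factor for the duty-cycle moving average
        self.register_buffer("duty_cycle", torch.full((n_units,), target_density))

    def forward(self, x):
        # Units that win less often than the target get boosted above 1,
        # frequent winners get scaled below 1.
        boost = torch.exp(self.boost_strength * (self.target_density - self.duty_cycle))
        return x * boost

    @torch.no_grad()
    def update(self, winners_mask):
        # winners_mask: (batch, n_units) 0/1 mask of which units survived k-winners
        batch_duty = winners_mask.float().mean(dim=0)
        self.duty_cycle.mul_(1 - self.alpha).add_(self.alpha * batch_duty)

# Pipeline: activations -> boosting -> k-winners
# boosted = boost(activations)
# winners = k_winners(boosted, density=0.02)  # k_winners from the earlier sketch
# boost.update(winners != 0)
```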

Also, a few areas for exploration: I think inhibitory synapses may be useful for ensuring that the variance of the output layer across all inputs matches the variance of the input layer across all inputs. And different learning rules for the activated vs. de-activated neurons after k-winners could mimic boosting while also training the deactivated neurons to seek out different input (a sketch of one way to do that is below). I’ll have to compare those against the boosting+k-winners module.
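For the second idea, one possibility (purely a sketch; the loser_grad_scale name and value are made up) is a k-winners autograd Function whose backward gives losing units a small gradient of their own instead of none:

```python
import torch

class KWinnersSplitGrad(torch.autograd.Function):
    """k-winners whose backward treats winners and losers differently:
    winners get the normal gradient, losers get a scaled-down gradient so
    deactivated units still receive some learning signal."""

    @staticmethod
    def forward(ctx, x, k, loser_grad_scale=0.1):
        _, top_idx = x.topk(k, dim=1)
        mask = torch.zeros_like(x).scatter_(1, top_idx, 1.0)
        ctx.save_for_backward(mask)
        ctx.loser_grad_scale = loser_grad_scale
        return x * mask

    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        # Straight-through style: winners pass the gradient unchanged,
        # losers pass a scaled gradient instead of zero.
        grad_in = grad_out * (mask + (1.0 - mask) * ctx.loser_grad_scale)
        return grad_in, None, None

# usage: y = KWinnersSplitGrad.apply(activations, 20, 0.1)
```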

Edit: actually, it seems like boosting should subclass PyTorch’s Optimizer class. A rough skeleton of that idea is below.
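Something like this, maybe (a sketch of the idea only; the names and hyperparameters are made up, and the training loop would have to call observe() with the k-winners masks):

```python
import torch

class BoostUpdater(torch.optim.Optimizer):
    """Boosting as an Optimizer subclass: the 'parameters' are per-unit boost
    tensors, and step() recomputes them from recorded duty cycles."""
    def __init__(self, boost_tensors, boost_strength=1.0, target_density=0.02, alpha=0.01):
        defaults = dict(boost_strength=boost_strength,
                        target_density=target_density, alpha=alpha)
        super().__init__(boost_tensors, defaults)

    @torch.no_grad()
    def observe(self, boost, winners_mask):
        """Update the duty cycle for the layer owning `boost` from a batch of k-winners masks."""
        state = self.state[boost]
        duty = state.setdefault(
            "duty_cycle", torch.full_like(boost, self.defaults["target_density"]))
        duty.mul_(1 - self.defaults["alpha"]).add_(
            self.defaults["alpha"] * winners_mask.float().mean(dim=0))

    @torch.no_grad()
    def step(self, closure=None):
        # Recompute each boost tensor from its duty cycle (exponential boosting rule).
        for group in self.param_groups:
            for boost in group["params"]:
                duty = self.state[boost].get("duty_cycle")
                if duty is not None:
                    boost.copy_(torch.exp(
                        group["boost_strength"] * (group["target_density"] - duty)))
```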

Looks like I got a lot of stuff to try.


Wait a minute…

It was training against a randomized tensor. Of course it didn’t learn.

Here we go:

However, there are still a lot of neurons in the middle (hidden) tensors that are always turned off, so boosting should help with those.

Edit: Actually, Adam with AMSGrad looks promising. I’m going to look into that.
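For reference, AMSGrad is just a flag on PyTorch’s Adam (the learning rate here is a placeholder):

```python
import torch

model = GlyphAutoencoder()  # e.g., the autoencoder sketch from the first post
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, amsgrad=True)
```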

Here’s what it’s doing:

Results aren’t as impressive individually, but I can see neurons for similar Unicode characters being reused to guess new ones, which is exactly what I want.


So essentially, are you making a DL version of the SP?
Because that’s nearly identical to what I made.
I’ve tried a bunch of variations on boosting, but I settled on adding another cost function that enforces sparsity through backpropagation (a minimal sketch of what I mean is below).
Somehow, directly affecting the activations of the columns makes the network perform worse than the backprop cost does.
It would be more biologically plausible to affect the activations, though.
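Concretely, something along these lines (a minimal sketch, not my exact code; the target density and penalty weight are placeholders):

```python
import torch
import torch.nn.functional as F

def sparse_recon_loss(recon, target, hidden, target_density=0.02, weight=1e-2):
    """Reconstruction loss plus a penalty that pushes each hidden unit's
    mean activity over the batch toward `target_density`."""
    recon_loss = F.mse_loss(recon, target)
    mean_act = hidden.abs().mean(dim=0)                    # per-unit mean activation
    sparsity_loss = (mean_act - target_density).abs().mean()
    return recon_loss + weight * sparsity_loss
```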
P.S. It feels weird to see the Korean alphabet here. Just saying.
