How Can We Be So Dense? The Benefits of Using Highly Sparse Representations

Hi, I am very interested in this paper and its good results on noisy data, and I would like to implement it myself. However, I am having trouble implementing boosting and updating the duty cycles.
For now, I initialize the duty cycle to \frac{k}{|y^l|}.
In each iteration, I compute a tensor with the output shape recording how often each neuron was active in the batch, average it over the batch, and then update the duty cycle as d = (1-\alpha)d + \alpha \cdot \text{activity}.
I have probably misunderstood what exactly is done here, so would anyone be willing to share their code, or a snippet of it, if you have it in PyTorch?
I would really like to use the approach from this paper in other computer vision tasks, such as feature extraction for clustering and autoencoders.

The duty cycle is initialized to zeros and updated on every training iteration using this formula:

dutyCycle = \frac{dutyCycle \times \left( period - batchSize \right) + newValue}{period}
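The formula above can be sketched in PyTorch as follows. This is a minimal illustration, not the library code; the capping of the period at the number of samples seen so far is an assumption that makes early updates move faster, and the function and parameter names are mine:

```python
import torch

def update_duty_cycle(duty_cycle, active, samples_seen, duty_cycle_period=1000):
    """Update per-unit duty cycles from a batch of binary activations.

    duty_cycle:   tensor of shape (n_units,), running activation frequency
    active:       tensor of shape (batch_size, n_units), 1 where a unit won
    samples_seen: total number of training samples processed so far
    """
    batch_size = active.shape[0]
    # Assumption: the effective period grows with the data seen, capped at
    # duty_cycle_period, so the estimate adapts quickly early in training.
    period = min(duty_cycle_period, samples_seen)
    # newValue = how many times each unit was active in this batch
    new_value = active.sum(dim=0)
    return (duty_cycle * (period - batch_size) + new_value) / period
```

Note that, unlike the exponential moving average d = (1-\alpha)d + \alpha \cdot \text{activity}, this update weights the batch by its size relative to the period rather than by a fixed \alpha.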

You can find all the code used in the paper in the nupic.torch repository. The specific code that updates the duty cycle is part of the KWinners class.
