Evading adversarial attacks with sparse representations and its implications

Around last summer, I took part in an online science fair where I presented some research on the benefits of sparse representations in neural networks. Using this paper and nupic.torch as a starting point, I obtained some jaw-dropping (at least I found them shocking) results that point towards the conclusion that sparse representations are very important for producing intelligent systems. While the potential benefits of sparse representations against adversarial attacks were discussed in the aforementioned paper, they were not experimentally verified, nor am I aware of similar results in other papers. Additionally, I put the networks to the test against occluded images to gain insight into whether sparse representations do indeed learn meaningful representations. I am pleased to introduce my research paper, an extension of the paper listed above (it is also my first paper, so please go easy on me :slight_smile:). I look forward to hearing your thoughts.
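For anyone unfamiliar with adversarial attacks, here is a minimal sketch of the idea behind the fast gradient sign method (FGSM) on a toy logistic-regression classifier. This is my own illustrative example, not code from the paper; the weights and inputs are made up, and real attacks target full neural networks via autograd rather than a hand-derived gradient:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" linear classifier: predict 1 if w.x > 0 (made-up weights).
w = np.array([2.0, -3.0, 1.0])

x = np.array([0.2, -0.1, 0.4])  # clean input, correctly classified as 1
y = 1.0                         # true label

# Gradient of the cross-entropy loss with respect to the INPUT x
# (for logistic regression this is (p - y) * w, derived by hand here).
p = sigmoid(w @ x)
grad_x = (p - y) * w

# FGSM: take a small step in the direction that increases the loss most,
# bounded in L-infinity norm by epsilon.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

pred_clean = int(w @ x > 0)      # 1: correct
pred_adv = int(w @ x_adv > 0)    # 0: the tiny perturbation flips the prediction
```

The point of the critique threads around robustness is that dense networks are easily fooled by such imperceptibly small perturbations, whereas sparse representations may blunt them.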


Pretty cool!

I have some critiques:

  • A general comment on your writing: you should focus on “generalization” and “resistance to adversarial attacks”, since these are the big contributions you’re making.
  • I think your abstract is too long and not directly to the point. Several of the sentences would be better in the introduction, where you can explain the background and motivation for this work.
  • I can’t read the confusion matrices because the images are too small and compressed, and they have a lot of illegible text on them. I think it would be better if you just color-coded them.
  • I don’t know why you plot the “loss” function (figure 3), especially when right below it you plot the test accuracy which is IMO much more informative. Are the loss functions even really comparable between sparse and dense networks? What information does that image convey? What do you want the readers to think after looking at the loss function? Is this common or expected in machine learning papers?
  • You probably don’t need to go so in-depth about the Street View House Numbers (SVHN) dataset, especially since you couldn’t get either of the networks to work on it.

Overall, I think this is good work with important results.


Thank you for your feedback! I will try to take these points into consideration when writing another paper in the future.