I’ve been watching a very insightful YouTube video about adversarial examples and deep neural networks.

https://youtu.be/CIfsB_EYsVI

Obviously the mammalian brain processes information in a rather different way; I guess it does extensive unsupervised feature learning first before attempting to classify.

Also, I have been doing some HTML5 practice and have some WHT information that I may add to over the next few days. I can’t say it works in any browser other than Pale Moon, because I didn’t use jQuery.

http://md2020.eu5.org/wht1.html

Also Uber have an interesting article on measuring the complexity of problems posed to deep neural networks:

https://eng.uber.com/intrinsic-dimension/

Uber have brought on board some very creative and capable talent recently.

I think the “Fastfood” algorithm is more complicated than necessary, just to avoid having to cite a hobbyist. I would say Google themselves use the faster version and leave the competition with the slow one.

Interesting. I missed that one.

Understanding Generative Adversarial Networks:

In the Uber paper they use dimension-increasing random projections:

https://arxiv.org/abs/1804.08838
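The core idea of the paper can be sketched in a few lines of NumPy. This is my own illustrative sketch, not Uber’s code, and the sizes are made up: only a small d-dimensional vector is trained, and a fixed random matrix lifts it into the network’s full D-dimensional weight space.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1000   # full parameter count of the network (illustrative)
d = 50     # low "intrinsic" dimension we actually train in

# Fixed random projection from the small space up to the big one.
# Scaling by 1/sqrt(d) keeps the lifted vector's magnitude reasonable.
P = rng.standard_normal((D, d)) / np.sqrt(d)
theta0 = rng.standard_normal(D)   # frozen random initialisation

def full_params(theta_d):
    # Only theta_d (length d) is trained; P lifts it to all D weights.
    return theta0 + P @ theta_d

theta_d = np.zeros(d)      # training starts at the frozen init
w = full_params(theta_d)
print(w.shape)             # (1000,)
```

An optimiser would update `theta_d` only, so the trainable parameter count drops from D to d.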

If you are willing to use evolution, you could try Fourier synthesis for the dimension increase. That would give locally repeating structure that might work well with natural images, which also have local structure.

You could use the FFT. That would force you to use a lot of parameters though, rather than compactly representing the pattern.

You could use sums of a·sin(x·t + b), but then you are faced with tons of calls to the time-consuming sin() function.
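That sum-of-sines lift might look something like this minimal NumPy sketch (the function name and parameter layout are mine, purely for illustration). Each (amplitude, frequency, phase) triple is one compact "gene", and the cost of all those sin() evaluations is exactly the problem mentioned above.

```python
import numpy as np

def fourier_lift(amps, freqs, phases, D):
    """Lift a handful of (amplitude, frequency, phase) triples to a
    length-D vector by summing sinusoids: a compact, smooth,
    dimension-increasing map (illustrative sketch)."""
    t = np.arange(D)
    # Each triple contributes a * sin(f * t + b). The repeated sin()
    # evaluations over all D positions are what makes this slow.
    return sum(a * np.sin(f * t + b) for a, f, b in zip(amps, freqs, phases))

# Two triples (6 numbers) expand to a 1000-dimensional vector.
params = [(1.0, 0.05, 0.0), (0.5, 0.20, 1.3)]
amps, freqs, phases = zip(*params)
v = fourier_lift(amps, freqs, phases, D=1000)
print(v.shape)   # (1000,)
```

An evolutionary search would mutate the triples directly, keeping the genome tiny regardless of D.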

Instead you might try using this quadrature oscillator algorithm:

https://groups.google.com/forum/#!topic/comp.dsp/GFAzbora6bE

You get more than you need, because you get a sin and a cos output at the same time. I’m sure you can set the initial phase so you only need one of sin() or cos(), unless you want to draw circles in 2D or something like that.
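A standard coupled-form quadrature oscillator works like the sketch below; the recurrence in the comp.dsp thread may differ in detail, so treat this as one common variant rather than that exact algorithm. After two sin/cos calls up front, each sample costs only four multiplies and two adds, and both the sin and cos streams fall out together.

```python
import math

def quadrature_oscillator(freq, n, sample_rate=1.0, phase=0.0):
    """Generate n samples of sin and cos at the given frequency using
    only multiplies and adds per sample (a 2x2 rotation per step)."""
    w = 2.0 * math.pi * freq / sample_rate
    cw, sw = math.cos(w), math.sin(w)        # computed once, up front
    c, s = math.cos(phase), math.sin(phase)  # set the initial phase here
    sin_out, cos_out = [], []
    for _ in range(n):
        sin_out.append(s)
        cos_out.append(c)
        # Rotate the (c, s) point by angle w; no sin()/cos() calls needed.
        c, s = c * cw - s * sw, s * cw + c * sw
    return sin_out, cos_out

s_out, c_out = quadrature_oscillator(freq=0.01, n=100)
```

This exact-rotation form is numerically stable in amplitude for short runs; for very long runs people sometimes renormalise c² + s² back to 1 occasionally.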

Anyway the Uber paper shows you can achieve very high levels of compression for deep neural networks.

I think with a dimension-increasing random projection you can achieve relatively more precision in some of the dimensions at the expense of others.

That “at the expense of others” effect would explain why they are not able to completely match training on the full set of weights. I’m sure there are a few ways around that problem, like doing a second correction round.

I see I mentioned it before:

https://randomprojectionai.blogspot.com/2018/02/neural-network-weight-sharing-using.html

However, I don’t wish to steal any of Uber’s thunder.