Unsupervised feature learning followed by supervised readout layer (paper)

I see Krotov and Hopfield have this paper out:
https://arxiv.org/abs/1806.10181
where they combine biologically plausible unsupervised feature learning with a supervised readout layer.
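
A toy sketch of that two-stage idea (my own illustration, not the exact learning rule from the paper): Hebbian-style unsupervised feature learning with Oja's rule, then a supervised softmax readout trained on the frozen features. All dimensions and the toy data are made up just to show the structure.

```java
import java.util.Random;

// Toy sketch: unsupervised Hebbian-style feature learning (Oja's rule, NOT the
// paper's exact rule), then a supervised softmax readout on the frozen features.
public class TwoStageSketch {
    static final int IN = 16, HIDDEN = 8, OUT = 2;
    static final Random rnd = new Random(42);

    public static void main(String[] args) {
        double[][] w = randMatrix(HIDDEN, IN);   // unsupervised feature weights
        double[][] r = randMatrix(OUT, HIDDEN);  // supervised readout weights

        // Toy data: a shared signal on the first half of the inputs plus noise;
        // the label is just the sign of that signal.
        double[][] x = new double[200][IN];
        int[] y = new int[200];
        for (int i = 0; i < x.length; i++) {
            double signal = rnd.nextGaussian();
            for (int j = 0; j < IN; j++)
                x[i][j] = 0.3 * rnd.nextGaussian() + (j < IN / 2 ? signal : 0.0);
            y[i] = signal > 0 ? 1 : 0;
        }

        // Stage 1: unsupervised feature learning with Oja's rule (labels never used).
        for (int epoch = 0; epoch < 20; epoch++)
            for (double[] xi : x)
                for (int h = 0; h < HIDDEN; h++) {
                    double a = dot(w[h], xi);
                    for (int j = 0; j < IN; j++)
                        w[h][j] += 0.01 * a * (xi[j] - a * w[h][j]);
                }

        // Stage 2: freeze w, train only the readout with plain gradient descent.
        for (int epoch = 0; epoch < 50; epoch++)
            for (int i = 0; i < x.length; i++) {
                double[] h = features(w, x[i]);
                double[] p = softmax(r, h);
                for (int o = 0; o < OUT; o++) {
                    double err = (o == y[i] ? 1.0 : 0.0) - p[o];
                    for (int k = 0; k < HIDDEN; k++) r[o][k] += 0.05 * err * h[k];
                }
            }

        // Training accuracy of the readout on the frozen features.
        int correct = 0;
        for (int i = 0; i < x.length; i++) {
            double[] p = softmax(r, features(w, x[i]));
            if ((p[1] > p[0] ? 1 : 0) == y[i]) correct++;
        }
        System.out.println("train accuracy: " + (double) correct / x.length);
    }

    static double[] features(double[][] w, double[] x) {
        double[] h = new double[w.length];
        for (int k = 0; k < w.length; k++) h[k] = dot(w[k], x);
        return h;
    }

    static double[][] randMatrix(int rows, int cols) {
        double[][] m = new double[rows][cols];
        for (double[] row : m)
            for (int j = 0; j < cols; j++) row[j] = rnd.nextGaussian() * 0.1;
        return m;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static double[] softmax(double[][] r, double[] h) {
        double[] z = new double[r.length];
        double sum = 0;
        for (int o = 0; o < r.length; o++) { z[o] = Math.exp(dot(r[o], h)); sum += z[o]; }
        for (int o = 0; o < r.length; o++) z[o] /= sum;
        return z;
    }
}
```

The point is just the division of labour: the labels never touch the feature weights, only the readout.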

I also have a multi-threaded numerical optimizer in Java that I am trying out:
https://github.com/S6Regen/Cisco
I’d like to try that code out on one of AMD’s new server boards with 2 CPUs, 64 cores and 128 threads, though that is wishful thinking.
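
The part that benefits from a high core count is embarrassingly parallel evaluation of candidate solutions. This is just a generic sketch of that pattern (ExecutorService plus a toy sphere cost function), not code from the Cisco repo:

```java
import java.util.Random;
import java.util.concurrent.*;

// Generic pattern only: evaluate a population of candidate solutions in
// parallel across all hardware threads. Not taken from the Cisco repo.
public class ParallelEvalSketch {
    public static void main(String[] args) throws Exception {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Random rnd = new Random(1);

        // Population of random candidate vectors for a toy cost function.
        double[][] population = new double[256][32];
        for (double[] c : population)
            for (int i = 0; i < c.length; i++) c[i] = rnd.nextGaussian();

        // Submit one cost evaluation per candidate; the pool spreads them over cores.
        CompletionService<Double> cs = new ExecutorCompletionService<>(pool);
        for (double[] c : population)
            cs.submit(() -> cost(c));

        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < population.length; i++)
            best = Math.min(best, cs.take().get());

        System.out.println("threads used: " + threads + ", best cost: " + best);
        pool.shutdown();
    }

    // Toy cost: sphere function, standing in for whatever is actually minimized.
    static double cost(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;
        return s;
    }
}
```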

There is also a slew of papers on deep neural networks (ResNets, recurrent nets) viewed as ordinary differential equations:
https://arxiv.org/abs/1804.04272
https://arxiv.org/abs/1806.07366
This fits in with the chaos-theory view of compounding non-linearity that was discussed, since many fractals are defined by differential equations.
Anyway, I should be able to try out a simplified version at some stage; something like the sketch below.
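
The core of the ODE view is tiny: a residual block x + h·f(x) is one forward-Euler step of dx/dt = f(x). A toy sketch with random weights (my own illustration, nothing from the linked papers' code):

```java
// Minimal sketch of the ResNet-as-ODE view: each residual update
// x <- x + h * f(x) is one forward-Euler step of dx/dt = f(x).
// Weights are random, just to show the shape of the computation.
public class ResNetEulerSketch {
    static final int DIM = 4, STEPS = 10;
    static final double H = 0.1;                 // step size / residual scaling
    static final double[][] W = new double[DIM][DIM];

    public static void main(String[] args) {
        java.util.Random rnd = new java.util.Random(7);
        for (int i = 0; i < DIM; i++)
            for (int j = 0; j < DIM; j++) W[i][j] = 0.5 * rnd.nextGaussian();

        double[] x = {1, 0, -1, 0.5};            // initial state / network input
        for (int t = 0; t < STEPS; t++) {
            double[] fx = f(x);                  // "residual branch"
            for (int i = 0; i < DIM; i++)
                x[i] += H * fx[i];               // skip connection + residual = Euler step
            System.out.println("step " + t + ": " + java.util.Arrays.toString(x));
        }
    }

    // f(x) = tanh(W x): stand-in for one residual branch / the ODE right-hand side.
    static double[] f(double[] x) {
        double[] out = new double[DIM];
        for (int i = 0; i < DIM; i++) {
            double s = 0;
            for (int j = 0; j < DIM; j++) s += W[i][j] * x[j];
            out[i] = Math.tanh(s);
        }
        return out;
    }
}
```

Shrinking the step size and adding more steps is what pushes the discrete network toward the continuous ODE limit.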
