In this meeting, Subutai discusses three recent papers and models on continual learning: OML, ANML, and Supermasks in Superposition. These models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on standard continual-learning benchmarks. We also discuss how these ideas relate to HTM theory and neuroscience.
- Meta-Learning Representations for Continual Learning (http://arxiv.org/abs/1905.12588)
- Learning to Continually Learn (http://arxiv.org/abs/2002.09571)
- Supermasks in Superposition (http://arxiv.org/abs/2006.14769)
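The third paper's core mechanism can be illustrated with a minimal sketch: the network's weights are frozen at random initialization, and training instead learns per-weight scores whose top fraction defines a binary "supermask" selecting a sparse sub-network. The numpy setup below is a toy illustration only; the names, sizes, and score initialization are assumptions, not the paper's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 8))        # frozen random weights (never trained)
scores = rng.standard_normal(W.shape)  # per-weight scores; in the paper, these are learned

def supermask(scores, sparsity=0.5):
    """Binary mask keeping the top (1 - sparsity) fraction of weights by score."""
    k = int(scores.size * (1 - sparsity))
    threshold = np.sort(scores, axis=None)[-k]
    return (scores >= threshold).astype(W.dtype)

mask = supermask(scores, sparsity=0.5)
x = rng.standard_normal(8)
y = (W * mask) @ x                     # forward pass uses only the masked sub-network
```

One mask per task, applied over a single shared set of frozen weights, is what lets many task-specific sub-networks coexist "in superposition" inside one network.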
Creative Commons Attribution license (reuse allowed)