In this meeting Subutai discusses three recent papers and models (OML, ANML, and Supermasks) on continual learning. The models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on standard continual learning benchmarks. We discuss their relationships to HTM theory and neuroscience.
Papers discussed:
- Meta-Learning Representations for Continual Learning (arXiv:1905.12588)
- Learning to Continually Learn (arXiv:2002.09571)
- Supermasks in Superposition (arXiv:2006.14769)
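As a rough illustration of the sparse sub-network ("supermask") idea referenced above: a layer's weights stay fixed at their random initialization, and a learned binary mask selects a sparse subset of them per task. The sketch below is a minimal assumption-laden example (the shapes, sparsity level, and random mask are illustrative, not taken from the papers).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random weights (never trained in the supermask setting).
W = rng.standard_normal((128, 64))

# A binary mask picks out a sparse sub-network of W.
# In the papers the mask is learned; here it is random for illustration.
sparsity = 0.9
mask = (rng.random(W.shape) > sparsity).astype(W.dtype)

def forward(x, W, mask):
    """Apply only the masked (sparse) subset of weights."""
    return x @ (W * mask)

x = rng.standard_normal((1, 128))
y = forward(x, W, mask)
print(y.shape)  # (1, 64)
```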