Numenta Research Meeting - August 3, 2020

In this meeting, Subutai discusses three recent papers and models on continual learning: OML, ANML, and Supermasks in Superposition. The models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on some standard continual-learning benchmarks. We also discuss how these ideas relate to HTM theory and neuroscience. A rough sketch of the supermask idea appears below.
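As a rough illustration (not code from any of the papers discussed), here is a minimal PyTorch-style sketch of the supermask idea: the layer's weights are frozen at their random initialization, and only a binary mask selecting a sparse sub-network is learned. The class name SupermaskLinear and the sparsity parameter are hypothetical choices for this sketch.

import torch
import torch.nn as nn

class SupermaskLinear(nn.Module):
    """Linear layer whose weights stay frozen at random init;
    only a binary 'supermask' selecting a sparse sub-network is learned."""

    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        # Fixed random weights: never updated during training.
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Learned real-valued scores; the top-scoring weights are kept.
        self.scores = nn.Parameter(torch.randn(out_features, in_features))
        self.sparsity = sparsity  # fraction of weights to zero out

    def forward(self, x):
        n = self.scores.numel()
        k = int((1.0 - self.sparsity) * n)  # number of weights to keep
        # Threshold = k-th largest score; weights at or above it survive.
        threshold = self.scores.flatten().kthvalue(n - k + 1).values
        mask = (self.scores >= threshold).float()
        # Straight-through estimator: forward uses the hard 0/1 mask,
        # backward passes gradients to the scores as if the mask were identity.
        mask = (mask - self.scores).detach() + self.scores
        return nn.functional.linear(x, self.weight * mask)

Training such a layer updates only the scores. Supermasks in Superposition builds on this by learning one mask per task over a single shared set of frozen weights, and, roughly, inferring which task it is seeing at test time by picking the mask that minimizes the entropy of the output.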

Papers discussed:

  1. Meta-Learning Representations for Continual Learning (arXiv:1905.12588, https://arxiv.org/abs/1905.12588)
  2. Learning to Continually Learn (arXiv:2002.09571, https://arxiv.org/abs/2002.09571)
  3. Supermasks in Superposition (arXiv:2006.14769, https://arxiv.org/abs/2006.14769)

The ContinualAI reading group also discussed the Supermasks in Superposition paper:
https://continualai.discourse.group/t/continualai-reading-group-supermasks-in-superposition/136