Numenta Research Meeting - August 3, 2020

In this meeting Subutai discusses three recent papers and their models (OML, ANML, and Supermasks) on continual learning. These models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on standard continual-learning benchmarks. We also discuss some of their relationships to HTM theory and neuroscience.
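
As background for the third paper, here is a minimal sketch of the supermask idea, assuming PyTorch: the network's weights are left fixed and random, and training only learns a per-weight score whose top-k entries form a binary mask selecting a sparse sub-network. The names TopKMask, MaskedLinear, and the sparsity parameter are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMask(torch.autograd.Function):
    # Straight-through estimator: the forward pass binarizes the scores
    # by keeping their top-k entries; the backward pass lets gradients
    # flow to the scores unchanged.
    @staticmethod
    def forward(ctx, scores, sparsity):
        k = int((1.0 - sparsity) * scores.numel())
        mask = torch.zeros_like(scores)
        _, idx = scores.flatten().topk(k)
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # no gradient for the sparsity constant


class MaskedLinear(nn.Linear):
    # Linear layer whose random weights are frozen; only the per-weight
    # scores (and therefore the binary mask) are trained.
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__(in_features, out_features, bias=False)
        self.weight.requires_grad_(False)  # weights stay random and fixed
        self.scores = nn.Parameter(torch.randn_like(self.weight))
        self.sparsity = sparsity

    def forward(self, x):
        mask = TopKMask.apply(self.scores.abs(), self.sparsity)
        return F.linear(x, self.weight * mask)

In Supermasks in Superposition, one such mask is trained per task over the same fixed weights; when the task is unknown at test time, it is inferred by finding the superposition of masks that minimizes the entropy of the network's output.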

Papers discussed:

  1. Meta-Learning Representations for Continual Learning (OML)
    https://arxiv.org/abs/1905.12588
  2. Learning to Continually Learn (ANML)
    https://arxiv.org/abs/2002.09571
  3. Supermasks in Superposition
    https://arxiv.org/abs/2006.14769

The ContinualAI reading group also discussed the Supermasks in Superposition paper:
https://continualai.discourse.group/t/continualai-reading-group-supermasks-in-superposition/136