Numenta Research Meeting - August 3, 2020

In this meeting, Subutai discusses three recent papers and models on continual learning: OML, ANML, and Supermasks. The models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on standard benchmarks. We also discuss relationships to HTM theory and neuroscience.

Papers discussed:

  1. Meta-Learning Representations for Continual Learning
  2. Learning to Continually Learn
  3. Supermasks in Superposition

The Continual AI group also looked into the Supermasks in Superposition paper.