Numenta Director of ML Architecture Lawrence Spracklen gives an overview of the poster he presented at the SNN Workshop on July 8th and 9th, 2021. In the poster, “How Can We Be So Slow? Realizing the Performance Benefits of Sparse Networks” by Lawrence Spracklen, Kevin Hunter and Subutai Ahmad, we present the techniques Numenta has developed to achieve a 100x speedup on inference tasks through sparsity, and discuss how many of these learnings could be applied to build fast sparse networks on CPUs.
Link to poster and abstract: SNN Workshop 2021: How Can We Be So Slow? Realizing the Performance Benefits of Sparse Networks