Long short-term memory and learning-to-learn in networks of spiking neurons

Since y’all are interested in sparsity, I came across this 2018 paper from Wolfgang Maass’s group (I’ve been on a Maass kick recently). They use “DEEP R,” a rewiring method from a prior paper of theirs, which gives this spiking model a ~1.7% boost in accuracy (still not as good as a regular LSTM, but really close!)

Paper:
Long short-term memory and learning-to-learn in networks of spiking neurons

Notes:
Sequential MNIST SoTA w/ spiking networks. They apply BPTT to LIF neurons with “adaptation”: each adapting neuron tracks B(t), a moving spike threshold that rises after every spike and slowly decays back to baseline. Not all neurons adapt. Single recurrent layer.
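To make the adaptation idea concrete, here’s a minimal sketch of a single adaptive LIF neuron with a moving threshold B(t). Function name, constants, and time constants are illustrative placeholders, not the paper’s exact parameterization; the decay-and-bump threshold dynamics match the mechanism described above.

```python
import numpy as np

def simulate_alif(inputs, dt=1.0, tau_m=20.0, tau_a=200.0, b0=1.0, beta=1.7):
    """Sketch of an adaptive LIF neuron: the effective threshold B(t)
    jumps up after each spike and decays back on a slow timescale tau_a,
    giving the neuron a longer-lasting state than plain LIF.
    All parameter values here are illustrative, not the paper's."""
    alpha = np.exp(-dt / tau_m)   # membrane decay per step
    rho = np.exp(-dt / tau_a)     # slow decay of the adaptation variable
    v, b = 0.0, 0.0               # membrane potential, adaptation variable
    spikes = []
    for I in inputs:
        B = b0 + beta * b              # moving threshold B(t)
        z = 1.0 if v > B else 0.0      # spike when membrane exceeds B(t)
        v = alpha * v + I - z * B      # leaky integration, reset by subtraction
        b = rho * b + (1 - rho) * z    # threshold bumps up on spikes, decays otherwise
        spikes.append(z)
    return spikes
```

Setting beta=0 recovers a plain LIF neuron with a fixed threshold; with beta > 0 the same input drives fewer spikes over time, which is the spike-frequency adaptation that BPTT exploits as memory.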

Results:
non-spiking LSTM = 98.5%
LIF = 63%
LSNN (theirs) = 93%
LSNN + DEEP R (theirs) = 94.7%