Apart from any potential usefulness, this is an interesting acknowledgement of how insanely wasteful current AI is in its use of compute cycles and energy.
I still believe we can get away with just 32-bit, maybe even 16-bit, signed integer math and lookup tables.
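A minimal sketch of the lookup-table part of that idea (the scaling and names here are my own assumptions, not from any particular implementation): a sigmoid-like activation can be precomputed once into an int16 table and applied with a single indexed load, so inference never touches floating point.

```c
#include <stdint.h>
#include <math.h>

#define LUT_SIZE 2048                 /* covers [-8.0, 8.0) at 1/128 resolution */

static int16_t sigmoid_lut[LUT_SIZE];

/* Build the table once: index i corresponds to the fixed-point
   pre-activation (i - 1024), i.e. the real value (i - 1024) / 128.0. */
void init_lut(void) {
    for (int i = 0; i < LUT_SIZE; i++) {
        double x = (i - LUT_SIZE / 2) / 128.0;
        sigmoid_lut[i] = (int16_t)(32767.0 / (1.0 + exp(-x)));  /* Q15 output */
    }
}

/* Integer-only activation: clamp the fixed-point input, then one table load. */
int16_t activate(int32_t x) {          /* x in the same 1/128 fixed point */
    int32_t idx = x + LUT_SIZE / 2;
    if (idx < 0) idx = 0;
    if (idx >= LUT_SIZE) idx = LUT_SIZE - 1;
    return sigmoid_lut[idx];
}
```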
NNUE - the neural net behind Stockfish's evaluation - works with 8-bit integer weights and 16-bit accumulated values.
It is so fast a GPU would slow it down.
In a test on a single Raspberry Pi core it evaluated 100k positions/second.
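A toy sketch of that kind of quantised layer (illustrative only, not Stockfish's actual code): weights stored as int8, sparse binary input features identified by index, and the running sum kept in a 16-bit accumulator.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy quantised layer, loosely in the spirit of NNUE (not Stockfish code).
   Weights are int8, input features are binary (present/absent), and the
   running sum stays within an int16 accumulator. */
int16_t accumulate(const int8_t *weights,        /* one weight per input feature */
                   const uint16_t *active_ids,   /* indices of the active features */
                   size_t n_active) {
    int16_t acc = 0;
    for (size_t i = 0; i < n_active; i++)
        acc += weights[active_ids[i]];           /* pure integer adds, no floats */
    return acc;
}
```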
That’s only one of the optimisations though.
Considering the whole neural net was ~20M parameters in that version, simply (or stupidly) feed-forwarding it would have needed that Pi core to run at ~2 trillion operations/second (~20M multiply-adds per evaluation × 100k evaluations/second).
That's the sort of situation where WTA (winner-take-all) activation sparsity would shine.
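A minimal sketch of what WTA sparsity means in practice (my own illustrative code, not from any of the systems above): only the k strongest activations in a layer are kept, so the next layer only has to process those k weight columns instead of all n.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Winner-take-all selection: return the indices of the k largest activations.
   Everything else is treated as zero, so the next layer only touches k columns
   of its weight matrix.  O(n*k) selection, fine for illustration. */
size_t wta_select(const int16_t *act, size_t n, size_t k,
                  uint32_t *winners,     /* output, length >= k */
                  bool *taken) {         /* scratch, length n, zero-initialised */
    size_t count = 0;
    for (size_t j = 0; j < k; j++) {
        int32_t best_val = INT32_MIN;
        size_t best = n;                 /* "none found" sentinel */
        for (size_t i = 0; i < n; i++)
            if (!taken[i] && act[i] > best_val) { best_val = act[i]; best = i; }
        if (best == n) break;
        taken[best] = true;
        winners[count++] = (uint32_t)best;
    }
    return count;
}
```

With only a small fraction of units winning, the downstream multiply-accumulate count drops by roughly that same fraction.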
HTM-scheme uses 8 bits for permanence values and 24 bits for the pre-synaptic cell id (so synapses are 4 bytes; with the pre->post connection map the total memory requirement is ~10 bytes/synapse).
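A hedged sketch of that packing (field layout assumed from the sizes quoted above, not taken from the HTM-scheme source): the 24-bit pre-synaptic cell id and the 8-bit permanence share a single 32-bit word.

```c
#include <stdint.h>

/* One synapse packed into 32 bits, following the sizes quoted above:
   bits 31..8 = pre-synaptic cell id (24 bits), bits 7..0 = permanence (8 bits).
   The exact field order is an assumption for illustration. */
typedef uint32_t synapse_t;

static inline synapse_t synapse_make(uint32_t pre_cell, uint8_t permanence) {
    return ((pre_cell & 0xFFFFFFu) << 8) | permanence;
}

static inline uint32_t synapse_pre_cell(synapse_t s)   { return s >> 8; }
static inline uint8_t  synapse_permanence(synapse_t s) { return (uint8_t)(s & 0xFF); }
```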