I was thinking through @mraptor's thoughts on reinforcement learning here:
Anyway, it occurred to me that a lot of mysteries* are resolved and a lot of complexity eliminated if different neurons’ predictive states are allowed to decay at different rates, anywhere on the time scale from microseconds to hours.
*mysteries to me. I don’t want to speak for anyone else, least of all TBT theory itself, in defining what is and isn’t a mystery.
In the computational models of HTM that I’ve seen, the predictive state decays (or leads to activation) in precisely one time step. But it’s not a huge leap, given that neurons are analog elements, to imagine that the rate at which a cell’s voltage returns to its normal (quiescent) level could vary from neuron to neuron.
And if I run with that conjecture, it seems plausible that the decay rate itself could be adjusted by a learning process.
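To make the conjecture concrete, here is a toy sketch (purely illustrative, not anything from NuPIC or htm.core; all numbers and the threshold are made up): each neuron’s depolarization decays exponentially toward quiescence with its own time constant `tau`, instead of expiring after exactly one time step, so neurons with a longer `tau` remain predictive for longer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 5
tau = rng.uniform(1.0, 20.0, n_neurons)  # per-neuron decay constants (arbitrary units)
v = np.ones(n_neurons)                   # all neurons start fully depolarized
threshold = 0.5                          # a neuron counts as "predictive" while v >= threshold

for t in range(10):
    v *= np.exp(-1.0 / tau)              # analog decay toward the quiescent level (0)
    predictive = v >= threshold
    print(t, np.round(v, 2), predictive)

# A learning process could then nudge tau itself, e.g. lengthen the
# decay of a neuron whose prediction turned out to be correct:
tau[predictive] *= 1.1
```

After identical decay steps, the neurons with the largest `tau` retain the most depolarization, so the ordering of `v` matches the ordering of `tau`; the learnable part is just the last line, standing in for whatever plasticity rule would actually govern it.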
And finally, it seems further plausible that different dendritic synapses could affect a cell’s voltage by different magnitudes, basically a per-synapse “weighting factor”, so it might take multiple weaker dendritic spikes to get a neuron into a predictive state.
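A minimal sketch of that weighting idea (again hypothetical: the segment names, weights, and threshold are invented for illustration, not drawn from any HTM implementation): each active dendritic segment contributes a different magnitude of depolarization, so one strong spike crosses the predictive threshold on its own while weaker spikes only cross it together.

```python
threshold = 1.0  # depolarization needed to enter the predictive state

# Hypothetical per-segment contributions to the cell's voltage:
weights = {"strong_segment": 1.2, "weak_a": 0.4, "weak_b": 0.4, "weak_c": 0.4}

def depolarization(active_segments):
    """Sum the weighted contributions of the currently active segments."""
    return sum(weights[s] for s in active_segments)

print(depolarization(["strong_segment"]) >= threshold)               # one strong spike suffices
print(depolarization(["weak_a", "weak_b"]) >= threshold)             # two weak spikes fall short
print(depolarization(["weak_a", "weak_b", "weak_c"]) >= threshold)   # three weak spikes cross
```

This differs from the standard HTM picture I’ve seen, where a single dendritic spike from any sufficiently active segment puts the cell into the predictive state regardless of which segment fired.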
Does any of this line up with current biological observations or existing Numenta theory?
Thank you for any thoughts.