The general idea of the Spatial and Temporal Pooler is to sample the input space.
The problem with this is that it requires the output neurons to “extend” synapses to ~80% of the INPUT neurons, which should be biologically infeasible, right?
Could a local rule like the one described in this paper be used instead?
What would it look like for HTM neurons?
Learning Invariance from Transformation Sequences
The solution proposed here is a modified Hebbian rule in which the
modification of the synaptic strength at time step t is proportional not to
the pre- and post-synaptic activity, but instead to the presynaptic activity
(x_j) and to an average value, a trace of the postsynaptic activity (ȳ_i).
A second, decay term is added in order to keep the weight vector bounded:
… formulas …
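For reference, I believe the elided formulas are roughly the following (my transcription of the paper's rule; the exact notation and indexing may differ from the original):

```latex
\bar{y}_i^{(t)} = (1-\delta)\,\bar{y}_i^{(t-1)} + \delta\, y_i^{(t)}
\qquad
\Delta w_{ij}^{(t)} = \alpha\, \bar{y}_i^{(t)} \left( x_j^{(t)} - w_{ij}^{(t)} \right)
```

Expanding the weight update, the $-\alpha\,\bar{y}_i^{(t)}\,w_{ij}^{(t)}$ part is the decay term mentioned above that keeps the weight vector bounded.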
A similar trace mechanism has been proposed by Klopf (1982) and
used in models of classical conditioning by Sutton and Barto (1981). A
trace is a running average of the activation of the unit, which has the effect
that activity at one moment will influence learning at a later moment.
This temporal low-pass filtering of the activity embodies the assumption
that the desired features are stable in the environment.
As the trace depends on the activity of only one unit, the modified rule is still local.
One possibility is that such a trace is implemented in a biological neuron
by a chemical concentration that follows cell activity.
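To make the locality of the rule concrete, here is a minimal NumPy sketch of one update step as I understand it. The function name and the parameter names `alpha` (learning rate) and `delta` (trace decay) are my own choices, not taken from the paper, and the update form assumes the trace-modulated rule with a decay term as described in the quoted excerpt:

```python
import numpy as np

def foldiak_trace_update(w, x, y_trace_prev, y, alpha=0.02, delta=0.2):
    """One step of the trace learning rule sketched in the excerpt.

    w            -- (n_out, n_in) weight matrix
    x            -- (n_in,) presynaptic activities at time t
    y_trace_prev -- (n_out,) trace of postsynaptic activity from t-1
    y            -- (n_out,) postsynaptic activities at time t
    Returns the updated weights and the updated trace.
    """
    # The trace is a running (temporally low-pass filtered) average of
    # each unit's activity, so activity at one moment influences
    # learning at a later moment.
    y_trace = (1.0 - delta) * y_trace_prev + delta * y

    # The weight change is proportional to the presynaptic activity and
    # to the postsynaptic *trace*, not the instantaneous activity; the
    # subtracted w term is the decay that keeps the weights bounded.
    w_new = w + alpha * y_trace[:, None] * (x[None, :] - w)
    return w_new, y_trace
```

Note that each synapse's update only needs quantities available at that synapse (its own weight, the presynaptic activity, and the postsynaptic unit's trace), which is what makes the rule local. Over a sequence of transformed inputs, the weights drift toward a trace-weighted average of recent inputs, so units come to respond to features that are stable across the sequence.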