Continuing the discussion from Neural coding: Rate/Temporal coding vs. Sparse coding:
I was quite comfortable with the HTM idea that, although some biological neurons fire at a higher rate with increased excitation (i.e., input strength), we may safely work with a binary model where firing rate is abstracted away.
And if we ever need to represent a scalar value, encoding it the way @rhyolight explains in the HTM School videos or the interactive tutorial would work.
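For anyone following along, here is roughly what I mean by encoding a scalar HTM-style: a minimal toy sketch of the bucket idea from HTM School, where a value becomes a contiguous run of active bits. The function name and parameters are mine for illustration, not Numenta's actual ScalarEncoder API.

```python
def encode_scalar(value, min_val=0.0, max_val=10.0, n_bits=100, w=11):
    """Return a binary list of length n_bits with w consecutive 1s
    whose position encodes `value` within [min_val, max_val]."""
    value = max(min_val, min(max_val, value))   # clamp to the encoder's range
    n_buckets = n_bits - w + 1                  # possible start positions for the run
    bucket = round((value - min_val) / (max_val - min_val) * (n_buckets - 1))
    return [1 if bucket <= i < bucket + w else 0 for i in range(n_bits)]

print(encode_scalar(2.5))  # nearby values share many active bits, distant ones share none
```

The key property for the argument below: two nearby values overlap heavily, but the encoder says nothing about values it has never been trained on.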
But now I’ve been wondering: rate coding adds another dimension to the representable space for the same synaptic count, and IMHO it could have a tremendous effect on two things:
- learning speed
- generalization abilities
Say you learn to catch a ball at some distance d. I don’t know whether the value of d would actually be encoded anywhere in a brain for this scenario, but I’m pretending it is, simply to make my point clear.

Encode that value HTM-style, and the network has to be exposed to lots and lots of values of d (almost all of them, in fact) to learn and wire each one to the correct arm position for reaching the ball at d. If, on the contrary, d were carried by the rate at which one or a few cells fire, then, wherever any part of the arm movement is close to a linear function of d, you would only have to wire that relationship once, and it would “auto-generalize” to all potential rates… Even if the number of distinguishable rates is somewhat limited (10 or so), and even if you’d need two such samples to learn the correct slope to apply, it would still be a significant learning-speed advantage.
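To make that learning-speed claim concrete, here is a toy sketch (all numbers hypothetical). A two-sample linear fit stands in for the rate-coded case; a lookup table over previously seen values stands in for the bucket/HTM-style case.

```python
def fit_linear_from_two_samples(d1, p1, d2, p2):
    """Fit position = slope * d + offset from just two (distance, position) pairs."""
    slope = (p2 - p1) / (d2 - d1)
    offset = p1 - slope * d1
    return lambda d: slope * d + offset

# Rate-coded case: two training samples suffice if the mapping is near-linear.
arm_position = fit_linear_from_two_samples(1.0, 10.0, 3.0, 30.0)
print(arm_position(2.2))   # 22.0 -- generalizes to a never-seen distance

# Bucket-style case: only distances that were actually trained have an answer.
lookup = {1.0: 10.0, 3.0: 30.0}
print(lookup.get(2.2))     # None -- no generalization to unseen d
```

Of course real cortex is not doing a literal linear regression; the sketch only illustrates why carrying d in a rate could need far fewer training exposures than wiring every bucket separately.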
Your thoughts on this?