Rate coding... and learning

Continuing the discussion from Neural coding: Rate/Temporal coding vs. Sparse coding:

I was quite comfortable with the HTM idea that, although biological neurons (some of them, at least) fire more frequently with increased excitation (i.e. stronger input), we may safely envision a binary model where that is abstracted away.

And if we ever need to represent a scalar value, encoding it the way @rhyolight explains in the HTM School videos or the interactive tutorial would work.
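For reference, here is a minimal sketch of that kind of scalar encoding: a bucket-style encoder similar in spirit to NuPIC's ScalarEncoder. The function name, parameters, and defaults are my own illustration, not Numenta's actual API:

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n_bits=400, w=21):
    """Encode a scalar as a binary SDR: a contiguous run of w active bits
    whose position along the n_bits array tracks the value."""
    value = max(min_val, min(max_val, value))  # clamp to the encoder's range
    # Index of the first active bit, scaled across the available positions.
    start = int((value - min_val) / (max_val - min_val) * (n_bits - w))
    sdr = [0] * n_bits
    for i in range(start, start + w):
        sdr[i] = 1
    return sdr
```

Nearby values share active bits, but the network still has to see samples spread across the whole range before every region of the encoding space is wired to anything.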

But now I've been wondering: rate coding provides another dimension to the representable space for the same synaptic count, and IMHO it could have a tremendous effect on two things:

  • learning speed
  • generalization abilities

Say you learn to catch a ball at some distance d. I don't know whether the value of d would be encoded anywhere in a brain for this scenario, but I'm pretending it is, simply to make my point clear. Encode that value HTM-style, and the network has to be exposed to lots and lots of values of d (almost all of them, in fact) to learn and wire up the correct arm position for reaching the ball at d.

If, on the contrary, d were carried by the rate at which one or a few cells fire, then, as long as some part of the arm movement is close to a linear function of d, you would only have to wire that relationship once, and it would "auto-generalize" to all potential rates… Even if the number of distinguishable rates is somewhat limited (10 or so), and even if you needed two such samples to learn the correct slope to apply, it would still be a significant learning-speed advantage.
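To make that two-sample argument concrete, here is a toy sketch (purely illustrative, nothing HTM-specific; the proportional-rate assumption and the numbers are made up):

```python
# Toy: recover arm_position = slope * d + intercept from two samples,
# assuming (hypothetically) a cell's firing rate is proportional to d.
samples = [(1.0, 12.0), (3.0, 28.0)]  # (distance d, correct arm position)

(d0, p0), (d1, p1) = samples
slope = (p1 - p0) / (d1 - d0)   # learned from just two exposures
intercept = p0 - slope * d0

def arm_position(d):
    """Generalizes to any d, seen or not, as long as the relation is linear."""
    return slope * d + intercept

print(arm_position(2.0))  # 20.0, even though d = 2.0 was never seen in training
```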

Your thoughts on this?


I think rate obviously codes something. It could code different things in different contexts. I don't think it is involved in distance, and perhaps not in anything spatial, since we have already found other mechanisms for spatial representation that work very well.

As for something like d in your example above, I think this distance must be represented in an egocentric reference frame, like a vector between the self and something. I think grid cells and perhaps displacement cells are involved.


Thanks for coming by, Matt. Seeing you answer me on Twitch was… interesting.
I hadn't realized I was so hard to read :blush:

I wish someone would tell me when my English sounds weird… or when my sentences are too French-heavy. That would allow me to improve… maybe.

Anyway, back on topic: I know grid cells and the like provide all the basis required for encoding spatial measures, and that Numenta goes all in with them in the Thousand Brains theory. And it works so far. Yet faster learning and free generalization seemed like cool enough bonuses that both ANNs and Mother Nature might consider them.

But… yeah. Seems like grid cells it is, at this point.


Please don't mistake my miscommunication for yours. My Twitch streams are very "stream of consciousness"; I go fast and sometimes miss things.