Neural coding: Rate/Temporal coding vs. Sparse coding

By the way - while you state rather authoritatively that retinal cells don’t spike, you may wish to reconsider that statement:
https://pdfs.semanticscholar.org/dfdd/2d810dadcd3d39c0bc9d55d2cfb3849a1083.pdf

https://www.pnas.org/content/94/10/5411

As you can see in these papers, the spike train encodes considerable information in the structure of the pulse spacing and phase/latency relative to other cell outputs.
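
To make that concrete, here is a minimal Python sketch of latency coding - my own toy illustration, not taken from either paper. The intensity-to-latency mapping is an assumption; the point is just that the rank order of first spikes across cells already carries the stimulus before any rate can be estimated:

```python
# A toy illustration of latency coding - my own sketch, not from the papers.
# Assumption: stronger stimulus drive maps to an earlier first spike, so the
# rank order of spikes across cells carries the message before any firing
# rate can even be estimated.
import numpy as np

rng = np.random.default_rng(0)

def first_spike_latencies(intensities, jitter_ms=1.0):
    """Stronger drive -> earlier first spike (plus timing noise)."""
    base = 20.0 / (intensities + 0.1)  # hypothetical intensity->latency map
    return base + rng.normal(0.0, jitter_ms, size=intensities.shape)

stimulus = np.array([0.9, 0.2, 0.6, 0.4])  # drive to four retinal cells
latencies = first_spike_latencies(stimulus)

# The rank order of first-spike times recovers the rank order of the
# stimulus: information carried purely by relative timing.
print("latencies (ms):", np.round(latencies, 2))
print("decoded rank  :", np.argsort(latencies))   # earliest = strongest
print("true rank     :", np.argsort(-stimulus))
```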

Ok. I got it first-hand from Tomaso Poggio; I figured he is pretty authoritative. Maybe that was about lateral connections within the retina.
BTW, Joaquin Fuster talks about reverberation within ensembles (his cognits); do you know anything about that?

I have been following various lines of work on that since reading Grossberg and his ART models, and things like synfire chains.

This is all intertwined with the brainwave/timekeeping actions of the thalamus and the spreading of activity, as touched on lightly in JH’s talk today.

So - which particular aspect are you interested in?

The temporal aspect that we discussed. If higher-area ensembles are larger and wider, then they will most likely reverberate longer. Not always, but there will be greater variation in duration.
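
A toy sanity check of that intuition (purely hypothetical numbers, not a biophysical model): treat an ensemble’s pooled activity as a leaky scalar re-excited through its recurrent connections at a fixed sub-critical weight. More units means an effective gain closer to 1, so activity takes longer to die out:

```python
# A toy caricature, not a biophysical model: pooled ensemble activity as a
# leaky scalar re-excited through n_units recurrent connections of fixed,
# sub-critical weight. All constants are assumptions for illustration.
def reverberation_time(n_units, leak=0.5, w=0.0045, threshold=0.01):
    gain = leak + w * n_units          # assumed effective recurrent gain
    assert gain < 1.0, "keep the toy network sub-critical"
    a, steps = 1.0, 0
    while a > threshold:
        a *= gain                      # activity decays each step
        steps += 1
    return steps

for n in (20, 50, 100):
    print(f"ensemble of {n:3d} units reverberates ~{reverberation_time(n)} steps")
```

With these assumed constants, 20 units decay in about 9 steps while 100 units take about 90 - the claimed trend, though whether real cortex behaves this way is of course an empirical question.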

Link to a paper?

Sorry, no links, just quotes from “How to Create a Mind” by Ray Kurzweil, p. 86: A study of 25 visual and multi-modal cortical areas by Daniel Felleman found that “As they went up the neocortical hierarchy, … processing of patterns comprised larger spatial areas and involved longer time periods”.
Another study, by Uri Hasson, stated: “It is well established that neurons along the visual cortical pathways have increasingly larger spatial receptive field”, and found that “similar to cortical hierarchy of spatial receptive fields, there is a hierarchy of progressively longer temporal receptive windows”.
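
Those two quotes suggest a simple sketch: stack leaky integrators whose time constants grow up the hierarchy, and the higher levels automatically vary more slowly. The time constants below are illustrative assumptions, not values from the Felleman or Hasson studies:

```python
# A minimal sketch of progressively longer temporal receptive windows:
# stacked leaky integrators whose time constants grow up the hierarchy.
# The time constants are illustrative assumptions, not values from the
# studies quoted above.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=500)      # fast, noisy input stream
taus = [2.0, 8.0, 32.0]            # hypothetical per-level time constants

x = signal
for tau in taus:
    alpha = 1.0 / tau
    y = np.zeros_like(x)
    for t in range(1, len(x)):
        y[t] = (1 - alpha) * y[t - 1] + alpha * x[t]  # leaky integration
    # Higher levels vary more slowly: successive samples correlate more.
    r = np.corrcoef(y[:-1], y[1:])[0, 1]
    print(f"tau={tau:5.1f}  lag-1 autocorrelation={r:.3f}")
    x = y                          # next level reads the one below
```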

Don’t you think it’s a reasonable assumption that longer reverberation would be necessary to form and maintain larger ensembles?

I don’t think that is a given. I do think that the data stream is just that - a stream. The processing is guided by feedback from the higher levels acting as a filter, but it is still a stream. I see the WHAT and WHERE streams as just that - streams that flow up the hierarchy to the association region(s).

There can be more stable representations as you ascend the hierarchy; these representations can be the stabilizing feedback. I like to think of the stream as “bunching up” as it ascends the hierarchy. I suppose that you could think of this as some sort of ensemble, but I think that misses the essential peristaltic, streaming nature of the processing.

The high-level representations are the basic data interchange between the hubs.

What I like about this basic approach is that I can see how it forms and develops from an “empty” structure to a fully trained one. I find the method outlined in “Deep Predictive Learning” particularly appealing in this regard. This development and self-organization is missing in many of the models I have looked at.

I don’t see a contradiction here. Yes, it’s a stream, but there is incremental filtering along the way. To overcome this filtering, representations must be increasingly stable / invariant, both spatially and temporally. That means they need larger receptive fields, with feedback to maintain connections while the weights are trained.
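
As a toy illustration of the invariance point (hypothetical wiring, for intuition only): a higher-level unit with a larger receptive field - here just a max-pool over several position-tuned lower-level detectors - stays constant while a feature drifts, even though every lower-level response flickers:

```python
# Hypothetical wiring, for intuition only: a higher-level unit with a larger
# receptive field (a max-pool over position-tuned detectors) is invariant to
# a feature drifting across positions, while each lower-level unit flickers.
import numpy as np

positions = 8
feature_track = range(positions)        # feature drifting across the retina

low_level, high_level = [], []
for p in feature_track:
    units = np.zeros(positions)
    units[p] = 1.0                      # one position-tuned detector per step
    low_level.append(units)
    high_level.append(units.max())      # larger receptive field: pool them all

low_changes = sum(np.any(a != b) for a, b in zip(low_level, low_level[1:]))
high_changes = sum(a != b for a, b in zip(high_level, high_level[1:]))
print(f"low-level pattern changed on  {low_changes}/7 transitions")  # unstable
print(f"high-level pattern changed on {high_changes}/7 transitions") # invariant
```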

You may wish to think about what you get with my hex-grids or Numenta’s 1000 Brains lateral connections. Both are inherently peristaltic streams without the usual crutch of fanning connections in or out.

Both are compatible with maps cross-connecting or level-skipping as the stream ascends the hierarchy. This gets you larger assemblies in a biologically plausible way.
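
A hedged sketch of the lateral-connection idea - in the spirit of Numenta’s proposal but not their actual code: each column keeps a set of candidate objects consistent with its own input, and lateral connections simply intersect the votes, so consensus emerges without fanning activity up or out of the level:

```python
# A hedged toy in the spirit of lateral voting - not Numenta's actual code.
# Each column keeps a set of candidate objects consistent with its own
# input patch; lateral connections just intersect the votes, so consensus
# emerges without fanning activity up or out of the level.
def lateral_vote(column_candidates):
    return set.intersection(*column_candidates)

columns = [
    {"mug", "bowl", "can"},   # column A's candidates from its sensor patch
    {"mug", "can"},           # column B
    {"mug", "bowl"},          # column C
]
print(lateral_vote(columns))  # -> {'mug'}
```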

JH also touched on this in today’s talk.

I have become invested in the concept that the streams stay mostly parallel all the way up the hierarchy. So far it has been possible to cast common tasks into this model. Some of the solutions take a radical rethink of how the brain does things - it’s not at all the way one might do it with a stored-program computer. So far the biggest win has been how well this model solves the visual palimpsest problem - layers of image fragments combining into recognition of an object.

I am going to build a neural network model according to my ideas. If anything interesting happens, I will put it here.
