As you can see in these papers, the spike train encodes considerable information in the structure of the pulse spacing and phase/latency relative to other cell outputs.
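To make the latency part concrete, here is a minimal toy sketch in Python - my own illustration, not taken from those papers. It encodes a stimulus intensity as first-spike latency relative to a reference event (stronger input fires earlier) and decodes it back. The mapping and constants are invented for the example.

```python
import numpy as np

# Toy latency code (illustrative only): a value in (0, 1] is encoded as
# how soon a cell fires after a reference event, and decoded back from
# that first-spike latency. Real codes also use inter-spike intervals
# and phase relative to other cells; this shows just the latency piece.

def encode_latency(intensity, t_max=50.0):
    """Map intensity in (0, 1] to a first-spike time: strong -> early."""
    return t_max * (1.0 - intensity)

def decode_latency(spike_time, t_max=50.0):
    """Invert the map to recover intensity from the observed latency."""
    return 1.0 - spike_time / t_max

intensities = np.array([0.2, 0.5, 0.9])
latencies = encode_latency(intensities)   # [40., 25., 5.] (ms, say)
recovered = decode_latency(latencies)     # round-trips to the inputs
print(latencies, recovered)
```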
Ok. I got it first-hand from Tomaso Poggio; I figured he is pretty authoritative. Maybe that was about lateral connections within the retina.
BTW, Joaquin Fuster talks about reverberation within ensembles (his “cognits”). Do you know anything about that?
That’s the temporal aspect we discussed. If higher-area ensembles are larger and wider, then they will most likely reverberate longer. Not always, but there will be greater variation in duration.
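For what it’s worth, here is a toy numerical sketch of that intuition - my own construction, nothing from Fuster. With a fixed per-synapse weight scale, the effective recurrent gain of a random ensemble grows with its size, so larger ensembles tend to hold a pulse of activity longer. The weight scale, threshold, and sizes are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def reverberation_steps(n, w0=0.11, thresh=1e-3, max_steps=500):
    # Fixed per-synapse weight scale: the spectral radius of W grows
    # roughly like w0 * sqrt(n), so larger ensembles decay more slowly.
    W = rng.standard_normal((n, n)) * w0
    h = np.zeros(n)
    h[0] = 1.0                      # kick one unit and watch the echo
    for t in range(max_steps):
        h = np.tanh(W @ h)
        if np.linalg.norm(h) < thresh:
            return t + 1            # steps until the echo dies out
    return max_steps                # still reverberating at the cutoff

for n in (20, 50, 80):
    print(n, reverberation_steps(n))
```

Rerunning with different seeds also shows the “not always” caveat: duration varies a lot across random instances, and more so for the larger ensembles.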
Sorry, no links, just quotes from “How to Create a Mind” by Ray Kurzweil, p. 86: a study of 25 visual and multi-modal cortical areas by Daniel Felleman found that “As they went up the neocortical hierarchy, … processing of patterns comprised larger spatial areas and involved longer time periods”.
Another study, by Uri Hasson, stated that “it is well established that neurons along the visual cortical pathways have increasingly larger spatial receptive fields” and found that “similar to the cortical hierarchy of spatial receptive fields, there is a hierarchy of progressively longer temporal receptive windows”.
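A quick way to see what a “hierarchy of temporal receptive windows” could mean computationally - this is just my toy reading of the quote, not Hasson’s model - is a stack of leaky integrators whose time constants grow with level. Higher levels end up summarizing the input over longer stretches of time:

```python
import numpy as np

# Toy hierarchy of temporal receptive windows: each level low-pass
# filters the level below with its own time constant. The levels,
# time constants, and input are all made up for illustration.

def run_hierarchy(signal, taus, dt=1.0):
    states = np.zeros(len(taus))
    trace = []
    for x in signal:
        inp = x
        for i, tau in enumerate(taus):
            states[i] += (dt / tau) * (inp - states[i])  # leaky integration
            inp = states[i]                              # feeds the next level
        trace.append(states.copy())
    return np.array(trace)

rng = np.random.default_rng(0)
signal = rng.standard_normal(500)   # fast, noisy "sensory" input
taus = [2.0, 10.0, 50.0]            # longer windows higher up
trace = run_hierarchy(signal, taus)
# Higher levels fluctuate more slowly: their std over time shrinks.
print(trace.std(axis=0))
```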
I don’t think that is a given. I do think that the data stream is just that - a stream. The processing is guided by feedback from the higher levels acting as a filter, but it is still a stream. I see the WHAT and WHERE streams as just that - streams that flow up the hierarchy to the association region(s).
There can be more stable representations as you ascend the hierarchy; these representations can be the stabilizing feedback. I like to think of the stream as “bunching up” as it ascends the hierarchy. I suppose you could think of this as some sort of ensemble, but I think that misses the essential peristaltic, streaming nature of the processing.
The high-level representations are the basic data interchange between the hubs.
What I like about this basic approach is that I can see how it forms and develops from an “empty” structure to a fully trained one. I find the method outlined in the “Deep Predictive Learning” paper particularly appealing in this regard. This development and self-organization is missing in many of the models I have looked at.
I don’t see a contradiction here. Yes, it’s a stream, but there is incremental filtering along the way. To overcome this filtering, representations must be increasingly stable / invariant, both spatially and temporally. That means they need larger receptive fields, with feedback to maintain connections while the weights are trained.
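Here is roughly what I mean, as a toy sketch - the window sizes and the max-pooling choice are mine, purely illustrative. Pooling over progressively wider receptive fields makes the higher-level code invariant to small shifts of the input, i.e. more stable:

```python
import numpy as np

# Toy "larger receptive fields -> more stable representations": max-pool
# over non-overlapping windows of growing width and compare the codes
# for a feature and a slightly shifted copy of it.

def pool(x, width):
    """Max over non-overlapping windows of the given width."""
    n = len(x) // width
    return x[:n * width].reshape(n, width).max(axis=1)

x = np.zeros(32); x[9] = 1.0              # a feature at position 9
x_shift = np.zeros(32); x_shift[10] = 1.0 # the same feature, shifted by 1

for width in (2, 4, 8):                   # wider RFs up the hierarchy
    a, b = pool(x, width), pool(x_shift, width)
    print(width, np.array_equal(a, b))    # False, True, True
```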
You may wish to think about what you get with my hex-grids or Numenta’s Thousand Brains lateral connections. Both are inherently peristaltic streams without the usual crutch of fanning connections in or out.
Both are compatible with maps cross-connecting or level-skipping as the stream ascends the hierarchy. This gets you larger assemblies in a biologically plausible way.
JH also touched on this in today’s talk.
I have become invested in the concept that the streams stay mostly parallel all the way up the hierarchy. So far it has been possible to cast common tasks into this model. Some of the solutions take a radical rethink of how the brain does things - it’s not at all the way one might do it using a stored-program computer. The biggest win so far has been how well this model solves the visual palimpsest problem - layers of image fragments combining into recognition of an object.