Are V1 transforms pre-wired or learned?

Hi there.
I’m pursuing a basic idea that I’d like to model and see how it fits. But it rests on one specific assumption, so maybe people here could help me find out right away whether it’s wrong:

Is it known whether our V1 transforms are pre-wired or learned? In other words, have we found cells in the V1 of newborn babies (monkeys?) that fire on specific edges and orientations, or are they known to respond to those stimuli only in individuals that are (weeks?) older?

Thanks :slight_smile:

[Edit] Okay, so after refining my initially fruitless search, I guess we can safely bet that it’s learned:
Development of human visual function
orientation selectivity: functional at ~6 weeks? mature at ~6 months
spatial-frequency response: from ~12 weeks onwards?
motion-direction selectivity: first for low speeds, from ~12 weeks onwards?

Sorry for asking too quickly.


Oh hey Matt, sorry for the moderation work, but this was a newbie question, right? :blush:

I may develop the idea that I have, when I get back home tonight maybe. I’m not really sure whether this is something I should try out on my own, whether it’s been done already, or whether, on the contrary, I should wonder and ask why it has not been.
But maybe this part should go in another modeling/hacking/computational thread on the forum?

So: I wanted to try a model of some patch of cortex, mixing some HTM insights with other stuff, and see what came out of it. I was going to write about that model in more detail for you guys to comment on…

And, well, what would have been nice - at least in my mind - apart from the model, is that I could imagine some training inputs, and after learning it would maybe be testable against V1, because…
Now wait a minute. Better ask whether V1 actually learns, before anything else.
Ah yes. It does.
So, yeah, what would have been nice, apart from the model, is that its learni…
Now WAIT a minute

What I realize now is that the model I had in mind is not that important.
What seems important is these facts, brought together:

  • We all seem to share faith here in the hypothesis that neocortical homogeneity denotes a single, universal cortical algorithm, which we’d like to understand and replicate.
  • Our great amount of knowledge about the structure of the cortical column, however, does not readily tell us “how it comes to work”, for lack of simple parts mappable to clear semantics (no grandmother cell anywhere…)
  • But it turns out we have almost-as-good-as-grandma here…

V1

  • Most cells in V1 seem to have clear-cut, by now well-known semantics: edge detectors, orientation matchers, and whatnot (see the sketch after this list).
  • Our visual perception of the universe is also quite well known, to the point that we can computationally generate a “rendering” of it. Even more so if limited to the boundaries of a carrycot.
  • So we have here both the input and a conceptually simple, highly topological target organization, within reach after a relatively short learning time, and consistently testable against simple, expected semantics…
  • Moreover, I believe we could get away with modelling only a subpart of V1 around foveal input; and, should the need arise, the mapping to a subset of higher-level V2 is still highly topological and seemingly well known. I believe the sister-level oculomotor machinery is too.
  • V1 being in every way part of this great homogeneous cortical ensemble, if we manage to reverse-engineer the way it comes to “understand” its specific input, then we have the blueprint for it all. Right?
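
For a concrete picture of what “testable against simple, expected semantics” could mean: the textbook stand-in for a V1 simple cell is the Gabor filter (a grating under a Gaussian window). A minimal sketch, with all parameter values illustrative:

```python
import numpy as np

def gabor(size=21, theta=0.0, wavelength=6.0, sigma=3.0, phase=0.0):
    """Classic Gabor model of a V1 simple-cell receptive field:
    a sinusoidal grating windowed by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the preferred orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * grating

# a bank of 8 orientation-tuned filters, evenly spaced over 180 degrees
bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]

def simple_cell_response(patch, filt):
    """Dot product of an image patch with a filter: a crude stand-in for
    a simple cell's firing rate, usable to score a learned model's units."""
    return float(np.sum(patch * filt))
```

The idea being: if a learned patch of model cortex ends up with units whose best-matching Gabor is sharply tuned, we’d have a quantitative “V1-likeness” test.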

Now,
Since it looks like an obvious research path in my view,
Since this kind of knowledge about V1 is years or decades old, and the hypothesis of cortical similarity and universality is half a century old,
Since I believe I’m not the Nobel type,
Since there are many otherwise known things I do not know, but which I’d still be glad to learn about,

Where exactly is the part I’m missing, as to why computational models of V1 have not already been extensively tried, messed with, and connected in all possible ways until reaching an actual AGI?
(Other than @bitking-style concerns that there is much more to AGI than neocortex, a position I acknowledge but believe is not the primary view here.)

Thanks for posing the question and answering it. How much of behavior is innate (learned over generations by evolution) and how much is based on unsupervised learning? If the bulk of intelligence is due to unsupervised learning, then that is a largely unexplored area.

Extracting the specifically “neocortical” role from the more general concept of “behavior”, my (layman’s) feeling here is that the genetically encoded part could be broadly restricted to: “hey, neuron type N1138, you’re allowed to grow your arborescent stuff towards roughly there”.
Then, unsupervised learning mechanisms may very well fully take over.

To my mind, your ancestral genome was selected on the basis that letting N1138 grow here and N3811 grow there would consistently and quite inevitably lead, all other things being equal (and even if fire-and-forget from there on), to a cortical self-organization yielding efficient visual processing, giving its carriers and their progeny better chances in the game of life.

Is it? I thought this was one of the premises of Jeff Hawkins and team.

Developing that idea of unsupervised visual learning, this paper, Deep Predictive Learning: A Comprehensive Model of Three Visual Streams (thanks again for the link, @bitking), was very nice. Here too, they place emphasis on the fundamental “goal” of cortex as a predictive framework (even if they do not model it the way HTM does), together with timing considerations.

Their results are actually quite good and insightful, so it seems there are indeed people toying with the same kind of ideas. However, they seem to treat V1 as pure input (IIRC, from Gabor filters and the like), so their results won’t address my specific concerns. Still, they hinted at possible answers to the question that confuses me here:
(1) in their view the full hierarchy is key (for being any good at predicting anything)
(2) by virtue of V1 being a terminal element in the loop, they actually state that it may not learn exactly like other parts.

However, the current HTM model may function with less hierarchical involvement (on questions of learning and prediction), and since oriented-edge filters were first discovered in the V1 of a cat, I don’t believe this makes the case that the whole complex, full-blown human cortical hierarchy is necessary before we start seeing interesting results there.

So I’m still quite intrigued as to where this path of studying unsupervised learning for V1 models (and extensively testing results against the expected structure of visual filters) could lead us, unless I’m overlooking something fundamental.


You might like this classic:

http://www.basic.northwestern.edu/g-buehler/summary.htm

Hi Gary, thanks for the link. I shall give it a deeper look.
At first glance, however (I knew I should have split that post somewhere), the question of how and why cells grow as they do seems orthogonal to my ~~last obsession~~ main concern here.
Funnily enough, I guess one of the primary materials that sparked my line of thought was found starting from one of your “works for science” videos, @Gary_Gaulin. It was the MIT course on vision given a few years back.

So please indulge me for one second, as it seems so important in my view:
As for the MIT course, I still haven’t viewed it all, but what strikes me already is the (not necessarily complete, but) HUGE amount of knowledge we have about what the output from the retina looks like and what the typical response from V1 looks like, coupled with the consistency of V1’s self-learned organization, even across quite different mammalian species.

So I’d say there is a very strong case here for a dream of an archetype to test our cortical learning models against.
It may very well be the case (@bitking) that a larger hierarchy, or other parts of the brain, or even unforeseen mechanisms are necessary for our models to start self-organizing as V1 does, given the same kind of input. Fair enough: we can try adding those one by one, and more generally mess with all this until we get there.
But once we’re there… by virtue of cortical self-homogeneity, we’d be at a hell of a good starting point for studying the various ways we can link / tweak / add other unforeseen mechanisms to clones of this little patch of brain, so that we’re well on the path to functionally replicating other parts of the cortex, given what we know about their specifics.

If it turns out I’m the only one interested in this approach, I’d very much like to know why… but I’m unlikely to let that idea slip without a well-educated “dead end” objection.
So, am I solo on this?

I’m currently working on replicating the function of V1 in code, and I’d have to say the initial layers are most likely not “learned”, because I’m able to get half-decent orientations and end-stops with nothing but hard-coded values. They may adapt or decay, similar to nupic, but I feel that’s more due to their nature as cells than to necessity.
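
To give a feel for what I mean by hard-coded (illustrative Sobel-style kernels here, not my actual PyGP code):

```python
import numpy as np
from scipy.signal import convolve2d

# fixed 3x3 oriented-edge kernels; nothing here is learned
KERNELS = {
    "vertical":   np.array([[-1, 0, 1],
                            [-2, 0, 2],
                            [-1, 0, 1]], dtype=float),
    "horizontal": np.array([[-1, -2, -1],
                            [ 0,  0,  0],
                            [ 1,  2,  1]], dtype=float),
}

def orientation_maps(image):
    """One convolution per kernel; each map lights up where edges of
    that orientation occur, giving V1-ish responses with zero training."""
    return {name: convolve2d(image, k, mode="same")
            for name, k in KERNELS.items()}
```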

I feel like we’re confusing learning with just growing or decaying here. Most body parts, like muscles, decay after long periods of disuse and grow if they’re strained enough. Some of this growth uses outside forces, the way plants grow based on gravity. Then again, at what point do we call it learning rather than a normal biological function?


Thanks for that reply, @SimLeek. I’ll have to take a closer look at PyGP; nice visuals in any case :wink: I’ll certainly respond about that once I get a better grasp of it, probably in the other topic.

For the moment, I wanted to try to address the other interesting point you raise about the learning part.
I may be wrong, but labelling “what happens” as learning/growth/decay/genetically-encoded-behaviour/cell-intelligence/chemistry/self-organization/nature/God’s-hand is mostly a non-issue in my view.
What is important to me (as a wannabe AGI-building contributor) is that:

  • V1 is not yet organized at birth, and organizes in a very consistent way a few weeks afterwards. To my mind, “not at birth” and “yes, a few weeks after” is strong evidence for an organization that is dependent on visual input. Otherwise, why not have it already organized in utero? It may be a coincidence of developmental timing, but I somehow doubt it.
  • V1 is part of the neocortex, that very substrate hypothesized by HTM and others to operate on universal properties or a universal algorithm, and to somehow “make sense” of whatever input it gets. So an additional clue to the above point: why choose “all-purpose, able-to-learn, expensive neocortex” for it, if it is to end up implementing the function independently of input? And an additional benefit for us: we have here a well-documented, simple “example” of how that elusive stuff universally learns when fed typical daylight, through-air vision as presented by retinal/geniculate output.

The fact that this organization is somehow dependent on its input is enough for me to use the word “learning”. But I do not care about the word. I do care, however, about understanding how neocortical organization depends on its input.
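
To make “organization depends on input” concrete, here is a toy sketch (nothing V1-specific about it, and all values made up): a single Hebbian unit under Oja’s rule ends up mirroring whatever structure dominates its input stream.

```python
import numpy as np

def oja_step(w, x, lr=0.01):
    """One Hebbian update with Oja's normalization: the weight vector
    drifts toward the dominant component of its input statistics."""
    y = float(w @ x)
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=64)
w /= np.linalg.norm(w)

# feed inputs that all share one underlying pattern (a toy stand-in
# for structured retinal/geniculate drive), plus a little noise
template = np.sin(np.linspace(0, 4 * np.pi, 64))
for _ in range(2000):
    x = template * rng.normal() + 0.1 * rng.normal(size=64)
    w = oja_step(w, x)

# after training, w aligns (up to sign) with the hidden pattern
print(abs(w @ template) / np.linalg.norm(template))  # close to 1.0
```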

Hope that makes sense. I’ll be happy to discuss if there are things you think I’m overlooking here.
Take care :slight_smile:
Guillaume

I’m wondering whether this paper makes sense to you:

I’m trying to understand what the sometimes spiraling waves are for. It might be a clue that this thread needs.

Not pretending to have understood myself what they are for, those questions about waves are, to me, quite reminiscent of the following video (and from this point onward I hereby vow never to thank @bitking for a link again):

But I believe you watched it already, Gary :wink:

Could you please expand on that intuition, even if it’s not totally clear yet?

This wave phenomenon… I have no clue what to make of it. But for what it’s worth, its existence is filed away somewhere in my mind. As I acknowledge in the model-oriented companion post, there is in my view a very real possibility that taking it into account is necessary to self-organize as V1 does.
For my part, I’d like to know exactly that: what are the requirements for it, using our biological knowledge and our best guesses at a computational model?
Should those waves (and swirls!) turn out to be a primary ingredient, it would be a pity for most of our current, early AGI attempts, as it would make them hard to compute efficiently; but it would at least lay out some pretty relevant groundwork.

IIRC, your nice model of the rat avoiding cyclic shock areas does use wave propagation in some way? To be honest, I’d be glad to rally any goodwill for it. I believe today that throwing everything we have at V1 is one of our best bets, at least for starting to understand anything about the neocortical sheet.

That wave phenomenon reminds me a lot of stable structures in Conway’s Game of Life. In that game, there are stable sets of ‘cells’ because the rules are time-based, and there are ‘gliders’ that can travel in different directions. Entire structures can be made from simple rules over time and space. [link] [vid]
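
For the curious, those rules fit in a few lines; a minimal sketch (the wrap-around edges are just a modelling convenience):

```python
import numpy as np

def life_step(grid):
    """One tick of Conway's Game of Life on a toroidal grid."""
    # count each cell's 8 neighbours via wrap-around shifts
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # a cell is born with exactly 3 neighbours, survives with 2 or 3
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

grid = np.zeros((8, 8), dtype=np.uint8)
glider = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
for y, x in glider:
    grid[y, x] = 1

for _ in range(4):          # after 4 ticks the glider reappears,
    grid = life_step(grid)  # shifted one cell down and one right
```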

As for what it’s used for, I’d like to know that too. As an engineer, I’d say it seems ripe for all kinds of uses.

My understanding of the wave in sleep patterns is that the cortex and hippocampus are somehow stimulated into mutual action based on the day’s events; I see it as being driven by the difference between the two.

The hippocampus has been learning the difference between what is being sensed and what the cortex recognizes. This delta is what has to be transferred.

The wave traveling across the neural sheet drives the two into a frenzy of activity, where spike-timing-dependent plasticity drives learning FROM the hippocampus TO the cortex. When the two are in sync, the local excitation is reduced and the hippocampus is ready to learn some new things.
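
For reference, the spike-timing rule I’m appealing to is the textbook STDP window; a toy sketch with illustrative constants:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # LTP
    if dt < 0:
        return -a_minus * np.exp(dt / tau)   # LTD
    return 0.0

# if a traveling wave makes hippocampal spikes reliably precede
# cortical ones, the paired weights ratchet upward (toy spike times)
pre, post = [10.0, 30.0, 55.0], [14.0, 36.0, 60.0]  # ms
w = 0.5
for t_pre, t_post in zip(pre, post):
    w += stdp_dw(t_post - t_pre)
```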

Since you hate these links, I will be careful to avoid posting anything that might cause you to learn anything new; maybe YouTube videos of cute kittens instead?

Yup. Although that video also hinted at important wave mechanisms and lab recordings in the awake state.
About recognizing a “difference”: I don’t know the role of anything yet. Hippocampus? Pulvinar? Other “attention” stuff? Or nobody in particular, and emergent everywhere, as in the HTM prediction mechanism? All bets are off.

Seems like it, and that’s kinda scary from a computational point of view. We’d be meteorologists at best.

About the link stuff: hey Mark, apologies in advance for not being able to tell whether you’re joking or not. I hoped it was clear that this was a tongue-in-cheek way of saying “thanks” once again, while “lamenting” my need to do so so often. Sorry if it left too much room for other interpretations. Although I do like kittens. Please erase this from memory if you were joking too. I hereby vow never to attempt written jokes in English again.
Regards,
Guillaume

Going on intuition: something maybe worth following up on is the other mode the network goes into, which over time can mess up navigation, but on its own holds a memory of past events in its vector pattern. It’s the kind of thing that could be used to show all at once what it did all day, or, where ordering is preserved, could be unwound back to the starting state for a rough idea of what it did over time. I just posted additional information and an important update here:

I’m not sure what it means, but the network is storing memories in both of the mentioned modes, although only one seems to eventually cause navigation problems from becoming overwhelmed with information. I don’t want to send you chasing false leads, but this is something that at least needs, as they say, “further study”.

I had these pictures worrying me, sending me down a tangent for a while:


(Found here: http://paulbourke.net/papers/visualneuro/)

I don’t really know at this point how those stains should be interpreted. If it turns out they make the case for an LGN-to-V1 axonal mapping that is already pre-organized towards the goal of having V1 detect edges, it would jeopardize my whole approach.
I guess.
If, on the contrary, they only show a self-organized topographic dendritic connectome of V1 cells onto their a posteriori pertinent input, all the better.