Are V1 transforms pre-wired or learned?

Hi Gary, thanks for the link. I shall give it a deeper look.
At first glance, however (I knew I should have split that post somewhere), the question of how and why cells grow as they do seems orthogonal to my main concern here.
Funnily enough, one of the primary materials that sparked my line of thought came from one of your “works for science” videos, @Gary_Gaulin: the MIT course on vision given a few years back.

So please indulge me for a second, as this seems so important in my view:
As for the MIT course, I still haven’t viewed it all, but what already strikes me is the (not necessarily complete, but still HUGE) amount of knowledge we have about what the output from the retina looks like and what the typical V1 response looks like, coupled with the consistency of V1’s self-learned organization, even across quite different mammalian species.
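
Just to make concrete what that archetype knowledge looks like: the textbook models are a difference-of-Gaussians (center-surround) kernel for retinal ganglion cell output, and oriented Gabor patches for V1 simple-cell receptive fields. Here is a minimal NumPy sketch of both; all parameters (sizes, sigmas, wavelength) are illustrative choices of mine, not values fitted to physiology:

```python
import numpy as np

def difference_of_gaussians(size=21, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround kernel: the textbook model of retinal ganglion
    cell output. Parameters are illustrative, not fitted to data."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

def gabor(size=21, wavelength=8.0, theta=0.0, sigma=4.0, phase=0.0):
    """Oriented Gabor patch: the textbook model of a V1 simple-cell
    receptive field. Again, parameters are illustrative."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    x_rot = xx * np.cos(theta) + yy * np.sin(theta)
    y_rot = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + y_rot**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength + phase)
    return envelope * carrier

# A small "archetype" bank: Gabors at a handful of orientations.
bank = [gabor(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
```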

So I’d say there is a very strong case here for having a dream of an archetype to confront our cortical learning models against.
It may very well be the case (@bitking) that a larger hierarchy, other parts of the brain, or even unforeseen mechanisms are necessary for our models to start self-organizing as V1 does, given the same kind of input. Fair enough; we may be able to try adding those one by one, and more generally mess with all this until we get there.
But once we’re there… by virtue of cortical self-homogeneity, we’d have a hell of a good starting point for studying the various ways we can link / tweak / add other unforeseen mechanisms to clones of this little patch of brain, so that we’re well on the path to functionally replicating other parts of the cortex, given what we know about their specifics.
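
To make “confronting” concrete, here is one hedged sketch of what the comparison could look like, assuming a model exposes its learned receptive fields as 2-D arrays of the same shape as the `bank` from the sketch above. `gabor_match_score` is a name I made up, and normalized cross-correlation is just one possible metric; fitting full Gabor parameters per filter, as is done in the sparse-coding literature, would be a finer-grained test:

```python
def gabor_match_score(learned_filter, bank):
    """Best normalized cross-correlation between one learned filter and
    the Gabor archetype bank. Assumes learned_filter has the same shape
    as the bank entries. A score near 1 suggests the model has
    self-organized V1-like oriented receptive fields."""
    f = learned_filter - learned_filter.mean()
    f = f / (np.linalg.norm(f) + 1e-12)
    best = 0.0
    for g in bank:
        g0 = g - g.mean()
        g0 = g0 / (np.linalg.norm(g0) + 1e-12)
        best = max(best, abs(float(np.sum(f * g0))))
    return best
```

Something this simple would already let us ask, quantitatively, whether a given model fed retina-like input drifts toward the V1 archetype or away from it.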

If it turns out I’m the only one interested in this approach, I’d very much like to know why… but I’m unlikely to let the idea go without a well-educated “dead end” objection.
So, am I solo on this?