Are V1 transforms pre-wired or learned?

Oh hey Matt, sorry for the moderation work, but this was a newbie question, right? :blush:

I may develop the idea I have, maybe when I get back home tonight. I'm not really sure whether this is something I should try out on my own, whether it's been done already, or whether, on the contrary, I should ask why it has not been.
But maybe this part should go to a modeling/hacking/computational thread elsewhere on the forum?

So: I had a desire to try a model for some patch of cortex, mixing some HTM insights with other stuff, and see what comes out of it. I was going to write about that model in more detail for you guys to comment on…

And, well, what would have been nice - at least in my mind - apart from the model, is that I could imagine some training inputs, and after learning it would maybe be testable against V1, because…
Now wait a minute. Better to ask first whether V1 actually learns, before anything else.
Ah yes. It does.
So, yeah, what would have been nice, apart from the model, is that its learni…
Now WAIT a minute

What I realize now is that the model I had in mind is not that important.
What seems important is this set of facts, brought together:

  • We all seem to have faith here in the hypothesis that neocortical homogeneity denotes a single, universal cortical algorithm, which we'd like to understand and replicate.
  • Our great wealth of knowledge about the structure of the cortical column, however, does not readily tell us much about “how it comes to work”, for lack of simple parts that map to clear semantics (no grandmother cell anywhere…)
  • But it turns out we have almost-as-good-as-grandma here…

V1

  • Most cells in V1 seem to have clear-cut, by-now well-known semantics: edge detectors, orientation matchers and whatnot
  • Our visual perception of the universe is also quite well known, to the point that we can computationally generate a “rendering” of it. Even more so if limited to the boundaries of a carrycot.
  • So we have here both the input and a conceptually simple, highly topological target organization, in reach after a relatively limited learning time, and consistently testable against simple, expected semantics…
  • Moreover, I believe we could get away with modelling only a subpart of V1 around foveal input, and - should the need arise - the mapping to a subset of higher-level V2 is still highly topological and seemingly well known. I believe the sister-level ocular-motor machinery is, too.
  • V1 being in every way part of this great homogeneous cortical ensemble, if we manage to reverse-engineer the way it comes to “understand” its specific input, then we have the blueprint for it all. Right?
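As an aside on that first bullet: the textbook computational stand-in for a V1 simple cell is a Gabor filter, and combining an even/odd pair of them gives the phase-invariant “energy model” of a complex cell. Here is a minimal NumPy sketch of that idea - all sizes and parameters are arbitrary illustrations, not anything from this post:

```python
import numpy as np

def gabor_pair(size, theta, sigma=3.0, wavelength=6.0):
    """Even/odd Gabor kernels at orientation theta (radians): the classic
    linear model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coords rotated to theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * xr / wavelength)
    odd = env * np.sin(2 * np.pi * xr / wavelength)
    # zero-mean kernels: no response to uniform illumination
    return even - even.mean(), odd - odd.mean()

def grating(size, theta, wavelength=6.0):
    """A hard-edged grating at orientation theta -- a toy V1 stimulus."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.sign(np.sin(2 * np.pi * xr / wavelength))

def energy(stimulus, theta):
    """Phase-invariant 'complex cell' response (sum of squared
    simple-cell responses from the quadrature pair)."""
    even, odd = gabor_pair(stimulus.shape[0], theta)
    return np.sum(even * stimulus) ** 2 + np.sum(odd * stimulus) ** 2

stim = grating(21, np.radians(0))           # grating oriented at 0 degrees
responses = {d: energy(stim, np.radians(d)) for d in (0, 45, 90, 135)}
best = max(responses, key=responses.get)
print(best)  # 0 -- the detector tuned to the stimulus orientation wins
```

Of course, the whole point of the question above is that in V1 these filters are (at least partly) *learned* from input statistics rather than hand-written like this, which is exactly what makes it an attractive reverse-engineering target.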

Now,
Since it looks like an obvious research path, in my view,
Since this kind of knowledge about V1 is years or decades old, and the cortical-similarity-and-universality hypothesis is half a century old,
Since I believe I'm not the Nobel type,
Since there are many otherwise-known things I do not know, but which I'd still be glad to learn about,

Where exactly is the part I'm missing here? Why have computational models of V1 not already been extensively tried, messed with, and connected in all possible ways until getting to an actual AGI?
(Other than @bitking-style concerns that there is much more to AGI than neocortex - a position I acknowledge, but which I believe is not the primary view here)