Are V1 transforms pre-wired or learned?

Thanks for that reply, @SimLeek. I’ll have to take a closer look at PyGP; nice visuals in any case :wink: I’ll certainly answer about that once I get a better grasp on it, probably on the other topic.

For the moment, I wanted to try to address the other interesting point you raise about the learning part.
I may be wrong, but labelling “what happens” as learning/growth/decay/genetically-encoded-behaviour/cell-intelligence/chemistry/self-organization/nature/god’s-hand is mostly a non-issue in my view.
What is important to me (as a wannabe AGI-building contributor) is that:

  • V1 is not yet organized at birth, and organizes in a very consistent way a few weeks afterwards. To my mind, “not at birth” and “yes, a few weeks after” is strong evidence for an organization that is dependent on visual input. Otherwise, why not have it already organized in utero? It may be a developmental-timing coincidence, but I somehow doubt it.
  • V1 is part of the neocortex, that very substrate which is hypothesized by HTM and others to operate on universal properties or a universal algorithm, and to be able to somehow “make sense” of whatever input it gets. So an additional clue to the above point is: why choose the “all-purpose, able-to-learn, expensive neocortex” for it, if it is to finally map the function independently of input? And an additional benefit for us is: we have here a well-presented, simple “example” of how that elusive stuff will universally learn when subjected to typical daylight, airborne vision as presented by retinal/geniculate output.

The fact that this organization is somehow dependent on its input is enough for me to use the word “learning”. But I do not care about the word. What I really care about, however, is understanding how the organization of the neocortex depends on its input.

Hope that makes sense. I’ll be happy to discuss if there are things you think I’m overlooking here.
Take care :slight_smile:
Guillaume