Oops. @Gary_Gaulin, seeing your answer, I guess I misunderstood what you were referring to as two-dimensional. Having seen your in-code diagram of a retina slice, and since the top-down view requires the same “slices” of retina (I believe?), I thought you were talking about that: the (polar?) 1D-ness of your receptor, the 2D-ness of the retinal diagram, and/or the 2D-ness of the worldview.
In light of this, I was answering about the 3D-ness of my required corresponding retinal diagram. But now I take it you meant either the 3D-ness of the environment and of perception, or even my strange insistence on binocular vision.
Well, as for the binocular obsession, maybe it could be simplified away; I don’t know. In any case, my rationale for it is best addressed by this:
As for the 3D-ness of the visual environment, well… it comes from either the current workflow devised by SimLeek, which uses a physical camera, or my initial synopsis, which takes the output of a 3D rendering. I believe it is within reach of current 3D rendering techniques to produce believable texture, illumination, and contrast properties at the “edges” of rendered objects. I also have some experience in this kind of thing, so, time considerations aside, it would even be within “my” reach.
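To give a feel for what I mean by contrast at the edges: a single Lambertian-shaded sphere already shows it, smooth illumination inside the silhouette and a sharp luminance step at it. This is just a toy numpy sketch of mine, not SimLeek’s workflow nor a real renderer:

```python
# Toy illustration only: a Lambertian-shaded sphere rendered with numpy,
# to show a smooth illumination gradient inside the object and a sharp
# contrast step exactly at its silhouette edge.
import numpy as np

H = W = 256
ys, xs = np.mgrid[-1:1:H * 1j, -1:1:W * 1j]
r2 = xs**2 + ys**2
inside = r2 <= 0.8**2                                # sphere silhouette

# Surface normals of the visible hemisphere (z points toward the camera).
z = np.sqrt(np.clip(0.8**2 - r2, 0.0, None))
normals = np.stack([xs, ys, z], axis=-1)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-9

light = np.array([0.5, -0.5, 0.7])
light = light / np.linalg.norm(light)

shading = np.clip(normals @ light, 0.0, 1.0)         # Lambert's cosine law
image = np.where(inside, 0.1 + 0.9 * shading, 0.4)   # mid-grey background
# 'image' now has believable illumination inside the disc and a contrast
# discontinuity at the rendered object's edge.
```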
I could go with totally abstract 2D drawings of squares and balls, as done for example in that paper, and give it a shot (see the little sketch after the quote below).
But I doubt it could work for my purposes, as in:
… And I hope babies whose vision was developmentally studied were still more familiar with the look of their carrycot, their mother’s face, their toys, or the pace of the housecat than they were with the test screens.
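For reference, producing such squares-and-balls screens is the easy part. Here is a minimal sketch; everything in it (sizes, placement, the generator itself) is my own hypothetical choice, not the paper’s actual protocol:

```python
# Hypothetical stimulus generator, not the paper's protocol: white squares
# or balls at random places on a black field, the kind of screen a
# developmental-vision lab might flash.
import numpy as np

rng = np.random.default_rng(0)

def abstract_stimulus(side=128, shape="square"):
    """One white square or ball at a random position on a black field."""
    img = np.zeros((side, side), dtype=np.float32)
    half = int(rng.integers(8, side // 4))            # half-width / radius
    cy, cx = rng.integers(half, side - half, size=2)  # keep shape on-screen
    ys, xs = np.mgrid[:side, :side]
    if shape == "square":
        mask = (np.abs(ys - cy) <= half) & (np.abs(xs - cx) <= half)
    else:                                             # "ball": filled disc
        mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= half ** 2
    img[mask] = 1.0
    return img

# A small batch of test frames, alternating squares and balls.
batch = [abstract_stimulus(shape=s) for s in ["square", "ball"] * 8]
```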
Because, here’s the catch: I’m not really after 3D as a test case. I’m after 3D as training.
In fact, I’m not aiming at directly comparing what we sense in 3D to what V1 outputs, or at messing with edge transforms to see which matches where in the environment.
I’m aiming at training a V1 model on as-common-as-possible input, so that if the V1 model then self-organizes, from its exposure to realistic visual stimuli, into a lab-testable edge detector (testable here with abstract, 2D edgy stuff), we’d know the whole model for that little patch of cortex is on the right track.
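To make the train-then-probe loop concrete, here is a minimal sketch under stand-in assumptions of my own: sparse dictionary learning in place of the actual V1 model (in the spirit of Olshausen & Field’s result that sparse codes of natural images develop edge-like bases), a stock photograph in place of realistic 3D-rendered stimuli, and synthetic oriented step edges as the abstract lab test:

```python
# A stand-in for the idea, not the actual V1 model: fit a sparse code on
# patches from an ordinary photograph, then probe the learned atoms with
# abstract 2D edge stimuli to see whether edge detectors emerged.
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

# 1. "As-common-as-possible" input: greyscale patches from a stock photo.
photo = load_sample_image("china.jpg").mean(axis=2) / 255.0
patches = extract_patches_2d(photo, (8, 8), max_patches=5000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)            # remove local luminance

# 2. Let the model self-organize from this exposure.
dico = MiniBatchDictionaryLearning(n_components=49, alpha=1.0, random_state=0)
dico.fit(X)

# 3. Lab test with abstract, 2D "edgy stuff": synthetic oriented step edges.
def step_edge(theta, size=8):
    ys, xs = np.mgrid[:size, :size] - (size - 1) / 2
    edge = (np.cos(theta) * xs + np.sin(theta) * ys > 0).astype(float)
    return (edge - edge.mean()).ravel()

probes = np.stack([step_edge(t)
                   for t in np.linspace(0, np.pi, 12, endpoint=False)])
responses = np.abs(dico.components_ @ probes.T)   # atoms x orientations

# An atom counts as edge-like if it responds selectively to one orientation.
selectivity = responses.max(axis=1) / (responses.mean(axis=1) + 1e-9)
print(f"{(selectivity > 2.0).sum()} / {len(selectivity)} atoms look orientation-selective")
```

The only point here is the shape of the experiment: nothing edge-specific is built into the training stage, so any orientation selectivity that shows up under the probes had to self-organize from the exposure itself.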
So that is maybe even more ambitious…
However, I’m not necessarily imagining this as a one-man attempt:
Regards,
Guillaume