Project: Full-layer V1 using HTM insights

And you are right :slight_smile:

It is. The ‘target’.

‘Modelling’… ‘checking’. Yup, precisely. I don’t see a computer model as a despicable end product. It is an integral part of our R&D (or even theorization) toolbelt now, for modelling and checking, as much as pen and paper are for drawing boxes, arrows, and/or equations.

You do realize that what I find the most interesting known output of all, as far as V1 goes, is the output of the ‘learning’ function itself, right?
That is, the end state of those cells, dendrites, and synapses after exposure.

Assuming that you do… I don’t quite understand what we’re disagreeing over here.

  • If you think V1 formation is so complicated that it won’t work in isolation, then we’ll try adding parts of a hierarchy. I stated as much already. That endeavor could itself give us evidence of this very requirement.
  • If you think V1 formation is so simple that any model would do, and thus we won’t gain any insight by reaching it, then… well, at this point I don’t think it will be that easy. But right, it is a possible concern. If it turns out to be the case, we can always turn the ‘probing’ part on its head and look for models which fail, or strip ingredients one by one to get a clue about which ones are necessary.
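The ‘strip ingredients one by one’ idea amounts to a simple ablation loop. A hedged sketch of its shape follows; the ingredient names and the `trains_edge_detectors` stand-in are made-up placeholders, not claims about the actual model:

```python
# Hypothetical mechanism names; in a real run each flag would toggle
# one mechanism of the candidate V1 model.
INGREDIENTS = ["hebbian_plasticity", "lateral_inhibition", "synaptic_decay"]

def trains_edge_detectors(active):
    """Stand-in for: run the model with these mechanisms enabled,
    expose it to visual input, then probe whether its cells ended up
    edge-tuned. The rule below is a made-up placeholder outcome."""
    return {"hebbian_plasticity", "lateral_inhibition"} <= active

# Drop one ingredient at a time: an ingredient counts as 'necessary'
# if the model stops forming edge detectors without it.
necessary = [
    dropped
    for dropped in INGREDIENTS
    if not trains_edge_detectors(set(INGREDIENTS) - {dropped})
]
print(necessary)
```

The harness itself is trivial; all the actual work hides inside the probing function, which is exactly the ‘known-output check’ discussed above.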

We’ll learn ‘something’ either way.

I don’t know how my coding skills are relevant to the discussion, since you did understand that I don’t want to hardcode V1-like output (or didn’t you? The purpose is not a clever edge detector for its own sake), but rather to let a model learn online and see whether its cells tune themselves towards edge detection and the like.
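To make ‘let a model learn online’ concrete, here is a minimal, hedged sketch, not the intended V1 model: a single ‘cell’ trained with Oja’s rule (a classic Hebbian rule with built-in normalization) on synthetic, randomly oriented edge patches. The patch generator, patch size, and learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_edge_patch(size=8):
    """Synthetic stimulus: an 8x8 luminance edge at a random orientation."""
    angle = rng.uniform(0.0, np.pi)
    gy, gx = np.mgrid[0:size, 0:size] - (size - 1) / 2
    patch = np.sign(np.cos(angle) * gx + np.sin(angle) * gy)
    return (patch - patch.mean()).ravel()

# One 'cell': a weight vector updated online, stimulus by stimulus.
w = rng.normal(scale=0.1, size=64)
lr = 0.001
for _ in range(5000):
    x = make_edge_patch()
    y = w @ x                    # cell response to this stimulus
    w += lr * y * (x - y * w)    # Oja: Hebbian growth + self-normalizing decay

# After exposure, probe the end state: the weight norm settles near 1,
# and the weights have aligned with the dominant input structure
# (oriented luminance gradients), without any hardcoded edge detector.
print(np.round(np.linalg.norm(w), 2))
```

The point is only the shape of the experiment: expose, let the synapses self-organize online, then probe the end state. Whether a far richer, HTM-informed model develops edge tuning the same way is precisely the open question.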

Now, if your concern is that I can’t model anything before having a well-defined model in the first place, I’ll restate that ‘let a model learn online’ in the sentence above will more likely turn out to be an iterative ‘let several, many, theorized models learn online in turn’, and we’ll see which of them succeeds. I may already have some insights for the first tries, granted… but I’m not putting too much confidence in them anyway, and all those models (against which to “check the plausibility of your theoretical ideas”) could very well be dug out, refined, or invented as we go.

‘Invented’… hacker-style :dark_sunglasses: since I’m no Einstein, sadly.

To conclude… I don’t know how V1’s decision to form ‘simple’ edge detection when exposed to visual stimuli is relevant to (A)GI. But I strongly bet that it is. Relevant. V1 is cortex; we both agreed on that, it seems. And I believe that by witnessing concretely ‘how’ V1 would be driven to come to that particular choice, we’d gain insight into precisely that:
“What stands as relevant info and/or coincidences to wire to, from an (A)GI’s substrate point of view.”
Quite the nut to crack, if you ask me.