Project: Full-layer V1 using HTM insights

And you are right :slight_smile:

It is. The ‘target’.

‘Modelling’… ‘Checking’. Yup, precisely. I don’t see a computer model as a despicable end-product. It is an integral part of our R&D (or even theorization) toolbelt now, for modelling and checking, as much as pen and paper are for drawing boxes, arrows, and/or equations.

You do realize that what I find the most interesting known output of all, as far as V1 goes, is the output of the ‘learning’ function itself, right?
aka the end-state of those cells, dendrites, and synapses after exposure to the stimuli.

Assuming that you do… I don’t quite understand what we’re disagreeing about here.

  • If you think V1 formation is so complicated that it won’t work in isolation, then we’ll try to add parts of a hierarchy. I stated as much already. That endeavor could itself give us some evidence for this very requirement.
  • If you think V1 formation is so simple that any model would do, and thus that we won’t get any insight from reaching it, then… well, I don’t think it will be that easy. But fair enough; it is a possible concern. If that turns out to be the case, we can always turn the ‘probing’ part on its head and look for models which fail, or strip ingredients one by one to get a clue about which ones are necessary (a rough sketch of that sweep follows right below)…
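
To make that ‘strip ingredients one by one’ idea a bit more concrete, here is the kind of sweep I have in mind, as a rough sketch only. The ingredient names and the `train_and_score()` stand-in are hypothetical placeholders, not an actual model; the point is just the shape of the loop.

```python
# Hypothetical ablation sweep: drop one candidate ingredient at a time and
# check whether edge-like tuning still emerges without it.
INGREDIENTS = ["hebbian_growth", "synaptic_decay", "lateral_inhibition", "homeostasis"]

def train_and_score(enabled):
    """Stand-in: train a model with only `enabled` ingredients active, probe
    its learned cells, and return an orientation-tuning score in [0, 1].
    To be replaced by the real training + probing pipeline."""
    return len(enabled) / len(INGREDIENTS)   # dummy score, illustration only

full_score = train_and_score(set(INGREDIENTS))
for dropped in INGREDIENTS:
    score = train_and_score(set(INGREDIENTS) - {dropped})
    verdict = "looks necessary" if score < 0.9 * full_score else "maybe optional"
    print(f"without {dropped:<18} score={score:.2f} -> {verdict}")
```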

We’ll learn ‘something’ either way.

I don’t know how my coding skills are relevant to the discussion, since you did understand that I don’t want to hardcode V1-like output (or didn’t you? the purpose is not a clever edge detector for its own sake), but rather to let a model learn online and see whether its cells tune themselves towards edge detection and the like.
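
For what it’s worth, here is a minimal sketch of what I mean by ‘let a model learn online, then probe what its cells became’. The learning rule (a Sanger/Oja-style Hebbian update) and the noise stimulus are pure placeholders picked for brevity; I’m not claiming they are the right ones, HTM-wise or biologically, and real natural-image patches would have to be swapped in before the probe tells us anything.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 8                     # 8x8 pixel patches -> 64 inputs per cell
D = PATCH * PATCH
N_CELLS = 16                  # small population of simulated 'simple cells'
LR = 0.005
STEPS = 50_000

def next_patch():
    """Stand-in stimulus: zero-mean random noise. Swap in (whitened) patches
    cut from natural images before reading anything into the probe."""
    x = rng.normal(size=D)
    return (x - x.mean()) / (x.std() + 1e-8)

W = rng.normal(scale=0.1, size=(N_CELLS, D))   # 'synaptic weights', one row per cell

for _ in range(STEPS):
    x = next_patch()
    y = W @ x                                  # linear cell activations
    # Sanger-style Hebbian update (placeholder rule): weights grow with
    # input/output correlation; the subtracted term keeps them bounded and
    # nudges the cells towards distinct features instead of one shared one.
    W += LR * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# The 'probe': simply look at what the cells turned into. With real image
# input, oriented edge/bar-like structure in these reshaped rows is exactly
# the known V1-like outcome we would be checking for.
receptive_fields = W.reshape(N_CELLS, PATCH, PATCH)
print(receptive_fields[0].round(2))
```

The part that actually interests me is the last two lines: the end-state of the weights after exposure, not any hand-coded filter.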

Now, if your concern is that I can’t model anything before having a well-defined model in the first place, I’ll restate that ‘let a model learn online’ in the sentence above will more likely turn out to be an iterative ‘let several, many theorized models, in turn, learn online’, and see which of them succeeds. I may already have some insights for the first tries, granted… but I’m not putting too much confidence in them anyway, and all these models (against which to "check the plausibility of your theoretical ideas") could very well be dug out, refined, or invented as we go.

‘Invented’… Hacker-style :dark_sunglasses: since I’m no Einstein, sadly.

To conclude… I don’t know how V1’s decision to form ‘simple’ edge detection when exposed to visual stimuli is relevant to (A)GI. But I strongly bet that it is. Relevant. V1 is cortex; we both agreed on that, it seems. And I believe that by witnessing concretely ‘how’ V1 would be driven to come to that particular choice, we’d gain insight into precisely that:
"What stands as relevant info and/or coincidences to wire to, from an (A)GI’s substrate point of view".
Quite the nut to crack, if you ask me.