That exploration and evaluation is a mechanism, just a higher-order one. Nothing happens without a mechanism.
And once all maps are populated, by the end of adolescence at the latest, the brain stops learning?
There is always a displacement of old, lower-predictive-value memories. “Drive” and “want”: these anthropomorphic terms are misleading when we are talking about simple low-level mechanisms.
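In code, that displacement could be as dumb as this (a minimal sketch; the store, scores, and capacity are all invented for the example): a fixed-capacity memory where admitting a new trace evicts whichever existing trace currently has the lowest predictive value.

```python
capacity = 3
memory = {"pattern_a": 0.9, "pattern_b": 0.2, "pattern_c": 0.6}  # trace -> predictive value

def store(trace, value):
    """Admit a new trace, displacing the least predictive one if full."""
    if len(memory) >= capacity:
        weakest = min(memory, key=memory.get)
        del memory[weakest]  # the old, low-predictive-value memory is displaced
    memory[trace] = value

store("pattern_d", 0.7)
print(memory)  # pattern_b (value 0.2) has been displaced
```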
All that neurons are doing is recognising patterns: coincident inputs. Strong patterns propagate upstream and laterally (perhaps in @Bitking’s hexagons), while also suppressing downstream the predictable, low-additive-value future inputs. Weak or suppressed nodes invite replacement inputs from neighbouring downstream areas. I think your basic “pertinence” is downstream reinforcement of novelty vs. predictability, overlaid on upstream-detected input pattern strength, with similar lateral mechanisms: inhibition among same-source (expected-coincidence) co-activated nodes, and reverberation among different-source (unexpected-coincidence) co-activated nodes.
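If it helps make that concrete, here is a toy numerical sketch of the loop as I read it (every name and constant is invented for illustration, not a claim about cortical wiring): nodes respond to coincident inputs, a downstream prediction signal dampens expected activity, and lateral inhibition lets a winner suppress its co-activated neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 8
w = rng.uniform(0.4, 0.6, size=(n_nodes, n_nodes))  # toy input -> node weights
predicted = np.zeros(n_nodes)                        # downstream prediction signal

def step(inputs):
    """One toy update: coincidence detection, downstream suppression of
    predicted activity, and lateral inhibition among co-activated nodes."""
    global predicted
    activation = w @ inputs                  # coincidence detection: co-active inputs sum
    activation *= 1.0 - 0.8 * predicted      # predicted (low-novelty) inputs are damped
    winner = np.argmax(activation)
    activation[np.arange(n_nodes) != winner] *= 0.3  # lateral inhibition: winner-take-most
    predicted *= 0.9                         # predictions decay...
    predicted[winner] = 1.0                  # ...and track the latest winner
    return activation

x = (rng.random(n_nodes) > 0.5).astype(float)  # one binary input pattern
print(step(x))  # first presentation: full response
print(step(x))  # same pattern again: the previous winner is suppressed
```

Note how on the second presentation the suppressed winner leaves room for a different node to take over, which is one way to read “weak / suppressed nodes invite replacement inputs”.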
I believe some maps are pretty stable at some point (e.g. V1), and there should be something encouraging lower levels to stabilize somewhat before higher ones, imho. But having them “populated” wasn’t meant to represent the end of learning. Sorry for those sentences without context; this was a mix of several concerns, and we were talking (with @Bitking) about evolution and “adding more maps” to the cortical graph.
This is beginning to address my current question. But I don’t know yet what to make of it.
“All that neurons are doing” can be taken somewhat dismissively: okay, let’s move on and seek elsewhere an explanation for our musings about “intelligence”. Or it can be seen as something more fundamental: such explanations may lie right here, and we have not drawn that conclusion yet precisely because we’re too proud to place such a “simple low-level” mechanism in that position.
It is becoming clear to me that coincidence detection is great, but it isn’t great alone. We need something to filter out both the obvious coincidences and the fortuitous ones. STDP explains one implementation of coincidence detection, OK, that part is covered; now I’m looking for the missing part of the story: pertinence.
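For reference, here is a minimal pair-based STDP rule in Python, with illustrative constants (the amplitudes and time windows are arbitrary, not a claim about biology): coincidences where the presynaptic spike precedes the postsynaptic one potentiate the synapse; the reverse order depresses it.

```python
import math

# Illustrative constants; real neurons vary widely.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time windows in ms

def stdp_dw(t_pre, t_post):
    """Pair-based STDP weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: causal coincidence -> potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fired before pre: anti-causal -> depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(10.0, 15.0))  # small positive change (potentiation)
print(stdp_dw(15.0, 10.0))  # small negative change (depression)
```

The point is exactly the gap I mean: this rule detects and reinforces coincidences, but nothing in it says which coincidences are worth keeping.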
SOC (self-organized criticality) seems to address that with a global mechanism enforcing criticality. And maybe that’s all there is to it.
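To make “a global mechanism enforcing criticality” concrete, here is a generic homeostatic toy in Python (a sketch of the idea, not any specific published SOC model): a cascade process whose global transmission probability is nudged so that each spike triggers on average one successor, the critical branching ratio.

```python
import random

random.seed(0)

FANOUT = 4   # each spike can reach up to 4 successors (toy topology)
p = 0.45     # transmission probability; branching ratio = FANOUT * p (starts supercritical at 1.8)
ETA = 0.002  # homeostatic adjustment rate (arbitrary)

def avalanche(p, cap=1_000):
    """Run one cascade of spikes; return an estimate of the branching ratio."""
    active, total, parents, children = 1, 1, 0, 0
    while active and total < cap:
        nxt = sum(random.random() < p for _ in range(active * FANOUT))
        parents, children = parents + active, children + nxt
        active, total = nxt, total + nxt
    return children / parents

for _ in range(3_000):
    sigma = avalanche(p)
    # Global homeostasis: depress transmission after runaway cascades,
    # facilitate it after cascades that die out immediately.
    p += ETA * (1.0 - sigma) / FANOUT

print(f"branching ratio ~ {FANOUT * p:.2f}")  # hovers near the critical value 1.0
```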
You seem to address that by a mechanism of downstream inhibition. And maybe it fits that role entirely.
It seems you’re further along than I am in understanding that downstream-inhibition loop. If I had studied it further, I could decide whether that loop is indeed already well understood and we need to move on, or whether there’s more work to be done to understand how it comes to be.
And then, as for the “all that neurons are doing” part, be clever enough to discern whether such a mechanism is fundamental (and “sufficient”, in James’s sense) or not.
Thanks. I think I have a pretty good understanding of high-level function and architecture, but I’m very fuzzy on the computational level. I have no background or intense focus in neuroscience; my approach is not neuromorphic at all. I am trying to perform the same function, generalization, but via a very different low-level mechanism. I am exploring neural stuff too, as a plan B, and will try to flesh out the details later.