We’ve reached out to P. Cisek and may be able to get some illustrations after all.
In the meantime, I’ve started reading the most excellent book “The Evolution of Memory Systems: Ancestors, Anatomy, and Adaptations” by Murray, Wise, and Graham.
https://books.google.fr/books/about/The_Evolution_of_Memory_Systems.html?id=rcpLDQAAQBAJ&redir_esc=y
Really a very nice read so far, and I’d advise anyone to have a look.
I’m frustrated by a missing puzzle piece in my understanding, however.
In their description of an evolutionary line, they present (with terminology I won’t embrace yet, in case you haven’t read the book) some basic, universal capacity (as in: almost any animal with a NN, across all lineages) as a first component of learning (or memory).
Somehow they tie this first component to Pavlovian findings. I’m not disputing the claim, but for it to be possible at all, in some of the examples cited, it seems to me that there must be some form of… “retainment” (I don’t want to use “memory” there yet) of a sensory condition, long enough to tie it to a “future” (how far in the future, by the way?) valued outcome.
Some STM, if you wish.
Being here on this forum, the next best thing beyond such conceptual knowledge is modeling: I’m mostly interested in how to model this, so it’s quite important to me that we be able to describe such a retainment system in very, very basal (and basic) NNs.
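To make concrete what I mean by “retainment”, here’s the simplest toy I can picture: a leaky trace that keeps a fading echo of a past input. All names and constants below are mine and purely illustrative; I’m not claiming this is what any real nervous system does.

```python
# Toy sketch of "retainment" (my own naming and constants, purely
# illustrative): a leaky trace that holds a fading echo of a past
# sensory input, instead of letting it vanish at the next timestep.

def step_trace(trace, sensory_input, decay=0.9):
    """Leaky integration: new trace = faded old trace + current input."""
    return decay * trace + sensory_input

trace = 0.0
stimulus = [1.0, 0.0, 0.0, 0.0]  # the sensory condition is present only at t=0
for t, s in enumerate(stimulus):
    trace = step_trace(trace, s)
    print(f"t={t}  input={s}  trace={trace:.3f}")
# By t=3 the input itself is long gone, but the trace (~0.73) is still
# nonzero -- so a "valued outcome" arriving now has something to bind to.
```

(For what it’s worth, this looks a lot like what the RL literature calls an eligibility trace, but I don’t want to presume the mechanism.)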
Yet the only Pavlovian experiment on a “small and simple” creature, with accompanying neurological results, that I’ve found so far seems to involve the wiring of a sensory input to a (sadly) concomitant internal representation:
https://www.sciencedirect.com/science/article/pii/S0960982219303872
“Sadly” because, whatever the internal representation, I’d have no problem imagining wiring it to the “current sensory input”.
…
Wiring it to some “past (even recent past) sensory input” is what eludes me.
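To pin down exactly what eludes me, here’s the contrast in toy form (again my own illustrative code, nothing from the cited paper): a plain co-activity (“fire together, wire together”) update can only link signals that overlap in time, whereas driving the same update from the decaying trace above bridges the gap.

```python
# Two versions of the same associative update (my own toy, purely
# illustrative). "cs" is the sensory condition, "us" the valued outcome.

decay, lr = 0.9, 0.5
cs = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # sensory condition, present at t=0 only
us = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # valued outcome, arriving 3 steps later

w_hebb = w_traced = trace = 0.0
for s, r in zip(cs, us):
    trace = decay * trace + s
    w_hebb += lr * s * r        # co-activity: cs and us never coincide -> stays 0.0
    w_traced += lr * trace * r  # the trace bridges the 3-step gap -> ~0.36

print(w_hebb, w_traced)
```

But that only restates the requirement: something in the circuit has to play the role of `trace`, and what that something could be in a truly basal NN is precisely what I can’t find.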
Any ideas on this, anyone?
Recall we’re speaking here of very primitive NNs, so if that idea could just… not involve the hippocampus (HC), that would be neat.