Humans take years to learn certain skills as babies/kids, but once a certain level of maturity is reached, it doesn’t always take a long time to learn new things. It might take a single showing, maybe two… of course, some things still take a long time for some people to learn.
So how does rapid learning work? Does Numenta have a theory for this? I looked but couldn’t seem to find anything on it.
Best to ask the question about animals. A magpie takes a long time (in magpie years) to learn to feed itself, but if you give food to an adult magpie just once, it will likely be back for more.
Isn’t that what memory is about? Making links between new sensory inputs and learned models of how the world works?
No, that’s not what I meant. It usually doesn’t take tens or hundreds of “training sessions” to learn new facts, like someone’s name or the name of a new city you travel to, while it takes many iterations to teach DL and HTM models new things.
Well, sometimes it does, other times it doesn’t. I was wondering if there is any theory about those times when it’s possible to learn very rapidly.
That’s because biological evolution did not select for function approximation, while ML/AI research uses it as the criterion for picking the “best” model. You won’t hear much about an algorithm that reaches 89% accuracy on MNIST after “seeing” a couple hundred training samples if it can’t surpass 99% after being exposed hundreds of times to the full 60,000-digit training dataset.
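To illustrate that sample-efficiency point, here’s a rough sketch (my own, not anyone’s benchmark) comparing a plain classifier trained on a couple hundred samples vs. the full training split. I’m using scikit-learn’s small built-in digits dataset as a stand-in for MNIST, so the exact numbers don’t matter, only the gap:

```python
# Rough sketch of the sample-efficiency point above.
# Assumption: scikit-learn is available; its small built-in digits dataset
# (8x8 images, ~1800 samples) stands in for MNIST here.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# "Few-shot" regime: only a couple hundred training samples.
few = LogisticRegression(max_iter=1000).fit(X_train[:200], y_train[:200])

# Standard regime: the whole training split, as many passes as the solver needs.
full = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy after 200 samples:      ", few.score(X_test, y_test))
print("accuracy after full training set:", full.score(X_test, y_test))
```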
It doesn’t need to be an adult. Chicken hatchlings learn to forage quite fast. It’s true that, unlike most birds, they are not mouth-fed by their parents, so they have to figure out what food is and where it is quite quickly.
I recall Jeff Hawkins speculating in a research meeting about something to do with silent synapses in the hippocampus. I forget what it was for, maybe one-shot learning or short-term memory. The synapses already exist, but they’re inactive (in HTM terms, their permanence values are too low).
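For what it’s worth, here’s a toy sketch of that “silent synapse” idea in HTM-ish terms (my own illustration, not Numenta code; the threshold and increment values are made up): a synapse exists as soon as it has a permanence value, but it only conducts once the permanence crosses the connected threshold, so one strong bump can switch it on in a single shot.

```python
# Toy illustration of a "silent synapse" in HTM terms (not Numenta code):
# the synapse exists as soon as it has a permanence, but it only participates
# ("is connected") once its permanence crosses a threshold.
CONNECTED_THRESHOLD = 0.5   # assumed value, just for illustration

class Synapse:
    def __init__(self, permanence):
        self.permanence = permanence

    def is_connected(self):
        return self.permanence >= CONNECTED_THRESHOLD

    def reinforce(self, amount):
        self.permanence = min(1.0, self.permanence + amount)

# A "silent" synapse: structurally present, functionally inactive.
syn = Synapse(permanence=0.45)
print(syn.is_connected())   # False

# A single strong learning event is enough to make it participate.
syn.reinforce(0.1)
print(syn.is_connected())   # True: one-shot activation of a pre-existing synapse
```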
The hippocampus can rapidly learn information. Throughout the day, you accumulate memories. While you sleep, the hippocampus replays the information, teaching it to the cortex.
It helps to already understand how things work and reuse that knowledge.
Agreed, but you don’t really need to know any anatomy for that. It’s easy enough to demonstrate multiple layers of learning/memory in human subjects. I don’t recall any animal experiments.
And sleep just isn’t that simple. There is no need for a ‘replay’ as such, which implies a time element. It’s more like a cyclical recataloguing and reinforcement. I think that can be demonstrated in animals.
After we grow up, it doesn’t take much input or information to learn something new, because the foundation or basic knowledge was already learned. However, if we try to learn something in a completely different domain, like a chef trying to learn ML, rapid learning no longer happens. A CS student, on the other hand, can learn ML rapidly.
There are experiments where the hippocampus replays sequences of place-cell activity during sleep. Some replay also happens while awake, I think, and sometimes in reverse order. It’s complicated, like everything in neuroscience. There could easily be other things going on in sleep that help us remember, of course.
Everything is on a graph, and we move along the nodes of that graph. Since the structure of the graph is homogeneous, moving left then right has the same abstract meaning everywhere on it. Since the atoms of knowledge are the same for all knowledge, we just need to find a new way to move along this already-built graph in order to rapidly learn something new.
Just an uneducated guess: “rapid learning” would take some already-formed circuit and tune/augment it to accommodate the new trick. That’s likely why teachers use metaphors/analogies to help students learn faster.
So babies/kids just lack enough formed circuits for that.
And though adults “rapidly” learn things that are “similar enough”, they tend to reject things they’ve never seen in their lifetime.
Not sure there needs to be a theory apart from the fact HTM machinery is a fast learner by design.
It takes whatever input arrives at t1, and after t2 (a small part of) the network is already set to predict whatever input arrived at t2 the next time the t1 input shows up.
And, besides that, the “input at the previous time step” from a cell’s perspective can be not only the actual input but also the state of any other cell within the network, or just the neighboring cells.
You can call it speculative learning: it speculates that any arbitrary sequence in the input will repeat, and whatever sequence actually does repeat becomes… a learned temporal pattern.
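A minimal sketch of that “fast by design” behaviour (just my own toy code, not htm.core, and with none of the SDRs, minicolumns or permanences of real HTM): after seeing A followed by B a single time, the memory already predicts B the next time A shows up.

```python
# Toy one-step "speculative" sequence memory (my own sketch, not htm.core).
# After a single exposure to A -> B, it already predicts B when A appears again.
from collections import defaultdict

class OneStepMemory:
    def __init__(self):
        # "input at previous time step" -> set of inputs that have followed it
        self.transitions = defaultdict(set)
        self.prev = None

    def step(self, current):
        """Return what was predicted for this step, then learn the new transition."""
        predicted = set(self.transitions.get(self.prev, set()))
        if self.prev is not None:
            self.transitions[self.prev].add(current)  # learned from one exposure
        self.prev = current
        return predicted

mem = OneStepMemory()
for token in ["A", "B", "A", "B"]:
    print(token, "was predicted:", mem.step(token))
# Nothing is predicted for the first three inputs, but the second time "B"
# arrives (right after "A"), it was already predicted from the single A -> B pair.
```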
My view on that is that we don’t learn new tasks the same way when we are developing vs. when we are one-shotting them.
In my view, our young brains are probably learning a “virtual machine” that is able to represent goals/actions/facts, kind of like “data/software”. After a certain stage of development, when learning new stuff we are probably just modifying the “raw data” and very little, if any, circuitry.