Does the brain memorize or not - proof?


#1

Hi there, I’ve got a question that’s probably related to neuroscience/biology, or maybe just a philosophical one. I’m not an expert in any of these fields, including HTM, so please bear with me.

Here’s my question: is there any kind of evidence, proof, or theory (biological or mathematical) that the brain is not memorizing, given its capacity for storing data (the enormous number of neural connections, their continuous values, and their combinations)?

By memorizing I mean storing patterns encoded with an advanced (probably unknown) algorithm, such that at a later time they are retrieved with an advanced (probably unknown) and fast operation.
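
To make that concrete, here is a toy sketch in Python of what I mean (the `encode` function is just a hypothetical stand-in for whatever advanced algorithm the brain might use, and the dictionary stands in for fast retrieval):

```python
# Toy illustration of "memorizing" as encode -> store -> fast retrieve.
def encode(pattern: tuple) -> int:
    # Hypothetical stand-in for an advanced (unknown) encoding; here just a hash.
    return hash(pattern)

class ToyMemory:
    def __init__(self):
        self.store = {}  # encoded key -> original pattern

    def memorize(self, pattern: tuple) -> None:
        self.store[encode(pattern)] = pattern

    def retrieve(self, pattern: tuple):
        # Fast (average O(1)) retrieval of a previously stored pattern.
        return self.store.get(encode(pattern))

mem = ToyMemory()
mem.memorize(("red", "round", "sweet"))
print(mem.retrieve(("red", "round", "sweet")))  # ('red', 'round', 'sweet')
```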

cheers


#2

I don’t have a problem in the world assuming that the cortex is an extreme learning machine.

It may come as a surprise to you, but HTM performs many of the memory functions you call out as advanced - right now - without waiting for some future technology.

You have to put it in the correct relationship with the sub-cortical structures; these lead the cortex around like a tame puppy. That relationship points the cortex at the right attention targets so that scanning and scene digestion occur. The emotional weighting of the scanning drives decisions - arguably another learned behavior.

But yes, an extreme learning machine.


#3

Oh - and we have some pretty interesting one-shot learning and hierarchical processing models floating around this forum. HTM is a good place to start.


#4

Hi @Bitking, thank you for your response. My background is in computer science, so I didn’t quite understand the third paragraph of your reply. Could you please elaborate on the extreme learning machine idea? Am I correct to assume that you somehow agree that the brain is memorizing?

One of the reasons I ask is that I am quite curious about why AI/ML practitioners always equate learning with generalization. I know generalization is appealing because it looks more like intelligence and there is math that backs it up, at least for now. But what if, biologically/computationally, the cortex is really just a preferential and volatile data memory, like an ant trail perhaps, and these memories are simply permutations of neuron connections? There must be a proof (that I’m likely ignorant of) that the cortex is trying to generalize when it learns, because all of today’s ML algorithms (e.g. gradient descent) focus on that mindset rather than on memorization, which would probably involve more information theory (e.g. encoding, decoding, compression). Surprisingly, though, HTM has many of these techniques.
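
To illustrate the point, here is a toy sketch (plain Python/NumPy, not tied to any particular theory) of a learner that does nothing but memorize, a 1-nearest-neighbour lookup over stored examples, yet still answers queries it has never seen. Whether that deserves to be called generalization is exactly my question:

```python
import numpy as np

# A pure memorizer: store every training example verbatim, then answer
# queries with the label of the closest stored example (1-nearest-neighbour).
class MemorizingLearner:
    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        self.X, self.y = X, y  # no compression, no gradient descent

    def predict(self, q: np.ndarray):
        distances = np.linalg.norm(self.X - q, axis=1)
        return self.y[np.argmin(distances)]

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
y = np.array(["small", "small", "big", "big"])

learner = MemorizingLearner()
learner.fit(X, y)
print(learner.predict(np.array([0.4, 0.6])))  # "small" - never seen, yet answered
```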


#5

Hmmm. Sort of a mathy background?
Let’s try this. HTM learns on a single presentation with unsupervised live learning. It can learn a lot; as neural networks go, that’s outstanding performance. The basic SDR theory is well documented in the Numenta papers. What I got from it is that dendrites are capable of encoding a ridiculously large number of features, and the connections between those features.
http://arxiv.org/abs/1503.07469
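
To give a rough sense of the numbers in that paper, here is a quick back-of-the-envelope sketch, assuming the commonly used values of n = 2048 cells with w = 40 active and an overlap threshold of 20 (all tunable):

```python
from math import comb

n, w = 2048, 40   # SDR width and number of active bits (typical HTM values)
theta = 20        # overlap threshold for declaring a "match"

# Number of distinct SDRs with exactly w active bits out of n.
print(f"distinct SDRs: {comb(n, w):.3e}")

# Probability that a random SDR overlaps a given one in at least theta bits
# (exact hypergeometric tail, as in the SDR capacity analysis).
p_false = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1)) / comb(n, w)
print(f"false-match probability at overlap >= {theta}: {p_false:.3e}")
```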

So as the world streams in, it is continuously parsed based on prior learning. You build up an internal model to match the external perception. If they don’t match, you are “surprised” and learn this new pattern. Delta coding! Everything you learn is in terms of what you have learned before.
So when you are seeking novelty, you are really trying to experience orthogonal patterns in each of your sensory streams.
This business of forming an internal model is something that should be grounded in observed function.
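
Here is a toy rendering of that surprise-and-learn loop, under my own simplifying assumptions (patterns as sets of active bits, a match being sufficient overlap with something already known; an illustration, not actual HTM code):

```python
# Toy "surprise and learn" loop: compare each incoming pattern against what
# the model already knows; store it only when nothing known matches well.
def overlap(a: frozenset, b: frozenset) -> int:
    return len(a & b)

known = []            # patterns learned so far
MATCH_THRESHOLD = 3   # minimum shared active bits to count as "recognized"

stream = [
    frozenset({1, 2, 3, 4}),
    frozenset({1, 2, 3, 5}),    # close to the first: recognized, no surprise
    frozenset({9, 10, 11, 12}), # orthogonal to everything known: surprise!
]

for pattern in stream:
    if any(overlap(pattern, k) >= MATCH_THRESHOLD for k in known):
        print("recognized:", sorted(pattern))
    else:
        print("surprised, learning:", sorted(pattern))
        known.append(pattern)
```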

It’s not like a film strip, but it encodes what a critter might need to know about what it’s seen before.

I don’t know how to respond to the claim that generalization is the end product to be desired. I see it as part of the descriptive functions. Forming an internal model is just another description of learning, which we do well.


#6

Sort of. Thanks for the explanation, this is very informative.


#7

Yeah, Krotov is combining convolution with dense associative memory to get better generalization. I think that for many problems, if you have suitable pre-processing to give, say, elements of rotation, scaling, and translation invariance, plus massive amounts of memory (as the human brain has), that is probably as effective as current deep neural networks. Especially if you use error-correcting associative memory.
https://researcher.watson.ibm.com/researcher/view.php?person=ibm-krotov
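
For a concrete feel of that error-correcting behaviour, here is a minimal sketch of a classic Hopfield-style associative memory (the basic Hebbian version, not Krotov’s dense variant): store a few random ±1 patterns, corrupt one, and let recall clean it up:

```python
import numpy as np

# Classic Hopfield associative memory with the Hebbian outer-product rule.
# Krotov's dense associative memories generalize this with higher-order
# interactions; this only shows the basic error-correcting recall.
rng = np.random.default_rng(0)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))    # three random +/-1 patterns

W = sum(np.outer(p, p) for p in patterns) / N  # Hebbian weight matrix
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
flipped = rng.choice(N, size=15, replace=False)
cue[flipped] *= -1                             # corrupt 15 of 100 bits

state = cue
for _ in range(10):                            # synchronous recall updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print("bits recovered:", int(np.sum(state == patterns[0])), "out of", N)
```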
I would rather hand the general problem over to evolution to solve, but I am having trouble integrating a controller deep neural network with associative memory. I don’t want to preordain too much structure, but it seems I will have to provide more than I want. Also, I’m intending to shift more toward HTML 5 software and spend less time on unpaid work, especially as I don’t have up-to-date hardware to write specialized code for, which makes things less interesting since I’m not pushing the envelope.