The old parts of the brain use three-layer neural networks of small granular cells with a fan-in/fan-out of about five. The new part of the brain, the neocortex, uses five- or six-layer (depends who you ask) networks of pyramidal cells with a fan-in/fan-out of around ten thousand.
The old part handles simple perception-action arcs using a simple one-layer learning rule. The new part handles higher-level representation, introspection, and modeling, all with nuanced context, and uses the apical dendrite to provide a feedback learning channel that can span one or two layers (more?).
I think you are over simplifying. There are lots of older parts of your brain doing lots of different things, using lots of different types of neurons. You can’t really sum up everything in a few sentences very easily.
Yes, I agree. Do you know of any reference with a list of these older functions? Gary Marcus at NYU makes the point that we need to seed an AI with a priori functionality. The grid cell system seems like a prime example.
I believe he is asking what the lizard brain does outside of cortical functions.
Note: there continues to be widespread confusion that the grid activation pattern is in the limbic system (hippocampus) rather than in the cortex. The two are closely associated, but the grid activation pattern is clearly located in the cortex.
The hippocampus has place cells - not the same thing at all.
This is a very weak start on an answer to Ed’s question:
It all depends on how you define “intelligence”. I’m sure humans will end up adding more functionality to our intelligence models based upon other parts of the brain aside from neocortex. But whether it is required for “intelligence” is just quibbling IMO.
I am invested in this model: “Another way of looking at all this is that the lower brain structures ‘drag’ the cortex along as they do what a lizard brain might do, and the cortex responds by projecting learned shaping influences back to improve this behavior.”
A baby has an instinctual behavior to look at faces and to pair that with comfort and nursing (priming things like visual search and social behavior).
I could go on at length, but I will jump right to the summation: without these training and activation functions, the cortex would never learn anything useful, nor be driven to do anything by the attention/orienting and instinctual drives coupled into the forebrain.
Yes, this makes sense to me. So the folks who say we can take a big neural network and train it to do anything in a reasonable amount of time may be wrong. The hints/pointers/organization provided by the lizard brain make learning much easier for the neocortex.
I would like to have a list of these functions. I am not a neuroscientist and do not want to read the thousands of papers and books it would take for me to make such a list. It would be great if the Allen Institute for Brain Science folks wrote a short book on the subject.
I’ve always assumed this would be how an implementation of HTM would learn, for the reasons outlined above. I never really think about it in terms of biology though.
For example, my HTM attaches itself to another program and gains access to its entire memory layout and function API. Everything it theoretically needs to build a perfect sensorimotor model of any virtual environment. I can program it to do virtually anything.
Right now it’s building sensorimotor models of the filesystem where files are sensory features and the parent directory is the allocentric location. The pooling layer represents different parts of the filesystem. The “subcortical” program under the HTM throws a SIGSTOP that the HTM handles as an object reset signal to create a new object rep in the pooling layer.
Just thought I’d share some cool things you can do with the OP’s idea.
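The object-reset trick above can be sketched with an ordinary signal handler. A caveat: SIGSTOP itself cannot be caught by a handler, so this sketch uses SIGUSR1 as a stand-in for the reset cue; the `PoolingLayer` class and its names are hypothetical illustrations, not a real HTM API:

```python
import os
import signal

# Hypothetical pooling-layer state: the set of cells representing the
# current object. Names are illustrative, not from any real HTM library.
class PoolingLayer:
    def __init__(self):
        self.active_cells = set()

    def reset(self):
        # Forget the current object representation so a new one can form.
        self.active_cells.clear()

pooler = PoolingLayer()
pooler.active_cells = {3, 17, 42}  # pretend an object is currently represented

# SIGSTOP cannot be caught, so a catchable signal (SIGUSR1) stands in
# for the "subcortical" reset cue described above.
def on_object_reset(signum, frame):
    pooler.reset()

signal.signal(signal.SIGUSR1, on_object_reset)
os.kill(os.getpid(), signal.SIGUSR1)  # the subcortical program raises the cue
assert pooler.active_cells == set()   # a fresh object rep can now be built
```

The design point is that the reset arrives asynchronously, outside the HTM's normal sensory stream, which mirrors the idea of a subcortical structure steering cortical learning.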
Input to the neocortex (and any part of the brain, really) is always SDRs. It’s SDRs “all the way down”. SDRs are just the basic communications medium of the brain.
Sensory organs translate phenomena in the outside world (reality) into SDRs: neurons are stimulated, and SDR signals are created and sent to the brain in axon bundles. You have to remember that the SDR space is extremely high-dimensional and has the capacity to represent virtually unlimited information coming from the senses. The senses encode reality in different ways (touch SDR input is very different from vision input). The neocortex is trying to take these high-dimensional sensory inputs and make sense of them all as it moves its sensors around the world.
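The "extremely high-dimensional" claim is easy to make concrete. A minimal sketch (the sizes 2048/40 are illustrative numbers often quoted for HTM systems, not anything from this thread):

```python
import random
from math import comb

# A typical SDR: n total bits, w of them active.
n, w = 2048, 40

# The number of distinct SDRs of this size is astronomically large, which
# is what gives the representation its near-unlimited capacity.
capacity = comb(n, w)
assert capacity > 10**80  # more codes than atoms in the observable universe

# Two randomly chosen SDRs almost never share many active bits, so even a
# modest overlap between two SDRs is a strong hint of shared meaning
# rather than coincidence.
random.seed(0)
a = set(random.sample(range(n), w))
b = set(random.sample(range(n), w))
overlap = len(a & b)  # expected value is w * w / n, i.e. under one bit
assert overlap < w // 2
```

This is why the brain can treat SDRs as a universal communications medium: the space is so large that distinct inputs essentially never collide by accident.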
This is a human, without all the pulpy flesh required for mobility:
Those nerve bundles are like fiber-optic cables, each one presenting huge SDRs to the brain for processing. The SDRs coming from the somatic senses (millions of nerves all over your skin) don’t get a lot of pre-processing, unlike vision, which gets a LOT of pre-processing in the retina before it reaches your brain. Same with your cochlea: lots of stuff happening there to translate material vibrations of reality into nerve impulses that actually have meaning and can be processed by the brain.
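For a concrete sense of how a sense might "encode reality" into an SDR, here is a toy scalar encoder in the spirit of HTM encoders (the function, its parameters, and the sizes are all illustrative, not from any real library). The key property: nearby values produce overlapping SDRs, distant values do not.

```python
def scalar_to_sdr(value, min_val=0.0, max_val=100.0, n=64, w=8):
    """Map a number to an n-bit SDR with a contiguous run of w active bits.
    A toy sketch; real encoders handle resolution, wrap-around, etc."""
    value = max(min_val, min(max_val, value))
    buckets = n - w  # possible start positions for the active run
    start = round((value - min_val) / (max_val - min_val) * buckets)
    bits = [0] * n
    for i in range(start, start + w):
        bits[i] = 1
    return bits

# Nearby values share active bits; distant values share none.
a = scalar_to_sdr(50.0)
b = scalar_to_sdr(52.0)
c = scalar_to_sdr(90.0)
overlap = lambda x, y: sum(p & q for p, q in zip(x, y))
assert overlap(a, b) > 0 and overlap(a, c) == 0
```

Different senses would use very different encoders (a cochlea-like encoder would work on frequency bands, for instance), but they all deliver the same currency downstream: a sparse bit pattern the cortex can process uniformly.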