Okay, I brought this quote because it just occurred to me that it shows an astonishing insight into some limitations of current ML training strategies, and of our vision of the brain as a pure pattern-matcher, versus the goal of reaching AGI.
Is there any insight to be found here on how to avoid this?
This reminds me of this thread and others like it, which touch on the question of the neocortex’s purpose. Is understanding the neocortex alone enough to achieve AGI?
In my mind, I imagine the neocortex as an engine for modelling the world to extreme levels of abstraction and using that model to make predictions. While this makes it more than just a “pattern recognizer”, I think that alone still isn’t enough for AGI. You must pair this engine with other systems that feed it motivations/needs and leverage it to achieve goals. I often ask the question: “what makes the system choose one action over another, or take any action at all, for that matter?”
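A minimal toy sketch of that point (all function names and numbers here are hypothetical, not from the thread): a predictive model by itself assigns no preference to any action; only when paired with a drive/motivation signal does a choice emerge.

```python
# Toy agent: a predictive "world model" plus a separate drive signal.
# State is (hunger, energy); both functions are made-up illustrations.

def predict(state, action):
    # Hypothetical world model: predicts the next state for an action.
    # "forage" reduces hunger but costs energy; "rest" changes nothing.
    hunger, energy = state
    if action == "forage":
        return (max(hunger - 3, 0), energy - 1)
    return (hunger, energy)

def drive(state):
    # Motivation system: a discomfort score to minimize (hunger dominates).
    hunger, energy = state
    return 2 * hunger - energy

def choose_action(state, actions=("rest", "forage")):
    # The model alone ranks nothing; the drive signal supplies the preference.
    return min(actions, key=lambda a: drive(predict(state, a)))
```

When hungry, the drive signal makes foraging win; when sated, resting wins, even though the predictive model itself is identical in both cases.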
You can actually experience the inner lizard with a simple experiment: try holding your breath. This will be done by your conscious brain (neocortex) until a point comes where the reptilian brain takes over and gets you breathing again, even though you consciously didn’t do it. (P.S.: don’t kill yourself.)