Brave new world // Machine Learning


I don’t know why it came to mind, but there…

Brave new world, Aldous Huxley, 1932

1932!

Okay, I brought this quote because it just occurred to me that it shows an astonishing insight into some limitations of current ML training strategies, and even into our vision of the brain as a pure pattern-matcher, versus the goal of reaching AGI.

Is there any insight to be found here on how to avoid this?


This reminds me of this thread and others like it, which touch on the question of the neocortex’s purpose. Is understanding the neocortex alone enough to achieve AGI?

In my mind, I imagine the neocortex as an engine for modelling the world to extreme levels of abstraction and using that model to make predictions. While this makes it more than just a “pattern recognizer”, I think that alone still isn’t enough for AGI. You must pair this engine with other systems which feed it with motivations/needs and leverage it to achieve goals. I often ask the question: “what makes the system choose one action over another, or take any action at all, for that matter?”


The inner lizard!




Although, in light of recent developments in comparative studies, this inner lizard thing ought to be called the inner gnathostome or something :stuck_out_tongue:


Too much too soon: I am still trying to get my head around “Éminence grise” for the cortex.


You can actually experience the inner lizard with a simple experiment: try holding your breath. This will be done by your conscious brain (neocortex) until a point comes where the reptilian brain takes over and gets you breathing again, even though you consciously didn’t do it. (P.S.: don’t kill yourself) :slight_smile:


This reminds me a bit of x86 privilege levels, where the reptile is ring 0 (kernel) and the neocortex is ring 3 (user space) :grinning: