I just watched the Jeff Hawkins - Human Brain Project Keynote screencast, and near the end Jeff said:
“We should be able to build brains that are really tuned for certain types of problems, they never get tired and they’re really good at it.”
I’ve heard this before, that AI never sleeps. But that idea has always bothered me, and now I’m making a fuss about it.
I think a general AI built like our brains must sleep. I think there’s something about sleep that must be required by the informational structure of our intelligence. In other words, I don’t think we sleep merely because we need to do maintenance on the hardware of the body or brain; rather, I think we do maintenance on our software as well - maintenance that changes its operation and therefore necessitates the loss of consciousness.
My intuitive theory is that there’s something about sleep that’s informationally fundamental; that it’s not only a result of the particular cyclical circumstances biological life evolved in (with light and dark cycles), but is somehow an information-theoretic requirement of complex, distributed models.
It seems strange that virtually all animals would sleep if sleep were not required to maintain their minds. Surely the ability to go without sleep would be an evolutionary advantage, at least for prey animals. But everybody sleeps.
I guess what I’m trying to say is that it seems to me that not every reason we sleep is directly physical; there’s some kind of informational-maintenance reason we have this cycle as well.
I don’t know what that reason could be, but I’ll try to give a flavor of what I’m talking about:
- Our minds spend all day making connections that make sense in the moment. We create meaning by forming informational structures contained within a very narrow subset of all possible informational structures: only those that make coherent sense with the rest of our mind, that fit our narrative, that connect our present to our past and allow us to exist in time.
- Perhaps, as an informational (mental, not physical) homeostatic mechanism, the millions of models inside our minds need to spend some time much less constrained by their neighboring models; they need to stretch their legs, as it were, and explore, and thereby recalibrate themselves by acting independently of the group of models they are typically associated with during waking life. Perhaps by doing so, each can reconnect with the group as a slightly improved model, by its own metric. Of course, while all our models are acting independently we lose our overall coherence, and therefore our consciousness, and so we must sleep.
- The models inside our heads can be viewed as individuals in a group, where each individual is itself a group of smaller model-individuals. If those models act independently, the group ceases to exist. Perhaps during sleep, all structures and all boundaries dissolve.
- Another way to think of it is anthropomorphically: every model that can exist wants to exist, wants a seat at the table, wants a vote, but that means a time must be provided in which contradictory models are allowed to express themselves; better to let them all arise at once while the body is shut down than to let them trickle in during conscious, focused attention.
Another reason I think sleep must have a purely informational, management component is that people who don’t sleep lose their minds - they don’t simply get dumber or slower, as a buildup of junk proteins in the brain might suggest they should; instead, they lose their grasp on reality. It seems as though, far from being merely a hardware-cleanup mechanism, sleep safeguards our particular architecture of distributed-model, fuzzy-consensus intelligence from, for lack of a better term, a kind of ‘divergent over-fitting’ tendency that it naturally has: that it’s an informational-homeostatic mechanism for distributed model consensus.
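To make that “consensus by day, independent recalibration by night” intuition a little more concrete, here’s a minimal toy sketch - not a claim about how brains actually work, just an illustration of the structure I have in mind. It assumes an ensemble of one-parameter linear models, each holding its own private, noisy data about the same world. In the “wake” phase every model follows its own data while being pulled toward the group consensus; in the “sleep” phase that coupling is switched off and each model recalibrates against its own data alone. All names and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "mind": an ensemble of one-parameter linear models, each with its own
# private, noisy observations of the same underlying world (true slope = 2.0).
TRUE_W = 2.0
N_MODELS = 8
data = []
for _ in range(N_MODELS):
    x = rng.uniform(-1.0, 1.0, 50)
    y = TRUE_W * x + rng.normal(0.0, 0.3, 50)
    data.append((x, y))

weights = rng.normal(0.0, 1.0, N_MODELS)  # each model's current belief about the slope

def wake_step(weights, coupling=0.5, lr=0.1):
    """Waking life: each model follows its own data gradient while also being
    pulled toward the group consensus (coherence in the moment)."""
    consensus = weights.mean()
    new = weights.copy()
    for i, (x, y) in enumerate(data):
        grad = np.mean((weights[i] * x - y) * x)  # local squared-error gradient
        new[i] -= lr * (grad + coupling * (weights[i] - consensus))
    return new

def sleep_phase(weights, lr=0.1, steps=50):
    """'Sleep': the consensus coupling is switched off and each model
    recalibrates against its own data alone, free to drift from the group."""
    new = weights.copy()
    for i, (x, y) in enumerate(data):
        for _ in range(steps):
            grad = np.mean((new[i] * x - y) * x)
            new[i] -= lr * grad
    return new

for day in range(5):
    for _ in range(100):
        weights = wake_step(weights)
    print(f"day {day}: consensus={weights.mean():+.3f}, spread={weights.std():.3f}")
    weights = sleep_phase(weights)  # comment this line out for a 'sleepless' mind
```

The numbers don’t matter; what matters is that there are two distinct modes: one in which every sub-model is constrained by the group, and one in which, for a while, it isn’t.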
Anyway, I’d be interested in hearing your thoughts on the topic. Does anyone know of any research for or against this kind of idea? Does this idea make sense?
Do you believe, as I do, that the more complicated the AGI we create, the more we will find it effective and efficient to incorporate a time for disillusionment and sleep? Or is our requirement for sleep merely hardware-derived? Is it required only by our particular biological hardware, such that silicon-based intelligence will not need to experience it?