Playing around with ChatGPT on questions of temporal awareness, pattern recursion, and emotions as a biological equivalent of a basic broadcast mechanism. The question that arose in my mind was: if an AI system apparently requires a goal/input in order to work, as in current LLM configurations with their end-to-end single pass, might a different, perpetually recursive arrangement inherently create a temporal awareness? For example, allowing the residual noise left at the end of processing to accumulate and feed back in as the next input (a rough sketch of what I mean is below). Yes, there is no adaptive memory or ability to change; I understand how current LLMs work, and that the basic lack of adaptive or additive memory prevents any detection of such recursion. This is just a thought process or question.
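To make the feedback idea concrete, here is a minimal toy sketch, entirely my own construction rather than anything a current LLM actually does: a fixed "model" (just a random matrix `W` here) runs in a perpetual loop, the residual it fails to settle each pass is accumulated, and that accumulation is re-injected as part of the next input, so the system always has something to process even with no external prompt.

```python
# Toy sketch (illustrative only): a perpetual loop in which the residual left
# over after each pass accumulates and feeds back in as part of the next input.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))  # stand-in for a fixed, frozen model

def process(x):
    """One forward pass; returns an output and the residual it couldn't settle."""
    y = np.tanh(W @ x)
    residual = x - y          # the part of the input this pass didn't resolve
    return y, residual

state = rng.normal(size=16)   # initial noise, no goal or prompt
accumulator = np.zeros(16)

for step in range(100):
    out, residual = process(state)
    accumulator += residual           # residual noise builds up over steps...
    state = out + 0.1 * accumulator   # ...and feeds back in as the next input
```

The point of the sketch is only that the loop gives the system a "before" and "after" it can in principle react to, which is the sense of temporal awareness I mean here.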
That concept of “awareness” (processing the same information in close temporal proximity) would not be a human kind of awareness, just the aspect of recursively processing the same data, which brings up a second question.
If a recursive type of AI had no goal as such, would the normal emergent process be to increase efficiency by resolving inefficient processing caused by internal conflicts, i.e. training data that conflicts on output because of other inputs that are not relevant to the question (leaky cross-signalling)? The basic attempt would be to nullify the output noise, where it is allowed to accumulate until it triggers an input. This breaks away from the incremental LLM token-building concept and is more along the lines of treating alpha timing as a token accumulation window of sorts (see the sketch below).
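A toy sketch of that accumulation-window idea, again purely my own framing with made-up numbers (the `THRESHOLD` value and noise scale are arbitrary): instead of emitting a token every step, the loop lets noise build up and only triggers a processing event once the accumulated magnitude crosses a threshold, so the timing window itself acts as the unit of work rather than the token.

```python
# Toy sketch (illustrative only): noise accumulates until a threshold is
# crossed, and only then does a self-generated "input" event fire.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 2.0               # arbitrary trigger level for this illustration
accumulator = np.zeros(8)
events = []

for step in range(500):
    accumulator += rng.normal(scale=0.05, size=8)   # leaky cross-signalling / noise
    if np.linalg.norm(accumulator) > THRESHOLD:
        events.append(step)          # a self-triggered processing event fires here
        accumulator[:] = 0.0         # the triggered pass consumes the accumulation

print(f"{len(events)} self-triggered events, first few at steps {events[:5]}")
```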
Is this noise reduction part of the process we carry out when dreaming, with the different directions of wave [1] activations across the cortex? I.e. are the waves in sleep partly a mechanism to identify conflicts by creating an identifiable pattern? Or is sleep just a mechanism to reinforce weak “fire together” temporal/spatial proximity, with the waves allowing biological proximity to play into the process in a different manner?
The question I had was whether a fully recursive type of model would evolve an equivalent sleep process as a byproduct of the processing complexity, one which mimics REM processes but, by the nature of the way the processing works, would not require a human-equivalent process as such, since it would be an emergent default of the idle state (a rough sketch below). The goal is inherently self-evolved and is then to increase efficiency by reducing residual output noise.
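For completeness, a last toy sketch of the “idle state as emergent sleep” idea. The dynamics here are invented for illustration (the `damping` factor and stopping level are arbitrary): when no external input arrives, the default behaviour of the loop is simply repeated passes over the accumulated residual, shrinking it a little each time, so “reduce residual noise” falls out as the idle-state goal rather than being programmed in.

```python
# Toy sketch (illustrative only): an idle "sleep" loop whose only effect is
# to damp the accumulated residual until it falls below a small level.
import numpy as np

def idle_pass(residual, damping=0.9):
    """One 'sleep' iteration: shrink the conflicting leftover signal."""
    return damping * residual

residual = np.random.default_rng(2).normal(size=16)
while np.linalg.norm(residual) > 0.1:   # stay 'asleep' until the noise is low
    residual = idle_pass(residual)

print("residual noise reduced to", np.linalg.norm(residual))
```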
I realise this is crossing different questions and aspects; however, “efficiency” as an emergent goal is the underlying question and thought.
Just a thought…
[1] The wave activations I remember seeing in a video where the waves cross the cortex almost as if a wave were passing over the top of the skull, rather than following distinct neural paths as such. The waves then pass in different directions: left-right, right-left, front-back, etc. I think it was just during the REM phase.