Anyone know about Fractal Compression?
It occurred to me last week that a comprehensive fractal compression process may have applications in AI research, but I don't know much about it.
This thought occurred to me when considering the relationship between memory and algorithm, structure and computation, perception and behavior. In conventional computing these are kept separate, while in the brain they are deeply intertwined, so I began to think of other systems where the computation is informed by the structure and the structure by the computation.
Anyway, fractal compression came to mind and I wondered if someone here felt like maybe they could answer some of my questions about it.
Fractal compression exploits the fact that self-similarity is ubiquitous in the world. For me this has always been related to the assumptions about hierarchy that models such as HTM make about the computational space. The second central assumption HTM makes, as presented in On Intelligence, is that data is contiguous through time. It would be interesting to know how this relates to fractal compression of video, for instance. Speaking of which, isn't mental life just like a fractal zoom?
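For anyone who hasn't seen the mechanics: in PIFS-style fractal compression, each small "range" block of the signal is approximated by an affine map of a larger, downsampled "domain" block from the same signal, and decoding just iterates those maps from any starting point until they converge. Here is a toy 1D sketch, assuming pure Python; the block sizes, the clamp on the scale factor, and the function names are illustrative choices, not any standard codec:

```python
# Toy 1D fractal (PIFS-style) compressor sketch. Each range block is
# approximated as s * (downsampled domain block) + o, and only
# (domain index, s, o) is stored. These sizes are illustrative.

RANGE = 4           # range-block length
DOMAIN = 2 * RANGE  # domain blocks are twice as long, then averaged down

def downsample(block):
    """Average adjacent pairs so a domain block matches range-block size."""
    return [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]

def fit_affine(d, r):
    """Least-squares fit r ~ s*d + o; clamp s so the decoder converges."""
    n = len(d)
    md, mr = sum(d) / n, sum(r) / n
    var = sum((x - md) ** 2 for x in d)
    s = 0.0 if var == 0 else sum((x - md) * (y - mr) for x, y in zip(d, r)) / var
    s = max(-0.9, min(0.9, s))  # |s| < 1 makes the decode map contractive
    o = mr - s * md
    err = sum((s * x + o - y) ** 2 for x, y in zip(d, r))
    return s, o, err

def compress(signal):
    """For each range block, store the best (domain index, s, o)."""
    code = []
    domains = range(0, len(signal) - DOMAIN + 1, RANGE)
    for i in range(0, len(signal), RANGE):
        r = signal[i:i + RANGE]
        (s, o, _), j = min(
            ((fit_affine(downsample(signal[j:j + DOMAIN]), r), j) for j in domains),
            key=lambda t: t[0][2],
        )
        code.append((j, s, o))
    return code

def decompress(code, length, iterations=30):
    """Iterate the stored maps from zeros; contraction pulls any
    starting signal toward the encoded fixed point."""
    x = [0.0] * length
    for _ in range(iterations):
        y = x[:]
        for k, (j, s, o) in enumerate(code):
            d = downsample(x[j:j + DOMAIN])
            for t in range(RANGE):
                y[k * RANGE + t] = s * d[t] + o
        x = y
    return x

signal = [float(i) for i in range(32)]  # a ramp is exactly self-similar
code = compress(signal)
restored = decompress(code, len(signal))
```

The interesting part is that the decoder never sees the original data: the "image" is the fixed point of the stored transforms, which is why fractal zooms fall out of the scheme almost for free.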
My thoughts exactly. Not only mental life but all informational loop processing, perhaps consciousness itself. If I think a thought, then formulate the thought into the compartments we call words, then say the words, then hear the words, I will often trigger new thoughts I didn't have when thinking the original thoughts that produced the words. That is a fractal zoom in and of itself, exploring the space. Many people have argued that we must use language to think: not language alone, but conceptual models, of which language is one example of the larger principle that there can be no consciousness of thought without a delineation between thoughts (a delineation formed in relation to the entire structure of thought).
Anyway, what do you think about an AI algorithm that uses fractal compression to not only store the raw data of whatever it’s looking at but also encode how best to manipulate that fractal compression process to see more accurately in the future? In other words, encoding the behavior into the representation of the observation?
(Isn’t that what the brain is trying to do anyway? Perhaps there’s a most efficient mathematical way.)
Yes: delineation, articulation, difference, repetition, eternal return, dialectics, and the theory of form. It's a familiar theme in metaphysics and epistemology, indeed. It's as if the fractal has always been there, as a latent factor, in the thinking of philosophers since archaic times.
I happen to think that any AI algorithm approaching anything we would like to call human-like thought will need to incorporate something along the lines of fractal compression in the very architecture of the model: some notion of self-similarity present in a hierarchical topology of neuron nodes. Like the logotype of Numenta, come to think of it. And the delineation, perhaps that's effectively what the ReLU does in CNNs? Or what dimensionality reduction by random projection onto sparse representations does in HTM: reducing, categorizing, conceptualizing, conceiving by affirmation, out of a nonsensical chaos, the atoms of cognition for an edifice of thought and knowledge.
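To make that "delineation by sparse representation" idea a bit more concrete, here is a minimal sketch of a random projection followed by k-winners-take-all thresholding, roughly in the spirit of HTM's spatial pooler but not its actual algorithm; the sizes, Gaussian weights, and the name sparse_code are all illustrative assumptions:

```python
import random

def sparse_code(x, n_out=64, k=8, seed=0):
    """Project input x through a fixed random matrix, then keep only the
    k most strongly activated units (a binary sparse distributed code)."""
    rng = random.Random(seed)  # fixed seed: same projection every call
    W = [[rng.gauss(0, 1) for _ in range(len(x))] for _ in range(n_out)]
    scores = [sum(w * v for w, v in zip(row, x)) for row in W]
    # k-winners-take-all: the "delineation" step that carves the input
    # space into a small set of active atoms
    top = set(sorted(range(n_out), key=lambda i: scores[i], reverse=True)[:k])
    return [1 if i in top else 0 for i in range(n_out)]

code = sparse_code([0.3] * 16)
```

Note that scaling the input by a positive factor scales every score equally, so the same k winners fire: the code responds to the pattern, not the magnitude, which is one way a representation can stay stable while the raw data varies.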