What do you think of Wolfram's Hypergraph modeling idea?

In this discussion, he talks about searching for the most “structureless structure” as the necessary substrate of fundamental reality. I don’t think he says it explicitly, but there’s really no difference between having that structureless structure as the actual substrate of reality and merely using it as a model, since all you can ever experience is models anyway.

Anyway, it struck a chord with me because for a few years I’ve been obsessed with the idea that you can model everything as a graph, a network of nodes and edges. I feel this way because you can then say: OK, this set of nodes is a single node in a more macro-scale network.

I think that is essentially what he’s claiming: you can model all of reality as a hypergraph, which is really just the same thing as grouping nodes in a graph.
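To make that “same thing as grouping nodes” claim concrete, here is a minimal sketch (toy Python, with hypothetical node and edge names): a hyperedge spans any number of nodes, and it can be emulated in an ordinary graph by introducing a macro-node that groups its members.

```python
# Toy sketch (hypothetical names): a hyperedge connects any number of
# nodes at once, while an ordinary edge connects exactly two. A hyperedge
# can be emulated in a plain graph by adding a "group" node linked to
# each of its members -- i.e. by grouping nodes.
hypergraph = {
    "e1": {"a", "b", "c"},  # one hyperedge spanning three nodes
    "e2": {"c", "d"},
}

# Equivalent bipartite plain graph: each hyperedge becomes a macro-node
# connected to every node it contains.
plain_edges = {
    (edge, node)
    for edge, members in hypergraph.items()
    for node in members
}

assert plain_edges == {
    ("e1", "a"), ("e1", "b"), ("e1", "c"),
    ("e2", "c"), ("e2", "d"),
}
```

This bipartite encoding is a standard way of representing hypergraphs with ordinary graphs, which is what makes the two views interchangeable.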

What do you think?

Don’t you think this has implications for what intelligence or AGI really is? He’s saying you can model the whole universe, at the smallest scale, as a hypergraph, and that’s really saying you can understand it as a hypergraph.

It seems remarkably similar in its most basic principles to HTM, which essentially asserts that the brain is made of a bunch of nodes in a graph, arranged in minicolumns which are themselves nodes in a more macro-scale network, which in turn aggregate into regions, yet more macro-scale nodes.

Furthermore, HTM argues that not only the physical hierarchy but also the conceptual hierarchy of patterns shared among areas of the brain is structured this way: highly interconnected areas broadcast patterns and names that are highly detailed, but the further those connections reach, the broader and more time-invariant the patterns become. For more on that, see this conversation I had on Reddit.

It just seems like what he’s after, a computational theory of everything, is the same as intelligence itself, and the same as optimally efficient distributed computing (memory, computational-resource, and bandwidth management across a network).

It seems like it’s all one thing to me. What we’re after here, making intelligent machines, basically is, or requires, the ultimate unifying theory of every discipline, the alchemist’s masterwork, and all you need is to understand how a language ought to be employed: the language of nature, the language of networks. Because ultimately we don’t want the universe; we want a way to describe the universe that implies the boundary of every possible model that explains or predicts the data.

What do you think?


A quote from near the end of the long version:

“I don’t think there’s a bright line of what intelligence is. I think intelligence is, at some level, just computation. But, for us, intelligence is defined to be computation that is doing things we care about.”


Wolfram is a champion of the new computational paradigm of science. That’s the “new kind” of science he wrote a book about. It’s not just that computers have become a crucial tool for the scientist, or that computational models are informative for scientific theories – it’s that computation is the fundamental idea behind all science, that with which everything can be explained. Mathematics evolves, of course, but science still rests on a tenacious legacy of a more static mathematics of formulas and equations, which is being replaced by more abstract and more widely encompassing ideas in which the concept of computation plays a key role.

Are you saying the space of computational algorithms is larger than the space of mathematics?

No, I don’t think that’s a fruitful line of thinking. Mathematics is the territory of Platonism; algorithms belong to the constructive domains and are characteristically agnostic toward the ideals of infinity. Computation probably belongs to mathematics, I would say, but the implications of this for science haven’t been fully recognized.

Well, at least we can say computation is necessarily discrete, whereas mathematics can build continuous models.

Sure, but the realms of continuity are elusive. Wittgenstein highlights this in his excellent critique of Cantor.


You can have a structure that can represent and compute anything from the identity function (which you may call a complete lack of structure) to any shade of complexity. Or maybe you want to call the vector f(x)=0 a complete lack of structure.
You can do that more naturally in transformed domains (holographic, Fourier) than in the ordinary-world constructive domain, where even the vector identity function requires a lot of piecing together. Hypergraphs sound like ordinary-domain construction, where you have to decide discrete combinations of things.
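One way to read the point above numerically (my interpretation, using the convolution identity as the example): in the ordinary domain the identity element for convolution is a pieced-together delta spike, while in the Fourier domain it is a maximally “structureless” constant.

```python
import numpy as np

# In the ordinary (spatial) domain, the identity element for convolution
# is a delta spike: a very specific object you have to piece together.
n = 8
delta = np.zeros(n)
delta[0] = 1.0

# In the Fourier (transformed) domain, that same identity is maximally
# "structureless": a flat spectrum of ones.
spectrum = np.fft.fft(delta)
assert np.allclose(spectrum, np.ones(n))

# Convolving any signal with the delta leaves it unchanged, which in the
# transformed domain is just multiplication by the constant 1.
signal = np.random.default_rng(0).normal(size=n)
out = np.real(np.fft.ifft(np.fft.fft(signal) * spectrum))
assert np.allclose(out, signal)
```

This is only an illustration of the contrast between the two domains, not a claim about how Wolfram’s hypergraphs work.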

I understand this project as essentially about explaining physical laws as emergent patterns seen in discrete rules applied in the limit. At the risk of treading into waters too deep for me: how is this done more naturally in the transformed domains you mention? Doesn’t that a priori assume an infinitesimal calculus, or some notion of infinities, which these graphs purposefully avoid (at the conception stage), thereby possibly making them recognizable as conceptually simpler?


There’s a chance I misunderstood, but from the Lex Fridman interview with Wolfram, what I got from his description of a hypergraph is that he’s leaving infinite/finite limits undefined, instead allowing whatever emerges to describe itself however it will. I got the impression he’s purposely not making any assumptions, in either direction, about the nature of time, while purposely being open to the idea that “tuples of data sharing same/similar values” will interact “whenever”, depending on their proximity to each other in the vague, structureless hyperstructure that he’s trying to explore.
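A rough sketch of how such tuple rewriting can be played with (a toy of my own, not Wolfram’s actual rules or code): the state is a list of tuples, and one update step applies a rule that replaces each matching tuple with new tuples, introducing fresh nodes as it goes.

```python
from itertools import count

# Toy sketch (not Wolfram's actual rules or code): the state is a list
# of tuples ("hyperedges"), and one update applies the illustrative rule
#   {x, y} -> {x, y}, {y, z}   (z a brand-new node)
# to every binary tuple in the state.
fresh = count(100)  # supply of never-before-seen node ids

def step(state):
    new = []
    for edge in state:
        if len(edge) == 2:
            x, y = edge
            new.append((x, y))
            new.append((y, next(fresh)))  # fresh node z
        else:
            new.append(edge)
    return new

state = [(1, 2)]
for _ in range(3):
    state = step(state)

# This particular rule doubles the number of edges each step: 1 -> 2 -> 4 -> 8.
assert len(state) == 8
```

Even this trivial rule already shows the flavor: structure (here, exponential growth) emerges from the rule itself, with no limits imposed from outside.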

Generally, I think if something can work without us having to define limits, then who are we to begin imposing any limits beyond those inherent to the structures of representation in which we choose to express a piece of logic?

Just like HTM’s ability to learn patterns and temporal sequences emerges from some rather simple rules applied in a certain order, let’s see how Wolfram’s ideas pan out.

What may be helpful food for thought is to consider how the basic rules of interaction between “units” of data/encoding result in emergent structures even within minicolumns in the neocortex, as well as in other, similarly shaped structures in older parts of the brain… or in DNA/RNA, molecular structures, etc.

At least for me, it’s something I can turn to and ponder whenever I need a thought meal. As long as we remain agnostic about any assumptions, everything is worth consideration.

It is natural to think that the ultimate theory of everything is the most abstract, i.e. the least specific and least particular. On the other hand, by the token of Occam’s razor, we’re inclined to think that the best theory is the one which explains a phenomenon in the most parsimonious way. Real numbers are more general than discrete numbers in the sense that they can express more, but discrete numbers are simpler in that they stipulate fewer axioms. So it’s a matter of viewpoint on what one intends to lay claim to with an ultimate theory. Continuous space is perhaps closer to the viewpoint of “God”, but again, I agree with Wittgenstein when he says that there are things thereabouts of which we don’t understand the meaning and of which it may be senseless to speak.

That’s basically Wolfram’s long-standing point of view. Any math can be represented as a computational algorithm, but there is no math for some computational algorithms, like cellular automata.
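For a concrete instance of such an algorithm, an elementary cellular automaton takes only a few lines (this is the standard textbook construction of Rule 110, not code from Wolfram’s project):

```python
# A standard elementary cellular automaton (Rule 110): a simple program
# whose long-run behavior has no known closed-form mathematical description.
RULE = 110

def step(cells):
    """One update: each cell's next value is the bit of RULE indexed by
    its 3-cell neighborhood (left*4 + center*2 + right), with wraparound."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15  # start from a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Running it prints the familiar growing triangle: to predict the pattern you seemingly have no shortcut other than running the computation itself, which is exactly the point being made.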

BTW, here are much more details about the mentioned Wolfram’s ‘theory of everything’ project https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/


I’m a huge Wolfram fan, so much so I’ve actually created fan art.



There’s a Matrix feel to it ^^
I want a black and green version


Also worth investigating is his more recent approach (when trying to describe non-Euclidean spaces), which he calls “rulial space”. @mrcslws, could this be helpful to you for formalising your thoughts on eigenspace and other ways to describe cortical operations underlying our perception of physical (3D) spaces?