That was excellent, and it seemed to clear things up for me quite a bit! I think the word you used, “context,” was perfect, but then you changed it to “reference frame.” Why do you prefer that term over context? As I consider the “scale” hierarchy, I think of larger-scale data, as it travels down the hierarchy to V1 for example, as contextual information. Broad contextual information: “here’s how you should view whatever you’re looking at.”
I think the insight I was trying to express or explore was this idea. Let me approach it from a different angle:
Forget about the brain for a minute. Just think of a network of nodes; just the concept of a network in general. Give those nodes random connections, but give each node about the same number of connections (a connection being: who the node listens to). Now pick a node at random. That node will have a larger number of connections to “nearby” nodes than to “further away” ones. You may ask, ‘what metric for distance are you using here?’ The distance metric is ‘how many nodes sit between me and this node, other than our direct connection?’ Some nodes will be very far away by that metric; most will not. Let me draw that out:
Look at A. B, C, and D are all really close to A because they’re all really close to each other; they’re all nearly fully interconnected. But E is not close to any of them. If A lost its connection to E (in this diagram we can consider each link to be bidirectional), it would have to wait for the information to propagate up the left side of the image to D before A could hear about it.
Why is this important? Because we can consider A’s connection to E to be a connection to a higher-level node, a connection to a larger-scale node. And we can consider A, B, C, and D together to comprise a smaller-scale node.
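The distance metric above is just shortest-path hop count, which is easy to sketch in code. Since the original diagram isn’t reproduced here, this is a hypothetical reconstruction: A, B, C, D form a tight clique, E is remote and reachable only through assumed intermediate nodes F and G “up the left side,” plus one direct long-range link A–E.

```python
from collections import deque

def bfs_distance(adj, src, dst):
    """Shortest hop count from src to dst via breadth-first search."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return None  # unreachable

def build(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

# Hypothetical layout: clique A-B-C-D, chain D-F-G-E, shortcut A-E.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"),
         ("C", "D"), ("D", "F"), ("F", "G"), ("G", "E"), ("A", "E")]

adj = build(edges)
print(bfs_distance(adj, "A", "E"))   # 1: the direct long-range link

# Drop the direct A-E link: news of E must now propagate through D, F, G.
adj2 = build([e for e in edges if e != ("A", "E")])
print(bfs_distance(adj2, "A", "E"))  # 4
```

With the shortcut, E is one hop from A; without it, A waits four hops, which is the “propagate up the left side” effect in miniature.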
Nodes that are highly interconnected, or close, are of course going to be talking about nearly the same patterns, whatever their communication ends up being. They’ll share a similar protocol; further-away nodes won’t, because they’re far away. But there are still some patterns in common, just fewer of them.
Realizing this same pattern is true for every node, it seems the scale hierarchy is nearly a function of the number of connections each node has. C is presumably connected to some node far from their little clique too, as are B and D. In this way, each member of the group has feelers out in different directions into the larger hierarchy. They can take the information they get from their distant connections and translate it for their inner circle, since each of them specializes in the protocol and patterns shared in whatever remote corner of the network they happen to be connected to.
So I wondered if a generalized AI network is one in which each node is always trying to connect to the nodes furthest from its localized group. But anyway, I think that’s the extent of my thoughts on the matter… except for this: since it’s all so elementary, I’m sure there’s some metric in mathematics or graph theory that describes this perfectly well; I’m just not educated enough to know about it.
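For what it’s worth, the effect described sounds like what graph theory calls the “small-world” property: a few long-range shortcuts sharply reduce the average hop distance across an otherwise locally clustered network. Here’s a toy sketch under my own assumptions (a ring lattice of 30 nodes plus 5 random long-range “feelers”; none of these numbers come from the thread):

```python
import random
from collections import deque

def all_pairs_mean_distance(adj):
    """Average shortest-path hop count over all ordered node pairs."""
    nodes = list(adj)
    total, count = 0, 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                count += 1
    return total / count

def ring(n, k=2):
    """Ring lattice: each node linked to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

random.seed(0)
n = 30
local = ring(n)
print(all_pairs_mean_distance(local))   # purely local links: long average path

# Give a handful of nodes long-range "feelers" to random distant nodes.
wired = ring(n)
for _ in range(5):
    u, v = random.sample(range(n), 2)
    wired[u].add(v)
    wired[v].add(u)
print(all_pairs_mean_distance(wired))   # noticeably shorter average path
```

A handful of shortcuts, one per clique member pointing into a different remote region, is exactly the “feelers out in different directions” picture, and it’s the mechanism behind the Watts–Strogatz small-world model, which might be the metric being grasped for.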
Anyway, I really enjoyed your thoughts on the topic @rhyolight