"How Your Brain Organizes Information" video

This is exactly the problem I’ve been talking about in all my posts. The belief that there will be factors is hard to get past. We see factors. Objects. That’s all we see.

It is only when you try to nail them down computationally that things start to come apart.

I believe the inconsistency of the classes/“edges” you find is what leads to parameter blow-up in large language models. That’s the reason they’re “large”. It’s also, together with the fact that they hide grammar rather than make it explicit, why they have done so much better than previous attempts to “learn” grammar.

If you look for it, you can trace a history of failed attempts to find consistent grammar/“edges” for language, e.g. in this earlier thread:

Or in this later thread:

In that last post I discussed how this extends to the broader failure to find objective categories in philosophy:

Just recently I found another nice discussion in a pure graph context:

The Many Truths of Community Detection
http://netplexity.org/?p=1261

‘It all comes down to the fact that we have mathematical ways to quantify the difference between community assignments but defining what we mean by “the best” clustering is impossible.’
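To make that concrete, here is a minimal sketch (in Python with networkx; my own illustration, not from the linked post): two standard community detection methods run on the same small graph, each returning a defensible but different partition.

```python
# Two standard community detection methods on the same graph can give
# different "correct" answers. Uses networkx's built-in karate club graph.
import networkx as nx
from networkx.algorithms.community import (
    greedy_modularity_communities,
    label_propagation_communities,
    modularity,
)

G = nx.karate_club_graph()

# Method 1: greedy modularity maximisation (Clauset-Newman-Moore).
greedy = list(greedy_modularity_communities(G))

# Method 2: semi-synchronous label propagation.
labelprop = list(label_propagation_communities(G))

# The two methods need not agree on the number of communities or their
# boundaries, and they score differently even on the one metric
# (modularity, Q) that the first method explicitly optimises.
print("greedy modularity: ", len(greedy), "communities, Q =",
      round(modularity(G, greedy), 3))
print("label propagation: ", len(labelprop), "communities, Q =",
      round(modularity(G, labelprop), 3))
```

Neither result is wrong. Each is a valid clustering by its own criterion, which is exactly the point the post makes: there’s no objective fact of the matter about which partition is “the best”.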

This is still what’s holding us up in AI. We believe the objects we divide the world into are reality. They are all we can see. Yet in reality it seems the only way to build them computationally is as so many subjective constructions.

But seen from another perspective, this is actually a good thing:
