I agree with the general idea here, except that there has to be some way for the “mind” to:
- integrate actual experience(s) with “words”. This is quite a vague requirement
- generate its own inner “words” out from experience(s).
What would that mean?
First, why I put quotes around “words”: because the term plays a misdirection trick on us, narrowing our attention towards spoken/written words and language.
Slightly more adequate terms (at least for programmers) are “pointers”, “identifiers”, “handles”.
And these “identifiers” are not only linguistic in nature; they are literally any recognizable thing, with or without an associated word for it. A neighbor’s face you recognize without knowing their name, a familiar smell, a melody you like but don’t recall where you heard before. The dreaded qualia are simply that: recognizable things, pointers, identifiers, handles. Exactly like words.
Handles/pointers to what, we should ask. Generally, everything (a.k.a. the world) is made of things and only things. We cannot conceive/imagine a no-thing. So any identifier/thing is a handle which, when “pulled”, recalls one or more slightly larger experiencing contexts, each a relatively small grouping of other identifiers/words/things.
OK, all of the above resembles a knowledge graph, which all of us already use to describe what we know about anything we have words or descriptions for.
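To make the analogy concrete, here is a minimal sketch of what I mean by handles into contexts, as a tiny knowledge graph. This is only an illustration, not a claim about how a mind actually stores anything; the names (`Mind`, `experience`, `pull`) are made up for the example.

```python
from collections import defaultdict

class Mind:
    """Hypothetical sketch: identifiers as handles into small contexts."""

    def __init__(self):
        # each context is a small grouping of identifiers experienced together
        self.contexts = []
        # index: identifier -> the contexts it appears in
        self.index = defaultdict(set)

    def experience(self, identifiers):
        """Store a small grouping of identifiers as one experiencing context."""
        ctx_id = len(self.contexts)
        self.contexts.append(frozenset(identifiers))
        for ident in identifiers:
            self.index[ident].add(ctx_id)

    def pull(self, handle):
        """'Pulling' a handle recalls the contexts that contain it."""
        return [self.contexts[i] for i in self.index.get(handle, ())]

m = Mind()
m.experience({"neighbor's face", "stairwell", "morning"})
m.experience({"melody", "summer", "neighbor's face"})
print(m.pull("neighbor's face"))  # recalls both contexts containing that handle
```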
There has to be a catch: if we know all the above, why haven’t we been able to engineer an “artificial mind”? Old-school symbolic AI failed to create a mind by simply describing “things” and “connections” within knowledge graphs.
There is something that we are missing.
One important process we haven’t replicated is the one by which any-and-every-thing pops into existence.
One very important part of what we humans consider “learning” is exactly this: the means by which we generate new identifiable things.
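Purely as a hypothetical sketch of what that missing step could look like: a grouping of existing identifiers that recurs often enough gets minted as a new identifier of its own. The threshold and the naming scheme below are arbitrary illustrations, not a proposed mechanism.

```python
from collections import Counter

class Learner:
    """Hypothetical sketch: new identifiers minted from recurring groupings."""

    def __init__(self, threshold=3):
        self.counts = Counter()       # how often each grouping has been seen
        self.known_things = set()     # groupings already promoted to "things"
        self.threshold = threshold

    def observe(self, identifiers):
        group = frozenset(identifiers)
        self.counts[group] += 1
        # a grouping that recurs often enough "pops into existence" as a new thing
        if self.counts[group] >= self.threshold and group not in self.known_things:
            self.known_things.add(group)
            return f"thing:{sorted(group)}"   # the freshly minted identifier/handle
        return None

l = Learner(threshold=2)
l.observe({"wet streets", "petrichor"})
print(l.observe({"wet streets", "petrichor"}))  # second recurrence mints a new handle
```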