One of the “breakthroughs” for me is realizing that the cortical.io people form their SOM all in a single batch. I am thinking that with an attractor model that is defined as the content is added (forming and shaping pools of attraction), the map would form as you stream the training set at it, and would keep forming with continuous use after the initial training sessions. A stream encoder to spatially distribute the training data would be a key part of making this work.
This is how I see the pools forming in my mind’s eye. Of course, the data at higher levels of representation would not look like a picture of the object.
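To make the streaming idea a bit more concrete, here is a minimal sketch of an online SOM update in plain numpy. This is not cortical.io’s method or a full attractor model, just the usual best-matching-unit rule applied one sample at a time, so the “pools” form and keep shaping as you feed the stream. The grid size, learning rate, neighborhood width, and the random vectors standing in for encoder output are all placeholders.

```python
import numpy as np

class OnlineSOM:
    """Streaming SOM: units update one sample at a time, so the map
    keeps forming/shaping as new content arrives."""

    def __init__(self, rows, cols, dim, lr=0.1, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(rows, cols, dim))  # unit weight vectors
        # fixed grid coordinates of each unit, used for the neighborhood
        self.grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing="ij"), axis=-1)
        self.lr, self.sigma = lr, sigma

    def update(self, x):
        # best-matching unit = the "pool of attraction" this input falls into
        d = np.linalg.norm(self.w - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighborhood pulls nearby units toward the input
        dist2 = np.sum((self.grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * self.sigma ** 2))[..., None]
        self.w += self.lr * h * (x - self.w)
        return bmu

# stream the training set at the map, one encoded sample at a time
som = OnlineSOM(rows=20, cols=20, dim=64)
for x in np.random.default_rng(1).normal(size=(1000, 64)):  # stand-in for encoder output
    som.update(x)
```

I have deliberately left the learning rate and neighborhood width constant rather than annealing them to zero, since the whole point is that the map should keep adapting with continuous use after the initial training sessions.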
I have been noodling on how to form both the grammar and semantic content with the same training process.
The latest frisson of excitement on this came from the post about a chatbot on another thread. In it, I referenced the “frames organization model” of world information; I don’t see any reason that it could not be formed using the same process.