Interesting find, fresh from the MIT pen:
Somehow structures within the world can emerge in Neural Nets without explicitly training for them. Involuntarily I have to think of SDRs…
Humans acclimatize so quickly they don’t even recognize they are in the middle of a singularity. They might as well be out picking petunias, they are so light of heart.
Anyway, cool.
Personally, I would not call this conceptual knowledge. They identified the groupings of neurons that had been trained to respond to different stimuli. The humans are digging into the neurons and categorizing the concepts as they understand them. Once the groupings are identified, they can be invoked in a generative fashion. This is fully within the realm of what I expect today’s DL networks can do.
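To make that last point concrete, here is a minimal sketch of the general idea: find hidden units whose activations correlate with a concept, then boost those units to "invoke" the concept during generation. Everything here is hypothetical and illustrative; it assumes a PyTorch-style generator `gen` with a convolutional intermediate `layer`, and `concept_scores` coming from some external labeler or classifier. It is not the authors' actual method.

```python
# Illustrative sketch only (hypothetical model, layer, and scores):
# locate "concept" units, then amplify them while generating.
import torch

def find_concept_units(gen, layer, latents, concept_scores, top_k=8):
    """Rank units in `layer` by correlation with a per-sample concept score."""
    acts = []

    def hook(_, __, output):
        # Average spatial dims -> one activation value per unit per sample.
        acts.append(output.mean(dim=(2, 3)).detach())

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        gen(latents)
    handle.remove()

    a = torch.cat(acts)                        # (N, units)
    a = (a - a.mean(0)) / (a.std(0) + 1e-8)    # standardize per unit
    s = (concept_scores - concept_scores.mean()) / (concept_scores.std() + 1e-8)
    corr = (a * s[:, None]).mean(0)            # correlation of each unit with the concept
    return corr.topk(top_k).indices            # the candidate "concept" units

def invoke_concept(gen, layer, units, latents, gain=5.0):
    """Generate while boosting the identified units to steer the output."""
    def hook(_, __, output):
        output[:, units] *= gain
        return output

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        out = gen(latents)
    handle.remove()
    return out
```

The key point is that the "concept" lives in the human-chosen scoring and labeling step; the network itself is just being probed and steered.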
That being said, it only goes to show how much more we can squeeze from current techniques. This is a very cool finding. Just imagine the art programs and game environments people are going to create with it.