Predictive systems FTW!

gqn
predictive

#1

“Neuroscientists have long suspected that a similar mechanism drives how the brain works. (Indeed, those speculations are part of what inspired the GQN team to pursue this approach.) According to this “predictive coding” theory, at each level of a cognitive process, the brain generates models, or beliefs, about what information it should be receiving from the level below it. These beliefs get translated into predictions about what should be experienced in a given situation, providing the best explanation of what’s out there so that the experience will make sense.”

“The prediction errors that can’t be explained away get passed up through connections to higher levels (as “feedforward” signals, rather than feedback), where they’re considered newsworthy, something for the system to pay attention to and deal with accordingly. “The game is now about adjusting the internal models, the brain dynamics, so as to suppress prediction error,” said Karl Friston of University College London, a renowned neuroscientist and one of the pioneers of the predictive coding hypothesis.”
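The loop described in these two excerpts — a higher level issues top-down predictions, the unexplained residual travels upward as a feedforward error signal, and the internal model is adjusted to suppress that error — can be sketched in a few lines. This is an illustrative toy, not the GQN architecture or Friston's full formulation; the matrix `W`, the sizes, and the learning rate are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level predictive-coding loop. A fixed generative mapping W turns
# the higher-level belief `mu` into a prediction of the "sensory" input x;
# the belief is then nudged by gradient descent to explain away the
# prediction error. All names and dimensions are chosen for illustration.
W = rng.normal(size=(8, 4))   # generative mapping: belief -> predicted input
x = rng.normal(size=8)        # input arriving from the level below
mu = np.zeros(4)              # higher-level belief (internal model state)

init_error = np.linalg.norm(x - W @ mu)

lr = 0.02
for _ in range(300):
    pred = W @ mu             # top-down prediction
    err = x - pred            # prediction error (the feedforward signal)
    mu += lr * (W.T @ err)    # adjust the belief to suppress the error

final_error = np.linalg.norm(x - W @ mu)
print(init_error, final_error)
```

Since the belief is lower-dimensional than the input, the error does not vanish entirely; it shrinks to the part of the input the internal model cannot explain, which is exactly the residual the theory says gets passed up as "newsworthy."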


#2

It’s exciting to see the big AI players paying attention to biology; hopefully this snowballs into more investment in the area.

The way they’ve implemented it seems like a fairly modest modification of other recent reinforcement learning techniques like those in OpenAI Universe. With an LSTM network making the predictions, it’s difficult to imagine any sort of invariance emerging. The language in their tweet seems a little misleading:

“The Generative Query Network, published today in @ScienceMagazine, learns without human supervision to (1) describe scene elements abstractly, and (2) ‘imagine’ unobserved parts of the scene by rendering from any camera angle.”


#3

DeepMind’s AI Learns To See | Two Minute Papers #263: