Interaction Networks for Learning about Objects, Relations and Physics

More quality work from DeepMind:
https://arxiv.org/abs/1612.00222v1


Deep neural nets can reach superhuman performance on some tasks, but they cost a lot of time and money to train.
Once created, though, you could connect them to more agile networks that can do one-shot learning and the like.
I am running the code for the two-layer network I mentioned before, with neurons in one layer learning to fire on common input patterns and a second readout layer producing the wanted output. It looks like it generalizes well when faced with examples outside the training set. And, as I said, it only needs local learning.
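Roughly, in stripped-down Python (a sketch of the idea, not the real code, which is linked further down; the layer sizes and learning rates here are placeholders):

```python
# Sketch of the two-layer setup: a cue layer trained with a winner-take-all
# Hebbian rule, plus a linear readout trained with a delta rule.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_CUE, N_OUT = 16, 64, 4      # hypothetical layer sizes
LR_CUE, LR_OUT = 0.05, 0.01        # hypothetical learning rates

W_cue = rng.normal(size=(N_CUE, N_IN))
W_cue /= np.linalg.norm(W_cue, axis=1, keepdims=True)
W_out = np.zeros((N_OUT, N_CUE))

def train_step(x, target):
    """One presentation: local weight updates only, no backprop."""
    x = x / (np.linalg.norm(x) + 1e-9)
    # Cue layer: winner-take-all response to the input pattern.
    act = W_cue @ x
    win = np.argmax(act)
    # Local Hebbian step: pull the winner's weights toward this input,
    # so frequently seen patterns recruit a dedicated cue neuron.
    W_cue[win] += LR_CUE * (x - W_cue[win])
    W_cue[win] /= np.linalg.norm(W_cue[win])
    # Readout layer: delta rule on the sparse cue activity (also local).
    h = np.zeros(N_CUE)
    h[win] = 1.0
    y = W_out @ h
    W_out += LR_OUT * np.outer(target - y, h)
    return y
```

Both updates are local: the Hebbian step only touches the winning cue's weights, and with a one-hot cue vector the delta rule only touches that cue's column of the readout.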
However, the environment is ultimately a lower-dimensional manifold that you can dominate with memory rather than layers of computation. I'll have a try at an ultra-fluid form of network that is heavily memory-based (roughly the flavor sketched below).
Just for example, while humans are physically coordinated, you could probably build something far more coordinated and fluid in its actions. It would be kinda cool to watch a robot like that in action.
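To make the memory idea concrete, here is the simplest possible version: a nearest-neighbor store where "learning" is just remembering and recall is a distance-weighted blend of the closest memories. This is only meant to show the flavor, not the network I have in mind:

```python
# Minimal memory-based learner: store examples, answer by blending the
# k nearest stored keys. One-shot by construction.
import numpy as np

class MemoryNet:
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, x, y):
        # One-shot "learning": just remember the example.
        self.keys.append(np.asarray(x, float))
        self.values.append(np.asarray(y, float))

    def recall(self, x, k=3):
        # Distance-weighted blend of the k closest memories.
        # Assumes at least one example has been stored.
        K = np.stack(self.keys)
        d = np.linalg.norm(K - np.asarray(x, float), axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)
        V = np.stack([self.values[i] for i in idx])
        return (w[:, None] * V).sum(axis=0) / w.sum()
```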

So for each training-example presentation I get one “neuron” among many to recognize/respond to patterns in the input data. Over time the number of trained (cue) neurons builds up. Then I have a readout layer/net that determines which cues are important and which are not. You do end up with surplus cues. I think that in the biological brain, even though surplus cues “do nothing”, their output spike is still available to drive synapse building if circumstances change and the cue becomes relevant.
https://drive.google.com/open?id=0BwsgMLjV0BnhUXNKcmVKdzFWTGM
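The sketch below shows one way to read that description (the actual code is at the link above): a fresh cue neuron is recruited whenever no existing cue matches the input well enough, and the readout's delta rule decides which cues carry weight. Surplus cues keep spiking with near-zero readout weight, so they stay available for later rewiring. The novelty threshold and learning rate here are invented for illustration:

```python
# Cue recruitment per presentation, with a local delta-rule readout.
import numpy as np

NOVELTY_THRESH = 0.9   # hypothetical: recruit a new cue below this match
LR_OUT = 0.05          # hypothetical readout learning rate

cues = []               # stored cue patterns (unit vectors)
w_out = np.zeros((0,))  # one readout weight per cue (scalar output)

def present(x, target):
    global w_out
    x = x / (np.linalg.norm(x) + 1e-9)
    match = np.array([c @ x for c in cues]) if cues else np.array([])
    if match.size == 0 or match.max() < NOVELTY_THRESH:
        # No existing cue responds strongly: recruit a fresh cue neuron.
        cues.append(x)
        w_out = np.append(w_out, 0.0)
        match = np.append(match, 1.0)   # the new cue matches perfectly
    # For simplicity the same threshold gates which cues fire.
    spikes = (match >= NOVELTY_THRESH).astype(float)
    y = w_out @ spikes
    # Local delta rule: important cues gain weight; surplus cues stay
    # near zero but keep spiking, ready to be wired in later.
    w_out += LR_OUT * (target - y) * spikes
    return y
```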

There is quite a contrast between biological brains and deep neural nets, for sure; they run on different operating principles: http://rsos.royalsocietypublishing.org/content/3/11/160734