Sub-sub topic: Transfer Learning or Inductive Learning
Hello,
I have been here for a bit now, but I am still a newbie when it comes to learning/using HTM. I am interested in all machine learning/neuroscience/brain philosophy approaches. I have a question regarding something I came across while reading about deep reinforcement learning methods.
Context: The deep reinforcement learning technique was used by DeepMind (David Silver et al.) to learn to play various Atari video games. What I found awkward about their approach was that the algorithm started learning each game from scratch; that is, there was no reuse of what had previously been learned.
For example, as a teenager I played Street Fighter and then Mortal Kombat. In terms of learning, for me those two games are quite similar. Some idiosyncratic moves may not translate from one game to the other, but the overall logic is the same. I guess this is called transfer learning, inductive learning, or recursive learning.
I think this should be a fairly obvious property if we are going to build something intelligent. That is, an intelligent algorithm should be able to use what it has already learned to learn something new. For example, if you have learned math, you should have an easier time understanding physics…
I guess my question is: would HTM be naturally good at transfer learning? That is, if you trained HTM to play some game and then asked it to play a very similar game, would it need to learn from scratch?
Chirag