Esperanto NLP using HTM and my findings

One of the main differences between an RNN and the TM is that an RNN is a universal function approximator, while the TM is basically a sequence autoencoder in a box. RNNs have a huge advantage in that they can learn hierarchical meta-representations of sequences by stacking RNN layers and training them end to end through backpropagation, while in HTM we are stuck with a single layer of TM. (Although HTM is far more sample-efficient than an RNN.)
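
To make the stacking point concrete, here is a minimal sketch (assuming PyTorch; the vocabulary size, layer sizes, and sequence shapes are arbitrary placeholders) of how stacking recurrent layers lets each layer build on the representation produced by the layer below, which is exactly what a single TM layer can't do on its own:

```python
# Minimal sketch of a stacked (hierarchical) RNN, assuming PyTorch.
# All sizes are arbitrary placeholders for illustration only.
import torch
import torch.nn as nn

class StackedRNN(nn.Module):
    def __init__(self, vocab_size=64, hidden_size=128, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # num_layers > 1 stacks LSTMs: layer n consumes the hidden states
        # of layer n-1, forming a hierarchy of sequence representations.
        self.rnn = nn.LSTM(hidden_size, hidden_size,
                           num_layers=num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)   # (batch, seq, hidden)
        h, _ = self.rnn(x)       # all layers trained jointly by backprop
        return self.out(h)       # next-token logits at every position

# Usage: score next-token predictions for a toy sequence of token ids.
model = StackedRNN()
tokens = torch.randint(0, 64, (1, 10))   # batch of 1, sequence length 10
logits = model(tokens)                   # shape (1, 10, 64)
```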

I think HTM needs some automatic meta-learning method to compete with neural networks.