I have been messing with NLP using Temporal Memory on weekends. I coded up a simple program and made it learn the structure of Esperanto sentences. (I’m testing with Esperanto because the language is regular and has obvious grammatical features.)
And my results are… not surprising. HTM isn’t the most powerful ML model yet; it performs close to, or slightly better than, an RNN. Still, it is powerful in the sense that there is no need to backpropagate through the entire sequence, which may make it suitable for long sequences.
This is my blog post detailing the results.
BTW, I guess it is not really NLP, since Esperanto is not a natural language.