In my opinion, we're talking about two different things:
1.- HTM theory itself.
2.- HTM implementations, such as Nupic or htm.java.
So I've just got a lot of questions:
Is HTM theory harder to understand than the math behind ML? I don't think so; HTM is more intuitive, and you can explain it without any math. However, I can easily find a lot of "low level" documentation about ML, while the HTM documentation is mainly exhaustive and detailed. Maybe a little hard for many people. When is the book "HTM Theory for Dummies" coming?
Is Nupic a good implementation, not only in how faithfully it follows HTM theory but also in other technical characteristics: is it user friendly, well documented, modular, easy to install, with educational examples or similar? Are there clear metrics to check and compare the accuracy of the network?
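To make the metrics question concrete, here is a toy sketch (my own, nothing from Nupic itself) of the kind of clear, comparable metric I mean: precision and recall of anomaly flags against labeled ground truth.

```python
# Toy sketch of a "clear metric" for an anomaly detector.
# This is NOT Nupic code, just an illustration of the idea:
# compare predicted anomaly flags against a labeled ground truth.

def precision_recall(predicted, actual):
    """predicted, actual: equal-length lists of 0/1 anomaly flags."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: the model flags 3 points, 2 of them are real anomalies,
# and it misses 1 real anomaly.
predicted = [0, 1, 0, 1, 1, 0]
actual    = [0, 1, 0, 0, 1, 1]
print(precision_recall(predicted, actual))
```

Something this simple, published alongside each example, would already let beginners compare runs.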
In my own experience, a "welcome pack for beginners" would have reduced the effort needed to get started. Not everyone can read the code directly. Maybe a visual build tool to put together sensors, models, and classifiers?
ML has a lot of wrappers and interfaces, such as Keras: you hardly need to know anything about TensorFlow to use it.
Is anomaly detection on scalar series really the best example to show the potential of HTM theory? Isn't it, rather, a paradigmatic mathematical problem? Do you want to be compared with decades of math research on your very first step?
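To illustrate what I mean by "decades of math research": even a classic rolling z-score already catches the obvious spikes in a scalar series. This is my own toy baseline, not anything from Nupic, just the kind of simple competitor HTM gets measured against.

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Flag points deviating more than `threshold` standard
    deviations from the mean of the preceding `window` values.
    A classic statistical baseline, not an HTM algorithm."""
    history = deque(maxlen=window)
    flags = []
    for x in series:
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            is_anomaly = sigma > 0 and abs(x - mu) > threshold * sigma
        else:
            is_anomaly = False  # not enough history yet
        flags.append(is_anomaly)
        history.append(x)
    return flags

# A flat series with one spike: only the spike gets flagged.
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0, 1.0, 1.1]
print(rolling_zscore_anomalies(data))
```

Against baselines this cheap, an HTM example needs to show something they can't do, like adapting to changing patterns without retraining.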
As usual, excuse my basic English.