TM is the core of HTM, and in essence it is sequence learning.
Anomaly detection is sequence recognition.
As you can see below from the 4 possible problems of sequence learning, TM can solve the first 3 in deterministic, closed-loop scenarios.
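To make the "anomaly detection is sequence recognition" point concrete, here is a minimal sketch of recognition-by-prediction. This is not TM itself (no SDRs, no high-order context), just a toy first-order predictor I made up for illustration: a transition is flagged as anomalous when the actual next element was never predicted from the preceding one.

```python
from collections import defaultdict

class FirstOrderPredictor:
    """Toy first-order (Markov) predictor: remembers which elements
    have followed each element in the training sequences."""
    def __init__(self):
        self.successors = defaultdict(set)

    def train(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.successors[prev].add(nxt)

    def predict(self, element):
        # Set of elements ever observed to follow `element`.
        return self.successors[element]

    def recognize(self, sequence):
        # Recognition via prediction: the sequence is "legitimate" iff
        # every actual next element matches a prediction made from the
        # element before it; any mismatch is an anomaly.
        return all(nxt in self.predict(prev)
                   for prev, nxt in zip(sequence, sequence[1:]))

p = FirstOrderPredictor()
p.train("ABCD")
print(p.recognize("ABC"))  # True  - every transition was seen before
print(p.recognize("ACB"))  # False - transition A->C is anomalous
```

Because it is only first-order, this toy cannot disambiguate shared subsequences the way TM's high-order memory can (e.g. it would merge "ABC" and "XBC" contexts at "B"); it only illustrates the recognition-as-prediction reduction.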
The questions are then:

How does it solve problem 4?
Does it need an implementation of the basal ganglia?
Do you expect to solve this inside CC? It does not seem feasible to pack this logic into the cortex!
(Reusing the BG seems much more plausible, because all species learn at least reactions.)

How do you solve stochastic sequences?

How do you solve the open-loop case?
I can see that adding hierarchy can "close" the loop for question 3 ;) (pun intended), by predicting longer patterns of the sequence.
That still leaves 1 and 2.
Also check: https://machinelearningmastery.com/sequenceprediction/
These different sequence learning problems can be more precisely formulated as follows (assume a deterministic world for now):

– Sequence prediction: s_i, s_{i+1}, ..., s_j → s_{j+1}, where 1 ≤ i ≤ j < ∞; that is, given s_i, s_{i+1}, ..., s_j, we want to predict s_{j+1}. When i = 1, we make predictions based on all of the previously seen elements of the sequence. When i = j, we make predictions based only on the immediately preceding element.

– Sequence generation: s_i, s_{i+1}, ..., s_j → s_{j+1}, where 1 ≤ i ≤ j < ∞; that is, given s_i, s_{i+1}, ..., s_j, we want to generate s_{j+1}. (Put this way, it is clear that sequence prediction and generation are essentially the same task.)

– Sequence recognition: s_i, s_{i+1}, ..., s_j → yes or no, where 1 ≤ i ≤ j < ∞; that is, given s_i, s_{i+1}, ..., s_j, we want to determine whether this subsequence is legitimate or not. (There are alternative ways of formulating the sequence recognition problem, for example as a one-shot recognition process, as opposed to the incremental step-by-step recognition process formulated here.) With this formulation, sequence recognition can be turned into sequence generation/prediction, by basing recognition on prediction (see the chapter by D. Wang in this volume); that is, s_i, s_{i+1}, ..., s_j → yes (a recognition problem), if and only if s_i, s_{i+1}, ..., s_{j−1} → s^p_j (a prediction problem) and s^p_j = s^a_j, where s^p_j is the prediction and s^a_j is the actual element.

– Sequential decision making (that is, sequence generation through actions): there are several possible variations. In the goal-oriented case, we have s_i, s_{i+1}, ..., s_j; s_G → a_j, where 1 ≤ i ≤ j < ∞; that is, given s_i, s_{i+1}, ..., s_j and the goal state s_G, we want to choose an action a_j at time step j that will likely lead to s_G in the future.
In the trajectory-oriented case, we have s_i, s_{i+1}, ..., s_j; s_{j+1} → a_j, where 1 ≤ i ≤ j < ∞; that is, given s_i, s_{i+1}, ..., s_j and the desired next state s_{j+1}, we want to choose an action a_j at time step j that will likely lead to s_{j+1} in the next step. In the reinforcement-maximizing case, we have s_i, s_{i+1}, ..., s_j → a_j, where 1 ≤ i ≤ j < ∞; that is, given s_i, s_{i+1}, ..., s_j, we want to choose an action a_j at time step j that will likely lead to receiving maximum total reinforcement in the future. The calculation of total reinforcement can be in terms of discounted or undiscounted cumulative reinforcement, in terms of average reinforcement, or in terms of some other function of reinforcement (Bertsekas and Tsitsiklis 1996, Kaelbling et al. 1996, Sutton and Barto 1997).

The above exposition reveals the relationship among the different categories of sequence learning, under the assumption of a deterministic world. Another assumption in the above discussion is that we considered only closed-loop situations: that is, we deal only with one step beyond what is known or done.
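For reference, the reinforcement-maximizing case above (problem 4, which TM alone does not address) is exactly what standard RL methods solve. A minimal sketch, assuming a made-up deterministic 5-state chain environment with a single rewarded goal state, using one-step tabular Q-learning in the Sutton & Barto sense:

```python
import random

# Toy deterministic chain: states 0..4, actions -1 (left) / +1 (right).
# Reward 1.0 only on entering the goal state 4; 0.0 otherwise.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)  # clamp to the chain
    return s2, (1.0 if s2 == GOAL else 0.0)

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            # One-step Q-learning update:
            # Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
            q[(s, a)] += alpha * (
                r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)]
            )
            s = s2
    return q

q = q_learning()
# Greedy policy: action a_j chosen at each state to maximize future reward.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)  # every non-goal state learns to move right (+1)
```

The point of the sketch is the mapping onto the formulation above: the learned Q-table plays the role of s_i, ..., s_j → a_j (here collapsed to the current state, since the toy world is Markov), which is why reusing BG-like reinforcement machinery, rather than the cortex/TM alone, seems plausible for problem 4.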