@Oscar_J_Romero - Welcome to the community!
I think that most people here will agree that the scope of the HTM model is rather limited at this time.
I liken it to an intense focus on the transistor, with the understanding that at some point in the near future these parts will be combined into useful larger structures, like a computer. Numenta has stated that it anticipates this integration into larger structures in future work.
I know that you may be eager to see some sort of usable technology demonstration, but I would point out that the PDP books came out in 1986, and the related technology really did not hit its stride until the last decade or so. I don’t know whether HTM will take 25 years, but I think it is unfair to expect instant results from the model.
As for the current progress of the deep learning community, I think it is fair to point out that it extracts a single property of the brain (elaborations of layered perceptrons) and develops that mechanism to exploit the useful properties of data islands and manifold formation. The neuron model it uses is a limited version of a real neuron and fails to incorporate the learning mechanisms being explored in the HTM model.
Due to this simplification, the DL community has had to resort to heroic methods to load these structures with useful connection data. We know that the brain does most of this loading with far fewer presentations; HTM is an online system that learns with a comparably small number of presentations and no requirement for supervision.
I anticipate a future fusion of the technologies to gain the advantages that each has to offer.
As for a systems-level approach to a high-level cognitive model, one that incorporates the sub-cortical structures: you are preaching to the choir here.
This thread incorporates most of my thinking along these lines, with various posts addressing different aspects of this very complex system.