Welcome! I'm not a Numenta employee, but if you consider a neurobiological approach such as theirs, where the focus is on reproducing a biological model of neuronal function, you'll see quite a bit more complexity and activity in the typical processing cycle of a "neuron". Our understanding has come a long way since the '50s, when the simple neuron models still used today in deep learning and other neural networks were developed.
For instance, a typical pyramidal neuron has anywhere between 10k and 30k synapses, of which roughly 10% form the feed-forward connections modeled in classical neural networks. The other 90% are mostly lateral connections, which are not accounted for in today's classical NN systems.
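To make the contrast concrete, here's a minimal sketch of the classical "point neuron" abstraction being referred to: just a weighted sum of feed-forward inputs through a nonlinearity. The function name and values are illustrative only; the point is that lateral and apical inputs have no counterpart in this model.

```python
import math

def point_neuron(inputs, weights, bias):
    """Classic artificial neuron: sigmoid of the weighted feed-forward sum.

    Only feed-forward connections exist here -- the ~90% of a pyramidal
    neuron's lateral/apical synapses simply have no place in this model.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example: three inputs, three learned weights, one bias.
print(point_neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.2], 0.1))
```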
Another difference is that instead of modeling synapses with adjustable weights (to emulate synaptic learning), neurobiological systems (and thus HTM systems) use a process called synaptogenesis, a fancy term meaning learning by actually growing and culling synapses.
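A rough sketch of what that looks like in HTM-style systems: each potential synapse carries a scalar "permanence" value, and crossing a threshold makes it a connected (grown) synapse, while decaying below effectively culls it. The constants and function names below are my own illustrative choices, not Numenta's actual implementation.

```python
# Hedged sketch of permanence-based synaptogenesis (HTM-style).
# Assumed, illustrative constants -- not Numenta's real parameters:
CONNECTED_THRESHOLD = 0.5  # permanence >= this => synapse is "grown"
PERM_INC = 0.1             # reinforcement when the presynaptic input was active
PERM_DEC = 0.05            # decay when the presynaptic input was inactive

def learn(permanences, active_inputs):
    """Update permanences for one segment given which inputs were active."""
    for i, perm in enumerate(permanences):
        if i in active_inputs:
            permanences[i] = min(1.0, perm + PERM_INC)  # grow toward connected
        else:
            permanences[i] = max(0.0, perm - PERM_DEC)  # decay toward culled

def connected_synapses(permanences):
    """Indices of synapses that currently count as grown."""
    return [i for i, p in enumerate(permanences) if p >= CONNECTED_THRESHOLD]

# Example: input 0 is repeatedly active, input 1 never is.
perms = [0.45, 0.55]
for _ in range(3):
    learn(perms, active_inputs={0})
print(connected_synapses(perms))  # input 0 has grown in; input 1 was culled
```

Note the contrast with weight adjustment: learning here is binary at the structural level (a synapse either exists or it doesn't), with the permanence acting only as a hidden counter deciding when a connection forms or disappears.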
I'm not a neuroscientist, nor even a data scientist, so this is only my opinion about the differences in "applicable" algorithms that may be used to model neurons, but I would recommend watching Jeff Hawkins' very light and accessible talk here, where he expounds on the differences between their approach and typical neuron modeling.
Cheers and nice meeting you!