This is one area of confusion for anyone who starts digging into the background materials, code, and research that current HTM theory is built on. For some time, the temporal memory algorithm was referred to as Temporal Pooling. Early in HTM research, there was an understanding that the brain pools temporal patterns into stable patterns which “name” a sequence so it can be used in a hierarchical fashion. Since hierarchy was a key idea in HTM’s earlier days, I expect this is why the algorithm was originally called Temporal Pooling.
I should point out that as a newcomer to HTM myself, I was not around during this time frame, so I can only speculate. As the code matured, it likely became apparent that high-order sequence memory and the pooling of sequences into stable representations needed to be broken out into separate functions, and I imagine this was the motivation behind renaming the algorithm from Temporal Pooling to Temporal Memory. Someone at Numenta can undoubtedly explain this better, though.
I personally see this as more of a bookkeeping exercise than anything else. The distinction between the two functions was known early in the theory. One example is Jeff’s presentation at the UBC Department of Computer Science in March 2010. At around 22:27, Jeff describes these as separate functions, and at 48:57 he describes one possible implementation of temporal pooling (essentially feeding the SP of the next hierarchical level with activity pooled over multiple timesteps). I personally believe that particular implementation is missing some important properties (such as lower levels being able to represent long sequences and complex objects), but it does show that TP has always been understood to be an important part of the cortical circuit when it comes to hierarchy.
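To make that implementation idea concrete, here is a minimal sketch of pooling activity over multiple timesteps into a single input for the next level’s SP. This is purely illustrative on my part, not code from any Numenta repository; the function name, window size, and toy SDRs are all my own assumptions, and real SDRs would of course be far larger and sparser.

```python
# Illustrative sketch (not Numenta code): pool the active-cell SDRs
# from the last few timesteps into one union SDR, which could then be
# fed as input to the next hierarchical level's spatial pooler.

def pool_timesteps(active_cells_per_step, window=3):
    """Union the active-cell sets from the last `window` timesteps.

    active_cells_per_step: list of sets of active cell indices,
    one set per timestep, oldest first.
    """
    pooled = set()
    for step_cells in active_cells_per_step[-window:]:
        pooled |= step_cells
    return pooled

# A toy sequence A -> B -> C, each element a sparse set of cell indices.
sequence = [{1, 5, 9}, {2, 5, 11}, {3, 7, 9}]

# Once the window spans the sequence, the pooled SDR is the same no
# matter which element arrived last, giving the next level a stable
# "name" for the sequence -- at the cost of a denser representation.
print(sorted(pool_timesteps(sequence, window=3)))  # → [1, 2, 3, 5, 7, 9, 11]
```

The growing density is one reason I suspect this simple union scheme falls short for long sequences: pooling many timesteps this way erodes sparsity, which is part of why I think the properties mentioned above go missing.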
I think “Temporal Pooling” is still a perfectly valid term to keep in the HTM vocabulary, because there is still a need to pool temporal inputs into stable outputs (the SMI “Output Layer”, for example, will require this functionality in the current round of research). Ultimately, what TP should do is form a stable, sparse representation which “names” an object or sequence while preserving its semantics.