I’ve seen a lot of people denouncing Numenta/HTM, primarily because it fails to test its algorithms on well-known benchmarks. Now that their algorithms are open source, has anyone tested them on standard benchmarks such as ImageNet classification? Just curious whether those algorithms actually work.
This question has been asked before.
The issue is that HTM does different things from other networks.
The comparison that comes to mind is an F1 race car versus a dump truck. Both have engines of roughly the same horsepower, but they use that power very differently - one goes very fast and one carries very large loads.
DL is very good at mapping one high-dimensional space to another. Exploring and mapping these spaces is a long and computationally expensive process.
HTM is very powerful at learning states and the transitions between them. It can learn these transitions in a single exposure - something far beyond the current abilities of most popular DL models (see the sketch below).
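To make the "single exposure" point concrete, here is a minimal toy sketch in plain Python - not HTM or NuPIC, and nothing like a real temporal memory with sparse distributed representations and high-order context - just a first-order transition table. The point is the contrast in training regime: this memorizes a sequence in one pass, whereas a typical DL sequence model would need many gradient-descent iterations over the same data. The `TransitionMemory` class and its method names are invented for illustration.

```python
from collections import defaultdict

class TransitionMemory:
    """Toy first-order transition memory (illustrative only, not HTM)."""

    def __init__(self):
        # Maps each state to the set of states observed to follow it.
        self.successors = defaultdict(set)

    def learn(self, sequence):
        # Single exposure: one pass over the sequence is enough to store
        # every state-to-state transition it contains.
        for current, nxt in zip(sequence, sequence[1:]):
            self.successors[current].add(nxt)

    def predict(self, state):
        # Return every successor ever seen for this state
        # (empty set if the state was never observed).
        return self.successors[state]

if __name__ == "__main__":
    tm = TransitionMemory()
    tm.learn(["A", "B", "C", "D"])   # the sequence is seen exactly once
    print(tm.predict("B"))           # {'C'}  -- recalled after one exposure
    print(tm.predict("Z"))           # set()  -- never seen, no prediction
```

A gradient-trained model would instead have to loop over this sequence many times, nudging weights each pass, before its prediction for "B" became reliable; the one-shot storage above is the property being contrasted.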
I would pose an alternative question - how do current DL models perform on HTM problem sets?