Standard Benchmarks and Scientific Rigor

Can anyone direct me to benchmarks that substantiate the claims of HTM's relevance to visual recognition and NLP?

The claim is made here on the GitHub site:

“There are many things humans find easy to do that computers are currently unable to do. Tasks such as visual pattern recognition, understanding spoken language, recognizing and manipulating objects by touch, and navigating in a complex world are easy for humans. Yet despite decades of research, we have few viable algorithms for achieving human-like performance on a computer.”

I have this one, "Continuous Online Sequence Learning with an Unsupervised Neural Network Model," but as far as I can tell it only uses a NYC taxi-cab dataset for comparisons (not vision- or language-related), unless I'm mistaken.

At this point, most of the computer vision and NLP research community validates on standard datasets and publishes those results (MNIST, CIFAR, the Netflix Prize, ...). I can provide an exhaustive list of these datasets that, if used, could help your cause.


NuPIC doesn't do well on vision tasks. We (meaning Numenta, in this context) have never claimed that it can perform vision tasks effectively in its current state.

And regarding NLP, we've done some experiments and investigations. We have shown that HTM can generalize a bit given sequences of semantic word encodings provided by Cortical.IO, but we make no claims about its efficacy.

That is not a claim about HTM; it is a statement of fact about biological intelligence versus today's artificial intelligence.

The goal behind HTM is twofold:

  1. Understand the principles of intelligence in the neocortex
  2. Implement software based upon those principles

Currently, HTM can make inferences and flag anomalies in streaming temporal scalar data quite well, and we already have an anomaly benchmark for that.
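To make the input/output shape of that task concrete: an anomaly detector for streaming scalar data consumes one value at a time and emits a score per value. The sketch below is NOT the HTM algorithm or NuPIC's API; it is a deliberately simple stand-in (a rolling z-score against a sliding window) that just illustrates the streaming, one-value-at-a-time contract such a detector satisfies. The class name and parameters are invented for illustration.

```python
from collections import deque
import math

class RollingAnomalyScorer:
    """Illustrative streaming anomaly scorer (rolling z-score).

    This is a toy stand-in, not HTM: it scores each incoming scalar
    by its deviation from the mean of the last `window` values.
    """

    def __init__(self, window=20):
        # deque with maxlen automatically discards the oldest value
        self.window = deque(maxlen=window)

    def score(self, value):
        """Return an anomaly score for `value`, then add it to the window."""
        if len(self.window) < 2:
            # Not enough history to estimate spread yet
            self.window.append(value)
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
        std = math.sqrt(var)
        z = abs(value - mean) / std if std > 0 else 0.0
        self.window.append(value)
        return z

# Streaming usage: feed one scalar at a time, get one score back per value.
scorer = RollingAnomalyScorer(window=20)
steady = [scorer.score(10.0 + (i % 5) * 0.1) for i in range(30)]  # low scores
spike = scorer.score(100.0)  # a sudden jump yields a large score
```

A real HTM-based detector differs in substance (it learns temporal sequences, not just a distribution of recent values), but its interface to a data stream looks much the same: one scalar in, one anomaly indication out.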
