Classification and Predicting the Value of an Output Class

Hi All,

I have studied the HTM theory and am currently playing with the HTM implementation in Java. I have checked a few tests in PersistenceAPITest; they are related to anomaly detection. I have searched the forum for classical classification using HTM, in which we have to predict the value of an output class and we have training examples, and I have found two references.

But both discussions referred to NuPIC for the implementation. I have also studied the introduction and description of the Network API in HTM.Java in detail. It uses the SDR or CLA classifier, but the examples are focused on anomaly detection.
My question is probably very basic, but is it possible to do classification in which we have a binary class, like yes/no or male/female?
Or is HTM.Java built specifically for anomaly detection?

Regards,
@Usama_Furqan


AFAIK classifiers are only used to extract predictions from the TM’s winner cells. If they were included in anomaly detection models and sample code, it was probably just because most of our anomaly detection apps started as prediction models. It is easy to switch between them.

So if you’re just trying to get predictions out of HTM.Java, yes, you can do it. Here is the SDRClassifier you should use to do it.
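If it helps, here is a minimal sketch of what that wiring can look like with the Network API. It is loosely adapted from the hotgym NetworkAPIDemo, so the parameter harness (NetworkDemoHarness), the “consumption” field, and the rec-center-hourly.csv file are all borrowed from that demo rather than from any classification-specific API, and package paths may differ slightly between HTM.Java versions:

import java.util.HashMap;
import java.util.Map;

import org.numenta.nupic.Parameters;
import org.numenta.nupic.Parameters.KEY;
import org.numenta.nupic.algorithms.Classifier;
import org.numenta.nupic.algorithms.SDRClassifier;
import org.numenta.nupic.algorithms.SpatialPooler;
import org.numenta.nupic.algorithms.TemporalMemory;
import org.numenta.nupic.datagen.ResourceLocator;
import org.numenta.nupic.examples.napi.hotgym.NetworkDemoHarness;
import org.numenta.nupic.network.Network;
import org.numenta.nupic.network.sensor.FileSensor;
import org.numenta.nupic.network.sensor.Sensor;
import org.numenta.nupic.network.sensor.SensorParams;
import org.numenta.nupic.network.sensor.SensorParams.Keys;

public class PredictionSketch {

    public static void main(String[] args) {
        // Demo-harness parameters; swap in your own encoder/SP/TM settings.
        Parameters p = NetworkDemoHarness.getParameters()
            .union(NetworkDemoHarness.getNetworkDemoTestEncoderParams());

        // Tell the layer which field to classify, and with which classifier.
        Map<String, Class<? extends Classifier>> inferredFields = new HashMap<>();
        inferredFields.put("consumption", SDRClassifier.class);
        p.set(KEY.INFERRED_FIELDS, inferredFields);

        Network network = Network.create("Prediction Sketch", p)
            .add(Network.createRegion("Region 1")
                .add(Network.createLayer("Layer 2/3", p)
                    .alterParameter(KEY.AUTO_CLASSIFY, Boolean.TRUE)
                    .add(new TemporalMemory())
                    .add(new SpatialPooler())
                    .add(Sensor.create(FileSensor::create, SensorParams.create(
                        Keys::path, "", ResourceLocator.path("rec-center-hourly.csv"))))));

        // Each Inference carries the classifier's output for the inferred field.
        network.observe().subscribe(inference -> {
            Object predicted = inference.getClassification("consumption").getMostProbableValue(1);
            System.out.println(inference.getRecordNum() + " -> 1-step-ahead prediction: " + predicted);
        });

        network.start();
    }
}

For a binary yes/no or male/female class, the idea would be the same: encode that field categorically (HTM.Java has a CategoryEncoder for this) and point KEY.INFERRED_FIELDS at it.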

Absolutely.

Absolutely not. :wink:


Thank you so much. :slight_smile:
I have another question: if I get anomaly detection results from HTM.Java and I want to evaluate those results with NAB (the Numenta Anomaly Benchmark), should I follow the NAB entry points link (https://github.com/numenta/NAB/wiki#nab-entry-points)?
Or is there any other integration available for NAB with HTM.Java?
I ask because the NAB usage section (https://github.com/numenta/NAB#usage) has clear instructions for NuPIC but nothing specific to HTM.Java.

Regards,
@Usama_Furqan


Hi @Usama_Furqan,

You would run NAB using the HTM.Java NAB detector. Detectors are specified on the command line, I believe, but it’s been over a year since I messed around with it. You can also try writing to the Numenta engineer who coded it, @lscheinkman, for instructions…

Cheers,
David


Thank you @cogmission.
I am trying to evaluate the classification and prediction capabilities of HTM.Java, which is why I am playing with the FoxEatsDemo example in a development environment. I have a few questions.

  1. As I believe fox-eats is a classification and prediction scenario, why do we create the network with only temporal functionality? Why not spatial? As I believe the SDRClassifier is used for classification, is this example using the SDRClassifier for inferring? This is like what I have seen in the hotgym example, where the SDRClassifier is set in the parameters:

p.set(KEY.INFERRED_FIELDS, getInferredFieldsMap("consumption", SDRClassifier.class));

  2. The FoxEatsDemo shows an example of what a fox eats, but "fox" is not in the previously seen data, which means the HTM network would be unaware of it. My question is: if HTM is unaware of something, how can it find out its semantics and compare its similarity with other things? (It's the same with the human brain: if I don't know what a raccoon is, how can I predict its features?)

  3. The FoxEatsDemo apparently has non-temporal data, but we are using temporal memory, so are we internally feeding the data with temporal sequence numbers? HTM Studio, for example, requires us to assign numbers in the case of non-temporal data.

Regards,
@Usama_Furqan

Every word is represented by a spatial SDR pattern. The TM learns the progression of these patterns in 3-word sequences. After learning a bunch of them, it can predict the 3rd term. The prediction is a spatial pattern representing a word.

You are right, it never saw fox. But it saw similar terms like coyote, and it learned what those semantically similar animals ate. It generalized to predict something a fox would eat, given the patterns it has seen before. The spatial pattern for fox is created by Cortical.io, which also created all the other semantic fingerprints. They are all consistent, so the prediction resolves to a word that makes sense.
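To make that concrete, below is a stripped-down illustration of the mechanism, not the actual FoxEatsDemo code: tiny hand-made "fingerprints" stand in for Cortical.io's real 16,384-bit SDRs, and a raw TemporalMemory learns one 3-word sequence. The class and method names follow HTM.Java's TemporalMemory tests, and package paths may differ slightly between versions:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.numenta.nupic.ComputeCycle;
import org.numenta.nupic.Connections;
import org.numenta.nupic.Parameters;
import org.numenta.nupic.Parameters.KEY;
import org.numenta.nupic.algorithms.TemporalMemory;

public class FoxSequenceSketch {

    public static void main(String[] args) {
        Parameters p = Parameters.getTemporalDefaultParameters();
        p.set(KEY.COLUMN_DIMENSIONS, new int[] { 64 });
        p.set(KEY.CELLS_PER_COLUMN, 4);

        Connections cn = new Connections();
        p.apply(cn);
        TemporalMemory tm = new TemporalMemory();
        tm.init(cn);

        // Hypothetical fingerprints: active-column indices per word.
        // "fox" deliberately shares 4 of its 5 bits with "coyote".
        Map<String, int[]> sdr = new HashMap<>();
        sdr.put("coyote", new int[] { 1, 5, 9, 13, 17 });
        sdr.put("fox",    new int[] { 1, 5, 9, 13, 21 });
        sdr.put("eats",   new int[] { 2, 6, 10, 14, 18 });
        sdr.put("rodent", new int[] { 3, 7, 11, 15, 19 });

        // Learn the 3-word sequence "coyote eats rodent" a few times over.
        for (int i = 0; i < 10; i++) {
            for (String word : new String[] { "coyote", "eats", "rodent" }) {
                tm.compute(cn, sdr.get(word), true);
            }
            tm.reset(cn); // mark the sequence boundary
        }

        // Present the never-seen "fox eats ..." and inspect the prediction.
        tm.compute(cn, sdr.get("fox"), false);
        ComputeCycle cycle = tm.compute(cn, sdr.get("eats"), false);

        // Because "fox" overlaps "coyote", the predicted columns should
        // resemble the "rodent" fingerprint.
        int[] predictedColumns = cycle.predictiveCells().stream()
            .mapToInt(cell -> cell.getColumn().getIndex())
            .distinct().sorted().toArray();
        System.out.println("predicted columns: " + Arrays.toString(predictedColumns));
    }
}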


If I may also add more to Matt’s response, which is spot on…

The SpatialPooler encodes spatial features of a given input, each bit containing semantically significant data relative to every other feature the SpatialPooler will encounter for a given problem domain. Cortical.io’s “Fingerprints” (read: SDRs) have semantic spatial information encoded into them relative to the entire Wikipedia knowledge space. Each bit then derives its semantic meaning from a space of 16,384 semantic bits (the size of Cortical.io’s SDRs); each bit and combination of bits forms a subset of the entire Wikipedia semantic space.

So there is no reason to encode spatial data, because in this special case Cortical.io’s semantic encoding already has that.
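As a toy illustration of that point (the fingerprints below are invented; real ones come from Cortical.io's Retina API), the semantic similarity that makes the fox generalization work reduces to counting the active bits two fingerprints share:

import java.util.HashSet;
import java.util.Set;

public class OverlapSketch {

    // Overlap = how many active-bit indices two SDRs have in common.
    static int overlap(int[] a, int[] b) {
        Set<Integer> bits = new HashSet<>();
        for (int bit : a) bits.add(bit);
        int shared = 0;
        for (int bit : b) if (bits.contains(bit)) shared++;
        return shared;
    }

    public static void main(String[] args) {
        // Hypothetical active-bit indices within a 16,384-bit fingerprint.
        int[] fox    = { 12, 407, 1093, 5561, 9002 };
        int[] coyote = { 12, 407, 1093, 5561, 14200 };
        int[] tomato = { 88, 530, 2047, 7311, 12999 };

        System.out.println("fox/coyote overlap: " + overlap(fox, coyote)); // 4 -> semantically close
        System.out.println("fox/tomato overlap: " + overlap(fox, tomato)); // 0 -> unrelated
    }
}

Real fingerprints are of course far denser, but the principle - shared bits encode shared meaning - is the same.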

Additionally, Cortical.io’s semantic encodings (Fingerprints) are reversible, such that the meaning of a given Fingerprint can be decoded back to human language - a feature most definitely used in the FoxEatsDemo - and one that is essential to the generalization back to fox-like creatures! :slight_smile:

So that is one of the added benefits Cortical.io’s proprietary product brings to the table… :wink:

@Usama_Furqan

Have a look at Cortical.io’s API super demo video: https://www.youtube.com/watch?v=CsF4pd7fGF0 - it explains a bit more and you can download and play with Iris yourself!
