I wonder if there is any simple anomaly detection example using some data source other than a CSV file.
I have 64*64 images of type uint8, and from each one I make a 1-D NumPy array of 0s and 1s with size 64*64*8 = 32768. Now I'm having difficulty running an example with my input data, and I couldn't find any example of such a case.
Obviously such data doesn't need any encoder/encoding, right? Besides that, can HTM/NuPIC handle such long arrays of data? In the example code there are only small records with a few fields…
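The byte-to-bit conversion described above can be sketched with NumPy's `unpackbits`, which expands each uint8 value into its 8 bits (the random image here is just a stand-in for a real MRI slice):

```python
import numpy as np

# Stand-in for a 64x64 uint8 MRI slice
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

# Unpack each byte into 8 bits: 64*64*8 = 32768 values, each 0 or 1
bits = np.unpackbits(image.ravel())

print(bits.shape)   # (32768,)
```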
Let’s start at the beginning to better understand what you are trying to do…
What are the images of? Are they sequential?
Thanks for the reply.
They are brain MRI images: one brain with layers of MRI scans, for several brains.
But right now that doesn't matter, I think. I just can't find any walkthrough example of using data sources other than CSV files. Even those CSV files are simple, with only one or a few fields, so I wonder whether the whole network can handle serious data.
Hey @mese79, so NuPIC is set up to handle sequential data of certain data types, which at this point do not include images (to my knowledge). There have been discussions on how to encode image data like yours into sparse distributed representations, though the images would still need to be structured in time for NuPIC's outputs to make sense.
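For a rough idea of what "encoding an image into a sparse distributed representation" could look like — this is purely an illustrative sketch, not a NuPIC encoder or any agreed-upon approach — one simple scheme keeps only the brightest few percent of pixels as active bits:

```python
import numpy as np

def image_to_sdr(image, sparsity=0.02):
    """Illustrative only: mark the brightest ~2% of pixels as active bits."""
    flat = image.astype(np.float64).ravel()
    k = max(1, int(len(flat) * sparsity))      # number of active bits
    sdr = np.zeros(len(flat), dtype=np.uint8)
    sdr[np.argsort(flat)[-k:]] = 1             # activate the k brightest pixels
    return sdr

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
sdr = image_to_sdr(img)
print(int(sdr.sum()), sdr.shape)  # 81 active bits out of 4096
```

A real encoder would need to preserve semantic similarity (similar images yielding overlapping SDRs), which raw brightness thresholding does not guarantee.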
Also, what kinds of anomalies would you expect to find in your images? NuPIC excels at finding contextual anomalies in sequential data, so its viability as a tool depends on the nature of the anomalies. If an anomaly is an image that looks very different from the others, you may be better off with a CNN or something else that looks specifically for spatial features.
It really does matter. You will get much better image classification results using state of the art Deep Learning techniques. HTM is not a good spatial image classification system. It evolved to memorize temporal sequences. That is where it really shines.