Can we use HTM theory on video data to detect anomalies?
The raw video data has to be encoded in some way to be useful.
I am a bit curious about what you would consider an anomaly:
An overall change in lighting?
The amount of noise in the frame?
Color in some area of the frame?
The overall color balance?
Movement of some or all of the elements of the frame?
A scene change?
A change in the shape of some element in the frame?
There are so many ways to describe a video image that you really have to be very specific about what you are trying to accomplish. That would then lead to questions about encoder design and so on.
An anomaly is a change in the image from one frame to the next in the video.
If you have spent any time with video encoders you will know that this is the core of what they do: coding the change in each frame.
HTM is not your answer, but you may get what you are looking for by researching the theory behind MPEG encoders.
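To make the inter-frame-change idea concrete, here is a minimal sketch in Python/NumPy. The "video" is synthetic random data with a deliberate brightness jump at frame 12 (everything here is invented for illustration); the change score is just the mean absolute difference between consecutive frames, which is the crude version of what inter-frame coding measures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grayscale "video": 20 frames of 32x32 pixels with mild noise,
# plus an abrupt brightness jump starting at frame 12 (our "anomaly").
frames = rng.normal(0.5, 0.02, size=(20, 32, 32))
frames[12:] += 0.4

# Per-frame change score: mean absolute difference from the previous frame.
# This is the core quantity MPEG-style inter-frame coding works with.
scores = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Flag frames whose change score is far above the typical level.
threshold = scores.mean() + 3 * scores.std()
anomalies = np.where(scores > threshold)[0] + 1  # +1: diff index -> frame index
print(anomalies)
```

A real detector would estimate the threshold from known-normal footage rather than from the same clip, but the structure is the same: big inter-frame residual means something changed.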
There are also variational convolutional autoencoders, which learn to map features onto specific latent neurons. You should then be able to tell whether there is a high amount of reconstruction loss for whatever you are looking at (image categorization, video prediction, etc.). However, I feel they still need some work, and getting them to operate on video instead of still images would also take some effort.
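The reconstruction-loss check itself can be sketched without a trained network. The example below uses a linear autoencoder (PCA via SVD) as a stand-in for a trained variational convolutional autoencoder, and all the data is synthetic: "normal" flattened frames live near a low-dimensional subspace, an off-subspace frame reconstructs poorly and gets flagged.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" frames: flattened 16x16 patches near a 4-dimensional subspace.
basis = rng.normal(size=(4, 256))
normal = rng.normal(size=(200, 4)) @ basis + rng.normal(0, 0.05, (200, 256))

# Fit a linear autoencoder (PCA via SVD) on the normal data -- a stand-in
# for a trained variational convolutional autoencoder, for illustration only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:4]  # top 4 principal directions act as encoder/decoder

def recon_error(x):
    # Encode to 4 latent values, decode back, measure per-sample MSE.
    z = (x - mean) @ components.T
    xhat = z @ components + mean
    return ((x - xhat) ** 2).mean(axis=-1)

# Threshold from the reconstruction losses on known-normal data.
baseline = recon_error(normal)
threshold = baseline.mean() + 4 * baseline.std()

ok_frame = rng.normal(size=4) @ basis + rng.normal(0, 0.05, 256)
odd_frame = rng.normal(size=256)  # off-subspace: an "anomalous" frame
print(recon_error(ok_frame) < threshold, recon_error(odd_frame) > threshold)
```

A real variational convolutional model would replace the SVD step with learned encoder/decoder networks, but the anomaly decision is the same: frames the model cannot reconstruct well are flagged.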