I am struggling with the concept of noise. I understand the definition of noise, i.e., a cloudy, altered, or partial input data point. What I do not quite understand is how this definition applies to actual data.
For example, if I am analyzing stock trading data arriving in a stream, how would I know that the data became noisy at some point? Wouldn’t HTM simply treat the new data as “training” data and adjust its connections and other parameters to learn it? For non-biological agents, how can input become noisy at all? Perhaps I am missing something obvious here.
Another example: converting a stream of images coming from the retina into SDRs. If I develop cataracts and my vision becomes cloudy, the generated SDRs will differ from the SDRs that would be generated for the same image without the cloudiness. Is this an example of my neocortex, having already learned to see, now tolerating noisy input? Or is it adjusting its connections to “better” handle this data?
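To make the cataract example concrete, here is a toy sketch of what I understand “noise tolerance” to mean for SDRs (this is not actual HTM/NuPIC code; the SDR size, sparsity, and amount of corruption are made-up numbers). A noisy SDR can still overlap heavily with the clean one:

```python
import random

def make_sdr(size=2048, active=40, seed=1):
    """Generate a toy SDR: a set of active bit indices."""
    rng = random.Random(seed)
    return set(rng.sample(range(size), active))

def add_noise(sdr, size=2048, flip=10, seed=2):
    """Corrupt an SDR by moving `flip` active bits to random inactive positions."""
    rng = random.Random(seed)
    dropped = set(rng.sample(sorted(sdr), flip))      # bits lost to "cloudiness"
    candidates = [i for i in range(size) if i not in sdr]
    added = set(rng.sample(candidates, flip))          # spurious bits introduced
    return (sdr - dropped) | added

clean = make_sdr()
noisy = add_noise(clean)
overlap = len(clean & noisy)
print(f"overlap: {overlap} of {len(clean)} active bits")  # 30 of 40 here
```

If recognition depends on overlap rather than an exact match, a moderately corrupted SDR still mostly matches the learned pattern. My question is about the step after that: does the system merely tolerate the corrupted input, or does it also re-learn, adjusting its connections toward the new (noisy) pattern?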