The weighted sum and adversarial inputs

If you view the response of a weighted sum to an input pattern in dot-product terms (the cosine of the angle between the vectors), then generally, as you store more pattern-response items in the weighted sum, the angle between each stored pattern and the weight vector increases. One of those stored patterns sits at the minimum angle.
Likewise, when you train a conventional neural network on examples, for each weighted sum there will be some training example at a minimum angle to that sum's weight vector. There is no reason the response to any other input should be higher than the response to that example, as that would be a super-normal response to a super-normal stimulus.
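A minimal NumPy sketch of the angle argument, assuming random ±1 patterns stored by simple superposition into one weight vector (the dimension, pattern count, and the helper `min_angle_deg` are all illustrative, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 256

def min_angle_deg(num_patterns):
    # Store random +/-1 patterns by summing them into a single weight
    # vector (a Hebbian-style superposition, assumed for illustration).
    patterns = rng.choice([-1.0, 1.0], size=(num_patterns, dim))
    w = patterns.sum(axis=0)
    # Cosine of the angle between the weight vector and each stored pattern.
    cos = patterns @ w / (np.linalg.norm(patterns, axis=1) * np.linalg.norm(w))
    cos = np.clip(cos, -1.0, 1.0)  # guard against floating-point overshoot
    # The best-matched pattern has the largest cosine, i.e. smallest angle.
    return np.degrees(np.arccos(cos.max()))

for n in (1, 4, 16, 64):
    print(n, round(min_angle_deg(n), 1))
```

Running this shows the minimum angle climbing from 0° (one stored pattern) toward 90° as more patterns are superposed, matching the claim that storing more items pushes every pattern further from the weight vector.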
Perhaps one way to help resist adversarial attacks is this: once the network is fully trained, store the minimum and maximum response of each weighted sum over all the training examples, and then always clip to those limits for any input presented to the network.
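A sketch of that clipping idea for a single linear layer, assuming a hypothetical weight matrix `W` and training set (names and shapes are mine, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained layer: W maps 64 inputs to 10 weighted sums.
W = rng.normal(size=(10, 64))
train_x = rng.normal(size=(1000, 64))

# After training, record each weighted sum's min and max response
# over all training examples.
train_z = train_x @ W.T
lo = train_z.min(axis=0)
hi = train_z.max(axis=0)

def clipped_forward(x):
    # At inference, every response is held within the observed range,
    # so no input can produce a super-normal response.
    return np.clip(x @ W.T, lo, hi)

# Even a wildly out-of-distribution input stays within the stored limits.
extreme = 100.0 * rng.normal(size=64)
z = clipped_forward(extreme)
print(np.all(z >= lo) and np.all(z <= hi))
```

In a multi-layer network the same min/max recording would be applied per weighted sum in every layer, before the activation function.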

HTM uses bit overlap of SDRs, which isn't really the sort of thing that compares well with measuring the alignment of dense vectors.
