I am familiar with SDRs (sparse distributed representations), though it’s been a while since I’ve read about them. I am also familiar with input normalization for machine learning models, in particular neural networks. Input normalization is often used to speed up training. SDRs, in a way, are also a normalization technique: they produce inputs with certain guaranteed properties (fixed width, fixed sparsity, overlap that reflects semantic similarity). I am looking for a comparison of SDRs and normalization techniques in machine learning. For example, could certain normalization techniques be used to approximate SDRs, so that encoders don’t have to be designed by hand? Could SDRs be learned?
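To make the contrast concrete, here is a minimal sketch comparing the two. The `scalar_sdr` function is a hypothetical toy encoder (a single contiguous block of ON bits, much simpler than any real SDR encoder), used only to show how SDR-style encoding differs from standardization: normalization rescales dense real values, while the encoder produces fixed-sparsity binary vectors whose overlap reflects similarity.

```python
import numpy as np

def standardize(x):
    # Classic ML input normalization: zero mean, unit variance per column.
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0)

def scalar_sdr(value, min_val=0.0, max_val=100.0, n_bits=64, n_active=8):
    # Toy SDR-style scalar encoder (an assumption for illustration):
    # a block of n_active contiguous ON bits whose position tracks the
    # value. Nearby values share ON bits, so overlap encodes similarity.
    sdr = np.zeros(n_bits, dtype=int)
    span = n_bits - n_active
    frac = (np.clip(value, min_val, max_val) - min_val) / (max_val - min_val)
    start = int(round(frac * span))
    sdr[start:start + n_active] = 1
    return sdr

# Standardization keeps inputs dense and real-valued; the encoder
# produces binary vectors with exactly n_active bits ON.
a = scalar_sdr(40.0)
b = scalar_sdr(42.0)
overlap = int(np.dot(a, b))  # shared ON bits between nearby values
```

Note the different guarantees: standardization controls the first two moments of the data, while the encoder controls sparsity and overlap directly, which is closer to what SDR theory cares about.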
What is the relationship between sparse distributed representations and input normalization in machine learning?
I’m not sure I agree with you. SDRs are a semantic data playground: you have tremendous freedom in how you can use them to encode data. However, I do think that Spatial Pooling is a normalizer. Is that what you are really talking about?
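Reading “Spatial Pooling is a normalizer” as a claim about fixed output sparsity, a minimal winner-take-all sketch shows the idea. This is a hypothetical simplification, not Numenta’s actual Spatial Pooler (no boosting, no learning, no potential pools): whatever the density of the input, the output always has exactly `n_winners` active columns.

```python
import numpy as np

def spatial_pool(input_bits, weights, n_winners=4):
    # Toy winner-take-all pooler (an assumed simplification of Spatial
    # Pooling): score each column's overlap with the input, then keep
    # only the top n_winners columns active. Output sparsity is fixed
    # regardless of input density -- the "normalizing" behavior.
    overlaps = weights @ input_bits
    winners = np.argsort(overlaps)[-n_winners:]
    out = np.zeros(weights.shape[0], dtype=int)
    out[winners] = 1
    return out

rng = np.random.default_rng(0)
# 32 columns, each randomly connected to some of 64 input bits.
W = (rng.random((32, 64)) > 0.5).astype(int)

dense = np.ones(64, dtype=int)                 # very dense input
sparse = np.zeros(64, dtype=int)
sparse[:5] = 1                                 # very sparse input
# Both inputs map to outputs with identical, fixed sparsity.
```

In that sense the pooler normalizes whatever the encoder produces into a representation with a controlled activity level, which is the property the question about normalization seems to be after.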