If neural networks are hierarchical, context-sensitive associative memory

If neural networks are hierarchical, context-sensitive associative memories, then the more out-of-distribution the input is, the more generic (common, typical, frequent, average) the output should be. And as context is put back in (moving the input closer to the training distribution), the output should become correspondingly more specific.
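A rough way to see the claim in miniature: a toy softmax-addressed key/value memory standing in for one associative layer. Everything here (the sizes, the temperature, the data) is an invented illustration, not anything from the post. An in-distribution query matches one key sharply and retrieves its specific value; an out-of-distribution query matches nothing well, so the weights flatten and the output drifts toward the average of the stored values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 64
keys = rng.normal(size=(n, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)  # unit-norm keys
values = rng.normal(size=(n, d))                     # the "specific" responses

def recall(query, temperature=0.1):
    """Blend stored values, weighted by query-key similarity."""
    sims = keys @ (query / np.linalg.norm(query))
    w = np.exp(sims / temperature)
    w /= w.sum()
    return w @ values, w

def entropy(w):
    return -(w * np.log(w)).sum()

# In-distribution: a query near a stored key -> peaked weights,
# and the output is essentially that key's specific value.
q_in = keys[3] + 0.05 * rng.normal(size=d)
out_in, w_in = recall(q_in)

# Out-of-distribution: a random query matches nothing well -> flatter
# weights, and the output drifts toward the average stored value.
q_out = rng.normal(size=d)
out_out, w_out = recall(q_out)

mean_value = values.mean(axis=0)
print("weight entropy   in/out:", entropy(w_in), entropy(w_out))
print("dist to values[3]   in :", np.linalg.norm(out_in - values[3]))
print("dist to mean value  out:", np.linalg.norm(out_out - mean_value))
```

The weight entropy is the knob to watch: low entropy means a specific recall, high entropy means the generic blend.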


Maybe start with a hypothetical case where an associative memory layer lets all the information at its input pass through, likely in a highly mangled form but still present. First you find that a general case can be remembered; then, as more specialized cases are learned, they can branch off from the general case (see the sketch below).
Then you can work backwards to the real world, where layers do cause information loss.
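One way to make the branching part concrete, under loose assumptions: a memory that learns a single general prototype first, then stores specialized cases as exceptions that only fire when a query matches them closely enough. The threshold and all the data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
general_key = rng.normal(size=d)
general_key /= np.linalg.norm(general_key)
general_value = "general response"

specialized = []  # (key, value) pairs learned after the general case

def learn_special(key, value):
    specialized.append((key / np.linalg.norm(key), value))

def recall(query, threshold=0.8):
    """Return a specialized value if one matches well, else the general one."""
    q = query / np.linalg.norm(query)
    best = max(specialized, key=lambda kv: kv[0] @ q, default=None)
    if best is not None and best[0] @ q > threshold:
        return best[1]        # a specialized case branched off the general one
    return general_value      # fall back to the remembered general case

# A specialized case branches off as a variation of the general key.
k1 = general_key + 0.3 * rng.normal(size=d)
learn_special(k1, "specialized response 1")

print(recall(k1))                   # -> specialized response 1
print(recall(rng.normal(size=d)))   # unfamiliar input -> general response
```

Unfamiliar inputs fall through to the general case, which also lines up with the "more out-of-distribution, more generic" point above.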
