Double weighting for neural networks

The weight-and-sum operation in a neural network is a dot product. If you wanted to build a fully connected layer-to-layer net in the straightforward way, you would need n² weights and n² multiply-adds per layer. I showed before how to dodge that using random projections (RP). The problem with RP is that the network is void of any spatial regularity, so you won't get rotation/scaling/translation invariance occurring naturally. A generalization may let you address that:
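As a reminder of the RP trick the post refers to, here is a minimal sketch of one common construction: a random sign flip followed by a fast Walsh-Hadamard transform. The function names are mine, and I'm assuming the orthonormal scaling of the WHT; the point is just that the projection costs O(n log n) with no stored weight matrix.

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform (orthonormal), length must be a power of 2."""
    x = np.array(x, dtype=float)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x / np.sqrt(n)

def random_projection(x, signs):
    # Random +/-1 sign flip, then the WHT. Together they act like a
    # fixed random orthonormal matrix, but cost O(n log n) to apply.
    return wht(signs * x)

rng = np.random.default_rng(0)
n = 8
signs = rng.choice([-1.0, 1.0], size=n)
x = rng.standard_normal(n)
y = random_projection(x, signs)
```

Because the sign flip and the orthonormal WHT both preserve vector length, `y` has the same norm as `x`.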
In a trained net I wouldn't be surprised to find regular repeating patterns in the first weighting vector, the one applied before the Walsh-Hadamard transform.
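One way to read the "double weighting" of the title is an elementwise weight vector applied before the transform and a second one applied after, both learned. This is a sketch under that assumption (the names `w_in` and `w_out` are mine, not the author's): 2n parameters and O(n log n) compute per layer, versus n² for a dense layer.

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform (orthonormal), length must be a power of 2."""
    x = np.array(x, dtype=float)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def double_weight_layer(x, w_in, w_out):
    # Elementwise weight, fast transform, elementwise weight again.
    # The first weighting (w_in) is where spatially regular patterns
    # could appear during training, as suggested above.
    return w_out * wht(w_in * x)
```

With `w_in` and `w_out` both all ones, the layer reduces to a plain WHT, so the dense-matrix cost never appears anywhere in the computation.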