A difference between BP and Evolution

Some recent papers show that for large neural nets there are no trapping local minima: there is almost always some direction in weight space that gives an improvement. Training instead tends to get held up at saddle points, which makes learning sequential and slow. If that is the case, is there any benefit to using evolution instead of BP for training?
One difference is the vanishing gradients problem, which affects BP but not evolution: evolution only needs the final fitness value, not a useful gradient signal propagated back through every layer. That would seem to bar BP from finding some solutions that evolution might reach.
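As a quick illustration of the vanishing-gradient point (my own sketch, not from the post): in a deep chain of sigmoid layers, each layer contributes a factor of at most σ'(z) ≤ 0.25 to the backward pass, so the gradient norm shrinks roughly geometrically with depth, while an evolutionary method would only ever look at the final output. The depth, width, and weight scale below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

depth, width = 30, 16  # arbitrary demo sizes
# Modest random weights, scaled so activations stay in a sane range.
weights = [rng.normal(0.0, 1.0, (width, width)) / np.sqrt(width)
           for _ in range(depth)]

# Forward pass, caching the activation of each layer.
a = rng.normal(0.0, 1.0, width)
acts = [a]
for W in weights:
    a = sigmoid(W @ a)
    acts.append(a)

# Backward pass: start from a unit "error" vector at the output and
# push it back through each layer, recording the gradient norm.
g = np.ones(width)
norms = []
for W, a in zip(reversed(weights), reversed(acts[1:])):
    # Chain rule through one sigmoid layer: sigma'(z) = a * (1 - a).
    g = W.T @ (g * a * (1.0 - a))
    norms.append(np.linalg.norm(g))

print(f"gradient norm after 1 layer:   {norms[0]:.3e}")
print(f"gradient norm after {depth} layers:  {norms[-1]:.3e}")
```

Running this shows the norm collapsing toward zero with depth, which is exactly the signal BP loses but a fitness-only method never relied on in the first place.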
Anyway, to change the topic, I have an associative memory example: