I think another way to understand deep learning's limits is to understand why it works so well. I recently wrote about a paper by Lin & Tegmark that tries to explain mathematically why deep learning works so well in practice.
The long and short of it is basically this: (1) the phenomena we're interested in can generally be described by low-order polynomials; (2) deep networks are good at approximating low-order polynomials; (3) therefore, deep learning works well in practice.
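To make point (2) concrete, here's a toy sketch of the kind of construction the paper uses: four neurons with a smooth nonlinearity can approximately multiply two numbers, and multiplication is the building block of polynomials. This is my own minimal Python rendering, using softplus as the nonlinearity (for softplus, the second derivative at zero is 1/4); the function name and the scale parameter `lam` are my choices, not the paper's.

```python
import math

def softplus(x):
    """Smooth nonlinearity; sigma''(0) = 1/4."""
    return math.log1p(math.exp(x))

def approx_multiply(u, v, lam=0.01):
    """Approximate u*v with four softplus units.

    Taylor-expanding softplus around 0, the odd terms cancel in this
    symmetric combination and the quadratic terms leave ~ 4*lam^2*uv
    times sigma''(0); dividing that back out recovers u*v, with error
    shrinking as lam -> 0.
    """
    s = softplus
    num = (s(lam * (u + v)) + s(-lam * (u + v))
           - s(lam * (u - v)) - s(-lam * (u - v)))
    return num / (4 * lam**2 * 0.25)

print(approx_multiply(2.0, 3.0))  # ≈ 6.0
```

The point isn't that networks literally do this, but that low-order polynomial structure is cheap for them to represent: once multiplication costs a handful of neurons, any low-order polynomial costs only a shallow stack of them.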
Of course, the paper leaves a whole host of issues unaddressed, such as the enormous training-set requirements, the time dimension, and continual learning. But I think it really helps frame things.
So, knowing what deep learning is good at, you can also see where it breaks.