I agree with you and completely understand the goals of Numenta. As a DL practitioner, I'm also keen to apply whatever works best, even if it isn't biologically correct (where "best" == "good generalization" && ("less training data" || "more efficient computation" || "explainability of how the system arrived at its conclusion")).
I support the Gospel according to Bami, but I'll encourage anyone who feels like it to try applying variations of it to fit their specific situation. Maybe this thread should be moved to the "tangential" area?
My main thrust is that the inherent structure of SPs already creates some pattern recognition, even without training. Combined with some minimal training, and even before HTM reaches its end goal, SPs can already start assisting with a wider array of real-world applications beyond strictly time-series data (at which HTM excels). On a practical level, the potential efficiency gains translate into real environmental savings. In the meantime, Numenta will continue its mission of understanding the biological functions of the brain, creating an ever more complete picture of how we learn and think. I don't think there should be exclusivity between the two.
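To illustrate that first point with a toy sketch (my own simplification, not NuPIC code): an untrained spatial pooler behaves roughly like a random projection followed by k-winners-take-all, so similar inputs already map to overlapping sets of active columns before any permanence learning happens. All parameters here (pool size, column count, sparsity) are arbitrary choices for the demo.

```python
import random

random.seed(42)

N_IN, N_COLS, K = 64, 128, 8   # input bits, SP columns, active columns (sparsity)
POOL_SIZE = 16                 # arbitrary "potential pool" size per column

# Each column connects to a random subset of the input bits.
pools = [random.sample(range(N_IN), POOL_SIZE) for _ in range(N_COLS)]

def sp_untrained(input_bits):
    """Untrained SP: score each column by overlap with its random pool,
    then keep the k best-scoring columns (k-winners-take-all)."""
    overlaps = [sum(1 for i in pool if i in input_bits) for pool in pools]
    ranked = sorted(range(N_COLS), key=lambda c: (overlaps[c], c), reverse=True)
    return set(ranked[:K])

a = set(range(0, 20))    # input pattern A
b = set(range(5, 25))    # B overlaps A heavily
c = set(range(40, 60))   # C is disjoint from A

sa, sb, sc = sp_untrained(a), sp_untrained(b), sp_untrained(c)
# Similar inputs (A, B) should share more active columns than dissimilar ones (A, C).
print(len(sa & sb), len(sa & sc))
```

With zero training, the random wiring alone preserves input similarity in the output SDRs, which is the "free" pattern recognition I mean; training then refines which connections matter.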
But it is good to have a moderating voice, and I appreciate you taking the time to put your thoughts together into a response. Perhaps it would be appropriate to classify things as either "Pure HTM" or "HTM hybrid". Ultimately, understanding how intelligence works in nature will lead to the most robust and efficient design, and it should be pursued without fail. But if we refused to use DL simply because it isn't a biologically accurate model, we'd be needlessly limiting ourselves.
I don’t want to get stuck in a world of false dichotomies and artificial binary boundaries between ideas when it comes to applied solutions.
Deep Learning, or the ideas around it, had been around for ~30 years before somebody was able to crack it and make it work (realizing that we just needed more data to process). Who's to say the feedback from those tangential projects won't somehow assist with the main efforts of Numenta, where some abstract observation might provide a clue to the main mission of research?
I feel we shouldn't close off opportunities prematurely. Just because I and perhaps hundreds of others don't succeed at something doesn't mean others shouldn't still try.
One thing I observe here is a bias toward forming camps (I suspect that's just human nature), so folks may naturally resist building cross-domain skill sets and knowledge bases. Somebody whose only background is neuroscience, who has previously failed while trying to model neurons in a strictly biological fashion (nothing wrong with that, by the way), might as a result only have the perspective of their own domain. Our only real limits here are that we can't violate the laws of math and physics. Beyond that, it's open for exploration.
I've been working heavily in computer vision for the past year, using both C++ and Python, and I don't believe I'm the only one here with this background. In a month's time, I'll be devoting solid blocks of research time to applying HTM in this area, without strict biological constraints or complete adherence to HTM theory (as it currently publicly stands), because I believe the efficiency gains over purely backprop-based systems will be worth it. After all, nobody's up in arms over the slight divergence from strict HTM by cortical.io.
Feel free to point out when something isn’t pure HTM, but I do suggest avoiding discouraging folks (or even appearing to) from exploring this area. The community as a whole only loses out from that behavior.