Hey guys, have you seen this question on Quora?
What an uninformed disgrace! If someone has not understood a concept, is not even aware of the existence of a community extending into both academia and open source, and has never grasped the science and its goals beyond commercial applications, why would they draw conclusions about a term they are far from understanding and have obviously put no significant effort into understanding?
Please note, I am not criticizing the cited article from Randall C. O’Reilly, Dean R. Wyatte and John Rohrlich. I am referring to the very misleading and uninformed description and allusions to HTM. Yes, I am downvoting right now!
I assume you downvoted the answer?
Severely subjective answer given by one Alan Lockett, PhD. What really irks me is his constant use of “Hawking”, arghhhh!
Something amiss… The answerer misspells JH’s name, and this one doesn’t appear in Quora’s by-answerer categorization… The answerer shows almost 100 answers on Quora, and the syntactical style differs from the one in the link highlighted above… (I) don’t use Quora on a signed-in basis, so this may not be definitive…
AI research is full of twists and turns, so who knows what the future holds? Perhaps the next big wave will be based more closely on the neuroscience results from the last few years. Or perhaps some new experimentally developed framework will emerge.
Despite his answer being focused on the short-term results DNNs have offered, he’s still open to some other model being the right one for AGI, whether it uses HTMs or not.
Talk is cheap. Let’s try to make some difference.
I cannot agree with the answer, but downvoting it will not solve this misconception. Clearly, the author has his own opinion but never did much in-depth research about it.
It would be great if someone could draft a neutral but in-detail answer to the question. Unfortunately, I do not know enough about HTM to do it myself (yet - working on that).
I believe it would be wrong to make this a heated discussion. In the end, this is science (not faith), and we should treat it as such.
On YouTube there is a video from Siraj Raval (also not very in-depth) that answers part of that question:
Agree. Let’s actually build something and show the world instead of rambling in our safe haven!
I’ll draft something up this weekend, and post it here for discussion and revision. If we reach a consensus, I’ll post it as an answer to that Quora question. It is an old question (the current answer is from over a year ago), so we can definitely take the time and do it right, rather than just reacting to the other answer.
This is turning out to be a tad more difficult than I originally anticipated. The focus of the answer needs to be on accurately addressing the specific question (why HTM isn’t as successful as DL) and should not be focused on selling HTM. At the same time, it needs to address the fact that the question implies a particular definition of “successful” (i.e. widespread usage among the AI community).
So far I am focused on two points.
The first point is that HTM is relatively new compared to traditional deep learning technologies. Borrowing some figures that @Bitking pointed out in another thread: the perceptron was introduced in 1957 but did not flower into a usable model until the release of the PDP books in 1986 (29 years). From the PDP books, we didn’t see usable deep networks until the last decade (roughly 25 years). By comparison, HTM is still in its infancy, around a decade old.
The second point is that while the goal of HTM is to establish the working principles of the neocortex and model them in software, the specific piece of that goal achieved thus far doesn’t solve many of the problems the AI community is typically interested in. It is state of the art in one-shot learning and anomaly detection in streaming data, while weak in high-dimensional classification tasks. As HTM captures more of the working principles of the neocortex and becomes applicable to more problems, it will likely gain more interest from the AI community.
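For readers less familiar with the streaming-anomaly-detection setting mentioned above: the task is to score each value as it arrives, using only past data, with no separate training phase. Here is a minimal non-HTM sketch of that setting (a toy rolling z-score baseline in plain Python; the class and parameter names are my own illustrations, not from NuPIC or any HTM library):

```python
from collections import deque
import math

class RollingAnomalyScorer:
    """Toy streaming anomaly scorer (rolling z-score) -- NOT an HTM model.
    Illustrates the task setting only: each incoming value is scored
    against statistics of recent past values, then added to the window."""

    def __init__(self, window=50):
        self.window = deque(maxlen=window)  # recent history only

    def score(self, value):
        # Score against past values BEFORE adding the new one.
        if len(self.window) < 2:
            s = 0.0  # not enough history to judge
        else:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            s = abs(value - mean) / std
        self.window.append(value)
        return s

scorer = RollingAnomalyScorer(window=20)
stream = [10.0] * 30 + [50.0] + [10.0] * 5  # a spike at index 30
scores = [scorer.score(v) for v in stream]
print(max(range(len(scores)), key=scores.__getitem__))  # → 30 (the spike)
```

HTM's anomaly detection is far more sophisticated than this (it learns temporal sequences and flags deviations from predicted transitions, rather than simple distributional outliers), but the sketch shows the online, one-pass constraint that makes the problem different from batch classification.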
Let me know if anyone has some additional points that should be focused on in the answer as well (or additional support for these two). It might be good to work in some of the points from the Framework paper.
The work is based on the foundations of Luria (mass action) and Mountcastle (universal computing columns). These are combined with the insight of predictive coding elucidated by Jeff Hawkins and Sandra Blakeslee in “On Intelligence.”
It is being extended from these first principles to embrace other observed biological behaviors, such as grid coding.
The work proceeds methodically as it has to both explain the biology and be tested in software to validate the proposed models.
Reference on Luria & Mountcastle:
Just my 2 cents: I’d focus on comparing and contrasting the two (HTM and DL) on two major aspects, neuroscience and computation.
The reason is that the OP’s question drags HTM onto the DL playing field (not a level one), which involves less neuroscience and more formalized mathematics. I would encourage the OP and other readers to understand that even though HTM has great learning capabilities, IMO the dynamics of its progress have more differences than similarities to DL’s. Therefore, it must be treated as a totally different path in ML or MI. I’d also focus more on contrasting, and finally conclude that comparing the two using only the other’s playing field would not be very useful.
I think debates in this ML and MI war aren’t going to help, for two reasons:
- We aren’t really ready to present the full potential of our tech. As @Jose_Cueto said, the majority of the ML space is focused on classification; their minds are set on that being the crown jewel of real intelligence.
- We don’t have the mind-blowing examples ML and MI people have, like the image recognition and sound classification Google has done, or the practical experiments on platforms like YouTube and Facebook that pick which video, post, or link you should click, where the goal is maximum engagement.
We don’t have a goals mechanism yet.
We’ve shoehorned classification in via anomaly detection, to the point where it would be a more efficient use of a programmer’s time to just work with DNNs and be done with it.
We don’t have any mind-blowing, hands-on examples of what HTMs can do.
NNs and other mathematical ML approaches have walking humans, dinosaurs, somewhat functional cars, etc.
We have sweaty gyms.
We’re too eager to jump into this debate. Let’s just work on the grid cell framework, then build things people can play with hands-on and blow their minds with what HTMs can do.
I am hoping to avoid fueling any debates. This question is brought up frequently, though (and not just by DL fans out to bash HTM), and I feel it has a definitive answer. Part of that answer is what you and @Jose_Cueto have pointed out: HTM doesn’t approach intelligence from the same perspective DL does. The points I mentioned earlier allude to this, but additional emphasis on these differences, and on the relative immaturity of HTM today, may be warranted.
LOL, we have way more than that, but I get your point. HTM is still young, and has a whole lot of potential.