Why isn't HTM mainstream yet?

YouTube comments are pretty bad anyway, so the people in the video made a game out of guessing which ones were real. The longer ones were obvious, but the shorter comments were almost indistinguishable at times.

Yeah, I definitely would help the community more if I had time. I’m working at a startup (we’re just entering the prototyping phase right now, so things are starting to get busy), and I’m also working on a compiler for a really ambitious programming language. I’ll need to write some code to test the compiler, though, so I might end up writing some HTM implementations with it.

2 Likes

I have been studying HTM theory and its implementation. It’s very hard to understand because there are a lot of papers and videos, but most of them explain only one part of the overall flow: encoder, spatial pooling, temporal pooling, classifier, etc.

For example, there are many materials on temporal pooling, each explaining it in a different way and for a different version of the theory. What I have done is read the pseudocode in the source code.
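To save others some digging, here is roughly how I understand the stages to fit together in one timestep (the names below are illustrative placeholders, not the real NuPIC API):

```python
# One HTM timestep, as I understand the overall flow. The objects here
# are illustrative placeholders, not the actual NuPIC classes.

def htm_step(raw_value, encoder, spatial_pooler, temporal_memory, classifier):
    encoding = encoder.encode(raw_value)        # raw input -> binary SDR
    columns = spatial_pooler.compute(encoding)  # SDR -> sparse set of active columns
    cells = temporal_memory.compute(columns)    # columns -> cells carrying temporal context
    return classifier.infer(cells)              # cells -> prediction / anomaly score
```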

I think it’s very important to provide concise written and visual materials for beginners.

1 Like

@Doug_King

It might be out of place here, but I wonder how you did that in the end. Have you published it somewhere?

2 Likes

I’ve been following Numenta for years now, I’d sometimes jump into the IRC channel to chat a long time ago. The learning material back then was very limited, but it has gotten significantly better.

The reason my interest in this project fluctuates is that I’m never really sure whether I am studying the latest version of the theory, or which code bases have been updated to reflect it. I recall jumping into the IRC a couple of years ago asking questions about some code on GitHub, and someone replied “Don’t reference that, the theory changed and it hasn’t been updated yet”.

It seems to me that in order to stay aware of the current state of the theory and what’s changed, you need to constantly monitor the site and the mailing lists.

I’d rather see some kind of versioned, source-controlled spec of how the neocortex functions (or at least the current consensus): a timeline of changes to the theory, referenced somewhere in the code samples. Just some kind of confirmation that what I am working with is an implementation of the latest research/theories.

All I am getting at is that no one wants to work off old research. Maybe just a new time-stamped white paper every month or two, so I know it hasn’t gone stale.

Anyway, that’s kind of how I’ve felt about Numenta/Nupic over the years.

2 Likes

Our latest and best resources are at https://numenta.com/papers-videos-and-more/ & https://numenta.com/biological-and-machine-intelligence/

3 Likes

The minicolumn organization within HTM networks is specifically designed for high-order, non-Markovian sequence memory.
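As a toy illustration (my own sketch, not Numenta’s algorithm) of what that buys you: the same input always activates the same minicolumn, but a different cell within it depending on what came before, so shared subsequences stay disambiguated:

```python
# Toy sketch (not Numenta's algorithm): the same symbol always maps to
# the same minicolumn, but to a different cell within it depending on
# context, so shared subsequences like "BC" remain disambiguated.

CELLS_PER_COLUMN = 8
COLUMNS = "ABCDXY"  # one minicolumn per input symbol in this toy

def pick_cell(symbol, prev_cell):
    # Stand-in for learned distal connections: choose a cell in the
    # symbol's column as a deterministic function of the previous cell.
    column = COLUMNS.index(symbol)
    cell = (prev_cell * 7 + column) % CELLS_PER_COLUMN
    return column * CELLS_PER_COLUMN + cell  # globally unique cell id

def run(sequence):
    cell = 0  # arbitrary "start" state
    trace = []
    for symbol in sequence:
        cell = pick_cell(symbol, cell)
        trace.append((symbol, cell % CELLS_PER_COLUMN))
    return trace

# "B" and "C" appear in both sequences, yet different cells represent
# them because the context differs -- this is what lets HTM predict
# "D" after "ABC" but "Y" after "XBC".
print(run("ABCD"))  # [('A', 0), ('B', 1), ('C', 1), ('D', 2)]
print(run("XBCY"))  # [('X', 4), ('B', 5), ('C', 5), ('Y', 0)]
```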

2 Likes

Incorrect. Machine learning research is guided by mathematics (i.e. the scientific method as a tool) and by results/experiments (which show the usefulness of the models, etc.), and that is what really matters to a scientist. Computational aspects in this area are also taken into consideration very seriously.

Note: even if HTM, in the future, becomes a useful theory of computational intelligence, ML research will never be “invalidated”, as it has been very useful and will likely continue to be (even if other “AI winters” occur, which I would say is unlikely at this point). Just accept this!

Realize that this is just speculation (and opinions) based on your beliefs.

It does not work. If it works better than any ML technique (apart from, maybe, anomaly detection), then prove it to us!

I think the situation would be better described if we said that we’re actually comparing a type of apple with other types of apples.

I am reminded of my children on long trips.
After a while it gets tiring listening to “Are we there yet?”
The journey continues.

@nbro What has been demonstrated to date is one of the simplest and most elegant one-shot learning algorithms. You have acknowledged that HTM is effective as an anomaly detection method when learning is turned off - it is a learning monster when it is turned on.
Compare HTM to any of the methods outlined here.

While Numenta has not been actively working to exploit this property, I can see how incredibly powerful this rapid learning will be when combined with a few other technologies that harness it.

4 Likes

Reminds me of questions like this one, which reflect how surprising that concept can be when it is first encountered.

2 Likes

So the question is: why isn’t HTM mainstream? Well, for one, I haven’t seen anything mainstream it can actually do. No one is going to take HTM theory seriously unless it can produce results; things like Siri and Facebook’s face recognition are viable successes of deep learning, etc.

1 Like

@jason
Numenta is detail-oriented and is working to build a solid basis before extending to the H part of HTM. I have no idea what their financials are, but it looks like they are not having to work frantically to keep the wolf away, and are taking the time to get it right. I expect good things from them.

I have been working with this stuff for decades; I don’t expect instant gratification.

HTM fills a very important piece that I have been looking for (temporal memory). The one-shot learning was a huge free bonus. I agree that HTM on its own is not good for much, but you could say the same thing about a carburetor: it is a very useful part, yet without an engine it is not much use, and without a carburetor the engine is not much use either.

My next important step is connecting the various parts into a working system. If you are a regular here you will know that I am working to combine Calvin tiles, HTM prediction, DeepLeabra prediction, the attention/learning-rate modulation feature of the RAC, and a simplified model of a lizard brain into a functional whole. All of this takes time, as it has never been done before. Because of this, I can’t just follow a recipe and am forced to make up the parts as I go along. I am not working to a schedule, and I certainly have to burn up most of my time in a day job to pay the bills. It will get done when it gets done.

If you want faster results, roll up your sleeves and get busy.

5 Likes

@Bitking I’m really not trying to be antagonistic about HTM theory, just honest about it. I think there are some very good ideas (e.g. SDRs); I’m just not sure about the appropriate implementation of them. However, until someone can show that it can produce state-of-the-art results, it will never be mainstream or taken seriously.

1 Like

HTM already produces state-of-the-art results - it just solves different problems than most people have worked on to date. HTM is really only a powerful subsystem; I don’t think that by itself it will ever be “enough” for a complete solution to anything.

Most of what I have seen (yes - even here on this forum) is people who want someone else to write an engine for them, so they can tweak a few things to make it work for whatever problem they are trying to solve.

It will take someone who understands what functions HTM brings to the party and has the know-how to combine it with other subsystems to make it shine. I have not met very many people with the right skill set to see how to blend HTM with other parts into a complete solution.

What is that skill set, you ask? Deep knowledge of neural networks, combined with a working knowledge of neuroscience and the functional architecture of the brain, practical experience with complex systems, and solid programming experience. It would be helpful to be a seasoned electronics engineer. Being an aspie with OCD would be an asset. These are not the kinds of things you find in your average grad student. Most professors tend to be good at one or two of these things, but not the entire set. Once this unicorn tackles the problem and publishes the pathway to enlightenment, it will become mainstream.

And that is why HTM is not mainstream yet.

4 Likes

Hey guys, I’m new to HTM, great work Numenta and company!

It is now past the halfway point of 2018, and subjectively HTM is still not mainstream. How do we measure mainstream-ness? No one really knows. How about hype? Easy peasy.

Anyway, I believe that the main accelerators of a particular application/theory, whether a person or a group of people, are those who have a genuine interest in the theory’s origins. By this I mean, simply put, that those who are interested in and fascinated by how the brain works (biologically, computationally) have the inherent potential to accelerate HTM theory and application. This doesn’t mean those who lack this interest have no potential; they may ride the wave later when the theory gains speed. Come to think of it, would you expect most hands-on developers (engineers, developers) to have a substantial interest in the human brain? I believe not. So the accelerators of HTM theory, in general, are not and will not be as numerous as the accelerators of Deep Learning. Deep Learning (ANN) is friendly to statisticians, software developers, mathematicians, even managers and Data *. Even if HTM is friendly to the same groups of people, would you expect them to stay happy reading about the neocortex? To become mainstream, there must be a significant number of genuine accelerators. Just my opinion.

For me, as a part-time brain enthusiast (thought experiments only, because I’m not a neuroscientist) with a humble background in theoretical computer science, who has spent most of his time building dumb software in industry, I see HTM as one of the great and ambitious brain theories of all time; it’s not just about AI or ML.

Cheers

6 Likes

Hi guys!

While I’m not new to HTM, I’m not an active practitioner. However, I’ve been inspired by Jeff’s book “On Intelligence” (a fantastic read), I’ve read Numenta’s papers, and I’ve played around with NuPIC a fair bit. This prompted me to hire a master’s thesis student to try to reproduce, using HTM, some of my results that used “traditional” ML. I’m still waiting for the finished report (I will share it with you once he’s finished), but the actual results were promising. While not earth-shattering, he proved that HTM can be used in that particular context, and HTM’s efficiency with “one-shot learning” is a big plus.

Now, if I want to take this further, there are a few stumbling blocks in front of me.

  • Most of the HTM applications I’ve seen use numerical data. For text corpora, Numenta suggests using the Cortical.io APIs (essentially a SOM); this was not good enough for me (I needed to build a very specific and localised vocabulary, and it was an isolated system anyway). We ended up cheating a bit: we used GloVe and then converted the embeddings to SDRs (a sketch of one way to do this appears after this list). This worked very well, but we both felt a bit “dirty” about it. Again, it would be good to promote more concrete examples that use text corpora, or other data types such as audio and images.

  • The ability to “generalise” is not really there, or at least not evident to me. This is a shame, since “On Intelligence” put significant focus on higher-order functions, particularly in the later chapters, where various aspects of intelligence were brought up, e.g. how some people become “experts” at something by essentially pushing the more basic abstract/generalised knowledge down to lower layers, allowing the higher layers to “specialise”. While more traditional techniques don’t really generalise either, I’ve created factoid-answering programs and chatbots using word embeddings, simple MLPs, and simple tricks like dropout and “quasi” attention modules, and after 20 minutes of training time (on a heavily constrained dataset) I get something that really impresses me. I’ve seen in the HTM forums that hierarchies and generalisation are an open issue, and that technically it’s possible to build more complex systems, but a concrete example, such as a simple QA system, would definitely improve the PR.

  • To really understand HTM, one has to understand certain neurobiological aspects such as (mini)columns, NMDA spikes, depolarisation, excitation, inhibition, and proximal, basal, and apical synapses, as well as things like SDRs and Hebbian learning. While Numenta has done a great job with the clarity of their papers, the BAMI book, and HTM School, it’s still a significant load for a newbie. I’ve been struggling to follow certain discussions in the forums because they go into great detail in terms of cognitive science and neurobiology, and such discussions will always be a challenge for those of us who simply don’t have the time (due to other technical commitments) to dig deeper into neuroscience. In contrast, point-neuron architectures and “deep learning” basically require you to understand the dot product and a bit of differential calculus, and voilà! And most technically inclined people know these concepts already from school. Even the current trend of integrating deep learning with an external memory store, or introducing Gaussian processes, is “ok”, as once again it builds upon knowledge that a lot of engineers/scientists already learnt in school. I am not sure what Numenta can do in this regard, but I think tackling the previous two points would help, and trying to abstract/generalise the neuroscience as much as possible will also help.
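Coming back to the GloVe-to-SDR step from the first point, here is a sketch of the general idea (a fixed random projection followed by k-winners-take-all; this is one plausible approach, not necessarily the exact method we used, and all parameters below are illustrative):

```python
import numpy as np

def make_glove_to_sdr(dim=300, out_size=2048, active_bits=40, seed=0):
    """Build a function mapping dense vectors (e.g. GloVe embeddings) to
    binary SDRs: a fixed random projection into a larger space, followed
    by k-winners-take-all to enforce sparsity. Parameters are illustrative."""
    proj = np.random.default_rng(seed).standard_normal((out_size, dim))
    def to_sdr(vec):
        scores = proj @ vec
        sdr = np.zeros(out_size, dtype=np.uint8)
        sdr[np.argsort(scores)[-active_bits:]] = 1  # keep only the top-k bits
        return sdr
    return to_sdr

to_sdr = make_glove_to_sdr()
fake_glove = np.random.default_rng(1).standard_normal(300)  # stand-in for a real GloVe vector
print(to_sdr(fake_glove).sum())  # 40 active bits out of 2048 (~2% sparsity)
```

Because the random projection approximately preserves angles, similar word vectors produce overlapping SDRs, which is what the downstream HTM layers need in order to exploit semantic similarity.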

That being said, I believe HTM introduces two crucial concepts - hierarchies and predictive memory - and I’m seeing more and more of these two being integrated into traditional approaches, so this will indirectly help HTM and Numenta.

Anyway, just some personal views here, please discard if you think I’m spewing garbage 🙂

//Armin

7 Likes

For an HTM equivalent to those underpinning concepts (like dot products, calculus & Gaussians), I think cellular automata are one good reference point.

Maybe not as widely studied as straight math, but a very intuitive structure regardless. Each cell activates when enough of its neighbors are active (or some such update rule). HTM cells fundamentally do the same, except they look to their neighbors in time instead of in space, and they learn who those neighbors are through transitions. Each cell can have many sets of neighbors (distal dendrite segments of synapses), and the degree of neighbor-ship is modulated online by incrementing & decrementing the synapses’ permanences. When not currently active, cells can also be “predictive” (expecting to activate), as triggered by the learned temporal transitions.

The total structure learns by forming and un-forming links, using only incrementing and decrementing. I think this allows a potentially lower learning curve than the math behind MLPs, especially for non-math experts.
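To make that concrete, here is a toy sketch of such an update rule (hypothetical, and much simplified relative to real HTM temporal memory; the thresholds and increments are made up for illustration):

```python
# Toy sketch of the update rule described above -- hypothetical and much
# simplified compared to real HTM temporal memory. Each cell keeps a
# permanence value toward each potential "temporal neighbor"; links form
# and un-form purely by incrementing and decrementing.

CONNECT_THRESH = 0.5   # permanence needed for a synapse to count as connected
ACTIVATE_THRESH = 2    # connected active neighbors needed to become predictive
INC, DEC = 0.10, 0.05

class Cell:
    def __init__(self):
        self.permanence = {}  # neighbor cell id -> permanence in [0, 1]

    def predictive(self, active_now):
        # Expect to activate if enough connected neighbors are active now.
        connected = (n for n, p in self.permanence.items() if p >= CONNECT_THRESH)
        return sum(n in active_now for n in connected) >= ACTIVATE_THRESH

    def learn(self, active_before):
        # Reinforce links to cells active one step earlier; weaken the rest.
        for n in active_before:
            self.permanence[n] = min(1.0, self.permanence.get(n, 0.2) + INC)
        for n in self.permanence:
            if n not in active_before:
                self.permanence[n] = max(0.0, self.permanence[n] - DEC)

# After a cell repeatedly fires following cells {1, 2}, it becomes
# predictive whenever {1, 2} are active.
cell = Cell()
for _ in range(5):
    cell.learn({1, 2})
print(cell.predictive({1, 2}))  # True
```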

6 Likes

I’m a big fan of cellular automata as well. I think the big lesson from CA is that a very small set of rules can establish amazingly complex behavior. HTM is an extreme example of that.

6 Likes