Any questions for Jeff?

Any more information/ideas on how the hierarchy is built up and how information flows, from an algorithmic perspective (rather than a biological one, which is somewhat covered already).
That is, a stylized view for implementation purposes.

I am curious about how to use HTM for supervised learning. I mean, I want to use data that is already labeled, but I don't know how to apply HTM to it.
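To make the question concrete, here is the rough pattern I have in mind (just a sketch; htm_encode is a stand-in for a real encoder -> spatial pooler -> temporal memory pipeline, and the overlap-based classifier is the simplest thing I could think of). I don't know whether this is the intended way to use HTM with labels:

```python
import numpy as np

def htm_encode(record):
    # Stand-in for a real HTM pipeline (encoder -> spatial pooler -> temporal
    # memory). A real pipeline would return the active-cell SDR; here we just
    # derive a repeatable pseudo-random SDR from the record.
    seed = int.from_bytes(str(record).encode(), "little") % (2 ** 32)
    rng = np.random.RandomState(seed)
    return set(rng.choice(2048, size=40, replace=False))

# Labeled training data (the labels are something the HTM itself never sees).
train = [("cat-1", "cat"), ("cat-2", "cat"), ("dog-1", "dog"), ("dog-2", "dog")]

# The "supervision" lives entirely outside the HTM: store each SDR with its label.
memory = [(htm_encode(x), y) for x, y in train]

def classify(record):
    # Predict the label of the stored SDR with the largest overlap.
    sdr = htm_encode(record)
    return max(memory, key=lambda pair: len(pair[0] & sdr))[1]

print(classify("cat-1"))  # -> cat
```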

I'm curious about how the CLA can take context into consideration during detection/learning.

A simple example (yet striking to me) is how we can follow a conversation in a very noisy environment. If we know the language well, we can do it almost effortlessly. If we don't know it so well, it is almost impossible to follow the "narrative".

Intuitively, one might think that higher levels in the hierarchy assist lower levels in detection/inference. If the higher levels are not well connected, that assistance is less effective.
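Something like the toy sketch below is the bias I imagine (names and numbers made up, nothing from an actual CLA implementation): the bottom-up evidence for each candidate word is noisy, and a top-down prediction from the higher level adds a small boost that can tip the decision when the input alone is ambiguous.

```python
# Illustrative only: a top-down prediction biasing a noisy bottom-up decision.
bottom_up = {"ship": 0.48, "sheep": 0.52}   # noisy acoustic evidence
predicted_by_context = {"ship"}             # higher level "knows" we are talking about boats
boost = 0.1                                 # strength of the top-down assistance

biased = {word: score + (boost if word in predicted_by_context else 0.0)
          for word, score in bottom_up.items()}

print(max(bottom_up, key=bottom_up.get))  # sheep -- the input alone picks the wrong word
print(max(biased, key=biased.get))        # ship  -- context tips the decision
```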

Thanks

Hi people,
I would really appreciate it if you could mention something about metabotropic glutamate receptors
and their possible relationship with future temporal pooling implementations.
I'd like to know whether there has been any progress on that.

Thanks

If I were to mention in a paper that neurons depolarize for predictions, who or what should I reference as the source for the concept of neurons "depolarizing" for predictions for faster inhibition? Should I reference BAMI (is that original HTM theory?), or is there some other paper or theory that I should also credit? Also, do you have a guess at how widely accepted that idea is? It sounds great to me, but I have no idea whether it's common neuroscience knowledge or HTM-centric theory.

Jeff came up with that theory based on existing experimental literature, so I think the best scientific reference is our Frontiers paper [1]. In that paper we also cite a bunch of relevant experimental papers. Those papers don't use the term "prediction", but the supporting data is there.

I have spoken with a number of neuroscientists about it (including one in our office today!), and they do think the concept is very plausible. However, the specific experiments proving or disproving the prediction mechanism have yet to be run.
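As a rough illustration of the mechanism (a toy sketch in Python, not our actual implementation): a depolarized cell fires slightly sooner when its column receives feedforward input, so it wins the inhibitory competition and the rest of the column stays quiet; if no cell was depolarized, the whole column bursts.

```python
def column_activation(cells_in_column, depolarized):
    # Toy version of the activation rule described in [1]: when feedforward
    # input arrives at a column, any depolarized (predictive) cells fire first
    # and inhibit their neighbors; with no prediction, every cell in the
    # column becomes active (a "burst").
    predicted = [c for c in cells_in_column if c in depolarized]
    return predicted if predicted else list(cells_in_column)

column = ["cell0", "cell1", "cell2", "cell3"]
print(column_activation(column, depolarized={"cell2"}))  # ['cell2']  -- prediction verified
print(column_activation(column, depolarized=set()))      # all four cells -- burst
```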

–Subutai

[1] J. Hawkins, S. Ahmad, Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex, Front. Neural Circuits 10 (2016) 1–13. doi:10.3389/fncir.2016.00023.

You are amazing Subutai! Thank you so, so much!

I'm not sure the theory gets down to that level yet. We will talk about TP though.

Thanks, Matt.
I have been looking forward to hearing something about that.
In my opinion, this is a topic as key to the theory as sensorimotor inference.
Based on Jeff's book, I understand that there is no other way of reaching temporal abstraction in feature acquisition.
Yet, there is no option but to wait and stay alert.
Best,
Dario

Hi Matt,

Sorry, I rushed and joined here just to post this. There seems to be a deadline. If it's not too late, here are some questions:

  • Regarding the scientific process: Is neuroscience still the only input into HTM theory? Now that neuroscience has established the core framework of HTM, could at least some of the anticipated future additions result from experimentation with machine implementations of HTM? E.g., with someone using NuPIC for some industry application, is it conceivable that insights gained from observing the application at work flow back into the core theory, constrained of course by the principles established by the initial neuroscience work? If yes, is this currently happening?

  • Regarding transitioning from Von Neumann architectures to cortical computing: I understand cortical computing is anticipated to replace some, if not most, of the computation currently performed by Von Neumann machinery? Is there any proposal as to how this transition will be brought about? Ditch all existing investments and start from scratch, sort of like when we switched from turntables to CD players and had to re-purchase all our music on CD? Or can existing applications somehow be transmogrified into cortical ones, sort of like how we could rip our CDs to MP3s when iPods came out?

  • Regarding anomaly detection: It seems one of HTM's core strengths is unsupervised learning. This means it can find "surprise" patterns in data that weren't expected in the first place, or even find patterns in data that was not thought to exhibit any patterns at all. The brain doesn't learn patterns only in select situations that are somehow likely to result in new patterns being learned. Instead, if I understand correctly, it does this all the time, everywhere.
    Machine implementations of HTM would therefore be best put to work casually in a broad range of streaming-data scenarios, on the off chance that they might detect something interesting. Industries should liberally deploy armies of HTMs into their data streams without too much prior concern as to whether and what they will discover, and then harvest the results. It could end up being a case of only a few HTM instances out of many reporting findings, but those findings being valuable enough to make the whole endeavor worthwhile.
    Instead, at this stage at least, Numenta advertises HTM technology primarily for detecting anomalies, where "anomalies" means streaming-data situations that are not only already known to contain patterns, but specific patterns in particular (see the sketch after this list for what I mean by the anomaly score). This modus operandi seems contrary to the nature of HTMs.

  • Promoting HTM and its future: We need to develop HTM, or so I hear, either for the purpose of making a few dimes in a niche industry (monitoring wind turbines!) or for noble but far-off schemes like sending robots to colonize Mars. However, the bulk of the brain cycles spent on contemplating the future in the hive mind that is today's global village goes to issues beyond the immediate and mundane while remaining with both feet firmly on the ground, and rightfully so in my opinion. For example, a lot of it is concerned with the internet and how to transition society to it: Finding stuff, establishing online trust and reputation, matching consumers with providers in a gig economy, making the whole thing scale. How does HTM plug into that?

  • Regarding the HTM research community: Any idea what the current size of the HTM community is, for some definition of "size"? What's the size compared to, say, the academic neuroscience community, or copies of "On Intelligence" in circulation, or views of Numenta YouTube videos? It seems small. Where is everyone? It's like Numenta built a rocket that can fly to the moon and no one wants to ride it. Discuss.
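The anomaly score referenced above, as I understand the usual HTM definition, is the fraction of currently active columns that were not predicted at the previous time step. A minimal sketch, assuming that definition:

```python
def anomaly_score(active_columns, predicted_columns):
    # Fraction of currently active columns that were NOT predicted
    # at the previous time step.
    active, predicted = set(active_columns), set(predicted_columns)
    if not active:
        return 0.0
    return len(active - predicted) / float(len(active))

print(anomaly_score({1, 2, 3, 4}, {1, 2, 3, 4}))  # 0.0 -- fully predicted
print(anomaly_score({1, 2, 3, 4}, {1, 2}))        # 0.5 -- partially surprising
print(anomaly_score({1, 2, 3, 4}, set()))         # 1.0 -- completely unexpected
```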

Regards

Phil

I would say that neuroscience is by far the primary input. Of course, we need to take shortcuts when appropriate to make software systems performant; those shortcuts are informed by computer science.

Sure, I think we always try to do this. If an application that we expect to work based on the theory does not work, there is either a problem with the theory or with the implementation. This is a normal part of the research cycle.

We'll have to wait and see how this plays out. We are interested in hardware implementations of HTM, but we are not actively working on anything at the moment.

I totally agree that this will eventually happen. We are not focusing on it now because HTM models still have a high memory footprint. It would be very difficult to deploy tens of thousands of models in the cloud today.

The internet is a rich playground for HTM systems with sensorimotor functions. Imagine all the ways in which humans traverse the Internet and take actions. It is rich with data and highly actionable. I think one of the first cool apps for sensorimotor functionality would be a kind of web crawler.

Discussion continued here.

2 posts were merged into an existing topic: Why isn't HTM mainstream yet

Thanks for all your questions. I recorded with Jeff this morning. I incorporated a lot of the discussion here, but we focused on the sensorimotor inference extension to HTM for the most part. I hope to have the video edited and posted next week.

See the final video series here:
