Any questions for Jeff?

Hi Matt,

Sorry, I rushed to join here just to post this, since there seems to be a deadline. If it’s not too late, some questions:

  • Regarding the scientific process: Is neuroscience still the only input into HTM theory? Now that neuroscience has established the core framework of HTM, could at least some of the anticipated future additions come from experimentation with machine implementations of HTM? E.g. if someone is using NuPIC for some industry application, is it conceivable that insights gained from observing the application at work could flow back into the core theory, constrained of course by the principles established by the initial neuroscience work? If yes, is this currently happening?

  • Regarding transitioning from von Neumann architectures to cortical computing: I understand that cortical computing is anticipated to replace some, if not most, of the computation currently performed by von Neumann machinery. Is there any proposal for how this transition would be brought about? Ditch all existing investments and start from scratch, sort of like when we switched from turntables to CD players and had to re-purchase all our music on CD? Or can existing applications somehow be transmogrified into cortical ones, sort of like how we could rip our CDs to MP3s when iPods came out?

  • Regarding anomaly detection: It seems one of HTM’s core strengths is unsupervised learning. This means it can find “surprise” patterns in data that weren’t expected in the first place, or even find patterns in data that wasn’t thought to exhibit any patterns at all. The brain doesn’t learn patterns only in select situations that are somehow likely to yield new patterns; instead, if I understand correctly, it does so all the time, everywhere.
    Machine implementations of HTM would therefore be best put to work casually across a broad range of streaming-data scenarios, on the off chance that they detect something interesting. Industries should liberally deploy armies of HTMs into their data streams without too much prior concern about whether or what they will discover, and then harvest the results (see the toy sketch after these questions). It could well turn out that only a few HTM instances out of many report findings, but that those findings are valuable enough to make the whole endeavor worthwhile.
    Instead, at this stage at least, Numenta advertises HTM technology primarily for detecting anomalies, where “anomalies” means deviations in streaming data that is not only already known to contain patterns, but known to contain specific patterns. This modus operandi seems contrary to the nature of HTMs.

  • Promoting HTM and its future: We need to develop HTM, or so I hear, either to make a few dimes in a niche industry (monitoring wind turbines!) or for noble but far-off schemes like sending robots to colonize Mars. However, the bulk of the brain cycles spent contemplating the future in the hive mind that is today’s global village goes to issues that are beyond the immediate and mundane yet still keep both feet firmly on the ground, and rightfully so in my opinion. For example, much of that thinking is concerned with the internet and how to transition society to it: finding stuff, establishing online trust and reputation, matching consumers with providers in a gig economy, making the whole thing scale. How does HTM plug into that?

  • Regarding the HTM research community: Any idea what the current size of the HTM community is, for some definition of “size”? How does it compare to, say, the academic neuroscience community, the number of copies of “On Intelligence” in circulation, or views of Numenta YouTube videos? It seems small. Where is everyone? It’s like Numenta built a rocket that can fly to the moon and no one wants to ride it. Discuss.
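To make the “deploy liberally, harvest the few that report” idea from the anomaly-detection question concrete, here is a toy sketch. Everything in it is hypothetical and just for illustration: DetectorStub, score() and harvest() are names I made up, and the rolling z-score inside DetectorStub is only a stand-in marking the slot where a real HTM model (e.g. a NuPIC anomaly model) would sit.

```python
# Toy sketch: attach one cheap detector per data stream, then harvest only
# the findings worth reporting. DetectorStub is a hypothetical stand-in for
# an HTM anomaly model; names and thresholds are illustrative, not real APIs.
import math
import random
from collections import deque


class DetectorStub:
    """Hypothetical detector interface: feed values, get back a 0..1 surprise score."""

    def __init__(self, window=50):
        self.history = deque(maxlen=window)

    def score(self, value):
        # Not enough context yet: record the value and report "nothing surprising".
        if len(self.history) < 10:
            self.history.append(value)
            return 0.0
        mean = sum(self.history) / len(self.history)
        var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
        std = math.sqrt(var) or 1.0
        z = abs(value - mean) / std
        self.history.append(value)
        return min(1.0, z / 4.0)  # squash the z-score into 0..1


def harvest(streams, threshold=0.9):
    """Deploy one detector per stream and keep only the findings above threshold."""
    detectors = {name: DetectorStub() for name in streams}
    findings = []
    for name, series in streams.items():
        for t, value in enumerate(series):
            s = detectors[name].score(value)
            if s >= threshold:
                findings.append((name, t, value, s))
    return findings


if __name__ == "__main__":
    random.seed(0)
    # Many mostly-boring streams; only one carries a surprise worth finding.
    streams = {"turbine-%d" % i: [random.gauss(0, 1) for _ in range(500)]
               for i in range(20)}
    streams["turbine-7"][400] += 12.0  # the one interesting event
    for name, t, value, s in harvest(streams):
        print("%s @ %d: value=%.2f score=%.2f" % (name, t, value, s))
```

The point is the deployment pattern, not the stub: most detectors stay silent, a few report, and the reports are cheap to collect.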

Regards

Phil