Any questions for Jeff?

Thanks Matt. Yes, these are the sensorimotor questions by and large, so I am more than happy to roll them in under that heading. Very keen to learn more about your new advancements to HTM core theory.

Where that might be extended: a human body per se is not the only possible definition of “embodiment”. Jeff has, I think, articulated that anything that can move through a perceptive field and learn the structure of that field is essentially exhibiting (cortical) intelligence. Thanks.

I can confirm that. An intelligent system does not need a physical body to traverse spatial input over time. A web crawler is a perfect example.

I would like to ask about new findings or theories that deal with the segregation of processing into different subcompartments (color blobs in V1, stripes in V2, etc.) and different output streams. For example, in the process of arealization, not necessarily identical output is sent to the various areas an area connects to, likely leading to area specialization. Are there any new findings or theories regarding the degree to which between-area connectivity is genetically predetermined versus arising from algorithmic self-organization?

It sounds like you are talking about the organization of cortical circuits at a level higher than current HTM implementations address. We are still focused on what occurs within one layer of cortex. We’re not yet modeling interactions between different regions, so I’m not sure he will have anything to say about that at this point.

EDIT: now that I think about it, this question fits into discussions about cortical columns, so we will fit something about this in. Thank you.

I am finalizing our talking points for this chat next week. If you have more questions, please add them soon.

Any more information or ideas on how hierarchy is built up and how information flows, from an algorithmic perspective (rather than a biological one, which is somewhat covered already)?
i.e. a stylized view for implementation purposes.

I am curious about how to use HTM for supervised learning. I mean, I want to use data that is already labeled, but I don’t know how to apply HTM to it.
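One common approach (not an official Numenta recipe) is to treat the HTM layers as an unsupervised feature extractor and train a simple supervised classifier on the SDRs they produce; in NuPIC this role is played by the SDRClassifier. Below is a minimal self-contained sketch of the idea; the cell count and the randomly generated SDRs are stand-ins for the active cells a real encoder + spatial pooler + temporal memory pipeline would emit, and the overlap-based classifier is a simplification of what SDRClassifier does.

```python
# Sketch: HTM as a feature extractor for labeled data. The SDRs below are
# dummy stand-ins for the active-cell indices a real HTM pipeline would
# produce for each labeled record.
import numpy as np

N_CELLS = 2048  # size of the (hypothetical) HTM cell population

def train_classifier(sdrs, labels):
    """Accumulate, per label, how often each cell was active."""
    counts = {}
    for active, label in zip(sdrs, labels):
        hist = counts.setdefault(label, np.zeros(N_CELLS))
        hist[active] += 1
    return counts

def classify(counts, active):
    """Pick the label whose activity histogram overlaps the SDR most."""
    return max(counts, key=lambda label: counts[label][active].sum())

# Usage with dummy SDRs (40 active cells out of 2048):
rng = np.random.default_rng(0)
sdrs_a = [rng.choice(N_CELLS, 40, replace=False) for _ in range(10)]
sdrs_b = [rng.choice(N_CELLS, 40, replace=False) for _ in range(10)]
model = train_classifier(sdrs_a + sdrs_b, ["A"] * 10 + ["B"] * 10)
print(classify(model, sdrs_a[0]))  # -> "A"
```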

I’m curious about how the CLA can take context into consideration during detection/learning.

A simple example (yet striking to me) is how we can follow a conversation in a very noisy environment. If we know the language well, we can do it almost effortlessly. If we don’t know it so well, it is almost impossible to follow the “narrative”.

Intuitively, one might think that higher levels in the hierarchy assist lower levels in detection/inference. If the higher levels are not well connected, that assistance is less effective.

Thanks

Hi people,
I would really appreciate it if you could mention something about metabotropic glutamate receptors
and their possible relationship to future temporal pooling implementations.
I’d like to know if there has been some progress on that.

Thanks

If I were to mention in a paper that neurons depolarize for predictions, who or what should I reference as the source for the concept of neurons “depolarizing” for predictions for faster inhibition? Should I reference BAMI (is that original HTM theory?), or is there some other paper or theory that I should also credit? Also, do you have a guess at how widely accepted that idea is? It sounds great to me, but I have no idea whether it’s common neuroscience knowledge or HTM-centric theory.

Jeff came up with that theory based on existing experimental literature, so I think the best scientific reference is our Frontiers paper [1]. In that paper we also cite a bunch of relevant experimental papers. Those papers don’t use the term “prediction”, but the supporting data is there.

I have spoken with a number of neuroscientists about it (including one in our office today!), and they do think the concept is very plausible. However, the specific experiments that would prove or disprove the prediction mechanism have yet to be run.

–Subutai

[1] J. Hawkins, S. Ahmad, Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex, Front. Neural Circuits. 10 (2016) 1–13. doi:10.3389/fncir.2016.00023.


You are amazing Subutai! Thank you so, so much!

I’m not sure the theory gets down to that level yet. We will talk about TP though.

Thanks Matt,
I have been looking forward to hearing something about that.
In my opinion this is a topic as key to the theory as sensorimotor inference.
Based on Jeff’s book, I understand that there is no other way of reaching temporal abstraction in feature acquisition.
For now, there is no option but to wait and stay alert.
Best,
Dario

Hi Matt,

Sorry I rushed and joined here just to post this. There seems to be a deadline. If it’s not too late, some questions:

  • Regarding the scientific process: Is neuroscience still the only input into HTM theory? Now that neuroscience has established the core framework of HTM, could at least some of the anticipated future additions result from experimentation with machine implementations of HTM? E.g., with someone using NuPIC for some industry application, is it conceivable that insights gained from observing the application at work could flow back into the core theory, constrained of course by the principles established by the initial neuroscience work? If yes, is this currently happening?

  • Regarding transitioning from Von Neumann architectures to cortical computing: I understand cortical computing is anticipated to replace some, if not most, of the computation currently performed by Von Neumann machinery. Is there any proposal as to how this transition will be brought about? Ditch all existing investments and start from scratch? Sort of like when we switched from turntables to CD players and had to re-purchase all our music on CD? Or can existing applications somehow be transmogrified into cortical ones, sort of like how we could rip our CDs to MP3s when iPods came out?

  • Regarding anomaly detection: It seems one of HTM’s core strengths is unsupervised learning. This means it can find “surprise” patterns in data that weren’t expected in the first place, or even find patterns in data that was not thought to exhibit any patterns at all. The brain doesn’t learn patterns only in select situations that are somehow likely to yield new patterns. Instead, if I understand correctly, it does this all the time, everywhere.
    Machine implementations of HTM would therefore be best put to work casually in a broad range of streaming data scenarios, on the off chance that they might detect something interesting. Industries should liberally deploy armies of HTMs into their data streams without too much prior concern as to whether and what they will discover, and then harvest the results. It could be that only a few HTM instances out of many report findings, but those findings could be valuable enough to make the whole endeavor worthwhile.
    Instead, at this stage at least, Numenta advertises HTM technology primarily for detecting anomalies, where “anomalies” means streaming data situations that are not only already known to contain patterns, but specific patterns in particular. This modus operandi seems contrary to the nature of HTMs.

  • Promoting HTM and its future: We need to develop HTM, or so I hear, either to make a few dimes in a niche industry (monitoring wind turbines!) or for noble but far-off schemes like sending robots to colonize Mars. However, the bulk of the brain cycles spent contemplating the future in the hive mind that is today’s global village goes to issues beyond the immediate and mundane, while remaining with both feet firmly on the ground, and rightfully so in my opinion. For example, a lot of it is concerned with the internet and how to transition society to it: finding stuff, establishing online trust and reputation, matching consumers with providers in a gig economy, making the whole thing scale. How does HTM plug into that?

  • Regarding the HTM research community: Any idea what the current size of the HTM community is, for some definition of “size”? How does it compare to, say, the academic neuroscience community, or copies of “On Intelligence” in circulation, or views of Numenta YouTube videos? It seems small. Where is everyone? It’s like Numenta built a rocket that can fly to the moon and no one wants to ride it. Discuss.

Regards

Phil

I would say that neuroscience is by far the primary input. Of course, we need to take shortcuts when appropriate to make software systems performant, and those shortcuts are informed by computer science.

Sure, I think we always try to do this. If an application that we expect to work based upon theory does not work, there is either a problem with the theory or the implementation. This is a normal part of the research cycle.

We’ll have to wait and see how this plays out. We are interested in hardware implementations of HTM, but we are not actively working on anything at the moment.

I totally agree that this will eventually happen. We are not focusing on it now because HTM models still have a high memory footprint. It would be very difficult to deploy tens of thousands of models in the cloud today.

The internet is a rich playground for HTM systems with sensorimotor functions. Imagine all the ways in which humans traverse the Internet and take actions. It is rich with data and highly actionable. I think one of the first cool apps for sensorimotor functionality would be a kind of web crawler.
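To make that concrete, here is a toy sketch (my own framing, not Numenta code) of a crawler as a sensorimotor agent: the “sensor” reads page content, the “motor command” is which link to follow, and an HTM sensorimotor layer would consume the resulting (motor, sensation) stream and learn to predict the next sensation given the chosen movement. The miniature web and the random-walk policy are assumptions for illustration only.

```python
# Toy sensorimotor framing of a web crawler: pages are locations, following
# a link is the motor command, and page content is the sensory input.
import random

WEB = {  # page -> (content, outgoing links); a hypothetical miniature web
    "home":  ("welcome",   ["news", "about"]),
    "news":  ("headlines", ["home", "about"]),
    "about": ("mission",   ["home"]),
}

def crawl(start, steps, seed=0):
    """Yield (motor_command, sensation) pairs from a random walk."""
    rng = random.Random(seed)
    page = start
    for _ in range(steps):
        content, links = WEB[page]
        nxt = rng.choice(links)  # motor decision: which link to "click"
        yield nxt, content       # what we did, paired with what we sensed
        page = nxt

for action, sensation in crawl("home", 5):
    print(f"sensed {sensation!r}, then followed link {action!r}")
```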

Discussion continued here.


Thanks for all your questions. I recorded with Jeff this morning. I incorporated a lot of the discussion here, but we focused on the sensorimotor inference extension to HTM for the most part. I hope to have the video edited and posted next week.


See the final video series here:
