From ideas to theory to code at Numenta

Hi @Jose_Cueto, a few answers below, others may correct me.

A concrete implementation (like NuPIC) comes strictly from Numenta’s HTM Theory.

Numenta’s HTM Theory comes generally from wider neuroscience theory, but the mapping is not necessarily strict.

The Numenta team is constantly deliberating new Neuroscience theory, and investigating how to integrate that with existing HTM theory. It is a big part of their process. This community is actually a part of that process. When there is a big step forward (like the SP, or the new Location work), Numenta releases the updated HTM Theory, along with scientific materials and implementation details.

Check out their blog; they are very open about their process: Numenta Blog

Yes, for example: in biology, in a column of neurons, each neuron has its own feed-forward input dendrite segment. But in HTM Theory, this was simplified to a single shared dendrite segment for the entire column. Numenta is focused on biology, but will gladly make smart computational moves as long as the results are still good.
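
To make that simplification concrete, here is a minimal sketch (not Numenta’s actual code; the sizes and thresholds are made-up illustration values) of a Spatial Pooler-style overlap computation in which each column owns exactly one shared proximal segment:

```python
import numpy as np

# Illustrative sizes only; real model parameters differ.
NUM_COLUMNS = 2048       # mini-columns in the region
INPUT_SIZE = 1024        # bits in the encoded input
CONNECTED_PERM = 0.5     # permanence threshold for a "connected" synapse
NUM_ACTIVE_COLUMNS = 40  # k in k-winners-take-all

rng = np.random.default_rng(42)

# In the HTM Spatial Pooler, each COLUMN owns a single proximal segment:
# one row of synapse permanences onto the input space. In biology, each
# NEURON in the column would have its own feed-forward segment.
proximal_permanences = rng.random((NUM_COLUMNS, INPUT_SIZE))

def column_overlaps(input_sdr):
    """Per column, count connected synapses that see an active input bit."""
    connected = proximal_permanences >= CONNECTED_PERM
    return connected @ input_sdr          # shape: (NUM_COLUMNS,)

# A sparse binary input vector with 20 active bits.
input_sdr = np.zeros(INPUT_SIZE, dtype=int)
input_sdr[rng.choice(INPUT_SIZE, size=20, replace=False)] = 1

overlaps = column_overlaps(input_sdr)
# Columns with the highest overlap win and become active.
active_columns = np.argsort(overlaps)[-NUM_ACTIVE_COLUMNS:]
```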

This is my same situation, and that of many others in this community.

If a community member were to come up with a serious and applicable theoretical discovery, I think you would find the Numenta team very supportive.

If you really wanted to have an impact on HTM Theory, you could prepare yourself and try to get an internship or job at Numenta. They are working on this all day long, every day, which tends to be a more effective approach than working as a lone community member.

2 Likes

First of all thank you for your response.

I was thinking more of going from an algorithm/enhancement to the pursuit of finding its biological equivalent(s). Say, for example, from backpropagation to the pursuit of finding which part of the brain is doing gradient-descent-like operations (just an example). I guess maybe this reversed method is counterintuitive or useless from the NS perspective, I don’t know; this is why I asked about the internal dynamics at Numenta. Another reason I asked is that some of the existing/future algorithms (e.g. SP) can be represented from a math perspective, which may open new discoveries/enhancements on the math side (something that can be done by non-NS people); further, these may then be re-evaluated or deliberated to see if there is a biologically plausible equivalent.

When you say “theoretical discovery”, does that mean on the NS side of things?

Thanks, that would be the hardest path for me I guess; apart from being too old for an internship, I love my wilderness (Australia). I believe, though, that we can contribute to HTM on other, non-NS aspects; the unsure part is the acceleration of, and commitment to, these ideas/projects.

IMHO maybe 20% of the population of this community needs to try to make an impact on HTM theory/implementation/applications to contribute to HTM’s progress. Sorry for digressing.

Cheers

3 Likes

Regarding community ideas… As Numenta’s community manager, I am always trying to understand the theories our community members are discussing. Some of it goes way over my head, honestly. But I try. If I think something is interesting, I will discuss it on the forums, sometimes directly over PM, and sometimes I’ll even call or Skype a forum member to have technical discussions about their ideas. I have brought up community ideas to researchers many times in the past.

I love seeing the community discuss theory, but most of these discussions are out of scope for Numenta researchers. Our mission is driven by Jeff and his direction, which is currently object recognition in the neocortical common circuit.

The team has some autonomy, and new topics are introduced without too much preliminary investigation, but all ideas are thoroughly vetted with respect to biological plausibility before they are truly considered for integration into the theory. We write code along the way to test out ideas; that code is in the research repositories.

I try to expose as much of this as I can to the public, but too much exposure can be distracting to our core research mission. So we have monthly hangouts with researchers so you can ask questions, and I try to record and publish things from the office whenever I can.

4 Likes

Thanks for your inputs.

If I may ask, what is Numenta’s view of the computational aspects of HTM? To be more specific, do you guys have plans to formalize it using math or some other expressive formal language, so users can manipulate or extend it? The reason for asking is that HTM may work precisely because it is closely based on how the brain works, but I’m pretty sure it has properties that are worth studying independently. For example, by intuition, I think that the HTM algorithm (as a whole) falls in the swarm intelligence domain, and there are many interesting topics in there worth studying; also, HTM might be doing something new, perhaps its step rules are new or superior.

1 Like

You may wish to look at set theory and see if that is a productive investigation.

I don’t know if there is a branch of set theory that explores higher dimensions.

I can see some vague connections but I don’t have the mathematical background to work this up.

Each input dendrite and output axon has synapses that form an intersection between neurons.
So what you have is a many-to-many set relationship, with intersection sets that are formed by training.
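
A toy sketch of that many-to-many picture in plain Python sets (the neuron labels and connections below are invented purely for illustration):

```python
# Which presynaptic neurons each dendrite segment has synapses onto.
# Training would grow or prune members of these sets over time.
segment_synapses = {
    "cell_A/segment_1": {"n1", "n2", "n3", "n7"},
    "cell_B/segment_1": {"n2", "n3", "n8"},
    "cell_C/segment_1": {"n3", "n7", "n9"},
}

# The presynaptic neurons that are active right now (an SDR viewed as a set).
active_inputs = {"n2", "n3", "n7"}

# A segment's response is just the size of a set intersection.
for segment, synapses in segment_synapses.items():
    overlap = synapses & active_inputs
    print(segment, "overlaps on", len(overlap), "neurons:", sorted(overlap))
```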

1 Like

I have started investigating HTM from a math perspective. I’m no math guru, but my intuition in math is my strength (more on reading and less on formulating). And yes, I see that set theory fits very well for expressing the operations in HTM. In relation to this, based on my investigation, I find that automata theory and information theory also have the potential to express some parts of HTM.

It’s too early to discuss this (without me giving any proof), but HTM’s emergent algorithm has a lot of similarities with Ant Colony Optimization (ACO), which has been widely used already. However, like ACO, it is quite hard to prove mathematically whether the algorithm is optimizing, even if we know it is working. This is also the reason why it’s somewhat useless to do a “DL vs. HTM” comparison as if they were competing algorithms: they fall in roughly different domains.

I also think that (by intuition) the SP and TM are doing the same algorithm, except that they operate on different levels of abstraction.

3 Likes

Almost.

The predictive set is compared to the actual set in the next time step.
The degree of overlap of these sets drives a decision on output - you could say it conditionally forms or enables an output set. Where I get lost is how this massive number of simultaneous sets can be described in a rigorous way that can be examined and manipulated. I have not learned the tools required to do this.
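
For illustration, here is a sketch of that comparison using sets (the cell IDs are arbitrary); the last line shows one common way such an overlap is turned into a score in HTM-style anomaly detection:

```python
# Cells put into the predictive state at time t.
predicted_cells = {101, 102, 205, 310, 411}
# Cells that actually became active at time t+1.
active_cells = {101, 205, 310, 512, 613}

correctly_predicted = predicted_cells & active_cells   # overlap set
unpredicted = active_cells - predicted_cells           # the surprise

# Fraction of active cells that were not predicted: 0.0 means a perfect
# prediction, 1.0 means the input was entirely unexpected.
anomaly_score = len(unpredicted) / len(active_cells)
print(correctly_predicted, anomaly_score)
```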

1 Like

Strictly speaking, as a memory system the TM and SP are simply organizing/storing data. The decision part (e.g. prediction) is an application that takes advantage of the state of the memory. I like to view it this way to simplify things a bit. But yes, I understand what you meant in this statement. Cheers.

2 Likes

I understand where you are coming from, but if we want HTM Theory to be an encompassing and inviting field of research, we have to have a vision better than that.

Someone needs to know both computer science and neuroscience AND do research with a knowledge delay (not working at Numenta) to put some concrete stuff out there. It is already hard to contribute as an outsider. I’ve seen many passionate people who actually do stuff come and go since 2013. I am guessing there comes a point where most of them realize that there will always be this knowledge delay and this waste of energy working on probably already-solved stuff. Naturally one would either detach his/her research from what Numenta is working on due to this inefficiency, or work on their own version, which is also inefficient to maintain and somewhat competes with HTM.

IF HTM is envisioned as something bigger than Numenta, it should not boil down to deciding either to work at Numenta or to work on something else. It is already hard for me to convince any MSc or PhD student or academician to work on a theory that is centered around a single company/research institute. So I hope Numenta has a vision for HTM to be bigger than the company itself.

6 Likes

From my perspective, Numenta does an excellent job of communicating very early and throughout the process what specific part of the theory they are working on. This gives those in the community with the proper skills a couple of options.

  1. They can focus on those same areas. This is to some extent inefficient because Numenta does this full time, and I think most community members are not able to apply the same level of effort. On the other hand, I wouldn’t call it a waste of effort. I believe it is an important factor in fueling the conversations, questioning assumptions, and ultimately solidifying everyone’s understanding of the theory (including the folks at Numenta).

  2. They can focus on other areas of the system that we know will need to be researched eventually but that Numenta is not currently focused on. This is more efficient, because when Numenta eventually shifts focus to these areas in the future, a lot of ground work will have been done, which should give a nice jump start.

A third option, for folks like myself who do not have a neuroscience background, is to tinker around with some of the missing pieces using strategies that are probably not biologically plausible. This is probably the least efficient option, but I still think it is not a complete waste of energy. When you present the results of these experiments, it fuels a lot of great discussions about how these problems are likely solved in biology. I think this is a way to get more folks engaged in the theory, rather than just sitting back and waiting for Numenta to post updates.

7 Likes

Thank you all for your answers. Key points I’ve learned:

  1. Ideas are deliberated at Numenta, but I am still unsure of their process.
  2. The community can contribute, mostly after the fact, to anything disclosed by Numenta, depending on their interests.
  3. I am still unsure how important computer science is in the pursuit of building HTM.

Please don’t get me wrong, I’m always grateful for Numenta sharing their work. I just want to understand the process of promoting an idea/theory to code or math formalization, whether it comes from inside or outside. Maybe I’m used to seeing source code being promoted through a deployment pipeline, and I prefer to see a clear path for my work before I even make it.

Numenta has stopped working on their HTM implementation, NuPIC, in order to focus on what they are best at: neuroscience. If you are interested in working on an open-source HTM, may I suggest the community fork? We are actively improving the code and welcome new contributors. We have a presence on this forum and on GitHub: https://github.com/htm-community/nupic.cpp

2 Likes

That sounds a bit misleading to me; does this also mean the computational aspects (not the code) of HTM are only secondary?

In relation to this, does Numenta keep a private implementation of HTM that progresses with their new discoveries in Neuroscience?

I’m not sure what you mean. The computational aspects are part of the core theory.

No.

What I meant was that the tedious process of making a useful product and bringing it to market is secondary to Numenta. It’s the difference between computer science (research) and software engineering (application of research to real-world problems).

Sorry for the confusion. I was replying to the previous response, which somehow implied that neuroscience is the primary study. Your answer is enough; I can understand now, thanks.

To be clear, Numenta’s primary mission is to understand how intelligence works in the brain, and to implement testable theories in software. One might say that our primary study is neuroscience.

1 Like

I see, thanks, this is a much clearer answer.

I just think that implementing something in software is inherently computational, especially in exotic domains such as neuroscience.

2 Likes

3 posts were split to a new topic: Swarm intelligence

Deep networks abstract out the underlying concept to the point where your statement is true.

Numenta is building models of the biological systems; these models are based on biological research. This skews the focus toward the biology, which is where it should be: producing results that can be verified by observation in the biology.

Nothing in DL models can be observed in the biology.