What are the (commercial) applications of our knowledge of layer 2/3 of the neocortex?

Jeff Hawkins, in his presentation Principles of Hierarchical Temporal Memory (HTM): Foundations of Machine Intelligence, states that “our” (I suppose he means humanity’s) knowledge of how layer 2/3 of the neocortex works “has been extensively tested over years”, that this knowledge has been adopted and integrated into products, and that “this works really well in commercial settings”.

I would appreciate seeing a list of successful applications (commercial and otherwise) of our knowledge of how layer 2/3 of the neocortex works; in particular, applications of HTM theory that rest on this knowledge of layer 2/3.

If it’s been “extensively tested”, then there must be plenty of successful applications. I know of https://www.cortical.io/, which, I guess, uses some prediction/inference capabilities based on HTM. But which parts of Cortical.io exactly test, verify, or implement this knowledge of layer 2/3 of the neocortex?

Note: I am interested in serious, and thus useful (and possibly commercial), applications. I am not interested in experimental applications that are not useful to anyone (except maybe for educational purposes). Also, I am not interested in “general applications” of HTM, but in applications of this specific knowledge of layer 2/3 of the neocortex.

1 Like

You’ve probably already seen these two since they tend to be on the slides that accompany the presentations you are referring to, but to start the list with the low-hanging fruit:

  1. GROK (for detecting server anomalies)
  2. HTM for Stocks (for detecting stock volume anomalies)

Yes, I saw them. Please read my question/post again. I would like to know about useful applications of “our” knowledge of layer 2/3 of the neocortex, not of the whole HTM theory. If these are that type of application, then please give at least one argument in favour of that.

Do you have a specific HTM presentation video that you are basing your question on? Note that Numenta’s theory for layer 2/3 has changed relatively recently, after many of the historic HTM presentation videos were published (historically, this layer was the seat of temporal memory, while the current theory places object representations in L2/3a and temporal memory in L3b).

Since the knowledge of layer 2/3 of the neocortex referred to in most of the historic presentations (at least the ones that I have watched) specifically places temporal memory there, and anomaly detection is an application of that functionality, I would argue that the applications I listed above are good examples.
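
To make that connection concrete, here is a minimal toy sketch of how temporal-memory-style prediction yields an anomaly score. This is my own illustration, not Numenta’s code: real HTM temporal memory is a high-order sequence memory over SDRs, and its anomaly score is the fraction of active columns that were not predicted at the previous timestep. The toy below is a first-order stand-in on plain symbols.

```python
# Toy sketch (not NuPIC/HTM itself): turn sequence prediction into an
# anomaly score. A timestep is anomalous when the observed input was
# not among the learner's predictions for that step.
from collections import defaultdict

class ToySequenceMemory:
    def __init__(self):
        # transitions[prev] = set of symbols observed to follow prev
        self.transitions = defaultdict(set)
        self.prev = None

    def step(self, symbol):
        """Learn the transition prev -> symbol; return anomaly in [0, 1]."""
        if self.prev is None:
            score = 0.0                      # nothing to predict yet
        else:
            predicted = self.transitions[self.prev]
            score = 0.0 if symbol in predicted else 1.0
            self.transitions[self.prev].add(symbol)
        self.prev = symbol
        return score

mem = ToySequenceMemory()
for s in "ABCABCABCABXABC":                  # 'X' breaks the learned pattern
    print(s, mem.step(s))                    # 'X' (and the step after) scores 1.0
```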

1 Like

There’s a link to the presentation I am referring to in my original post.

1 Like

Yes, that was published back in 2014, which was before the recent changes in theory that have come from SMI (sensorimotor inference) research.

Are you looking for commercial/noncommercial applications that demonstrate the current understanding of L2/3, or are you looking for the applications that Jeff was referring to in that video?

2 Likes

Apparently, according to you, our understanding of layer 2/3 has changed. You said that the function previously thought to be performed there (temporal memory) is now placed in layer 3b. I was originally interested in applications regarding layer 2/3 of the neocortex as presented in the video. Given that our knowledge of the neocortex has changed, it doesn’t make much sense to continue this discussion.

1 Like

If you haven’t seen it yet, this video covers a lot of the changing areas of the theory. I expect the theory will go through a few more iterations before it is finalized, and I wouldn’t expect commercial applications specifically meant to model the new theory until some time after the theory is more solid.

That said, even what has been theorized so far is quite useful in commercial applications. For my job, I have personally used the concept of object representations for an NLP chat interface. I expect many useful applications to become possible as the theory evolves.
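
For what it’s worth, here is a hedged sketch of the general idea (not my actual system; the intent names and feature words are made up for illustration): each “object”, here a chat intent, is stored as a union of sparse feature SDRs, and incoming input is matched by overlap.

```python
# Illustrative sketch: SDR-union "object representations" for intent
# matching. Parameters and helpers are hypothetical, chosen for clarity.
import random

N, W = 2048, 40                              # SDR width and active-bit count

def sdr(seed):
    """Deterministic random SDR for a feature word (stand-in encoder)."""
    rng = random.Random(seed)                # str seeds are deterministic
    return frozenset(rng.sample(range(N), W))

def object_rep(words):
    """Union of feature SDRs: a simplified stand-in for an object
    representation that pools over the features observed on an object."""
    rep = set()
    for word in words:
        rep |= sdr(word)
    return rep

intents = {
    "billing":  object_rep(["invoice", "payment", "charge"]),
    "shipping": object_rep(["delivery", "tracking", "package"]),
}

query = sdr("payment")                       # a single observed feature
best = max(intents, key=lambda name: len(intents[name] & query))
print(best)                                  # -> "billing" (overlap of 40 bits)
```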

4 Likes

I have just realized that Jeff Hawkins actually mentions GROK and HTM for Stocks as applications of high-order sequence prediction later in that same presentation.

Anyway, five applications isn’t really much. He speaks as if there were hundreds or thousands of applications.

1 Like

My sense when I listen to those presentations is that he is speaking not just to the handful of commercial applications that Numenta has been directly involved with, but also to the numerous applications that the theory has inspired.

Take a look at this thread for example, which lists 30 or so implementations of just the HTM algorithms (one of those algorithms being temporal memory, which is what we are talking about here). The folks who developed those implementations no doubt used HTM for other things as well.

Just from casual reading of threads on the forum over the last couple of years, there are definitely hundreds of projects related to temporal memory out there (and anomaly detection, which is a practical use of it). Of course, I couldn’t quantify how many of those might be considered to work “really well” :wink:

Also, I think this speaks volumes:
[image]

4 Likes

If I had some idea of why you are asking this, I could judge whether my expertise is up to the task.

1 Like

I’m probably not the right person to answer your question, but I think what Jeff is saying is that the ideas have been extensively tested in a research setting over a long period of time (decades, if you count the pre-Numenta work as outlined in On Intelligence). In other words, the theory tends to be supported by existing and emerging evidence from neuroscience rather than be contradicted by it.

And, for the small number of commercial applications that it is currently suited to, it works well. I don’t think there is much of a market beyond what you’ve already outlined, but happy to be corrected.

There have been some excellent discussions here in the past (in particular this one) about why HTM isn’t mainstream yet. You’ll see the community is under no illusions here: HTM has limited applications in its current state, and even NuPIC itself is clearly labeled as a research vehicle rather than a commercial application.

2 Likes

Something to keep in mind is that it typically takes at least 30 years for an invention to reach consumer markets, unless it’s a simple or truly game-changing invention like electric lights or sliced bread. For example, we’ve had self-driving cars for at least 30 years, but they still don’t work well enough and are still too expensive to mass-produce. Another example is the steam engine, which Watt perfected circa 1781; yet steamboats didn’t become prevalent in America until around 1815.

I think that the kinds of serious applications you’re hoping to see don’t exist yet. Deep learning, on the other hand, has had at least 30 years of research and development. Many of the technologies developed in pursuit of deep learning have been in use for quite some time now. Recently, deep learning has come to work for speech recognition, image recognition, image generation, and game playing (chess & go). For these applications, deep learning works today.

For me, the reason for studying neuroscience is not to make an AI for any one application, but rather to make an AI which learns and can be reasoned with. For example: deep learning has created applications which make medical diagnoses, but HTMs might someday read a medical textbook.

4 Likes

I think @nbro is really looking for clarification on Jeff’s comment:

> “this works really well in commercial settings”

Keeping in mind that Jeff is specifically referring to the commercial application of temporal memory (believed to be the function of neocortex layer 2/3 at the time of the comment), I tend to agree. HTM is very good at anomaly detection in streaming data (which is a practical application of temporal memory). I myself have used it for my job and agree with Jeff that it works really well.

4 Likes

An apt comparison (I think) is to early mobile computing. As Jeff points out, early mobile devices couldn’t run key applications like spreadsheets, word processing, and the Internet, yet he still believed that the mobile paradigm was the future despite its then-limited scope.

As of now, the bona fide applications of HTM are mostly confined (to my knowledge) to certain problem classes, namely anomaly detection and prediction on temporal data streams, and recently 3D object recognition with the new SMI theory.

What really excites me about the HTM approach at large is the paradigm shift embodied by its core structures (namely, SDRs processed by pyramidal neurons in sequence learning). To me this makes HTM systems uniquely ‘deep’ in their learning capacity, in ways that very broadly applied, well-published, and competition-winning methods like DNNs, CNNs, and RNNs are not.
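
One concrete way to see why (a quick back-of-the-envelope calculation, following the standard hypergeometric argument from the SDR literature): with typical HTM-ish parameters, the probability that two random SDRs match by chance is astronomically small, which is what makes unions and noisy matching workable.

```python
# Chance that two *random* w-of-n SDRs overlap in at least theta bits.
# Parameters are typical HTM-ish values, not tied to any one system.
from math import comb

n, w, theta = 2048, 40, 20

def p_false_match(n, w, theta):
    """Hypergeometric tail: P(overlap >= theta) for two random SDRs."""
    total = comb(n, w)
    return sum(comb(w, b) * comb(n - w, w - b)
               for b in range(theta, w + 1)) / total

print(p_false_match(n, w, theta))            # on the order of 1e-26
```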

2 Likes