Microsoft Research Talk by Jeff and Subutai


See it on our website or on the Microsoft Research site.

How Can We Be So Dense? The Benefits of Using Highly Sparse Representations

Did anyone from Microsoft agree to collaborate with you guys?


I haven’t heard anything about that.


Thank you for sharing this very insightful presentation.
The ML part is quite new compared to previous videos.

My curiosity was aroused, in particular during the questions at the end.
Could someone elaborate on the analogies between HTM and CNNs? And how does the new HTM theory encompass and generalize Capsule Networks theory?

To be more concrete, here is the transcript of Subutai’s answer concerning CNNs (at 1h27):

CNNs were originally inspired by biology. So you have your filters, your feature detectors, followed by a pooling step.
That corresponds to input coming into L4, going up to L2/3, and then up to the next level.

If you count the number of synapses in a cortical column that match that model, it’s less than one percent.
It doesn’t match 99% of what is going on in the brain if you look at the individual connections, so all of that other complexity has to be incorporated.


CNNs (and the Neocognitron from 1980, a similar idea) are loosely inspired by neuroscience, specifically the simple/complex cells found in L4 and L2/3. This is a really tiny part of the full cortical column. HTM theory is a model of the entire cortical column (our progress on this was summarized by Jeff in the first part of the talk).
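To make the simple/complex-cell correspondence concrete, here is a minimal, hypothetical sketch (not Numenta's code): the "simple cell" / L4 step is a filter response over local patches, and the "complex cell" / L2/3 step is max pooling, which gives some invariance to small shifts. All names and the toy vertical-edge example are my own for illustration.

```python
import numpy as np

def simple_cells(image, kernel):
    """'Simple cell' step: valid 2-D cross-correlation of one filter."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def complex_cells(responses, pool=2):
    """'Complex cell' step: non-overlapping max pooling."""
    h, w = responses.shape
    h, w = h - h % pool, w - w % pool  # trim to a multiple of the pool size
    r = responses[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return r.max(axis=(1, 3))

# Toy example: a left-to-right edge detector on a 6x6 image.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                    # right half of the image is bright
kernel = np.array([[-1.0, 1.0]])      # responds to a dark-to-bright step
pooled = complex_cells(simple_cells(image, kernel))
```

Here `pooled` lights up only in the pooled region containing the edge, regardless of its exact column within that region. As the transcript notes, this two-step model covers only a small fraction of a cortical column's actual connectivity.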

Capsules are quite interesting and definitely have relationships to some of our work. I described them earlier in this blog post.