Roadmap to Machine Intelligence

@subutai, in the HTM learning circle (HLC) call yesterday, we were discussing various neuron models. This led to a discussion of your work on sparsity in CNNs. That work was presented as part of a “Roadmap to Machine Intelligence” in Jeff’s YouTube recording of a NAISys presentation on “How the Brain Uses Reference Frames, Why AI Needs to do the Same”. The next step on that roadmap calls for “active dendrites”, and we would like to know if this is your current focus - extending the sparse CNN by replacing the point neuron model with an active dendrite neuron model. Perhaps we can also ask a more general question about where your work on CNNs and sparsity is headed. Could you please give us some clues as to how your current research fits into the roadmap? Thanks!

5 Likes

Hi @markNZed - thanks for the question. Yes, that’s exactly right. We’re working on enhancing the point neuron models in deep learning with active dendrites. Sparsity is a key foundation for that.

In a nutshell, all the biological elements that are in the Thousand Brains Theory (unsupervised learning via prediction, reference frames, voting across cortical columns, sensorimotor representations, etc.) can play important roles. It’s really going towards a new paradigm. At the end of the day, it won’t look much like today’s deep learning at all!
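To give a rough flavor of what adding active dendrites to a point neuron might look like, here is a minimal illustrative sketch (the names, shapes, and gating details are hypothetical, not our actual implementation): each unit gets several dendritic segments that look at a context signal, and the best-matching segment modulates that unit's feedforward activation.

```python
import torch
import torch.nn as nn

class ActiveDendriteLayer(nn.Module):
    """Illustrative point-neuron layer augmented with dendritic segments.

    Each output unit has `num_segments` dendritic segments. Every segment
    computes a match against a context vector, and the strongest match
    gates (modulates) that unit's feedforward activation. Names and
    details are hypothetical, not Numenta's actual code.
    """

    def __init__(self, dim_in, dim_out, dim_context, num_segments=8):
        super().__init__()
        self.feedforward = nn.Linear(dim_in, dim_out)
        # One weight vector per (unit, segment) pair over the context input.
        self.segments = nn.Parameter(
            torch.randn(dim_out, num_segments, dim_context) * 0.01
        )

    def forward(self, x, context):
        y = self.feedforward(x)                   # (batch, dim_out)
        # Segment activations: (batch, dim_out, num_segments)
        seg = torch.einsum("bc,osc->bos", context, self.segments)
        best, _ = seg.max(dim=2)                  # strongest segment per unit
        # Dendritic modulation: gate each unit by its best-matching segment.
        return y * torch.sigmoid(best)
```

Sparsity would then come in on top of this, for example by keeping only the top-k activations of each layer, which is one reason the sparse networks are the foundation.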

7 Likes

In a somewhat related event, I see a presentation about your work scheduled for next Wednesday at 14h00 CET at NICE 2021. Will you be giving this presentation? It leads to a perhaps obvious question: can you apply something like Whetstone sharpening for DNNs to your sparse CNN? I wonder if the power efficiency of your CNN could be improved in this way? You will have a cheerleader in the crowd next Wednesday :wink:
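For context, my rough understanding of the Whetstone idea (a toy sketch only, not the Whetstone library's API) is that a bounded activation is gradually sharpened towards a step function during training, so the trained network can run with binary, spike-like activations:

```python
import torch

def sharpened_brelu(x, sharpness):
    """Bounded-ReLU-like activation that interpolates towards a step function.

    sharpness = 0.0 gives a soft linear ramp on [0, 1]; as sharpness -> 1.0
    the ramp narrows around 0.5 until the unit effectively emits binary
    'spikes'. A toy illustration of the sharpening idea, not Whetstone's code.
    """
    width = 1.0 - sharpness            # width of the remaining linear ramp
    if width <= 1e-6:
        return (x > 0.5).float()       # fully sharpened: a hard step
    return torch.clamp((x - 0.5) / width + 0.5, 0.0, 1.0)

# During training, `sharpness` would be annealed from 0 towards 1,
# layer by layer, so the network adapts to increasingly binary outputs.
```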

1 Like

It was a great presentation. A couple of highlights for me: I was not aware of the level of dynamic change in brain structure (this does not seem to map well to HTM?), and the reduction in non-zero weights with a larger network is surprising and very cool. How did you arrive at the particular sparsity of weights? Was it tuned based on accuracy?

1 Like

This looks like research relevant to the transition to active dendrites: “Here we introduce the Dendritic Gated Network (DGN), a variant of the Gated Linear Network which offers a biologically plausible alternative to backpropagation.” https://www.biorxiv.org/content/10.1101/2021.03.10.434756v1
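For anyone who hasn't seen Gated Linear Networks, my rough understanding of the gating idea (a toy sketch under simplifying assumptions, not the paper's exact formulation) is that random hyperplanes over a context signal pick which weight vector a unit uses, and only that weight vector gets a local update, so there is no backpropagation:

```python
import numpy as np

class GatedLinearUnit:
    """Toy gated linear unit: random hyperplanes over a context vector pick
    one of several weight vectors, and only that one is updated locally.
    Illustrative only; see the GLN / DGN papers for the real formulation.
    """

    def __init__(self, dim_in, dim_context, num_hyperplanes=3, lr=0.01):
        rng = np.random.default_rng(0)
        self.hyperplanes = rng.normal(size=(num_hyperplanes, dim_context))
        self.weights = np.zeros((2 ** num_hyperplanes, dim_in))
        self.lr = lr

    def _context_index(self, context):
        # Each hyperplane contributes one bit; together they index a weight set.
        bits = (self.hyperplanes @ context > 0).astype(int)
        return int(bits @ (2 ** np.arange(len(bits))))

    def forward(self, x, context):
        w = self.weights[self._context_index(context)]
        return 1.0 / (1.0 + np.exp(-w @ x))  # sigmoid "probability" output

    def update(self, x, context, target):
        # Local, logistic-regression-style update of only the gated weights.
        k = self._context_index(context)
        pred = self.forward(x, context)
        self.weights[k] += self.lr * (target - pred) * x
```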

2 Likes

Thanks, @markNZed!

It actually maps pretty well to HTM. In the Spatial Pooler (SP) and in sequence memory, we modeled the dynamic change in structure through changes in permanences.

Yes, I think this is really fundamental. I discussed all the gory details in a research meeting a couple of months ago:

The code and detailed results are available here (analyze_scaling.ipynb is the main notebook):

2 Likes

Thanks. Yes it is relevant - Karan discovered this too. They presented the ideas at Cosyne. The video of that is here.

1 Like

The audio quality of that talk is a filter for those not so interested :slight_smile: I got through it, and it gives me a better idea of what it is all about, thanks. There was so much exciting work on dendrites at the NICE workshop; I guess you noticed it all. The multi-compartment models were one highlight for me: https://flagship.kip.uni-heidelberg.de/jss/HBPm?m=displayPresentation&mI=209&mEID=7899

1 Like

That is a very cool talk, and thanks again for making so much of this public. Dimensionality matters, and go big :slight_smile:

I am confused about changes in permanences being equivalent to structural change - isn’t structural change about changing connectivity rather than the strength of a connection?

A synapse is there or it is not. That is more of a binary thing.

1 Like

In HTM, the permanence just controls whether the synapse is connected at all, not its weight, so it’s purely a structural thing. We discussed this in our Neuron paper (see Fig. 5). The same concept applies to the Spatial Pooler. So, in HTM it’s pretty much what @Bitking said.
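As a tiny illustrative sketch (not the NuPIC code), the distinction looks like this: learning nudges a scalar permanence per potential synapse, but the rest of the network only ever sees whether that permanence is above a connection threshold, so the effective synapse is binary and a change in permanence that crosses the threshold is a change in structure.

```python
import numpy as np

# Illustrative HTM-style synapse handling; not the NuPIC implementation.
CONNECTED_THRESHOLD = 0.5
PERM_INC, PERM_DEC = 0.05, 0.02

def connected(permanences):
    """A synapse contributes if and only if its permanence crosses the
    threshold; its 'weight' is effectively binary (0 or 1)."""
    return permanences >= CONNECTED_THRESHOLD

def learn(permanences, presynaptic_active):
    """Hebbian-style update: reinforce synapses whose presynaptic cell was
    active (boolean array), weaken the rest. Crossing the threshold
    connects or disconnects the synapse, so the change is structural,
    not a change in connection strength."""
    perms = np.where(presynaptic_active,
                     permanences + PERM_INC,
                     permanences - PERM_DEC)
    return np.clip(perms, 0.0, 1.0)

def overlap(permanences, presynaptic_active):
    """Overlap score = count of active presynaptic cells on *connected*
    synapses; permanence values above threshold add no extra weight."""
    return int(np.sum(connected(permanences) & presynaptic_active))
```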

2 Likes

@subutai, I was reading The Spike and this passage reminded me of your work: “Synaptic failure is exactly the same mechanism as DropConnect: it drops connections between neurons, at random, and temporarily. It adds deliberate noise to the brain exactly where you’d want it to prevent your brain from overfitting. To let it generalize.” (Humphries, Mark. The Spike, p. 79. Princeton University Press, Kindle Edition.)
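For reference, DropConnect in its usual deep learning form looks roughly like this (a minimal sketch of the standard technique, nothing Numenta-specific): each individual weight, rather than each unit, is zeroed at random during training, which is the analogy Humphries draws with synaptic failure.

```python
import torch
import torch.nn as nn

class DropConnectLinear(nn.Module):
    """Linear layer with DropConnect: during training, each weight is
    dropped (zeroed) independently with probability p, mimicking random
    synaptic failure. Minimal sketch of the standard technique."""

    def __init__(self, dim_in, dim_out, p=0.5):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)
        self.p = p

    def forward(self, x):
        if self.training:
            # Fresh random mask over the weight matrix on every forward pass.
            mask = (torch.rand_like(self.linear.weight) > self.p).float()
            return nn.functional.linear(x, self.linear.weight * mask,
                                        self.linear.bias)
        # At inference time, scale by the expected keep probability instead.
        return nn.functional.linear(x, self.linear.weight * (1 - self.p),
                                    self.linear.bias)
```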

3 Likes