DeepMind DNC paper

DeepMind just published a very interesting paper: https://deepmind.com/blog/differentiable-neural-computers/

It seems (mostly from the pictures) that the memory part of the DNC is kind of like an SDR. Has anyone read the paper yet? (I’m waiting for the print version of Nature to arrive.)

Can we maybe shed some light on the commonalities and differences between the concepts in HTM and DNC?


The paper is out. I haven’t read it yet, and I’m not sure if I can share it with everyone on the forum. Anyway, I will get back to you if I find any commonalities between their new model and HTM.


It is publicly available now.

This tweet shares a link to the paper.

Note that a PDF version is not publicly available; it’s online reading only.


A Cliffs Notes version of this paper would be nice to have, if anyone is willing to put one together.


I don’t think there are many similarities between the DNC and HTM. The new model is more of an extension of their Neural Turing Machine, that is, a neural network plus external memory. One novel aspect is that the whole system is differentiable, so in principle it can learn how to store to and read from memory depending on the input data.

The one thing that does resemble HTM theory is how memory is accessed: they use a similarity score, so that if the “key” partially matches the content of a memory location, that location gets read. This reminded me a bit of overlap scores among SDRs. Other than that, I didn’t find any similarities with HTM.
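To make the comparison concrete, here is a minimal Python/NumPy sketch (not from the paper’s code; `memory`, `key`, and `beta` are illustrative names) of NTM/DNC-style content-based addressing next to an HTM-style SDR overlap score. The common thread is that a partial match still produces a useful signal, rather than an all-or-nothing lookup:

```python
import numpy as np

def content_weighting(memory, key, beta):
    """DNC-style content-based read: cosine similarity between a key
    and every memory row, sharpened by a key strength beta and
    softmax-normalized. Rows that only partially match the key still
    receive nonzero read weight."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    return w / w.sum()

def sdr_overlap(sdr_a, sdr_b):
    """HTM-style overlap score: the number of active bits shared by
    two sparse binary representations."""
    return int(np.sum(sdr_a & sdr_b))

# Illustrative values only
memory = np.random.randn(8, 16)               # 8 memory slots, 16-dim vectors
key = memory[3] + 0.1 * np.random.randn(16)   # noisy copy of slot 3
print(content_weighting(memory, key, beta=5.0))  # weight peaks at slot 3

a = (np.random.rand(64) < 0.1).astype(int)    # ~10% sparse binary vector
b = a.copy()
b[:5] ^= 1                                    # corrupt a few bits
print(sdr_overlap(a, b))                      # overlap degrades gracefully
```

The differences are just as visible in the sketch: the DNC weighting is dense, real-valued, and differentiable end to end, while the SDR overlap is a discrete count over sparse binary bits.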


Read the first part of the paper … smells like Prolog on top of a NN :slight_smile:
The benefit being that the NN part builds the fact-db (aka memory) out of the data, so it could probably solve Prolog’s biggest problem: maintaining large fact-dbs.

And yes, it sounds like a fact/rule memory rather than an SDR memory (fuzzy); see the toy contrast sketched below.

So it seems the DNN movement is combining ’80s logic-based AI with 2010s data-driven NN AI. Time still does not seem to be an integral part of the system, and sparsity seems to be lacking too.
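To illustrate that fuzzy-vs-exact distinction, here is a toy Python sketch (all facts, names, and bit patterns are made up for illustration, not from the paper): a Prolog-style fact base is all-or-nothing on lookup, while SDR-style recall tolerates noisy or partial queries:

```python
# Toy contrast between exact (Prolog-style) and fuzzy (SDR-style) recall.

facts = {("parent", "tom", "bob"): True}       # Prolog-style fact base

def exact_lookup(query):
    # Exact matching is all-or-nothing: one changed symbol returns nothing.
    return facts.get(query, None)

def fuzzy_recall(stored_sdrs, query_sdr):
    # SDR-style recall: return the stored pattern with the largest bit
    # overlap, even when the query is noisy or incomplete.
    return max(stored_sdrs, key=lambda s: len(s & query_sdr))

print(exact_lookup(("parent", "tom", "bob")))  # True
print(exact_lookup(("parent", "tom", "rob")))  # None: no partial credit

stored = [frozenset({1, 5, 9, 12}), frozenset({2, 6, 10, 13})]
noisy_query = frozenset({1, 5, 9, 40})         # one active bit corrupted
print(fuzzy_recall(stored, noisy_query))       # still retrieves the first
```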