Replicated some of "Going beyond the point neuron"

I replicated some of the results from the “Going beyond the point neuron model” paper. I also tried extending ADNs to CNNs and tested them on SplitCIFAR100 (similar to the continual learning dataset from the SI paper). You can see the paper here.


I could not get past this :frowning: McCulloch & Pitts did not create the point neuron; they invented the binary neuron.


Yeah, I mention immediately after that Hebb postulated weighted connections, but you have a good point. The point neuron seems poorly defined in the literature. Some people say it is any neuron model that abstracts away all physical structure, while others seem to imply different meanings. Numenta’s “going beyond” paper attributed it to Lapicque, which seems far too early. If you just mean the modern point neuron used in DL, then it’s Rosenblatt.

We could consider point neurons to have meaning in the context of artificial neural networks, in which case the invention of the point neuron coincides with the invention of ANNs, i.e. it was Rashevsky. This is a pet peeve of mine, so please bear with me: the history of ideas should be a key part of a good education, and it was flagrantly missing from my university education.

For me, the modern point neuron was formalized after the publication of McCulloch & Pitts (which was largely informed by work from Rashevsky’s lab, because Pitts worked there), when Rashevsky and others developed the idea of the point neuron (with rational weights) as representative of populations of neurons in the brain. It was later “reinvented” by people like Rosenblatt (who seemed to have a simple one-to-one mapping of point neuron to biological neuron in mind). Technologists often seem to have little knowledge of the history of their own field, and the reinvention of old ideas seems a chronic condition.
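For readers less familiar with the distinction being drawn in this thread, here is a minimal sketch (my own illustration, not code from any of the papers mentioned): a McCulloch–Pitts unit counts unweighted binary inputs against a threshold, while a point neuron applies a weight to each input before summing. The weights and thresholds below are arbitrary example values.

```python
def mcculloch_pitts(inputs, threshold):
    """Binary neuron: unweighted binary inputs; fires iff the count
    of active inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def point_neuron(inputs, weights, threshold):
    """Point neuron: real (or rational) per-input weights; fires iff
    the weighted sum reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# The weighted unit can express things the binary one cannot,
# e.g. one strong input outvoting two weak ones.
print(mcculloch_pitts([1, 1, 0], threshold=2))        # fires: 1
print(point_neuron([1, 1, 0], [0.2, 0.2, 0.9], 0.5))  # does not fire: 0
```

The only structural difference is the `weights` vector, which is exactly the degree of freedom Hebb’s postulate (and later Rosenblatt’s perceptron) made learnable.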

Once we realize that the point neuron is an abstraction of a population of neurons, we see that increasing the detail of the point neuron actually breaks the abstraction. In Numenta’s case this creates all sorts of problems, such as how to use SDRs in a hierarchical system. This might be a classic example of how a lack of knowledge about the history of ideas can lead down blind alleys.

[edit] It might be worth pointing out that point neurons have a longer history in computational neuroscience than the backpropagation story suggests. The myth that point neurons were invented at the beginning of the path toward DL ignores the early history of computational neuroscience, where people like Grossberg developed alternative approaches that still outperform DL in important ways.


Yeah, maybe we can shortcut the conversation by just caring about fitting neuron models to the contemporary understanding from fields like electrophysiology? We’ve learned a lot about that sort of thing in the past 100 years. I guess I’m tired of participating in discussions where everyone has a different definition of “point neuron”. I’d rather withdraw any opinion about its origin, as I’m more interested in making accurate models of the real deal than in following previously held notions and abstractions.

I do appreciate you taking the time to write your reply. I was not aware of several of your points! Cheers!


Yes, we assume the history of the field has nothing to teach us. But is that because we were taught it, or because we know it? In any case, good luck.