A response to "Building machines that think and learn like humans"

I have thought a lot about this, and I’ve decided not to write this response.

I work in this field because I want to understand what makes us intelligent. It’s one of those frontiers of science where real progress is being made in our lifetimes. And when I write “this field” I mean brain science, not AI. (When people ask me what I do, I usually tell them “I study brains and make YouTube videos”.)

I’ve been studying non-biological (Bayesian) techniques and how they relate to HTM, and I finally realized that this journey has already been taken by many people before me. They tend to come out philosophically in one of two camps:

  • biology
  • maths

The biology camp says:

Mathematical theories might tell us what a process is doing, but they cannot tell us how it does it. We must understand the low-level neuronal communication mechanisms and circuits before we will understand intelligence.

The maths camp says:

Without proofs, all biological explorations are shots in the dark. There’s no need to understand the cellular-level details if we can approximate the same processes and functions in a generic, provable way.

If you’re a member of this forum, you’re probably in the biology camp. But there are many more people in the maths camp.

Because “AGI” is an unsolved problem and there is no proof that either approach will be fruitful, each of us has to settle on a belief that one way is more likely than the other to produce the desired goal. And we all know how hard it is to change people’s beliefs.

Anyway, that’s the long-winded reason I decided not to write this thing. I don’t think it is going to change anyone’s mind about their belief in what approach will result in AGI.

[Image: purity]
