Jeff on Lex Fridman podcast

Just dropped today…

I really loved this interview.


Saw it drop on YouTube earlier and I'm looking forward to watching it. It's also quite long for Lex at 2+ hours, but they seemed to have a good conversation going.



I’ve been a follower of Lex’s for almost a year, and this interview was my first exposure to Jeff and the HTM model. I was very intrigued by his ideas and hope to spend more time understanding them in depth. I’m now working my way through HTM School and “On Intelligence” on Audible. Thank you to everyone for spending the time to make these ideas so accessible.


Hi @mike-holcomb, that’s great; glad to have you!


Lex Fridman interviewed Jeff a second time…


In the interview released just a few days ago, Jeff mentions several times that, within a few years, Numenta plans to build a machine instantiation of cortical columns, as understood by the Thousand Brains Theory, for the purposes of AI. Is there any information anywhere about Numenta’s specific plans or roadmap for this?

Hi @strangecosmos! Here’s Numenta’s roadmap to machine intelligence based on the core components of our Thousand Brains Theory. Both Jeff and Subutai have been presenting this roadmap at recent events to show our research agenda. Note that all these components are highly interdependent and equally important to achieving AGI.


Does anyone know an intuitive book that builds up the biology and maths behind HTM and reference frames (apart from “The Thousand Brains Theory”), all in one place, in one package?

Thanks @clai. The replacement of an ANN node with a single HTM neuron might be misleading. The “point neuron” is often assumed to represent a population of neurons rather than a single neuron. So would there be a large number of HTM neurons for each point neuron in this case?

Isn’t it the other way around? Isn’t the typical argument for “classical” neural networks that all the complexities of biological brains can be simulated by enough point neurons? By that reasoning a biological neuron with 5000+ synapses should be simulated by several thousand point neurons.

In general, I think most people today assume the point model is modelling a single neuron. After McCulloch & Pitts (1943) showed the effectiveness of a binary neuron model, this raised the question of why earlier dynamical-system models were so good at modelling behavior. The answer, first presented by Rashevsky’s group and later used by researchers such as Grossberg, was to treat the differential equations describing a “unit” as potentially mapping to either a single neuron or a population of neurons. Effectively, when you have a large number of neurons correlating over time, you can model the system with continuous (i.e. differentiable) equations.

The point neuron typically has a range of output values and is not binary. This can be interpreted as a “rate encoding” if you want to imagine it is modelling a single neuron, or as a population encoding if you want to imagine it is modelling the distribution of activity in a population.
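To make the contrast concrete, here is a minimal sketch (function names are mine, purely illustrative) of the two unit types discussed above: a McCulloch & Pitts binary unit, and a continuous point neuron whose output in (0, 1) can be read either as a single cell's firing rate or as the fraction of a population that is currently active.

```python
import math

def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch & Pitts (1943) style binary unit: fires (1) iff the
    weighted sum of inputs reaches the threshold, otherwise 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def point_neuron(inputs, weights, bias=0.0):
    """Continuous point neuron: weighted sum passed through a sigmoid.

    The graded output can be interpreted as a firing rate of one
    neuron, or as the proportion of an underlying population of
    neurons that is active at this moment."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# The binary unit snaps to 0 or 1; the point neuron varies smoothly.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0
print(point_neuron([1, 1], [0.0, 0.0]))              # 0.5
```

The same sigmoid unit serves both interpretations; which one you adopt only changes what the number is taken to mean, which is exactly the ambiguity described above.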

There are a number of “functions” you can get out of a point-neuron-type model that are surprisingly complicated. For example, Grossberg’s simple cell, bipole cell, complex cell, opponent channels, instar, outstar, avalanche, command cell, gated dipole…


Can those functions model predictive states of biological neurons due to dendritic spikes, or the inhibitory logic necessary for spatial pooling?

In Grossberg’s models the prediction is active (the output of a unit), but it requires more complex micro-architectures that combine the various functions, for example CogEM. Spatial pooling is part of what ART does, and again that is a combination of functions. The point neuron as a population coding is more abstract than dendritic spikes; I think the assumption is that functions like dendritic spikes are how a population code might be generated.
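For comparison, the dendritic-spike mechanism mentioned above can be sketched in the HTM style: a cell enters a predictive (depolarized, not firing) state when any one of its dendritic segments has enough synapses onto currently active cells. This is a simplified sketch, not Numenta's implementation; the threshold value and data layout are assumptions for illustration.

```python
def is_predictive(segments, active_cells, activation_threshold=3):
    """Simplified HTM-style prediction: a cell becomes predictive if
    any single dendritic segment connects to at least
    `activation_threshold` currently active cells, standing in for
    an NMDA dendritic spike that depolarizes the cell without
    making it fire."""
    return any(
        sum(1 for presyn in segment if presyn in active_cells)
        >= activation_threshold
        for segment in segments
    )

# Each segment is a set of presynaptic cell ids (toy data).
segments = [{1, 2, 3, 4}, {10, 11, 12}]

print(is_predictive(segments, active_cells={1, 2, 3}))  # True
print(is_predictive(segments, active_cells={1, 10}))    # False
```

The key difference from the point-neuron view is that prediction here is a distinct internal state of the cell, computed per segment, rather than a graded output value.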
