Looking for research directions (Just started an HTM internship at Berkeley Labs)

Hi all, I’m a PhD student in applied math. I’ve done research in computational neuroscience and got interested in theories of brain computation. (I made an assembly calculus Julia package.)

I’ve been hired to work with Dipak Ghosal. His group has used HTM to predict web traffic patterns. The group is interested in understanding and developing the theory behind HTM in addition to working on applications relevant to the DOE.

I’ve been reading papers and trying to see where there is room to contribute. I’m not sure if this is correct, but it almost looks like Numenta has moved on from HTM and is focused on other algorithms.

I’ve also seen papers suggesting that LSTM is better than HTM on stationary time series. This one seems quite comprehensive. These results, while disappointing, don’t stop me from wanting to study HTM. I can appreciate how HTM attempts to give a plausible framework for brain computation and I believe that is a very important task.

I was very impressed with the 2017 Spatial Pooler paper. The functions of all components of the algorithm were explored and verified in detail. A lot of work was done to make sure that the algorithm was doing what the researchers thought it was doing. And it made many claims of biological plausibility.

I would love to hear your opinions about HTM compared to other AI or brain computation algorithms, directions that you are excited about and directions that you feel have been explored already. (Oh, and feel free to point me to some relevant threads.)

4 Likes

Given your background I assume you have looked into ART (from Grossberg). Why do you prefer HTM over ART? I somewhat went the other direction: I started out interested in HTM, then got interested in computational neuroscience, and then found ART. Currently an ART-based system is leading the Numenta Anomaly Benchmark (NAB).

I think you are right that Numenta has shifted focus from HTM to the Thousand Brains Theory (Jeff’s latest book) and to working with DNNs (increasing sparsity and possibly modulation).

Numenta has a history of open-sourcing its work, and they seem to be working on something very different from HTM. I’m waiting for them to release something, as I think there is zero chance of reverse engineering something that is not observable :slight_smile:

2 Likes

Thanks for the link.
Open access version here:

4 Likes

I actually heard of ART for the first time yesterday. I appreciate you pointing it out to me.

(And I should be clear that I’m primarily a mathematician who has done some computational neuro and then wanted to learn more about brain computation, so I am not a neuroscientist by any means.)

Do you know if ART is more supported by the neuroscience community?

1 Like

Love this :joy: very excited to see what they release

1 Like

Hi,

An interesting and unexplored area is the discovery that dendrites can learn (in addition to synaptic learning). This work showed that dendrites can respond to inputs (both synaptic inputs and back-propagated APs) and can tune how responsive they are. Most interestingly, it found that dendrites can change their response properties in a matter of seconds, unlike synapses, which can take minutes to hours to learn.

This offers a whole new dimension for learning and for storing information!
It could be possible to “unlearn” the information stored on a dendrite without removing any synapses, and then at a later time to “relearn” that information by simply re-enabling that dendrite. Or maybe a dendrite could become super-sensitive to synaptic inputs, which would effectively lower the threshold for detecting that input.

This mechanism might also alter how action potentials back-propagate from the soma to the dendrites, which would allow it to influence Hebbian learning.
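
To make that concrete, here is a toy sketch (my own illustration, not taken from any published model) of a dendritic segment whose fast “gain” variable can silence or amplify the information stored in its synapses. The class name, the `gain` parameter, and the densities are all made up for the example:

```python
import numpy as np

class DendriteSegment:
    """Toy dendritic segment: fixed synapses plus a fast, tunable gain.

    The 'gain' stands in for the dendrite's own excitability. Setting it to 0
    silences the segment without touching its synapses; restoring it
    "relearns" the stored pattern instantly. Purely illustrative.
    """

    def __init__(self, n_inputs, connectivity=0.1, threshold=3, seed=0):
        rng = np.random.default_rng(seed)
        # Binary synapse array: which inputs this segment listens to.
        self.synapses = rng.random(n_inputs) < connectivity
        self.threshold = threshold
        self.gain = 1.0  # fast dendritic variable (seconds), not a synapse

    def response(self, active_inputs):
        """Overlap of active inputs with this segment's synapses, scaled by gain."""
        overlap = np.count_nonzero(self.synapses & active_inputs)
        return self.gain * overlap

    def fires(self, active_inputs):
        return self.response(active_inputs) >= self.threshold


# Usage: silence and re-enable a segment without changing its synapses.
rng = np.random.default_rng(1)
inputs = rng.random(200) < 0.2
seg = DendriteSegment(n_inputs=200, connectivity=0.3, threshold=5)
print(seg.fires(inputs))   # likely True with these densities
seg.gain = 0.0             # "unlearn" by turning the dendrite off
print(seg.fires(inputs))   # False
seg.gain = 2.0             # "super-sensitive": effectively a lower threshold
print(seg.fires(inputs))   # likely True again
```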

3 Likes

There are a number of people pursuing Grossberg’s approach. A good entry point is Yohan John’s YouTube channel https://www.youtube.com/channel/UCBURnAfm7VFJTLrkao0-t1g and Grossberg’s chef-d’œuvre, Conscious Mind, Resonant Brain: How Each Brain Makes a Mind (Oxford Scholarship).

1 Like
  • I incorporated a model of grid cells into the spatial pooler. The resulting model can also produce “object cells” that respond to an object regardless of which part of the object it’s looking at.
    Video Lecture of Kropff & Treves, 2008

  • I found and showed how to fix a bug in the spatial pooler algorithm: it does not control the total number of presynapses that a cell can have. Consequently, some cells will form a presynapse with every available input, and some cells will form no synapses at all. (A rough sketch of the capping idea follows this list.)
    Synapse Competition

  • An area of active research is attention/conscious access. I think that high-frequency bursts of APs are a special signal that indicates that the animal should pay extra attention to the information being transmitted. The apical dendrites of pyramidal neurons are tuned to respond to burst-firing and can cause their cell to emit burst-firing.
    Evidence for this theory: L5tt cells are attentional (paper summaries)
    A proof-of-concept model: A Model of Apical Dendrites
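
To illustrate the second point, here is a rough sketch of the capping idea (my own illustration; the linked Synapse Competition post describes the actual fix to the spatial pooler): once a cell exceeds a maximum presynapse count, keep only its strongest ones.

```python
import numpy as np

def cap_presynapses(permanences, max_synapses):
    """Illustrative sketch (not the actual SP code): enforce a per-cell cap
    on the number of presynapses by keeping only the strongest ones.

    permanences: (n_cells, n_inputs) array of synapse permanence values,
                 where 0 means "no synapse".
    max_synapses: maximum number of nonzero presynapses allowed per cell.
    """
    capped = permanences.copy()
    for cell in range(capped.shape[0]):
        nonzero = np.flatnonzero(capped[cell])
        if nonzero.size > max_synapses:
            # Rank this cell's synapses by permanence and zero out the weakest.
            order = np.argsort(capped[cell, nonzero])          # ascending
            weakest = nonzero[order[:nonzero.size - max_synapses]]
            capped[cell, weakest] = 0.0
    return capped

# Usage: a random permanence matrix where some cells hog far too many inputs.
rng = np.random.default_rng(0)
perms = rng.random((50, 400)) * (rng.random((50, 400)) < 0.5)
capped = cap_presynapses(perms, max_synapses=40)
# Max presynapses per cell before vs. after the cap.
print((perms > 0).sum(axis=1).max(), (capped > 0).sum(axis=1).max())
```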

2 Likes

Thanks for this info. I’m excited to take a look!

@djpasseyjr I can’t tell from the code and the paper on Assembly Calculus, but do neurons have a binary activation or a float activation value? That is, are they like HTM neurons or are they like ANN neurons?

I wasn’t aware of Assembly Calculus, so thank you for providing that.

2 Likes

They have a binary activation. Here is the assembly calculus paper I used.

The group has some more recent stuff, but it involves inhibiting edges in the network, and I felt weird about that since, as far as I know, the brain doesn’t inhibit certain synaptic terminals and not others.

Assembly calculus has lots of problems: synapses can grow to an unbounded size, and a brain network cannot retain very many SDRs (or assemblies) in memory. But it does offer some interesting ideas. I think the theory could benefit from cross-pollination with HTM.

3 Likes

Thanks. Very interesting.

How do you start the firing pattern in any ensemble? Which nodes start? And is the firing pattern itself critical to getting/storing a recurrent pattern?
How many patterns converge? Does an ensemble always converge?

The authors typically use a random stimulus in the form of a vector of length n with values drawn from a binomial distribution with parameters k and p, where k and p are the same as the number of winners and the edge probability in the ensemble.

If there are enough edges, you can also kick-start firing by choosing k random initial neurons and letting their signals propagate.

The ensembles almost always converge, but there are exceptions where two neurons cycle between on and off. If you define convergence to allow for short cycles, then they ‘always converge’.

The initial condition is not critical to which pattern emerges, but the final pattern does depend on the plasticity parameter.
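
Here is a rough, self-contained sketch of those dynamics as I read them (my own simplified reconstruction in Python, not the authors’ code): a random G(n, p) recurrent graph, a fixed binomial stimulus, k-winners-take-all updates, and a convergence check that allows short cycles.

```python
import numpy as np

def simulate_ensemble(n=1000, k=50, p=0.05, beta=0.1, max_steps=100,
                      learn=False, seed=0):
    """Sketch of the k-winners dynamics described above.

    n: neurons, k: winners per step, p: edge probability,
    beta: Hebbian plasticity increment (only used when learn=True).
    """
    rng = np.random.default_rng(seed)
    W = (rng.random((n, n)) < p).astype(float)            # recurrent G(n, p) graph
    stimulus = rng.binomial(k, p, size=n).astype(float)   # fixed random drive, Binomial(k, p)

    winners = np.zeros(n, dtype=bool)
    winners[rng.choice(n, k, replace=False)] = True       # kick-start with k random neurons

    prev = None
    for step in range(max_steps):
        drive = W @ winners + stimulus                    # recurrent input + stimulus
        new = np.zeros(n, dtype=bool)
        new[np.argsort(drive)[-k:]] = True                # k-winners-take-all
        if learn:
            # Hebbian update: strengthen edges from the previous winners to the new ones.
            W[np.ix_(new, winners)] *= (1.0 + beta)
        # Converged if the winner set repeats (allowing a 2-cycle, as noted above).
        if np.array_equal(new, winners) or (prev is not None and np.array_equal(new, prev)):
            return new, step + 1
        prev, winners = winners, new
    return winners, max_steps

pattern, steps = simulate_ensemble()
print(f"converged to a {pattern.sum()}-neuron pattern in {steps} steps")
```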

1 Like

Thanks.

Just to check assumptions: with one existing (random) ensemble:

  1. will the same start sequence always generate the same converged pattern (yes, where alternating counts as converged)? [i.e. is it deterministic?]
  2. will different start sequences always generate the same converged pattern? [i.e. does each ensemble only have one attractor?]

Yes, it is deterministic. There are two pieces: 1) the initial neurons firing, and 2) the fixed, randomly initialized stimulus into the ensemble.

If you keep 1) but set 2) to zero, it takes longer to converge and there is more variation in the possible ending patterns. The underlying network of synapses creates a probability distribution, and some neurons are active more often than others.

With the addition of an input signal, the probability that each neuron will be active is fundamentally changed. It makes certain neurons more likely to activate and biases the flow of activations towards those neurons.

A different random stimulus biases activations towards different neurons.

If you turn on Hebbian weight updates and apply stimulus one until the ensemble converges, then apply stimulus two until it converges, applying stimulus one again makes the ensemble converge very quickly to the corresponding pattern, and applying stimulus two again then makes it converge quickly to the pattern associated with stimulus two.
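
Here is a compact sketch of that two-stimulus protocol under the same assumptions as the sketch above (again my own reconstruction, not the authors’ code):

```python
import numpy as np

n, k, p, beta = 1000, 50, 0.05, 0.1
rng = np.random.default_rng(0)
W = (rng.random((n, n)) < p).astype(float)                           # recurrent G(n, p) graph
stim = [rng.binomial(k, p, size=n).astype(float) for _ in range(2)]  # two fixed random stimuli

def converge(stimulus, learn, max_steps=200):
    """Run k-winners dynamics under one stimulus until the winner set repeats."""
    winners = np.zeros(n, dtype=bool)
    for step in range(max_steps):
        drive = W @ winners + stimulus
        new = np.zeros(n, dtype=bool)
        new[np.argsort(drive)[-k:]] = True
        if learn:
            W[np.ix_(new, winners)] *= (1.0 + beta)   # Hebbian strengthening
        if np.array_equal(new, winners):
            return new, step + 1
        winners = new
    return winners, max_steps

a1, _ = converge(stim[0], learn=True)        # form an assembly for stimulus 1
a2, _ = converge(stim[1], learn=True)        # form an assembly for stimulus 2
r1, steps1 = converge(stim[0], learn=False)  # replay stimulus 1 with learning off
# Expect fast reconvergence and high overlap with the original assembly.
print(steps1, np.count_nonzero(r1 & a1) / k)
```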

1 Like

You might be interested in my script here. I was trying to find out how many unique patterns (assemblies) an ensemble (brain area) can store before the overlap between new patterns and stored ones gets too big.
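
For anyone who wants to poke at the same question without the linked script, here is a tiny stand-in sketch (my own, with an arbitrary overlap cutoff). Note that purely random k-sparse patterns overlap far less than assemblies formed by projection on a shared graph, so this only illustrates the bookkeeping:

```python
import numpy as np

n, k, cutoff = 1000, 50, 25      # cutoff: arbitrary "too much overlap" threshold
rng = np.random.default_rng(0)
stored = []
for i in range(200):
    new = np.zeros(n, dtype=bool)
    new[rng.choice(n, k, replace=False)] = True     # a fresh k-sparse pattern
    overlaps = [np.count_nonzero(new & old) for old in stored]
    if overlaps and max(overlaps) > cutoff:
        print(f"overlap exceeded {cutoff} bits after storing {i} patterns")
        break
    stored.append(new)
else:
    print(f"stored {len(stored)} patterns without exceeding the cutoff")
```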

1 Like