HTM vs spiking neural networks

Hi all, I am new to this concept and excited to learn this new theory. Can anyone explain the difference between spiking neural networks and HTM?

2 Likes

tl;dr: Spiking Neural Networks are lower level and more accurate than HTM at simulating real neurons, but also a lot more computationally expensive.

In my opinion, at a higher level: Spiking Neural Networks simulate how every neuron works, connects, and responds accurately at the electrical level at every moment (e.g., every 1 µs), and the learning is generally done using the Spike-Timing-Dependent Plasticity (STDP) method. While the simulation is accurate and delivers promising results, an SNN is also so computationally expensive that you need a specially designed supercomputer to run a miniature brain in real time. HTM, on the other hand, attempts to model the neocortex as a whole while ignoring the unimportant properties of neurons. HTM is a less accurate simulation of the brain, but also a lot faster. We hope that the model HTM builds will eventually be enough to create intelligence.
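To make the STDP part concrete, here is a minimal pair-based STDP weight update. All the constants and names are illustrative assumptions for this sketch, not values from any particular SNN framework:

```python
import numpy as np

# Pair-based STDP: strengthen a synapse when the presynaptic spike precedes
# the postsynaptic spike, weaken it otherwise. All constants here are
# illustrative assumptions, not values from any specific SNN package.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fired before (or with) pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: pre spike at 10 ms, post spike at 15 ms -> small potentiation
print(stdp_dw(10.0, 15.0))
```

The key point is that the weight change depends only on relative spike timing, which is why an SNN has to be simulated at fine temporal resolution for STDP to work.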

Let us know if you want to know the difference at a lower level. It’s going to be a long post! :stuck_out_tongue_winking_eye:

8 Likes

Something I ignored when I worked with SNNs was that dendrites play an active part in signal processing (they are non-linear). SNN equations were developed by building a model of the neuron's cell membrane, so they can model very accurately how the voltage and other parameters in the neuron change.

There are two different approaches to how SNNs can be used in models:

  • On one hand, people try to build very accurate and complex models of neurons, where they simulate the whole structure of the neuron (including the branching of the dendrites). This approach yields very accurate neuron models. The price is that these models are extremely computationally expensive!

  • The other approach is to simulate each neuron as a single point (speaking in terms of the equations: just a single membrane compartment per neuron). This is of course much less computationally expensive; however, it is no longer possible to integrate the non-linearity of dendrites. For this type of SNN there exist many different hardware accelerators (like SpiNNaker). A minimal sketch of such a point neuron follows this list.
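Here is the "single membrane point" idea as a minimal leaky integrate-and-fire neuron stepped with Euler integration. All parameter values are illustrative assumptions:

```python
# Leaky integrate-and-fire point neuron: one membrane voltage per neuron and
# no dendritic structure. All parameter values are illustrative assumptions.
DT = 0.1          # integration step (ms)
TAU_M = 10.0      # membrane time constant (ms)
V_REST = -65.0    # resting potential (mV)
V_THRESH = -50.0  # spike threshold (mV)
V_RESET = -70.0   # post-spike reset (mV)

def simulate(input_current, n_steps=1000):
    """Simulate one point neuron; returns spike times in ms."""
    v = V_REST
    spikes = []
    for step in range(n_steps):
        # Single-compartment membrane equation: dv/dt = (V_rest - v + I) / tau_m
        v += DT * (V_REST - v + input_current) / TAU_M
        if v >= V_THRESH:
            spikes.append(step * DT)
            v = V_RESET  # instantaneous reset after the spike
    return spikes

print(simulate(input_current=20.0)[:5])
```

Everything dendritic has been collapsed into the single input-current term, which is exactly the non-linearity this approach gives up.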

For some research it might be enough to simulate just a few neurons with high accuracy, or to simulate many more neurons at a lower accuracy. But because we do not have unlimited computational power, we must build our models so that only the necessary concepts are built into the algorithm. The question is: what is a necessary concept?

HTM does not try to model every ion channel of the cell membrane, so it cannot give you information about the cell's voltage. HTM ignores some biological implementation details like this to save computational effort, but it is aware of the important concepts and tries, for example, to model the non-linearity of dendrites (sketched below).
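As a rough sketch of how HTM captures that dendritic non-linearity: each distal segment is treated as an independent coincidence detector, and the whole cell becomes predictive if any single segment has enough active synapses. The names and threshold below are illustrative assumptions, not the actual NuPIC API:

```python
# HTM-style dendritic non-linearity: a segment either crosses its activation
# threshold (an "NMDA spike") and depolarizes the whole cell, or contributes
# nothing at all. The threshold and data layout are illustrative assumptions.
SEGMENT_THRESHOLD = 13  # active synapses needed to activate a segment

def cell_is_predictive(segments, active_cells):
    """segments: one set of presynaptic cell indices per distal segment.
    active_cells: set of indices of currently active cells."""
    for synapses in segments:
        # Non-linear step: overlap below the threshold is ignored entirely.
        if len(synapses & active_cells) >= SEGMENT_THRESHOLD:
            return True
    return False

segments = [set(range(0, 20)), set(range(100, 110))]
print(cell_is_predictive(segments, active_cells=set(range(5, 25))))  # True
```

Note there is no membrane voltage anywhere: the ion-channel dynamics are abstracted down to a set intersection and a threshold.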

10 Likes

Thanks for the reply. Yes, I want to know the difference at a lower level, and also the similarities between SNN and HTM, and which one more exactly mimics the brain's structure and functionality.

1 Like

I think that SNN and HTM do different things; they work at different levels of representation.

SNN is more like sub-atomic theory: very correct, but with very little application in day-to-day engineering.

Common physics like Newton's laws of motion tell us very little about sub-atomic theory but are very helpful in building bridges and making airplanes that fly. HTM is more about this level of representation.

HTM does ignore some of what cells do, as those details are not considered key features of the computation being performed. HTM captures certain useful subsets of cell behavior in a way that allows someone to test ideas about how the brain might be working.

SNN may tell us a great deal about how a small group of cells interacts, but it is currently not useful for directly building larger models of brain behavior; the computing resources required are simply far beyond the state of the art. Even with the simplifications that models like HTM offer, we are barely able to model a few square millimeters of brain tissue. SNN computing would require something like two orders of magnitude more computing power to do the same task.

SNN does provide valuable clues and general guidelines for constructing these larger models. As we learn new things about how cells work at the lower levels, this information is used to adjust the way higher-level models work.

7 Likes

Thanks for the reply.

1 Like

There are some spiking neuron models that are quite inexpensive computationally, iirc.

I also think that, alternatively, a cellular-automaton-like spiking model might be functionally adequate while being relatively inexpensive.

2 Likes

I concur. I am a total beginner at HTM. I am first reading as much as I can about HTM before throwing myself into coding. There is no doubt HTM is a great idea; I wouldn't be here if I thought it wasn't. Interestingly, it is neither DL nor a classical biophysical (plausible) model. But I am still struggling with whether not using spiking neurons is really the way to go. In that respect, I think the Neural Engineering Framework is ahead. But HTM seems conceptually a more complete theory. Well… it would be great to find a paper comparing spiking NNs with HTM on the exact same task. Can anyone suggest a paper or text about it?

1 Like

I don’t have a comparison paper but I’d highly recommend you check out HTM School if you haven’t already:

Once you really understand the mechanisms of HTM I think you’ll easily see what separates it.

3 Likes

I’ve been going through and learning more about SNNs, and I think HTM can be made to run as an integrated SNN (as in, all timesteps at once rather than split out over time)… all that's required is to encode data in a temporal manner, as if we were going to feed it into an SNN.

The approach taken by Professor Chris Eliasmith, who is trying to embed SNNs within deep learning, basically forces the MNIST input into a time-like encoding, which is then fed a bit at a time into the SNN module (LMUs).

I realized that the output of the transformation he’s doing looks a lot like the input of an HTM system. The only difference is that HTM is looking at all timesteps of such an encoding at once, rather than splitting it out over multiple timesteps.

I know that for SNNs, the decay between the "on" bits and the cells learning their timing is what allows cells to distinguish themselves from each other and specialize. But I'd intuit that if those cell firing timings were plotted out in one-bit timesteps, the connections into that plot would look about the same as SP minicolumns that have already specialized in learning their bit connections within an HTM input space. In other words, an HTM minicolumn is the integral equivalent of an SNN cell; the further advantage of HTM is then the TM aspect, which is able to chain sequences of different inputs over time, perhaps even more easily than SNNs. A sketch of this idea is below.
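To illustrate the "all timesteps at once" intuition: a spike raster over a short window can be flattened into one binary vector and treated like ordinary HTM input. The shapes and sparsity here are illustrative assumptions:

```python
import numpy as np

# A spike raster: rows are input cells, columns are one-bit timesteps.
# Flattening the window treats every (cell, timestep) pair as one bit of a
# single HTM-style binary input, i.e., all timesteps at once.
rng = np.random.default_rng(0)
n_cells, n_steps = 64, 16
raster = (rng.random((n_cells, n_steps)) < 0.05).astype(np.uint8)

sdr = raster.flatten()            # one binary vector of length 64 * 16
print(sdr.shape, int(sdr.sum()))  # (1024,) with a few dozen active bits
```

An SNN would consume the 16 columns sequentially; the claim above is that SP minicolumns learning connections into the flattened vector end up doing roughly the integral of what SNN cells do with the timing.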

The other implication of this, for HTM enthusiasts and researchers alike, is that we can look at how our fellow SNN practitioners are encoding data for their problems and borrow their approaches to produce encoders for HTM.

One example is the page below, which shows how they transform MNIST data into timed firing inputs.
https://www.nengo.ai/nengo-dl/examples/lmu.html
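As I understand that example, the transformation amounts to presenting each image as a long sequence of single-pixel timesteps rather than one static vector. A plain-numpy sketch of the idea (not the Nengo API):

```python
import numpy as np

# Sequential-MNIST-style encoding: present a 28x28 image as a sequence of
# 784 one-pixel timesteps instead of a single static 784-value input.
image = np.random.random((28, 28))  # stand-in for an MNIST digit
sequence = image.reshape(-1, 1)     # shape (784, 1): one value per timestep

print(sequence.shape)  # each row is what the network sees at one timestep
```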

If it hasn’t already been done, we should keep exploring how SNNs have tackled their data encoding challenges and borrow what works, such as for sound.

3 Likes

I know this is an old thread but some intriguing dots might be connected here.

The first dot is that paper from 2003 which claims they were able to simulate tens of thousands of neurons in real time on an "old desktop PC" at millisecond resolution. And from the picture comparing their neuron model's output with a rat neuron, their algorithm seems pretty accurate.
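If the paper meant here is Izhikevich's 2003 "Simple Model of Spiking Neurons" (an assumption on my part; the post does not name it), the speed comes from reducing each neuron to two coupled equations plus a reset rule, which is only a few operations per neuron per millisecond:

```python
# Izhikevich (2003) two-variable neuron model (assuming that is the paper
# referred to above). The a, b, c, d values below are the paper's
# "regular spiking" parameters; the input current is an arbitrary choice.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0        # membrane potential (mV) and recovery variable
I, dt = 10.0, 0.5              # constant drive, 0.5 ms Euler step
spike_times = []

for step in range(2000):       # 1000 ms of simulated time
    # dv/dt = 0.04*v^2 + 5*v + 140 - u + I ;  du/dt = a*(b*v - u)
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike peak reached: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(spike_times[:5])
```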

This makes me wonder: has anybody tried to test HTM's abstractions and assumptions on a simulated patch of a few dozen or a few hundred neuron-built minicolumns?

If such a simulation is feasible, it could be very useful not only for comparing HTM against a biologically plausible model, but also for finding out whether the biologically plausible model provides new insights into other ways it can be abstracted or simplified.

The second (and probably most interesting) thing is the 1M-core machine capable of simulating 0.5G spiking neurons.

You may ask, "but this is very expensive spiking neural network hardware; how is that useful to HTM?".

Well, it was built with simulating spiking neurons in mind, but under the hood it is just a grid of 1 million 32-bit ARM microcontrollers programmable in C, with massively parallel, very fast routing of small packets. And they are open to any project proposals; their page mentions it has been used to implement multi-layer perceptrons and for heat-transfer simulations.

So maybe, if HTM abstractions were compiled onto that architecture, it would provide a 1-2 order-of-magnitude speedup over the spiking-neuron implementation, getting much closer to a full human-cortex simulator?

They seem open to testing any interesting ideas on their amazing hardware.

To access SpiNNaker this way you will need to join the HBP Community and make a lightweight case for the use you require. Thereafter it is all free up to your allocated resource quota!

1 Like

One issue that would need to be contended with in order to compare HTM with this neuron model is the fact that the model appears to be focused entirely on action potentials, whereas a core concept of HTM is the predictive state (hypothesizing that the majority of a neuron's input synapses are not involved in generating an action potential).

Do you have some thoughts on how one would actually set up a comparison between HTM and this neuron model?

1 Like

Honestly, I have no idea how such a test would look. The author of the paper ran the Blue Brain simulation in 2005 and published a comprehensive book all about modelling different (if not all) types of neurons.

2 Likes