Race networks

The "thousands of synapses" paper indicated that there is a race between neurons to fire first. At the moment I am reading some of Denis Cousineau's papers on the same idea.

The diagrams are at the end of this document, if you print it out:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.9507&rep=rep1&type=pdf

And Cousineau's personal website is:
http://web5.uottawa.ca/www5/dcousineau/home/Others/PRNetwork/index.html

It seems very relevant to Numenta, but who am I to say.
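
If anyone wants to play with the basic idea, here is a minimal toy sketch of my own (not Cousineau's actual model; the function name, parameters, and values are all invented for illustration): a pool of accumulator units races toward a threshold, and the first one to cross it wins.

```python
import random

def race(drifts, threshold=1.0, noise=0.1, dt=0.01, max_steps=10000):
    """Toy accumulator race: each unit integrates noisy evidence,
    and the first to reach the threshold "fires" and wins."""
    evidence = [0.0] * len(drifts)
    for step in range(max_steps):
        for i, drift in enumerate(drifts):
            evidence[i] += drift * dt + random.gauss(0, noise) * dt ** 0.5
            if evidence[i] >= threshold:
                return i, step * dt  # winner index and time-to-fire
    return None, max_steps * dt  # nobody crossed the threshold

# Unit 0 gets the strongest input, so it usually wins the race:
winner, rt = race(drifts=[1.5, 1.0, 0.8])
print(f"unit {winner} fired first at t = {rt:.2f}")
```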

Hi Sean,

I just want to respond because the rule about "treating others as you would like to be treated" is, to me, an important one. I remember my feelings of "estrangement" when I sought feedback on Reddit.com, and all the naysayers against HTM Theory spat vitriolic hate in my direction… :stuck_out_tongue: You won't find that here (a fact I'm very proud of), but you also may not find much interest in theories which deviate significantly from HTM Theory, or which invoke only minor aspects of it. Please don't take this as meaning that people here are against learning about new or different things in general, but you may have to take into account that there's a profound "sense" here about the kinds of theories people feel will result in the most significant steps forward.

Regarding this…

Inhibition, or the "race" for internal neurons to represent the input, is an important aspect of the biological paradigm, yes. You probably know what I'm going to say next: that is only one of many "traits" that need to come together to embody the important features that distinguish HTM Theory (and neurobiology) from classic ML techniques. Others include:

  • The use of Sparse Distributed Representations (SDRs) as the computational representation.
  • Enforced sparsity as a result of the “race” / inhibition you spoke of.
  • Synaptogenesis, or the ability to dynamically form connections based on real-time adaptation to the data.
  • Online learning - learns from the data itself, no pre-training necessary.
  • Robustness to "cell death": the ability to use a totally new part of the network to learn the same data when columns/cells become "ineffective" or "die", with comparatively quick recovery.
  • Robustness against noise.
  • Applicability to many different problem domains without prior preparation (just start introducing the new data).

These are just the ones I know about, but the point is that there are many important "features" of neurobiological emulation, and "inhibition" / "input racing" is merely one of them. I did find the website interesting, however. And I think it would be an interesting experiment to introduce each of these aspects to classical ML networks to see what qualities each one yields (see the sketch below). You may want to check out http://ogma.ai, as they are a group of individuals from this community who are investigating this very thing. A couple of the members of Ogma are @fergalbyrne and Erik Laukien.
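
To make the inhibition / enforced-sparsity point concrete, here is a minimal sketch of the kind of experiment I mean, assuming a plain k-winners-take-all step bolted onto an ordinary dense layer (the function name, sizes, and k are my own inventions, not anything from NuPIC):

```python
import numpy as np

def k_winners(activations, k):
    """Enforced sparsity via inhibition: only the k most strongly
    activated units survive; everything else is silenced."""
    out = np.zeros_like(activations)
    top = np.argpartition(activations, -k)[-k:]  # indices of the k winners
    out[top] = activations[top]
    return out

rng = np.random.default_rng(0)
dense = rng.normal(size=256)     # output of an ordinary dense layer
sparse = k_winners(dense, k=10)  # roughly 4% of units stay active
print(np.count_nonzero(sparse))  # -> 10
```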

Anyway, just saying hi…

Anyway, evidence priority queues make a lot of sense. I guess there are a fair number of physical processes going on concurrently in the biological brain that are very difficult to disentangle.
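
Just to make the priority-queue idea concrete, here is a toy sketch of my own (nothing from the literature; the unit names and rates are invented): if each unit accumulates evidence at a fixed rate toward a threshold, its crossing time can be scheduled up front, and a priority queue then delivers the resulting "spikes" in firing order.

```python
import heapq

# Each unit accumulates evidence at its own fixed rate toward a firing
# threshold, so its crossing time is known in advance; the priority
# queue then delivers the "spikes" in firing order.
threshold = 1.0
rates = {"A": 0.9, "B": 1.4, "C": 0.6}  # evidence per ms (invented)

queue = [(threshold / rate, name) for name, rate in rates.items()]
heapq.heapify(queue)

while queue:
    t, name = heapq.heappop(queue)
    print(f"{name} fires at t = {t:.2f} ms")
# B fires first; the queue makes the "race" explicit and serialized.
```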
Since there are brain waves, the Barkhausen criterion obviously applies, implying that positive feedback exists in the brain, just as a random example.
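
For reference, the Barkhausen criterion says that a feedback loop with forward gain $A$ and feedback fraction $\beta$ sustains oscillation when the loop gain has unit magnitude and the round-trip phase shift is a whole number of cycles:

$$|\beta A| = 1, \qquad \angle(\beta A) = 2\pi n, \quad n = 0, 1, 2, \ldots$$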
You end up being able to list out items from control theory, analog electronics, and basic digital electronics (such as latches). Kinda interesting, that's all.
