Non-binary connections

I’m working on my own version of a temporal pooler, and I wonder… how sacrilegious is the idea of a weighted synapse?

My idea for TP is to simultaneously train on the TP’s input as well as its own prior activity, with strongly weighted self-reinforcing connections. Does that vibe with the core principles of HTM theory?
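To make that concrete, here's roughly the shape of what I have in mind, as a toy numpy sketch rather than real HTM code (everything here, including `SELF_WEIGHT`, the layer sizes, and the Hebbian update, is a placeholder for whatever learning rule the pooler actually uses):

```python
import numpy as np

N_IN, N_CELLS = 1024, 512
SELF_WEIGHT = 3.0          # strongly weighted self-reinforcing connections
rng = np.random.default_rng(0)

W_ff   = rng.random((N_CELLS, N_IN))      # feedforward (input) connections
W_self = rng.random((N_CELLS, N_CELLS))   # recurrent (self) connections
prev_activity = np.zeros(N_CELLS)

def pool_step(input_sdr, lr=0.01, sparsity=0.02):
    """One step: activate on input + weighted prior activity, then learn on both."""
    global prev_activity
    drive = W_ff @ input_sdr + SELF_WEIGHT * (W_self @ prev_activity)
    k = int(sparsity * N_CELLS)
    active = np.zeros(N_CELLS)
    active[np.argsort(drive)[-k:]] = 1.0   # k-winners-take-all activation
    # Hebbian update on the feedforward AND the self connections
    W_ff   += lr * np.outer(active, input_sdr)
    W_self += lr * np.outer(active, prev_activity)
    prev_activity = active
    return active
```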


1 Like

I don't know if it explicitly violates them, but I wouldn't let that deter you from trying anything.
The main things I would be sure to stick to are the concepts of localization and modularity.
So there's no master or global anything.

Each region can only represent a certain slice of sensory space, and each column and cell has a limited receptive field.

A TP region should (I think) function very much like a usual sensory region. But its role and behavior can change by changing how its receptive field works. For instance:

  • monitoring sensory region(s) along with/instead of the raw sensory data

  • monitoring its input over longer time periods (like including active/winner/predictive cells from before t-1; see the sketch after this list)
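For the second bullet, the simplest version I can picture is a sliding window over recent activity. A toy sketch, where `WINDOW` and the union-versus-concatenation choice are both arbitrary:

```python
from collections import deque
import numpy as np

WINDOW = 5                       # how far back in time the TP "sees"
history = deque(maxlen=WINDOW)   # recent active-cell SDRs, newest last

def tp_input(active_cells_sdr):
    """Return the TP's effective input: a union over the last WINDOW steps."""
    history.append(active_cells_sdr)
    return np.logical_or.reduce(list(history)).astype(np.int8)
```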

2 Likes

Individual synapses can be considered binary if you are only concerned with their connected status. However, in modeling the network we might also want to consider the amount of influence the firing of one neuron has on another. Numerically, this can be represented by a scalar, but also as a sum of unitary input bits. The latter interpretation would correspond to multiple synaptic connections between the axon and dendrite arbors of two neurons. So I think there is some plausible biological justification for using non-binary weights to describe influence between neurons.
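As a toy illustration of that equivalence (not HTM code; the rounding rule is just for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
pre_active = rng.integers(0, 2, size=100)      # presynaptic activity bits

# (a) scalar weights: influence = weighted sum
weights = rng.random(100) * 3.0
influence_scalar = weights @ pre_active

# (b) multiple unit synapses: influence = count of connected binary synapses
synapse_counts = np.round(weights).astype(int)  # e.g. weight 2.7 -> 3 synapses
influence_counts = synapse_counts @ pre_active

print(influence_scalar, influence_counts)       # similar magnitudes
```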

2 Likes

Cool!

I’ve built a few prototypes of TPs.
Your idea sounds similar to what I ended up with.
I made a (20-minute) presentation where I explain one way to make a TP:

Although the presentation starts by talking about “grid cells”, I discuss the TP for the last few minutes.
I hope this helps.

3 Likes

I’ve come to the conclusion that my idea as originally formulated won’t work, at least not for low-level structures. Testing indicates that, unless I use prior knowledge of the beginning of a new sequence to reset the TM, the TP will eventually merge all sequence representations into one due to the self-reinforcement signal. I now think a simple spatial-pooler-like object that updates every N iterations and reads the last N active-cell SDRs (concatenated) is a better option. This is not a true sequence identifier, really just a temporal subsampler.
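For reference, the subsampler is about this simple. A sketch, where `sp.compute` stands in for whatever spatial pooler implementation you already have:

```python
import numpy as np

N = 8           # subsampling window: update once every N timesteps
buffer = []     # the last N active-cell SDRs
step = 0

def subsample_step(active_cells_sdr, sp):
    """Feed the SP a concatenation of the last N SDRs, once every N steps."""
    global step
    buffer.append(active_cells_sdr)
    if len(buffer) > N:
        buffer.pop(0)
    step += 1
    if step % N == 0 and len(buffer) == N:
        concatenated = np.concatenate(buffer)   # width = N * SDR width
        return sp.compute(concatenated)         # pooled "sequence" SDR
    return None                                 # no update this step
```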

Perhaps at the top of the hierarchy there is room for some form of self-reinforcing sequence identification, with a TM reset signal coming from elsewhere in the stack (i.e., elsewhere in the brain). After all, the real brain is capable of observing a sequence, then telling itself “alright, that one is over, clear your expectations for the next one”. I suspect this is a fairly high-level function.

2 Likes