# Whitepaper: Avoid extra connections

Hello everybody,

I have a question about a bullet point in the WhitePaper (p. 27):

1. Avoid extra connections
If we aren’t careful, a column could form a large number of valid synapses. It would then respond strongly to many different unrelated input patterns. Different subsets of the synapses would respond to different patterns. To avoid this problem, we decrement the permanence value of any synapse that isn’t currently contributing to a winning column. By making sure non-contributing synapses are sufficiently penalized, we guarantee a column represents a limited number input patterns, sometimes only one.

What exactly does this mean?

Before reading this, I thought that only the permanence values of synapses on the dendrite segments of ACTIVE mini-columns get changed.
A quote from

The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding

sums this up well:

For each active SP mini-column, we reinforce active input connections by increasing the synaptic permanence by p⁺, and punish inactive connections by decreasing the synaptic permanence by p⁻.

I understand this bullet point as follows:
besides updating synaptic permanences for ACTIVE mini-columns, we do something else as well:
in every time step, ALL synaptic permanences of INACTIVE mini-columns are decreased.

Do I understand this right??
I would really appreciate some clarification.

Helena

The way I read this (and how I also implemented it) is that we are not doing anything to inactive minicolumns. Instead, we are decrementing the permanence of any inactive synapses on the active minicolumns. This essentially makes an active minicolumn slightly forget something it has learned before when that thing doesn’t align with the current pattern it is learning. This allows minicolumns to specialize, as well as have the flexibility to adapt to shifting inputs over time.

1 Like

So it will only update active columns: for each active column, it increases the permanences of synapses connected to active inputs, and decreases the permanences of synapses connected to inactive inputs.
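For concreteness, here is a minimal sketch of that learning rule in NumPy. This is not Numenta's implementation; the function and parameter names (`p_inc`, `p_dec`, `potential_pool`) are illustrative, and it assumes permanences are stored as a dense (columns × inputs) matrix:

```python
import numpy as np

def update_permanences(permanences, potential_pool, active_input, active_columns,
                       p_inc=0.05, p_dec=0.008):
    """Sketch of the SP learning rule discussed above.

    permanences:    (n_columns, n_inputs) float array of synapse permanences
    potential_pool: (n_columns, n_inputs) bool mask of which synapses exist
    active_input:   (n_inputs,) bool vector of currently active input bits
    active_columns: indices of the winning (active) mini-columns
    """
    for c in active_columns:
        pool = potential_pool[c]
        # reinforce synapses aligned with active input bits
        permanences[c, pool & active_input] += p_inc
        # punish synapses aligned with inactive input bits
        permanences[c, pool & ~active_input] -= p_dec
    # inactive columns are untouched; permanences stay within [0, 1]
    np.clip(permanences, 0.0, 1.0, out=permanences)
    return permanences
```

Note that nothing happens to rows of inactive columns, matching the interpretation above.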

1 Like

Hey, thanks for the fast reply!

cool… this was exactly how I understood the learning algorithm before.
Thanks for the confirmation.

The brain grows and shrinks connections dynamically every day.
There is more than simple linear forgetting going on.

I imagine that there is a floor where you don’t completely drop a connection so it trains back up again very fast.
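That floor idea could be sketched as a tiny decay step. To be clear, this is hypothetical, not part of the standard HTM Spatial Pooler, and the parameter names (`p_dec`, `floor`) are made up for illustration:

```python
def decrement_with_floor(permanence, p_dec=0.008, floor=0.05):
    """Hypothetical decay step with a floor: a punished synapse never
    drops all the way to zero, so it can train back up quickly."""
    return max(floor, permanence - p_dec)
```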

Ok, I see what you mean… but do we implement this in HTM systems?
So do we have a shrinking of unused synapses at every time step?

HTM does not do this. It emulates a significant part of the cortical column operation but certainly does not copy all of what goes on in the cortex.

I expect that as time goes on Numenta will add more of what the cortex does.

1 Like

Congrats @helena_Thielen you’ve entered the world of theory.

5 Likes

I’ve encountered this issue before, where the spatial pooler forms an excessive number of synapses. I found that Numenta’s solution could be inadequate for controlling the number of synapses, so I analyzed the problem further and proposed an alternative solution.

3 Likes

haha thanks Matt =D

1 Like