Nooby questions about implementation

I was working on my own Temporal Memory implementation a while back, but work ate a lot of my time; I'm back at it again now. I like to program these algorithms myself to understand them better, and I'm having difficulty with a few things while following the BAMI book.

The activatePredictedColumn(column) function seems to add all predicting cells to the winner cells and to reinforce their segments' connections. This means multiple cells in a column can predict the exact same thing in the exact same context. That's wasteful, no?
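
For reference, here's how I'm reading that function: a minimal Python sketch of the BAMI pseudocode (the names and permanence constants are mine, and I've left out the growSynapses call at the end).

```python
from dataclasses import dataclass, field
from typing import List

PERMANENCE_INCREMENT = 0.10   # illustrative values, not from the book
PERMANENCE_DECREMENT = 0.10

@dataclass
class Synapse:
    presynaptic_cell: int
    permanence: float

@dataclass
class Segment:
    cell: int                              # the cell this distal segment belongs to
    synapses: List[Synapse] = field(default_factory=list)

def activate_predicted_column(active_segments_in_column, prev_active_cells,
                              active_cells, winner_cells, learning=True):
    # Every cell that had an active segment becomes active AND a winner; there
    # is no tie-breaking, so a column can end up with several winners at once.
    for segment in active_segments_in_column:
        active_cells.add(segment.cell)
        winner_cells.add(segment.cell)
        if learning:
            # All of those segments get reinforced against the previous input.
            for syn in segment.synapses:
                if syn.presynaptic_cell in prev_active_cells:
                    syn.permanence += PERMANENCE_INCREMENT
                else:
                    syn.permanence -= PERMANENCE_DECREMENT
```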

growSynapses() looks like it continuously grows new synapses and will eventually consume all memory, unless createNewSynapse removes below-threshold synapses. Is this the case?

It also appears that growNewSegment() could potentially be called a lot, so there could be tonnes of segments, consuming all memory as well.

Is there perhaps a better place to look for temporal memory pseudocode? I find the BAMI book annoying to build from. Maybe the problem is with me.

Any help is appreciated!


Short answer: SDR.
Most, if not all, of the problems you've mentioned are never serious in practice, thanks to the sparse nature of SDRs.
And I'd guess the fact that a cell subsamples its input to make a prediction helps too.
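
To put a rough number on that: with typical sizes, the chance that a random input falsely matches a segment's subsample is astronomically small. A quick sketch (the sizes are common defaults, not requirements):

```python
from math import comb

def false_match_probability(n, a, s, theta):
    # Hypergeometric tail: the probability that a random set of `a` active
    # cells (out of `n` total) overlaps a segment's `s` synapses in at
    # least `theta` places.
    return sum(comb(s, b) * comb(n - s, a - b)
               for b in range(theta, min(s, a) + 1)) / comb(n, a)

# 2048 columns x 32 cells/column, ~40 active cells, a segment that
# subsampled 20 of them, activation threshold 13:
print(false_match_probability(n=2048 * 32, a=40, s=20, theta=13))
# prints a vanishingly small number
```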


Yes, but it is not described in the book. HTM continuously removes weak distal connections.
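
Something along these lines, reusing the toy Synapse/Segment classes from the sketch above (this is my guess at the mechanism, not the actual htm.core code; htm.core also caps segments per cell and synapses per segment and, as I understand it, evicts the least useful ones when a cap is hit):

```python
PRUNE_THRESHOLD = 0.0            # assumption: prune once permanence hits the floor
MAX_SEGMENTS_PER_CELL = 255      # caps along these lines exist in htm.core;
MAX_SYNAPSES_PER_SEGMENT = 255   # exact names and defaults vary by version

def prune_segment(segment):
    # Drop synapses whose permanence has decayed to nothing, so repeated
    # punishment actually frees memory instead of leaving dead synapses around.
    segment.synapses = [s for s in segment.synapses
                        if s.permanence > PRUNE_THRESHOLD]
    # Tell the caller whether the segment is now empty and can be destroyed.
    return len(segment.synapses) > 0
```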


That pseudocode does the absolute bare minimum necessary to get basic working HTM algorithms, and pays no attention to optimization. It sounds like you may be looking for less pseudo-y code; here are links to those functions in production:


Thanks for that, and apologies for bumping an old thread, but I'd rather continue the discussion here than make a new one. So, with the community code, it looks like when a column has multiple predicted cells, it actually strengthens all of their predictive segments.

This seems odd: it means that multiple cells/segments will fire in the exact same contexts. Is this good? Perhaps the need to be active in different contexts will inevitably reduce the number of similar-firing segments. Am I on the right track?

It has to be done like that because there's no way to know which context is the correct one at that moment.
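
The flip side is what eventually tells redundant contexts apart: when a column was predicted but did not become active, BAMI's punishPredictedColumn weakens every segment behind the wrong prediction, so only segments that keep matching the right contexts survive. A sketch, using the same toy classes as above (the decrement value is illustrative):

```python
PREDICTED_SEGMENT_DECREMENT = 0.004   # illustrative value

def punish_predicted_column(matching_segments, prev_active_cells, learning=True):
    # Every segment that contributed to a wrong prediction has its active
    # synapses weakened a little.
    if not learning:
        return
    for segment in matching_segments:
        for syn in segment.synapses:
            if syn.presynaptic_cell in prev_active_cells:
                syn.permanence -= PREDICTED_SEGMENT_DECREMENT
```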


A few questions regarding htm.core. I didn't see a synapse permanence cap in the code; does the permanence increase indefinitely, or is it capped at 1.0? I may have missed it.
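
For what it's worth, in my own code I'm just clamping after every update, which I'm assuming is equivalent to whatever htm.core does internally:

```python
def update_permanence(synapse, delta):
    # Clamp into [0.0, 1.0] after every change; my assumption is that
    # htm.core bounds permanences the same way rather than letting them grow.
    synapse.permanence = min(1.0, max(0.0, synapse.permanence + delta))
```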

Also, since I'm at a loss for what default values to use for thresholds, permanence increments, etc., I've opted to use the defaults in the htm.core code. I'm wondering whether these defaults rely on an assumed number of columns. If I use fewer than 2048 columns, would you recommend tweaking some of the other params? I'm just not sure what the implications of messing with these variables are.
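
For context, these are roughly the defaults I lifted, plus the rule of thumb I'm guessing at, namely scaling the thresholds with the expected number of active columns (all of this is my assumption, not anything from the htm.core docs):

```python
# Approximate htm.core TemporalMemory defaults (double-check your version):
DEFAULTS = dict(
    column_count=2048, cells_per_column=32,
    activation_threshold=13, min_threshold=10, max_new_synapse_count=20,
    initial_permanence=0.21, connected_permanence=0.5,
    permanence_increment=0.10, permanence_decrement=0.10,
)

def rescale_thresholds(column_count, sparsity=0.02):
    # Guess: 13 / 10 / 20 are really fractions of the ~40 columns active at
    # 2% sparsity on 2048 columns, so shrink them in proportion.
    active = column_count * sparsity
    return dict(
        activation_threshold=max(1, round(13 * active / 40)),
        min_threshold=max(1, round(10 * active / 40)),
        max_new_synapse_count=max(1, round(20 * active / 40)),
    )

print(rescale_thresholds(1024))   # for example: thresholds roughly halved
```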