I’m trying to reconstruct a (possible) input based on the SDR it produces. My idea is to do something like this:
1. The “on” bits of the SDR represent active columns.
2. For each active column, I check which input bits it has connected synapses to.
3. I reconstruct the input by setting to 1 all the input bits that have a connected synapse with any of the active columns.
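In case it helps, here is a minimal NumPy sketch of that reconstruction idea. It is standalone, with a made-up permanence matrix instead of a real SpatialPooler; `perms`, `active_cols` and `syn_perm_connected` are my own names, not NuPIC API:

```python
import numpy as np

# Hypothetical setup: 4 columns x 6 input bits, permanences in [0, 1).
perms = np.array([
    [0.15, 0.02, 0.00, 0.30, 0.01, 0.05],
    [0.00, 0.12, 0.04, 0.00, 0.25, 0.00],
    [0.03, 0.00, 0.40, 0.00, 0.00, 0.11],
    [0.09, 0.08, 0.00, 0.02, 0.01, 0.03],
])
syn_perm_connected = 0.1          # connection threshold
active_cols = [0, 2]              # "on" bits of the SDR

# A synapse is connected if its permanence reaches the threshold.
connected = perms >= syn_perm_connected

# Reconstructed input: 1 wherever ANY active column has a connected synapse.
reconstructed = connected[active_cols].any(axis=0).astype(int)
print(reconstructed)  # -> [1 0 1 1 0 1]
```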
As far as I know, SpatialPooler.getPermanence(column, permanences) fills permanences with the permanence values between a given column and each of the input bits. And I suppose that if a given column has a synapse to an input bit with a permanence of SpatialPooler.getSynPermConnected() or more, that synapse is connected.
However, in all my tests, I have never seen a permanence value equal to or greater than that threshold (by default, 0.1). What am I doing wrong?
For example, if I take the hello_sp.py program and modify the Example.run() method so it becomes:
```python
  def run(self):
    """Run the spatial pooler with the input vector"""
    print "-" * 80 + "Computing the SDR" + "-" * 80

    # activeArray[column] = 1 if column is active after spatial pooling
    self.sp.compute(self.inputArray, True, self.activeArray)

    print "A synapse needs a permanence of", self.sp.getSynPermConnected(), "in order to be connected."
    for active_column_index in self.activeArray.nonzero()[0]:
      permanences = np.zeros(self.sp.getNumInputs())
      self.sp.getPermanence(active_column_index, permanences)
      print "The column number", active_column_index, "is active.",
      # Zero out every permanence below the connection threshold
      permanences[permanences < self.sp.getSynPermConnected()] = 0.0
      print "It has", len(permanences.nonzero()[0]), "synapses with permanence over the threshold."

    print self.activeArray.nonzero()
```
I can verify that NONE of the synapses in any of the active columns has a permanence above the threshold. How can that be? What am I doing wrong? The biggest value I have seen so far is about 0.0078.
Thanks!
PS: I know that there are some other methods in SpatialPooler, such as getConnectedSynapses or getConnectedCounts, but I don’t understand the values they return at all…
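If I understand correctly, those two methods should just be thresholded views of the same permanences. Here is how I picture them in plain NumPy (my own sketch with a toy permanence matrix, not NuPIC’s actual code):

```python
import numpy as np

# Toy permanence matrix: 3 columns x 5 input bits (values are made up).
perms = np.array([
    [0.15, 0.02, 0.00, 0.30, 0.01],
    [0.00, 0.12, 0.04, 0.00, 0.25],
    [0.03, 0.00, 0.40, 0.00, 0.00],
])
syn_perm_connected = 0.1

# Like getConnectedSynapses(col, out): a binary mask, per column,
# of which synapses are at or above the connection threshold.
connected_synapses = (perms >= syn_perm_connected).astype(int)
print(connected_synapses[0])      # -> [1 0 0 1 0]

# Like getConnectedCounts(out): number of connected synapses per column.
connected_counts = connected_synapses.sum(axis=1)
print(connected_counts)           # -> [2 2 1]
```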
This will certainly happen if the input encodings don’t have enough semantic meaning, or if there is not enough temporal correlation within sequences. It is seeing randomness and not learning anything. You’ve created your own encoder; I would be interested in seeing what kind of encodings you’re creating.
But if the active columns don’t have any connected synapse… How did they become active?
I thought that upon creation, about half of the synapses were (randomly) connected to input bits, but that doesn’t seem to be the case. I have never seen a single column with a synapse whose permanence is over the threshold…
They become active because the top % of columns are always selected to be active. There is something wrong with your setup. The first place I would look is your input encodings and SP parameters.
@ivansiiito I think a better title for this thread would be “Why isn’t the SP learning my input?” because I think that is the current problem. If you agree I’ll update the title.
I don’t know, it could be. I thought that HTM could learn any input. In some of my tests, I was feeding the same input over and over again (1000 times), and the active columns never had a single connected synapse. Shouldn’t the SpatialPooler learn in this circumstance?
No, an HTM won’t work on any input. Encodings need to have consistent semantic meaning. That’s why I keep asking to see some encodings and your SP parameters.
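“Consistent semantic meaning” just means that similar inputs share on-bits. A quick way to sanity-check an encoder (the helper below is my own, not a NuPIC API) is to compare the overlap between encodings that should be related against encodings that should be unrelated:

```python
import numpy as np

def overlap(a, b):
    """Count of shared on-bits between two binary encodings."""
    return int(np.sum(np.logical_and(a, b)))

n = 100
# Hypothetical encodings: "cat" and "dog" share many on-bits,
# while an unrelated encoding shares none.
cat = np.zeros(n, dtype=int); cat[0:20] = 1
dog = np.zeros(n, dtype=int); dog[10:30] = 1
rnd = np.zeros(n, dtype=int); rnd[50:70] = 1

print(overlap(cat, dog))  # -> 10 : semantically related, large overlap
print(overlap(cat, rnd))  # -> 0  : unrelated, no overlap
```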
My encoder is similar (I suppose) to the one Cortical.io uses. I encode lemmas. The encoding of a lemma is built from the appearances of that lemma in different snippets of text, so lemmas that co-occur have similar encodings.
In this demo, we use word fingerprints as if they had already been spatially pooled: we input the bits as if they already represented mini-columns. You should try that.