Do I need an SP between my word encodings and the TM?

Cheers for the fast reply, Matt!

As a follow-up question (might be a stupid one), since I’ve implemented my own word SDR encoder: do you think it would improve the results to have an SP between my encoder and the TM?

Right now the setup is:

word -> encoder -> (SDR) -> TM -> prediction

and I was thinking of changing it to:

word -> encoder -> (SDR) -> SP -> TM -> prediction

Any thoughts?

Great question; honestly, I’ve heard this asked quite a bit over the years. My answer is always “try both, see which is better”. I think it depends heavily on the mechanics of the encoding, so YMMV.
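
For reference, here is a minimal sketch of what the SP-in-the-middle variant could look like with NuPIC’s algorithm classes. All the sizes and parameters are illustrative, and `encode_word` is just a stand-in for whatever your encoder does:

```python
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory

INPUT_SIZE = 2048   # illustrative: width of the word SDR from the encoder
NUM_COLUMNS = 2048  # illustrative: number of SP / TM columns

def encode_word(word):
    # Stand-in for your GloVe-based encoder: a random but word-stable
    # binary SDR, only here to make the sketch runnable.
    rng = np.random.RandomState(hash(word) % (2 ** 31))
    sdr = np.zeros(INPUT_SIZE, dtype="uint32")
    sdr[rng.choice(INPUT_SIZE, 40, replace=False)] = 1
    return sdr

sp = SpatialPooler(
    inputDimensions=(INPUT_SIZE,),
    columnDimensions=(NUM_COLUMNS,),
    potentialPct=0.8,
    globalInhibition=True,          # the usual non-topological setup
    numActiveColumnsPerInhArea=40,  # ~2% sparsity
)
tm = TemporalMemory(columnDimensions=(NUM_COLUMNS,))

for word in ["the", "cat", "sat"]:
    input_sdr = encode_word(word)

    # word -> encoder -> (SDR) -> SP -> TM
    active_columns = np.zeros(NUM_COLUMNS, dtype="uint32")
    sp.compute(input_sdr, True, active_columns)
    tm.compute(np.nonzero(active_columns)[0], learn=True)

    # The SP-less variant feeds the encoder bits straight in instead:
    # tm.compute(np.nonzero(input_sdr)[0], learn=True)
```

Running both loops over the same word stream and comparing prediction accuracy is the “try both” experiment.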

Thank you! It seems to be working like magic right now, but I’ll continue to experiment! :smiley:

I do have an idea of something you can try, but it’s not easy. I would only suggest it if you were not having success with your current methods. :wink:

I’m up for any idea! I’m currently enjoying the wave of success and happiness after spending loads of time on the encoding part.

Let me ask you a few questions first so I can better phrase the approach:

  • what HTM implementation are you using?
  • are your word encodings topological?

Well, as of right now I’m just using the BacktrackingTM directly, instead of setting up a network, to check whether the encoding works.

The way I encode each word is to build a GloVe representation from the entire data set and then, for each word, select the positions of highest significance. So each word has a unique representation, but some bits do overlap even when the words are not semantically similar.
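
Roughly, the selection step looks like this. It’s a simplified sketch of the idea rather than my actual code, with dummy GloVe vectors standing in for the trained ones:

```python
import numpy as np
from nupic.algorithms.backtracking_tm import BacktrackingTM

GLOVE_DIMS = 300   # SDR width equals the GloVe dimensionality here
ACTIVE_BITS = 20   # how many "most significant" positions to keep

# Dummy stand-in for the GloVe vectors trained on the data set.
vocab = ["the", "cat", "sat"]
glove_vectors = {w: np.random.randn(GLOVE_DIMS) for w in vocab}

def encode_word(word):
    """Binary SDR: turn on the positions where the word's GloVe
    vector is most significant (largest absolute value)."""
    vec = glove_vectors[word]
    top = np.argsort(np.abs(vec))[-ACTIVE_BITS:]
    sdr = np.zeros(GLOVE_DIMS, dtype="uint32")
    sdr[top] = 1
    return sdr

# Feed the SDRs straight into the BacktrackingTM, no network setup.
tm = BacktrackingTM(numberOfCols=GLOVE_DIMS, cellsPerColumn=32)
for word in vocab:
    tm.compute(encode_word(word), enableLearn=True, computeInfOutput=True)
```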

Side note:
I was also thinking of looking at Random Indexing by Sahlgren to encode the words, but I haven’t tried that method yet.

I’m not familiar with the techniques you are using, but if they are creating topological encodings, you could try turning off global inhibition, which will allow the SP to better process topological input. This is expensive, and it may not improve your results. Also, it is probably going to be hard to figure out how to do it. I hooked up NuPIC to process topological data once in this video. I have all the SP params I used here somewhere (look for spParams).
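
For concreteness, the topological variant mostly comes down to how the SP is constructed. Something along these lines, assuming a 2D encoding; the numbers are illustrative, not the actual spParams from that experiment:

```python
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler

sp = SpatialPooler(
    inputDimensions=(64, 64),      # assumes the encoder output is a 2D grid
    columnDimensions=(32, 32),
    potentialRadius=16,            # each column pools from a local input patch
    potentialPct=0.8,
    globalInhibition=False,        # local inhibition: this is the expensive part
    numActiveColumnsPerInhArea=-1, # disabled in favour of localAreaDensity
    localAreaDensity=0.02,         # ~2% of columns active per inhibition area
    wrapAround=False,
)

# The compute call is unchanged; the input is the flattened 2D encoding.
input_bits = np.zeros(64 * 64, dtype="uint32")
active_columns = np.zeros(32 * 32, dtype="uint32")
sp.compute(input_bits, True, active_columns)
```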


Cheers, will definitely look into it! Appreciate the support!

Yeah, I might not have explained that very well! :sweat_smile: But I found a report called “SHTM: A Neocortex-inspired Algorithm for One-Shot Text Generation” and used their method as inspiration for my encoder. I couldn’t use cortical.io’s Retina engine because of the nature of my data set.
