How to get the SDR of a word

Oh, I see what you are thinking. You are thinking of a setup something like this:

image

This would work; however, it may not be the most efficient setup (a lot of synapses in the second SP's potential pool would be used infrequently, such as those which connect with letters at position 45).

My suggestion is set up more like this:

image


What is position 45? Can you explain?
Yes, this is what I think: I think the maximum word size is just 8 (beautifully).
So how about CAT? Its size is just 3.
To give the second SP the same-sized input, I pad CAT with 5 space SDRs so it has 8 letters.
What do you think about it?

Ok, sure, if you limit yourself to words of size eight or shorter (i.e. you don't support all words in the English language), then your setup should work fine.

It is the "S" in the word pneumonoultramicroscopicsilicovolcanoconiosis (the longest word in the English language).

My words can skip this one; I just want to do the easy work first.
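For what it's worth, here is a minimal sketch of that fixed-length padding idea in Python, assuming each letter already has its own SDR (e.g. the output of the first SP over a letter image). The `letter_sdr()` helper and the sizes are made up for illustration, not part of the actual setup:

```python
import numpy as np

LETTER_SDR_SIZE = 256   # assumed width of a single letter SDR
MAX_WORD_LEN = 8        # words longer than this are not supported here

def letter_sdr(ch: str) -> np.ndarray:
    """Stand-in for the real per-letter SDR (the first SP's output); here it is
    just a fixed sparse random vector per character, including ' ' for padding."""
    rng = np.random.default_rng(ord(ch))  # deterministic per character
    sdr = np.zeros(LETTER_SDR_SIZE, dtype=np.uint8)
    sdr[rng.choice(LETTER_SDR_SIZE, size=8, replace=False)] = 1  # ~3% sparse
    return sdr

def word_input(word: str) -> np.ndarray:
    """Pad the word with spaces to MAX_WORD_LEN and concatenate the letter SDRs,
    giving a constant-size input vector for the second SP."""
    padded = word.ljust(MAX_WORD_LEN)[:MAX_WORD_LEN]
    return np.concatenate([letter_sdr(ch) for ch in padded])

cat = word_input("CAT")          # "CAT" + 5 spaces -> 8 * 256 = 2048 input bits
print(cat.shape, int(cat.sum()))
```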


Can you explain more, @sheiser1? Thanks.

Another example, same idea:
The words "contender’’ and "pretender’’ have nearly the opposite meaning, so they don’t overlap in that way. They do overlap in the letter sequences they use, bother end in ‘tender’. This commonality would give them very similar representations by your method I believe. I’m just pointing out that the concepts of ‘contending’ and ‘pretending’ are not similar in meaning, though they are written similarly in the English language.


Good point. I think there are some lessons to learn from grid cells used for position in a room – the representations are specific to the room. Understanding how that works could lead to an application here, whereby positions in the word are specific to the word. (another way to think of this is that location provides context to the letters, and the word provides context to the locations – a bi-directional feedback model)


We do use similarity of form to carry out some of our semantics, so I don’t see this as a veto for the method. Besides, “nearly the opposite meaning” may not support similarity, but it does support a relationship at the very least.


I haven't followed Paul's method well enough to push the idea… just brainstorming here… but I'd even say that if the model could form an understanding of the "pre-" and "con-" prefixes as the true holders of the antinomy here, that would be a huge step towards generalization ability.


Yeah, I brought up a few different unrelated approaches, sorry for jumping around. One thing I like about the concepts from HTM is that there are so many ways to apply them to different problems.


If I understand the problem correctly: if you are not concerned about semantics, then you could possibly encode a word into an SDR by encoding each letter together with its index. You can then create a union of these SDRs and feed it into the SP.

Given the word 'MAN', a random SDR can be assigned to M&0, another to A&1, and another to N&2. If you were to use key-value storage, you could assign each letter&index key a unique SDR value, then combine those SDRs into a union for the word.
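A rough sketch of that key-value idea, just to make it concrete. The SDR size, sparsity, and seeding scheme are arbitrary assumptions; the only point is that the same letter&index key always maps to the same random SDR, and the word is the union (OR) of its pair SDRs:

```python
import numpy as np

SDR_SIZE = 2048
ACTIVE_BITS = 40            # ~2% sparsity, a common HTM choice

_pair_sdrs = {}             # key-value store: (letter, index) -> SDR

def pair_sdr(letter: str, index: int) -> np.ndarray:
    """Return a stable random SDR for this letter&index key."""
    key = (letter, index)
    if key not in _pair_sdrs:
        rng = np.random.default_rng(ord(letter) * 100 + index)  # toy deterministic seed
        sdr = np.zeros(SDR_SIZE, dtype=np.uint8)
        sdr[rng.choice(SDR_SIZE, size=ACTIVE_BITS, replace=False)] = 1
        _pair_sdrs[key] = sdr
    return _pair_sdrs[key]

def word_union(word: str) -> np.ndarray:
    """Union (bitwise OR) of the per-position SDRs, e.g. M&0, A&1, N&2."""
    out = np.zeros(SDR_SIZE, dtype=np.uint8)
    for i, letter in enumerate(word):
        out |= pair_sdr(letter, i)
    return out

man, woman = word_union("MAN"), word_union("WOMAN")
print(int((man & woman).sum()))   # near zero overlap: M, A, N sit at different indices
```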

The problem with this approach is that there is no room for invariance. 'MAN' and 'WOMAN' would get different SDRs, because M has an index of 0 in 'MAN' but an index of 2 in 'WOMAN', so the encodings would differ. However, you could probably use a syntax-tree-based encoder, which might solve this problem as it would break the word down into parts.


I believe @Rodi wants the letter SDR to be the product of an SP with topology over MNIST-like letter images. So a random SDR incorporating both letter and position might not apply to this case.

I suppose you could add a classifier to the mix to use this strategy, though. Once you have classified a given letter after the SP as an "M", for example, you could create a separate random SDR for M&0, and so on.
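A hedged sketch of that extra classification step: pick whichever stored prototype letter SDR has the most overlap with the SP output, then reuse the letter&index lookup from the earlier sketch. The `prototypes` dict and `sp_output` array are placeholders here, not an actual htm.core API:

```python
import numpy as np

def classify_letter(sp_output: np.ndarray, prototypes: dict) -> str:
    """Return the letter whose prototype SDR overlaps the SP output the most."""
    return max(prototypes, key=lambda letter: int((sp_output & prototypes[letter]).sum()))

# usage (hypothetical):
#   letter = classify_letter(sp_output, prototypes)   # e.g. "M"
#   sdr = pair_sdr(letter, position)                  # from the sketch above
```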

Ah, fair enough.

When you union this, how would it be known that the position/location/index of a letter is associated with that letter? A union of CAT012 is ambiguous, as it is not known what 0 is associated with, and the same goes for 1 and 2; they could be associated with any letter. This might be why it needs to be re-encoded into a unique SDR that encodes the letter's association with its index.

Please correct me if I’m missing something, as I feel I might :slight_smile:

image

I didn’t mean for this to depict a union of CAT012, but rather a union of C&0, A&1, and T&2. The population of cells that make up the 0, 1, and 2 (or 1, 2, 3 in my drawing) are a different population of cells than those which make up C, A, and T. The C, A, and T minicolumns contain cells which have distal connections with the other population of cells, and thus become predictive by activation of 0, 1, and 2 cells.


Ah right, so this isn't a union, it's an overlap (AND)? If it were a union in the illustration above, then you'd have 5 cells (per step) in red rather than 3?

Sorry, my mistake: a union of overlaps. Each step is an overlap that gets 'flattened' into a union. Yeah, that makes sense now.

Yes, I should do a better depiction. Let me use orange column outline to depict winning columns, blue fill to depict predictive cells, red fill to depict active cells, and red fill with blue outline to depict predicted active cells. I think this might be a better visualization of the concept.

image

The union is just a poor man’s temporal pooling algorithm (probably good enough for this case). Also, it is worth noting that the predictive cells in the letters layer don’t have to match the active cells of the locations (that is just to make the concept easier to visualize). In practice, you’d probably need a lot more minicolumns for the letters than you would need cells to depict locations.
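If it helps, here is roughly what that "poor man's temporal pooling" amounts to in code: just OR the per-timestep activity together into one stable word SDR. The `steps` list would hold whatever per-letter SDRs the layer produces (e.g. the predicted-active cells); the names here are only illustrative:

```python
import numpy as np

def pooled_word_sdr(step_sdrs) -> np.ndarray:
    """Union (bitwise OR) of the per-timestep SDRs for one word."""
    pooled = np.zeros_like(step_sdrs[0])
    for sdr in step_sdrs:
        pooled |= sdr
    return pooled

# usage (hypothetical): steps = [cells_for('C', 0), cells_for('A', 1), cells_for('T', 2)]
#                       word_sdr = pooled_word_sdr(steps)
```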

Say we have a detector network that detects a "c". How this detector comes into existence belongs in a different discussion, because the SDR selects which detectors come into being and where they will wire in, as in a self-organizing map (SOM).
All activations of detector NNs are wired to an SDR bit.

All images are compressed before they are presented to a detector or moved to another detector within the brain.

A detector is not memory; it rings the bell when its pattern is seen, i.e. a bit that goes high.
Or: "I know it when I see it."

To store a detection in memory, the detector is forced to look at an internal sketch board that regenerates the compressed input. When the detector reactivates, it means mission complete: the compressed data on the sketch board can now be stored away.
So when you close your eyes and picture the letter "C", the compressed data in the neural software is uncompressed to make it look real.
When a person sits back and thinks of paths they will take, the detectors will activate on everything as if it were really there.

This way a complete temporal loop, or a temporal length of data, can be inputted into an NN detector all at once and then mapped to an SDR bit.

The compression and decompression of the data is done by something like this:

All the other plumbing is being done by a GAN: