Resize SDR?

Is there a simpler way to resize an SDR up/down, i.e. without using the Spatial Pooler?

I was thinking of a simple ratio, e.g. 2048 => 1000 gives 1000/2048 ≈ 0.488, so:

src-bit 1 : 1 * 0.488 = 0.488 => bit 1
src-bit 2 : 2 * 0.488 = 0.976 => bit 1   .... bit 1 = src-bit 1 || src-bit 2
src-bit 3 : 3 * 0.488 = 1.464 => bit 2

similar for upsizing … but probably instead of ||, I should use && ?

Any better ideas?
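In code, the proportional mapping I have in mind would look something like this (a rough sketch, assuming SDRs are plain numpy bool arrays; `resize_by_ratio` is just an illustrative name):

    import numpy as np

    def resize_by_ratio(x, new_size):
        # Sketch of the ratio idea: each source index is mapped
        # proportionally onto the target; colliding source bits are OR-ed.
        size = len(x)
        rv = np.zeros(new_size, dtype=bool)
        for i in range(size):
            j = min(i * new_size // size, new_size - 1)
            rv[j] |= x[i]
        return rv

    sdr = np.random.rand(2048) < 0.02      # ~2% sparse source SDR
    small = resize_by_ratio(sdr, 1000)     # sparsity creeps up when sizing down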

I have never thought about doing this. I’m curious: why can’t you just update the encoder to produce different-sized SDRs?

I’m playing with SDR algebra (plus I’ve done simple encoders, an SP, and pipe processing), i.e. I don’t necessarily generate the SDRs through an encoder and SP, but randomly instantiate them and build hierarchical structures …

    x = Lexicon(2000, sparsity=0.05)
    x.az()

    #BIND
    In [689]: ab = x.a * x.b

    In [701]: x.a.count()
    Out[701]: 100L

    In [702]: ab.count()
    Out[702]: 100L

    In [703]: x.b.count()
    Out[703]: 100L

    In [690]: ab
    Out[690]: 0000000000000000000000010000000000000100000000000000000000000000100000000000000000000000000000001000...


    #Overlap
    In [691]: ab / x.b
    Out[691]: 47

    #Similarity
    In [692]: ab // x.b
    Out[692]: 0.470

    In [693]: x.best_match((ab - x.a))
    Out[693]: 'b'  

    In [694]: x.a.count()
    Out[694]: 100L

    In [695]: x.a.size
    Out[695]: 2000

    In [696]: denser = sdp.rand(2000,sparsity=0.1)

    In [697]: x.a / denser
    Out[697]: 13

    In [698]: (x.a * denser) / x.a
    Out[698]: 39

    In [699]: (x.a * denser) / ab
    Out[699]: 21

    In [700]: (x.a * x.c) / x.c
    Out[700]: 55

    #Piping ...
    np.random.randint(0,100,10) % enc.pencode % sp.ptrain
    np.random.randint(0,100,10) % enc.pencode % sp.ppredict % ... do something ...

I plan to include SDR expressions in the piping … then you can declare them as functions and chain them together for more complex processing …
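For reference, the operator conventions in the session above can be modelled with a toy class like this (`ToySDR` is just an illustrative name, not my actual library; only overlap `/` and similarity `//` are shown, since bind `*` and unbind `-` need the full implementation):

    import numpy as np

    class ToySDR(object):
        def __init__(self, bits):
            self.bits = np.asarray(bits, dtype=bool)
        def count(self):
            return int(self.bits.sum())
        def __div__(self, other):               # overlap: number of shared 1-bits
            return int((self.bits & other.bits).sum())
        __truediv__ = __div__                   # same operator under Python 3
        def __floordiv__(self, other):          # similarity: overlap / active count
            return (self / other) / float(max(self.count(), 1))

    a = ToySDR(np.random.rand(2000) < 0.05)
    b = ToySDR(np.random.rand(2000) < 0.05)
    print(a / b)    # small random overlap
    print(a // a)   # 1.0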

It is called Vector Symbolic Architecture …

I did a similar thing with Kanerva binary hyper-vectors … 10000 bits, 50% sparsity

http://igrok.site/bi/TOC.html

I even wrote a simple Prolog interpreter on top of it :wink:
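With dense 50% hyper-vectors the bind is just XOR, which is its own inverse, so unbinding is exact; a quick numpy sketch:

    import numpy as np

    # Kanerva-style dense binary hyper-vectors: bind = elementwise XOR,
    # which is self-inverse, so unbinding recovers the operand exactly.
    a = np.random.rand(10000) < 0.5
    b = np.random.rand(10000) < 0.5
    ab = a ^ b                      # bind
    assert ((ab ^ a) == b).all()    # unbind recovers b exactly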


I will pop my head in here (BTW, a long time follower of HTM). I just had a quick skim of your site (http://igrok.site/bi/TOC.html) and it seems we perhaps share some ideas, though our solutions are quite different. I too have a language that manipulates SDRs, called the Semantic DB (in hindsight not the best name, but way too late to change it now), that I have been tinkering with on and off for quite some time.

The language is built around the idea that everything is either a superposition (or a sequence of superpositions) or an operator that modifies the given superposition. Superpositions are, if you look at them the right way, almost identical to HTM SDRs. One major difference is that superpositions have float coefficients, rather than just binary values. Whether that is biologically plausible is a debate for another time and someone else, but the feature is certainly useful, and there are plenty of things you can’t do without it.

The superposition + operator model sounds simple, but unfortunately the language ended up being quite technical to use in practice. I’ve tried a few times to explain my work, but not to a great deal of success. BTW, I’m not aiming for full bio-plausibility like Numenta/HTM; I was aiming more towards a mathematical notation that can represent brain-like things.
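To give a flavour, here is a toy Python analogue (just an illustration, not the actual Semantic DB code): a superposition as a dict of ket -> float coefficient, an operator as a map from kets to superpositions, and operator application as a coefficient-weighted union:

    # Toy analogue only, not the actual Semantic DB implementation.
    friend = {'joris': {'jonas': 1.0, 'angelika': 1.0, 'dimitar': 1.0}}

    def apply_op(op, sp):
        # apply operator 'op' to superposition 'sp': weighted union of results
        result = {}
        for ket, coeff in sp.items():
            for k2, c2 in op.get(ket, {}).items():
                result[k2] = result.get(k2, 0.0) + coeff * c2
        return result

    print(apply_op(friend, {'joris': 0.5}))
    # -> {'jonas': 0.5, 'angelika': 0.5, 'dimitar': 0.5}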

Anyway, if anyone is interested in taking a look here are some links:

The code is here (installing should be fairly straightforward; it uses Python 3):

Perhaps the best tutorial I have is the one describing the family-relations.sw file:

Here is the family-relations.sw file, though without any background it is not going to make any sense!

Here is the usage info for our operators:
http://semantic-db.org/docs/usage/


Interesting … you may find this useful: https://dtai.cs.kuleuven.be/problog/

How would your project differ from ProbLog?

So in general it is a knowledge DB? And is the underlying implementation SDRs or matrices?

Do you have a paper on the underlying theory of what you do?

Thanks for the link. ProbLog is built around logic statements, while SDB is a type of triple store, so in general ProbLog examples are not going to translate cleanly at all. But we can approximate the example given on the ProbLog page. Here is the ProbLog we are trying to reproduce:

0.3::stress(X) :- person(X).
0.2::influences(X,Y) :- person(X), person(Y).

smokes(X) :- stress(X).
smokes(X) :- friend(X,Y), influences(Y,X), smokes(Y).

0.4::asthma(X) :- smokes(X).

person(angelika).
person(joris).
person(jonas).
person(dimitar).

friend(joris,jonas).
friend(joris,angelika).
friend(joris,dimitar).
friend(angelika,jonas).

Let’s do that, approximately (I couldn’t find a way to do it exactly), in SDB.
Also, this assumes I understood the ProbLog correctly!

-- first define our people:
-- by default, an object is not a person:
is-a-person |*> #=> |no>

-- list objects that are actually people:
is-a-person |angelika> => |yes>
is-a-person |joris> => |yes>
is-a-person |jonas> => |yes>
is-a-person |dimitar> => |yes>

-- learn the set of friends:
friend |joris> +=> |jonas>
friend |joris> +=> |angelika>
friend |joris> +=> |dimitar>
friend |angelika> +=> |jonas>

-- if a person, then 30% chance of being stressed:
is-stressed |*> #=> 0.3 is-a-person |_self>

-- if a person smokes, then 40% chance of them having asthma:
has-asthma |*> #=> 0.4 smokes |_self>

-- joris is 20% influenced by dimitar:
influences |joris> => 0.2|dimitar>

-- dimitar smokes:
smokes |dimitar> => |yes>

-- define our general smokes operator:
smokes |*> #=> is-a-person intersection(such-that[smokes] friend |_self>, influences |_self>)

Now ask some simple questions:

sa: smokes |dimitar>
|yes>

sa: has-asthma |dimitar>
0.4|yes>

sa: smokes |joris>
0.2|yes>

sa: has-asthma |joris>
0.08|yes>

sa: smokes |jonas>
|no>

sa: has-asthma |jonas>
0.4|no>

Finally, we store the data as superpositions, not full matrices. In general superpositions are too sparse for full matrices to be practical.

Sorry, I don’t have a paper on the theory. I wouldn’t even know how to start writing one, to be frank.


That’s the best starting point, I think. I haven’t thought of anything better yet, either.


What I did is “accumulate” the 1-counts per target bit, then pick the bits with the highest counts first (especially when sizing down, multiple source 1-bits land on the same target bit, so the counts are larger).

    @staticmethod
    def resize(x, new_size):
        rv = SDP(new_size)
        size = len(x)
        counts = np.zeros(new_size, dtype=np.uint8)
        ratio = size / float(new_size)
        # new sparsity: target bit count that preserves the source sparsity
        sparsity_cnt = int(x.count() / float(x.size) * new_size)

        # accumulate source 1-bits onto their proportionally mapped target slot
        for i in xrange(size):
            ix = min(max(int(i / ratio), 0), new_size - 1)
            counts[ix] += x[i]

        # pick the best: target bits with the highest accumulated counts
        sdp.set_by_ixs(rv, np.argsort(counts)[::-1][:sparsity_cnt])
        return rv
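FWIW, the same count-and-pick idea can be vectorized with plain numpy (a sketch, assuming the SDR is a plain bool array rather than my SDP class):

    import numpy as np

    def resize_np(x, new_size):
        size = len(x)
        # map every source index proportionally onto the target range
        ix = np.minimum(np.arange(size) * new_size // size, new_size - 1)
        counts = np.zeros(new_size)
        np.add.at(counts, ix, x.astype(float))      # accumulate 1s per target bit
        k = int(x.sum() / float(size) * new_size)   # preserve the source sparsity
        rv = np.zeros(new_size, dtype=bool)
        rv[np.argsort(counts)[::-1][:k]] = True
        return rv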

Awesome, I’m excited to learn more about this, thanks!

I’m eager to hear any feedback!