Triadic Memory — A Fundamental Algorithm for Cognitive Computing

This graph shows the number of ‘extra answers’ you get back after storing an x,y,z triple and then querying x,y,_ to retrieve z.

Example:

  • store(x,y,z) with z being [ 1,2,3,4,5,6,7,8,9,10 ]
  • query(x,y,_) and get back [ 1,2,3,4,5,6,7,8,9,10,44 ]
    represents an extra answer of 1.
  • query(x,y,_) and get back [ 1,2,3,4,5,6,7,8,9,10,44,71 ]
    represents an extra answer of 2.

This was based on N=1000 and P=10. The vectors added are all unique, with no repeating elements.

The y axis shows the number of extra values returned (we’re expecting 0 for perfect recall).

The x axis shows how many triples have been stored. The triples are all stitched together into a chain: abc, bcd, cde, def.

[Figure: extra_answers]

NOTE: these results are for the Python implementation only.
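
For anyone wanting to reproduce this, a minimal sketch of how such a chained test set can be built is below. It assumes only NumPy; the random_sdr helper is a stand-in of my own, not the generator used by the actual test harness.

    import numpy as np

    N, P = 1000, 10  # SDR length and number of ON bits, matching the run above

    def random_sdr(rng):
        # A random sparse SDR: P distinct indices drawn from 1..N
        return np.sort(rng.choice(np.arange(1, N + 1), size=P, replace=False))

    rng = np.random.default_rng()
    sdrs = [random_sdr(rng) for _ in range(1000)]

    # Stitched (chained) triples: abc, bcd, cde, ...
    triples = [(sdrs[i], sdrs[i + 1], sdrs[i + 2]) for i in range(len(sdrs) - 2)]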

Here is another, extended run, this time also with negative values given for the number of elements missing from z.

  • store(x,y,z) with z being [ 1,2,3,4,5,6,7,8,9,10 ]
  • query(x,y,_) and get back [ 1,2,3,4,5,6,7,8,9 ]
    represents an extra answer of -1, since it is missing a stored value.

The y axis bottoms out at -10, because at that point you have lost all the elements of z.

[Figure: extra_answers]

NOTE: these results are for the Python implementation only.
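
Putting the two conventions together, a small helper that computes this signed score could look like the sketch below; netting extra elements against missing ones in a single number is my reading of how the plots were produced.

    def extra_answers(stored_z, returned_z):
        # Signed 'extra answers' score as described above:
        #   positive = elements returned that were never stored in z
        #   negative = stored elements of z missing from the result
        # Combining the two into one number is an assumption of this sketch.
        stored, returned = set(stored_z), set(returned_z)
        return len(returned - stored) - len(stored - returned)

    extra_answers(range(1, 11), [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 44])  # -> 1
    extra_answers(range(1, 11), [1, 2, 3, 4, 5, 6, 7, 8, 9])          # -> -1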

Here you go

@POv Peter, the code doesn’t seem to handle values of zero in the vector? Is this an expected constraint?

e.g.
0 3 5 7 8 9 == invalid
1 3 5 7 8 9 == fine

Yes, in this convention the indices of an SDR run from 1 to N.
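
For anyone hitting the same surprise, a quick input check along these lines catches it early. This is a sketch of my own; the actual implementations may simply not validate their inputs.

    def check_sdr(sdr, N):
        # Under this convention a valid SDR is a set of unique indices in 1..N.
        if any(i < 1 or i > N for i in sdr):
            raise ValueError(f"SDR indices must lie in 1..{N}: {sdr}")
        if len(set(sdr)) != len(sdr):
            raise ValueError(f"SDR indices must be unique: {sdr}")

    check_sdr([1, 3, 5, 7, 8, 9], N=1000)    # fine
    # check_sdr([0, 3, 5, 7, 8, 9], N=1000)  # raises: 0 is not a valid index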

I worked out what the problem with that test is: it was a P=10 run with only 8-element vectors being used.
If you run it with P=8 and the same digits, the answers are correct.

e.g.

{39 54 93 200 205 211 232 248, 18 58 71 109 168 205 211 255, 46 49 62 71 125 155 167 216}
{39 54 93 200 205 211 232 248, 18 58 71 109 168 205 211 255, _}
46 49 62 71 125 155 167 216
{18 58 71 109 168 205 211 255, 46 49 62 71 125 155 167 216, 55 102 117 162 171 192 199 208}
{39 54 93 200 205 211 232 248, 18 58 71 109 168 205 211 255, _}
46 49 62 71 125 155 167 216

I built the same test harness around the C implementation as the one for Python.
This shows significantly better results.

The axes and criteria are the same as further above.

[Figure: extra_answers_C_SEQ499997_N1000_P10]

There must be important implementation differences between the C and Python versions.

Zooming in slightly, you can see a usable zone with perfect recall (the zero line).
Negative scores mean you are losing elements from z, and positive scores mean you are getting extra (incorrect) elements.

[Figure: extra_answers_C_SEQ99997_N1000_P10]

Obviously, each run produces a different random vector set, so performance will vary slightly.

Yep, I was rushing with the Python. I’ll test the … latest one.

OK, now the Python version is tested with 1M random chained writes and 100k reads, which yielded a single 1-bit error.

It is quite slow: the store() speed is ~430 writes/sec, and I haven’t tested the reads.
A numbified version might be worth trying.

Numba JIT speeds up store() to a very impressive >80,000 writes/sec.

The query… is trickier to numbify. It stays at a damned 666 reads/sec.

@cezar_t Indeed, your (Python) results replicate the C version. Nice.

[Figure: extra_answers_C_SEQ99997_N1000_P10]

thanks @DanML

The numbified query now reaches a decent 15k queries/second.

@POv “numbifying” means using a just-in-time compiler package named Numba. It has its quirks and limitations, and the code looks uglier and slightly longer than pure Python.
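
To give a flavour of what that looks like, here is a toy example of the pattern: a counter update over all x, y, z index combinations. It only illustrates how the @njit decorator is applied and is not the actual sdrsdm.py code.

    import numpy as np
    from numba import njit

    @njit
    def store_counts(mem, x, y, z):
        # The tight triple loop that is painfully slow in pure Python but
        # compiles to fast machine code under Numba's @njit.
        for i in x:
            for j in y:
                for k in z:
                    mem[i, j, k] += 1

    N, P = 100, 4                               # toy sizes, not the real N=1000, P=10
    mem = np.zeros((N, N, N), dtype=np.uint8)
    rng = np.random.default_rng()
    x, y, z = (rng.choice(N, P, replace=False) for _ in range(3))
    store_counts(mem, x, y, z)                  # first call compiles, later calls are fast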

Just for completeness: the 500k Python run stats, again matching the C code in output.

[Figure: extra_answers_SEQ500000_N1000_P10]

Thanks again @DanML

I provided an sdrsdm.py on both my and Peter’s GitHub. It implements TriadicMemory and DiadicMemory (which stores a value SDR under a key SDR).
Using Numba makes it really fast: DiadicMemory exceeds 100k writes/second and performs queries at a rate of 18k/sec.
Though that performance is with a relatively short, very sparse SDR of 10/1000 bits.
Mine has DiadicMemory test code included in “main”.

Peter’s is a clean library; example usage and tests will follow.

See diadic_test.py and triadic_test.py on GitHub.

As expected, performance degrades with solidity (number of ON bits), yet for 32/2048-bit SDRs it still manages a decent >8k writes/sec and >4k reads/sec.

Memory footprint for 2048-bit SDRs is ~4 GByte.

Recall is virtually 100% until a certain threshold of write operations.
On the above 32/2048 SDRs this threshold is around 0.7-0.8 million random writes.

After the threshold, errors become quite frequent, but even then most wrong SDRs are only one bit off.

PS: the above figures are for the diadic memory.

To make things clearer:

Diadic memory maps one key SDR x to one value SDR y. The relationship is unidirectional: you cannot query the key knowing the value.

Triadic memory maps three SDRs x, y, z to each other, which means any member of the triplet can be recalled with partial knowledge of the other two members.
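
In code terms the difference looks roughly like this. The class names come from sdrsdm.py as described above, but the constructor and method signatures here are my assumptions; check the file for the real API.

    import numpy as np
    from sdrsdm import DiadicMemory, TriadicMemory  # API details assumed, see note above

    def random_sdr(N=1000, P=10):
        rng = np.random.default_rng()
        return np.sort(rng.choice(np.arange(1, N + 1), P, replace=False))

    x, y, z = random_sdr(), random_sdr(), random_sdr()

    dm = DiadicMemory(1000, 10)
    dm.store(x, y)                 # remember value y under key x
    y_back = dm.query(x)           # key -> value works...
    # ...but there is no way to ask dm for x given y: the mapping is one-directional.

    tm = TriadicMemory(1000, 10)
    tm.store(x, y, z)
    z_back = tm.query(x, y, None)  # recall the third member from the other two
    x_back = tm.query(None, y, z)  # or the first, and so on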

And on 10/1000 SDRs this threshold is around 20k writes.

We should probably present this as the probability of each error size at a given write count.
Can this be calculated in advance, to compare with the empirical answers?
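
A sketch of the empirical side of that, turning a batch of per-query scores observed at one write count into a probability table (the names are mine, not from the existing test scripts):

    from collections import Counter

    def error_size_distribution(error_sizes):
        # error_sizes: the signed 'extra answers' value observed for each test
        # query at a given write count. Returns {error size: observed probability}.
        counts = Counter(error_sizes)
        total = len(error_sizes)
        return {size: n / total for size, n in sorted(counts.items())}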

Are you sure? With the latest sdrsdm.py and diadic_test.py I get clean results at 400k writes and a couple of errors at 450k.

Those numbers are not for that version. Let me re-test with that one.

Just note that the randomSDR function does not guarantee unique vectors, so you may get different results anyway.

What do you mean by unique? Randomness in general does not avoid duplicates, although in this case they are extremely unlikely.
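
If guaranteed-unique test vectors are wanted anyway, one option is to reject duplicates as the set is drawn. This is a sketch of my own, independent of the randomSDR in sdrsdm.py.

    import numpy as np

    def unique_random_sdrs(count, N=1000, P=10, seed=None):
        # Draw random P-of-N SDRs, rejecting duplicates, so the test set is
        # guaranteed unique. Duplicates are extremely unlikely at these sizes,
        # but this removes them as a variable between runs.
        rng = np.random.default_rng(seed)
        seen, out = set(), []
        while len(out) < count:
            sdr = tuple(np.sort(rng.choice(np.arange(1, N + 1), P, replace=False)))
            if sdr not in seen:
                seen.add(sdr)
                out.append(np.array(sdr))
        return out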