Triadic Memory — A Fundamental Algorithm for Cognitive Computing

@POv - I think there's a very simple way to overcome memory size limitations.
"Canonically", for e.g. DiadicMemory, one needs to allocate a 2D byte array of N*(N-1)/2 rows and N columns.

During mem.store(X, Y), the P active bits of X are expanded into P*(P-1)/2 row addresses, and Y is added into each of those rows as a bit pattern (e.g. 00010011 …).
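A minimal sketch of that canonical layout, assuming NumPy and my own illustrative names (this is not the actual DiadicMemory API, just the scheme as described above):

```python
import numpy as np

# Toy sketch of the canonical layout: one byte row per unordered pair of
# X's active bit positions, N columns for Y's bits.
N = 100                                         # SDR length, small for illustration
mem = np.zeros((N * (N - 1) // 2, N), dtype=np.uint8)

def pair_address(i, j):
    # Flat row index of the unordered bit pair (i, j), with i < j.
    return i * N - i * (i + 1) // 2 + (j - i - 1)

def store(x_bits, y_bits):
    # x_bits: sorted active bit positions of X; y_bits: active positions of Y.
    for a, i in enumerate(x_bits):
        for j in x_bits[a + 1:]:
            mem[pair_address(i, j), y_bits] += 1   # add Y's bits into the row
```

With P active bits in X, each store() touches P*(P-1)/2 rows and increments the counters at Y's positions in each of them.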

One problem with this is that for largish SDRs, e.g. a 10-kbit SDR (N = 10000), one would have to allocate roughly 10000·9999/2 rows × 10000 columns ≈ 0.5 trillion bytes of memory.

There is a simple trick to allocate an array with as many rows as you like: simply take each row address modulo the number of available rows.
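The same sketch with the modulo trick applied (parameter names are mine, chosen for illustration):

```python
import numpy as np

# Instead of N*(N-1)/2 rows, allocate only R rows and fold every pair
# address into the range [0, R) with a modulo.
N = 1_000            # SDR length: the full table would need 499,500 rows
R = 4_096            # rows we can actually afford
mem = np.zeros((R, N), dtype=np.uint8)

def pair_address(i, j):
    # Flat index of the unordered bit pair (i, j), with i < j.
    return i * N - i * (i + 1) // 2 + (j - i - 1)

def store(x_bits, y_bits):
    for a, i in enumerate(x_bits):
        for j in x_bits[a + 1:]:
            mem[pair_address(i, j) % R, y_bits] += 1  # fold into available rows
```

Two different pairs may now land in the same row; at recall time that shows up as extra noise, so choosing R is exactly the memory-vs-capacity negotiation mentioned below.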

Yes, errors (collisions) will happen sooner, but:

  1. the user has the option of balancing/negotiating between memory space and store() capacity.
  2. with a flat 1D array (as in the C code) there is virtually no restriction on SDR length. SDRs can be e.g. P=10 active bits out of N=100000, and the semantics become very rich, because each bit can represent a certain unique property.
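Point 2 can be sketched with a single contiguous buffer of R*N bytes, indexed as (folded row) * N + column — a hypothetical illustration of the flat layout, not the actual C code:

```python
import numpy as np

# One flat 1D allocation of R*N bytes, independent of how large the full
# pair table would be. Parameters are illustrative.
N = 100_000          # very long SDR: the full pair table would be astronomical
R = 500              # user-chosen row budget
mem = np.zeros(R * N, dtype=np.uint8)   # single contiguous allocation

def store(x_bits, y_bits):
    for a, i in enumerate(x_bits):
        for j in x_bits[a + 1:]:
            row = (i * N - i * (i + 1) // 2 + (j - i - 1)) % R  # folded address
            for y in y_bits:
                mem[row * N + y] += 1                           # flat indexing
```

Here N=100000 costs only R*N = 50 MB, whereas the unfolded table would need about 5×10¹⁴ bytes.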

@DanML

I had Numba core-dumping on me, without any other message, while rewriting an older, similar module.

I tracked it down to the fact that I had inadvertently accessed a NumPy array out of its range (an addressing mistake).
