Once again I would like to bring up the Semantic DB, previously mentioned here, with a GitHub README here.
Sequences are a natural data type in the SDB, so I took your example of sequence compression as a challenge. I wanted to see whether, and how easily, I could implement it as an operator in the SDB. It took some work, but it now seems to be working correctly. So on to a quick demonstration. (Note that the SDB shell uses "sa: " as its prompt.)
We start by learning our two example sequences using these learn rules:
sa: seq |one> => ssplit |ABCDEF>
sa: seq |two> => ssplit |GBCHBCDXY>
where ssplit is an operator that splits a string into a sequence (here, one ket per character). Note that scompress works with arbitrary sequences; we are just using simple ones for clarity.
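For readers without the SDB to hand, here is a tiny Python sketch of the data model I will use in the sketches further below: a named sequence is just a list of one-character symbols, and the analogue of ssplit is simply Python's list(). This is only an illustration, not how the SDB actually stores things.

    def ssplit(s):
        """Toy analogue of ssplit: split a string into a list of one-character symbols."""
        return list(s)

    seq = {
        "one": ssplit("ABCDEF"),      # ['A', 'B', 'C', 'D', 'E', 'F']
        "two": ssplit("GBCHBCDXY"),
    }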
Now we use our new sequence compression operator, scompress:
sa: scompress[seq, cseq]
where “seq” is the source operator and “cseq” is the destination operator. You can of course use other operator names if you prefer.
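As a rough guess at the idea behind scompress (the actual SDB implementation may well differ in its details, tie-breaking and rule naming), here is a Python sketch: repeatedly find a subsequence that occurs more than once across the stored sequences (and across the rules already extracted), give it a fresh name, and replace every occurrence with that name. The function names here (scompress_sketch, find_repeat, replace_chunk) are mine, not SDB operators.

    def find_repeat(rules, min_len=2):
        """Return the longest chunk (tuple of symbols) that occurs at least
        twice across all rule bodies, or None if there is no such chunk."""
        counts = {}
        for body in rules.values():
            for length in range(min_len, len(body) + 1):
                for i in range(len(body) - length + 1):
                    chunk = tuple(body[i:i + length])
                    counts[chunk] = counts.get(chunk, 0) + 1
        repeats = [chunk for chunk, n in counts.items() if n >= 2]
        return max(repeats, key=len) if repeats else None

    def replace_chunk(body, chunk, name):
        """Replace every non-overlapping occurrence of chunk in body with name."""
        out, i = [], 0
        while i < len(body):
            if tuple(body[i:i + len(chunk)]) == chunk:
                out.append(name)
                i += len(chunk)
            else:
                out.append(body[i])
                i += 1
        return out

    def scompress_sketch(seq):
        """Toy analogue of scompress[seq, cseq]: turn a dict of
        name -> list-of-symbols into a small grammar of cseq-style rules."""
        rules = {name: list(body) for name, body in seq.items()}
        n = 0
        while True:
            chunk = find_repeat(rules)
            if chunk is None:
                return rules
            new_name = "scompress: %d" % n
            n += 1
            rules = {name: replace_chunk(body, chunk, new_name)
                     for name, body in rules.items()}
            rules[new_name] = list(chunk)

On the two sequences above this sketch happens to produce the same rules as the dump shown further below (in general the rule numbering could differ):

    cseq = scompress_sketch({"one": list("ABCDEF"), "two": list("GBCHBCDXY")})
    # cseq == {'one': ['A', 'scompress: 0', 'E', 'F'],
    #          'two': ['G', 'scompress: 1', 'H', 'scompress: 0', 'X', 'Y'],
    #          'scompress: 0': ['scompress: 1', 'D'],
    #          'scompress: 1': ['B', 'C']}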
Then to tidy things a little, we delete the original sequences using this short operator sequence:
sa: unlearn[seq] rel-kets[seq]
where rel-kets returns the list (in SDB terms, a superposition) of kets for which “seq” is defined, and unlearn then removes the “seq” learn rules for those kets.
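In the toy Python model there is no unlearn or rel-kets; the equivalent tidy-up, continuing from the sketches above, is simply forgetting the original dict once the compressed rules exist:

    cseq = scompress_sketch(seq)   # using the sketch above
    seq.clear()                    # rough analogue of: unlearn[seq] rel-kets[seq]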
Then finally, we display the knowledge currently loaded into memory:
sa: dump
|context> => |Global context>
cseq |one> => |A> . |scompress: 0> . |E> . |F>
cseq |two> => |G> . |scompress: 1> . |H> . |scompress: 0> . |X> . |Y>
cseq |scompress: 0> => |scompress: 1> . |D>
cseq |scompress: 1> => |B> . |C>
cseq |*> #=> |_self>
And of course we can go the other way and reconstruct the original sequences. In fact, this is rather easy in the SDB: we simply use cseq^k, i.e. cseq applied k times. Let’s demonstrate that:
sa: cseq |one>
|A> . |scompress: 0> . |E> . |F>
sa: cseq^2 |one>
|A> . |scompress: 1> . |D> . |E> . |F>
sa: cseq^3 |one>
|A> . |B> . |C> . |D> . |E> . |F>
sa: cseq |two>
|G> . |scompress: 1> . |H> . |scompress: 0> . |X> . |Y>
sa: cseq^2 |two>
|G> . |B> . |C> . |H> . |scompress: 1> . |D> . |X> . |Y>
sa: cseq^3 |two>
|G> . |B> . |C> . |H> . |B> . |C> . |D> . |X> . |Y>
Since we require k == 3 to fully reconstruct the original sequences, we can say this system has a hierarchical depth of 3.
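In the toy model, the cseq^k trick corresponds to applying the rule table k times: each application replaces every symbol that has a rule with that rule's body, and leaves anything else alone, which is the role the cseq |*> #=> |_self> rule plays in the dump above. A sketch, with the rule table copied from the first dump:

    # Rule table copied from the dump above.
    cseq = {
        "one": ["A", "scompress: 0", "E", "F"],
        "two": ["G", "scompress: 1", "H", "scompress: 0", "X", "Y"],
        "scompress: 0": ["scompress: 1", "D"],
        "scompress: 1": ["B", "C"],
    }

    def expand_once(cseq, body):
        """Replace each symbol that has a rule with that rule's body;
        leave everything else alone (the |*> #=> |_self> fall-through)."""
        out = []
        for sym in body:
            if sym in cseq:
                out.extend(cseq[sym])
            else:
                out.append(sym)
        return out

    def apply_k(cseq, name, k):
        """Toy analogue of cseq^k |name>."""
        body = list(cseq[name])
        for _ in range(k - 1):
            body = expand_once(cseq, body)
        return body

    print(apply_k(cseq, "one", 3))   # ['A', 'B', 'C', 'D', 'E', 'F']
    print(apply_k(cseq, "two", 3))   # ['G', 'B', 'C', 'H', 'B', 'C', 'D', 'X', 'Y']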
Let’s do one more quick example:
sa: seq |one> => ssplit |ABCDEUVWXY>
sa: seq |two> => ssplit |BCD>
sa: seq |three> => ssplit |UVWZ>
sa: scompress[seq, cseq]
sa: unlearn[seq] rel-kets[seq]
sa: dump
|context> => |Global context>
cseq |one> => |A> . |scompress: 0> . |E> . |scompress: 1> . |X> . |Y>
cseq |two> => |scompress: 0>
cseq |three> => |scompress: 1> . |Z>
cseq |scompress: 0> => |B> . |C> . |D>
cseq |scompress: 1> => |U> . |V> . |W>
cseq |*> #=> |_self>
And in this case, k == 2 is sufficient to reconstruct the original sequences, so this system has a hierarchical depth of 2.
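In the toy model the hierarchical depth is easy to compute: keep expanding until the sequence stops changing, and count the applications; the maximum over the stored sequences gives the depth of the system. A self-contained sketch, with the rule table copied from the second dump:

    # Rule table copied from the second dump above.
    cseq = {
        "one": ["A", "scompress: 0", "E", "scompress: 1", "X", "Y"],
        "two": ["scompress: 0"],
        "three": ["scompress: 1", "Z"],
        "scompress: 0": ["B", "C", "D"],
        "scompress: 1": ["U", "V", "W"],
    }

    def depth(cseq, name):
        """Smallest k such that cseq^k |name> is fully expanded."""
        body, k = list(cseq[name]), 1
        while any(sym in cseq for sym in body):
            body = [t for sym in body
                      for t in (cseq[sym] if sym in cseq else [sym])]
            k += 1
        return k

    print(max(depth(cseq, name) for name in ("one", "two", "three")))   # 2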
I hope this is of at least a little interest, and prompts some curiosity about the SDB.
Feel free to contact me at garry -at- semantic-db.org