Semantic Folding Theory: how to map contexts to a 2D map?

The Cortical whitepaper states the following on page 22:

The contexts themselves represent vectors that can be used to create a two-dimensional map in such a way that similar context-vectors are placed closer to each other than dissimilar ones, […]

I am interested in how this works in practice. The paper vaguely refers to “topological (local) inhibition mechanisms and/or competitive Hebbian learning principles” (same paragraph). However, it does not clarify how exactly this works.

I envision a large number of context vectors like this:

c_1: w1 w2 w3 w4 w5
c_2: w2 w4 w6 w7 w8
c_3: ...
...
c_n: ...

with a very large value for n. The goal is now to project these n context vectors onto a two-dimensional map of much smaller dimensionality (e.g. 128×128), with similar vectors – measured by the overlap of their words – ending up close to each other. How is this done? From the paper, I understand that this specific part does not necessarily make use of HTM at all, does it?
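
To make this concrete, here is a tiny sketch of how I picture the data and the similarity measure (the names `contexts` and `overlap` are just illustrative, not from the whitepaper):

```python
# Each context is a set of words; similarity between two contexts is the
# number of words they share. Names here are illustrative only.

contexts = {
    "c_1": {"w1", "w2", "w3", "w4", "w5"},
    "c_2": {"w2", "w4", "w6", "w7", "w8"},
    # ... up to c_n, for a very large n
}

def overlap(a: set, b: set) -> int:
    """Number of shared words between two context vectors."""
    return len(a & b)

print(overlap(contexts["c_1"], contexts["c_2"]))  # -> 2 (w2 and w4)
```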


Your understanding is spot on:

  • This doesn’t make use of HTM. (Semantic Folding theory positions itself as a complementary theory to HTM, see: https://en.wikipedia.org/wiki/Semantic_folding).
  • As the Cortical Whitepaper states, it makes use of competitive learning techniques to build the two-dimensional map. The Wikipedia page on competitive learning gives a fairly minimalist introduction, but you can probably get a few suggestions for further reading from there: https://en.wikipedia.org/wiki/Competitive_learning. A minimal sketch of one such technique follows below.
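
To make the competitive-learning idea concrete, here is a minimal self-organizing map (SOM) sketch in Python/NumPy. A SOM is one classic combination of winner-take-all competition with a topological (local) neighborhood; the whitepaper does not publish the exact algorithm Cortical.io uses, so take this as an illustration of the family of techniques, not as their method. The grid, vocabulary, and corpus sizes below are toy values – the 128×128 map from the question works the same way, just bigger:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 200   # toy vocabulary size (assumption for the sketch)
grid = 32          # toy map; a 128x128 map works identically, just bigger
n_contexts = 1000  # toy stand-in for "a very large n"

# Toy data: sparse binary context vectors (1 = word present in the context).
# Euclidean distance between such encodings reflects word overlap.
data = (rng.random((n_contexts, vocab_size)) < 0.05).astype(float)

# One weight vector per grid cell.
weights = rng.random((grid, grid, vocab_size))

# Grid coordinates, used by the neighborhood function below.
ys, xs = np.mgrid[0:grid, 0:grid]

n_steps = 3000
for t in range(n_steps):
    x = data[rng.integers(n_contexts)]

    # Competition: the cell whose weights best match x wins.
    dists = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(np.argmin(dists), dists.shape)

    # Cooperation: a Gaussian neighborhood around the winner that shrinks
    # over time -- the "topological (local)" part that pulls similar
    # contexts toward nearby cells.
    sigma = (grid / 2) * np.exp(-t / (n_steps / 4))
    lr = 0.5 * np.exp(-t / (n_steps / 2))
    g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma**2))

    # Adaptation: move the winner and its neighbors toward the input.
    weights += lr * g[:, :, None] * (x - weights)

# After training, a context's map position is its best-matching cell, so
# contexts sharing many words end up at nearby coordinates.
def position(v):
    d = np.linalg.norm(weights - v, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

print([position(v) for v in data[:5]])
```

Variants like growing self-organizing maps or competitive Hebbian learning would serve the same role; the key property is that winner-take-all competition plus local neighborhood updates turn vector similarity into 2D proximity.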