The Cortical whitepaper states the following on page 22:
The contexts themselves represent vectors that can be used to create a two-dimensional map in such a way that similar context-vectors are placed closer to each other than dissimilar ones, […]
I am interested in how this works in practice. The paper vaguely refers to “topological (local) inhibition mechanisms and/or competitive Hebbian learning principles” (same paragraph). However, it does not clarify how exactly this works.
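My current guess is that “competitive Hebbian learning” here means something like a Kohonen-style update, in which the best-matching unit $b$ on the grid and its lattice neighbours are pulled towards the presented input $x$:

$$ w_j(t+1) = w_j(t) + \eta(t)\, h_{b,j}(t)\,\big(x(t) - w_j(t)\big), $$

where $h_{b,j}$ is a neighbourhood function that decays with the grid distance between unit $j$ and the winner $b$. This is only my own interpretation, though; the paper does not confirm it.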
I envision a large number of context vectors like this:
c_1: w1 w2 w3 w4 w5
c_2: w2 w4 w6 w7 w8
c_3: ...
...
c_n: ...
with a very large value for n.
The goal is now to project these n context vectors onto a two-dimensional map of much smaller dimensionality (e.g. 128x128), such that similar vectors (with similarity measured by the overlap of their words) end up close to each other. How is this done? From the paper, my understanding is that this specific part does not necessarily make use of HTM at all; is that correct?
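To make the question more concrete, here is a rough sketch of how I currently imagine this projection could be done with a plain self-organizing map (Kohonen map), since a SOM combines exactly a competitive (winner-take-all) step with a topological neighbourhood update. The SOM itself, the toy data, the grid size, and all parameter values below are my own assumptions, not something the whitepaper specifies:

```python
# Sketch: project binary "bag of words" context vectors onto a 2D grid
# with a Kohonen-style self-organizing map. All parameters are guesses.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n contexts, each a set of word ids drawn from a vocabulary.
vocab_size = 1000
n_contexts = 500
contexts = [set(rng.choice(vocab_size, size=5, replace=False))
            for _ in range(n_contexts)]

# Encode each context as a sparse binary vector over the vocabulary.
# With equal numbers of active bits, Euclidean distance between these
# vectors is monotonically related to word overlap.
X = np.zeros((n_contexts, vocab_size))
for i, ctx in enumerate(contexts):
    X[i, list(ctx)] = 1.0

# SOM grid (the paper's map would be 128x128; smaller here to keep it fast).
grid_h, grid_w = 16, 16
weights = rng.random((grid_h, grid_w, vocab_size)) * 0.1

# Grid coordinates, used by the neighbourhood function below.
ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")

n_epochs = 10
for epoch in range(n_epochs):
    # Learning rate and neighbourhood radius both decay over time.
    lr = 0.5 * (1.0 - epoch / n_epochs)
    sigma = max(1.0, (grid_h / 2) * (1.0 - epoch / n_epochs))
    for i in rng.permutation(n_contexts):
        x = X[i]
        # Competitive step: find the best-matching unit (BMU) on the grid.
        dists = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dists), dists.shape)
        # Cooperative/Hebbian step: pull the BMU and its neighbours towards x,
        # weighted by a Gaussian over grid distance (the "topological" part).
        grid_dist2 = (ys - by) ** 2 + (xs - bx) ** 2
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))
        weights += lr * h[:, :, None] * (x - weights)

# After training, each context is assigned to the grid cell of its BMU;
# contexts with high word overlap should land near each other on the map.
def map_position(context_vector):
    d = np.linalg.norm(weights - context_vector, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

print(map_position(X[0]), map_position(X[1]))
```

Is this roughly the right idea, or does the paper's combination of local inhibition and competitive Hebbian learning amount to a different mechanism for this projection step?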