Bringing this thread back: I’ve got thoughts on 1) fingerprint association-embedding, 2) adjacency matrices & other sequential feeding, and 3) an evolutionary question I can’t figure out.
I’ve been thinking more on chemical graph networks lately - the computation scaling is something HTM would work wonders for if we could figure out a good encoding.
It’s definitely tricky, since graphs are more… arbitrary, I guess, than most datatypes. A scalar has a definite place in the hierarchy between minimum and maximum, and images can be… sort of figured out with topology, sometimes. But a molecule graph could have three atoms or three thousand (proteins easily reach that size) and any number of bonds, yet each one is considered a single data-point for input. 571,346 is larger than 5, but not more complex, dimensionally speaking.
Still looking into SMILES embeddings and whatever Spektral is using for its magic, but here are the three points:
Convert individual atoms to fingerprints like @ycui mentioned Cortical.io did with words. @Paul_Lamb informed me that semantic folding is actually quite doable, in the sense of “deriving encodings based on co-occurrence in sentences/paragraphs”, so encoding atoms based on co-occurrence in molecules (from a chemical database, rather than Wikipedia for text) seems rather canny.
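To make the co-occurrence idea concrete, here’s a minimal sketch (not Cortical.io’s actual algorithm, just the spirit of it): count which atom types appear together in the same molecule across a toy hand-written “database”, then give each atom a sparse fingerprint whose active bits come from its own identity plus its most frequent co-occurrence partners. The molecule list, bit widths, and `fingerprint` helper are all hypothetical illustration, not anyone’s published encoding.

```python
from collections import Counter, defaultdict

# Toy "database": each molecule as a bag of atom symbols (hypothetical data,
# standing in for a real chemical database).
molecules = [
    ["C", "C", "O", "H", "H", "H", "H", "H", "H"],  # ethanol-ish
    ["C", "O", "O", "H", "H"],                      # formic-acid-ish
    ["N", "H", "H", "H"],                           # ammonia
    ["C", "H", "H", "H", "H"],                      # methane
]

# Count how often each pair of atom types co-occurs in the same molecule.
cooc = defaultdict(Counter)
for mol in molecules:
    types = set(mol)
    for a in types:
        for b in types:
            if a != b:
                cooc[a][b] += 1

# Assign each atom type a fixed slice of bit positions, then build a sparse
# fingerprint: atom X's active bits are its own slice plus the slices of its
# top co-occurring partners (very loosely in the spirit of semantic folding).
BITS_PER_TYPE = 4
atom_types = sorted({a for mol in molecules for a in mol})
slot = {a: i * BITS_PER_TYPE for i, a in enumerate(atom_types)}
TOTAL_BITS = BITS_PER_TYPE * len(atom_types)

def fingerprint(atom, top_k=2):
    bits = set(range(slot[atom], slot[atom] + BITS_PER_TYPE))  # identity bits
    for partner, _ in cooc[atom].most_common(top_k):
        bits |= set(range(slot[partner], slot[partner] + BITS_PER_TYPE))
    return sorted(bits)

print(fingerprint("O"))  # O's own bits plus bits of its frequent partners
```

With a real database you’d want far more bits, sub-sampling per partner instead of whole slices (to keep sparsity fixed), and co-occurrence weighting, but the skeleton is the same.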
Adjacency matrices, sequential feeding
First, I thought: Why not just use these SDR-fingerprints and feed in atoms one at a time, perhaps using some bits of the encoding to signal “connected to previous and/or next atom”?
Then tm.reset() to reset the sequence and tell the HTM “End Of Chemical”. Of course a single molecule has no “free-floating/disconnected subgraphs” (salts aside), the way some abstract graphs (social media networks etc.) do, but it’s a fun thought.
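A rough sketch of what that feeding loop might look like. The `StubTM` class here is just a stand-in that records what it’s fed (a real run would use an actual temporal memory, e.g. htm.core’s TemporalMemory), and the atom SDRs and flag-bit positions are made-up for illustration:

```python
# Hypothetical atom fingerprints: tiny SDRs keyed by element symbol.
ATOM_SDR = {
    "C": {0, 1, 2},
    "O": {4, 5, 6},
    "H": {8, 9, 10},
}
PREV_BIT, NEXT_BIT = 12, 13  # flag bits: bonded to previous / next atom

class StubTM:
    """Minimal stand-in for a temporal memory; just logs what it was fed."""
    def __init__(self):
        self.log = []
    def compute(self, sdr):
        self.log.append(frozenset(sdr))
    def reset(self):
        self.log.append("RESET")  # "End Of Chemical"

def feed_molecule(tm, atoms):
    """atoms: ordered element symbols along a chain-like traversal."""
    for i, atom in enumerate(atoms):
        sdr = set(ATOM_SDR[atom])
        if i > 0:
            sdr.add(PREV_BIT)   # signal: connected to previous atom
        if i < len(atoms) - 1:
            sdr.add(NEXT_BIT)   # signal: connected to next atom
        tm.compute(sdr)
    tm.reset()                  # end of this molecule's sequence

tm = StubTM()
feed_molecule(tm, ["H", "O", "H"])  # water as a 3-atom chain H-O-H
```

The obvious gap is branching and rings: “previous/next” only covers chain-shaped traversals, so anything beyond a linear molecule needs a richer connectivity signal.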
The brain can’t “perceive” an entire graph at once. I think.
I thought about “how the brain understands a graph/molecule” as opposed to how the brain understands the eyes looking at a cat, or memorizing some word associations. I can’t help but feel like we’re not really… set up for graphs, in terms of evolutionary mental circuitry. So maybe trying to convert one graph to one SDR isn’t the easiest way forward.
But then, there’s examples of graph structures that were evolutionarily advantageous to understand. Several villages linked by roads and trade routes, even a mental ‘family tree’ or some other social hierarchy.
Apes growing into early humans were social, tribal - even if you don’t draw it out, a social network is a graph, and since communication and cooperation are what enabled Homo sapiens to rise so far, I reckon a lot of our brain is at work on social tasks.
So what part of our brain lets us understand complex social relationships? How do we encode our “graph” social network/hierarchy? My feeling is that we encode it piece by piece.
Consider your own social network as an example. Spend ~15 seconds trying to visualize “your entire social circle”, then think of one specific person. Who are they connected to? You can probably “focus” on that person and a few other people (whom you know they closely associate with), who may or may not also be your friends.
Now think of someone “on the opposite end” of your social circle - perhaps you met them in a different country, or at a different time in your life, and the two probably don’t know each other. Or maybe your circle is pretty localized and they might know the first person, but you’re not sure.
You could do your best to mentally ‘trace’ from person to person, like running an A->B pathfinding algorithm on a graph. But you’re thinking of only one or a couple of people at a time, and each of them is stored as a different neuron-wiring cluster in your brain, each friend with their own associated neuronal pathways that inadvertently trigger when you think of them.
So it seems to me that your brain stores this “graph” by effectively building a connected series of nodes and edges in your brain out of neurons and synapses. If you try to think of “my whole extended social circle”, it’s sort of hard to visualize that graph with ~20+ people at once, so you zoom in and explore it piecewise.
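That “zoom in and explore it piecewise” behavior maps neatly onto breadth-first search: at any moment you hold one person in focus plus their immediate associates, and you extend the trace one neighborhood at a time. A toy sketch (the names and graph are invented for illustration):

```python
from collections import deque

# A toy social graph, stored as local adjacency: each person only "knows"
# their direct connections, mirroring the idea that you recall one person
# and their immediate associates at a time. All names are hypothetical.
knows = {
    "me":  ["ana", "bo", "cy"],
    "ana": ["me", "bo"],
    "bo":  ["me", "ana", "dee"],
    "cy":  ["me"],
    "dee": ["bo", "eli"],
    "eli": ["dee"],
}

def trace(start, goal):
    """Mentally 'trace' from start to goal, one neighborhood at a time (BFS)."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        person = path[-1]             # the one person currently "in focus"
        if person == goal:
            return path
        for friend in knows[person]:  # their few close associates
            if friend not in seen:
                seen.add(friend)
                frontier.append(path + [friend])
    return None  # no connection found

print(trace("me", "eli"))
```

The point isn’t that the brain literally runs BFS, just that the whole graph is never held in working memory at once; only a node and its local edges are live at any step.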
Thus I can’t help but feel like our HTM-based architecture can’t “focus” on an entire graph the way we can focus on reading one specific word or seeing a face (even though the visual recognition task is quite multifarious itself). The brain doesn’t encode or record graphs the way it encodes “I know this image to be a platypus”; it instead builds the graph on the fly (continuous learning!) as a “meta-knowledge-structure”.
Does this sound at all sensible? I realize that I’m simplifying a great deal - the neuron “map” that lights up when you see a certain person’s face isn’t necessarily a tight “cluster” like a neat node in a digital graph, and most knowledge is encoded in a similarly distributed fashion.