Hi All,
I'm just looking for a steer to any discussion on this forum, or related forums, covering these questions:

How closely do SDRs correspond to "graphs" (static, dynamic, or not at all)?

What can HTM leverage, or what has it already leveraged, from graph theory?

I see a few sites that tackle brain/graph modelling, but I'd welcome the HTM theorists' position (on the relative value of graphs for MI implementations … or not).

I've been studying the SP for a little while now, specifically trying to model its learning mechanism with classical computational models from computer science (automata, graphs, Turing machines, etc.). I'm not sure how relevant this is to your questions, my apologies, but one thing I've found from my experiments is that the SP is solving a flavor of constraint satisfaction problem (CSP). Stated simply, the constraints are the requirements to maximize the reachable states (nodes) and the converging transitions (edges).

At the lowest level, the superposition of all synapse values at a given iteration i resembles a state in an automaton, or a node in a graph. At step i, some states are reachable and others unreachable through certain inputs (the edges of the graph), and all of these graph/automaton entities depend at least on the SDR dimensions/column dimensions and the input sequence. Using set theory, these state transitions, best described in graph-theoretic terms as edges, can be combined with a union operation to find the states that detect a certain input and their final/accept states; I call them detectors and acceptors.

I'm still doing informal research on this. It's a very interesting topic for me, because working at the level of synapses and their emergent properties covers all of HTM, since everything in HTM is built on the concept of a synapse. I'm also hopeful that HTM will take more of a computer science path, which might revive the classical computational models, as they are analytically well-defined.
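To make the idea concrete, here is a minimal sketch of the state/edge view. This is not NuPIC code and not the actual SP update rule; the `step` function is a hypothetical stand-in for one learning iteration. Each reachable synapse snapshot becomes a node, each (state, input) pair becomes a labeled edge, and a union over an edge set gives the "detector" states for one input:

```python
# Toy sketch: SP-style learning steps viewed as a labeled transition graph.
# States are tuples of synapse strengths; edges are labeled by the input
# that caused the transition. `step` is a made-up Hebbian-like rule, not
# the real SP algorithm.

from collections import defaultdict

def step(state, inp):
    # Stand-in for one learning iteration: strengthen synapses that
    # overlap the input, weaken the rest, clamped to [0, 3].
    return tuple(
        min(s + 1, 3) if i in inp else max(s - 1, 0)
        for i, s in enumerate(state)
    )

def build_graph(start, inputs, depth):
    # Breadth-first expansion of the reachable state space.
    edges = defaultdict(set)            # input label -> set of (src, dst)
    frontier, seen = {start}, {start}
    for _ in range(depth):
        nxt = set()
        for state in frontier:
            for label, inp in inputs.items():
                dst = step(state, inp)
                edges[label].add((state, dst))
                if dst not in seen:
                    seen.add(dst)
                    nxt.add(dst)
        frontier = nxt
    return seen, edges

start = (0, 0, 0, 0)                    # 4 synapses, all at strength 0
inputs = {"A": {0, 1}, "B": {2, 3}}     # two input patterns (active bits)
states, edges = build_graph(start, inputs, depth=4)

# "Detectors" for input A: the union of destinations of A-labeled edges,
# mirroring the set-theoretic union described above.
detectors_A = {dst for (_, dst) in edges["A"]}
print(len(states), len(detectors_A))
```

With this framing, an "acceptor" would just be a detector state designated as final, exactly as in a classical finite automaton, and the dependence on the SDR/column dimensions shows up as the length of the state tuples.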