Question about linking individual TMs

Hi, relatively new to HTM here. I’ve been building my own code library as a way to really learn the ins and outs of HTM. So far I have working encoders, poolers, and temporal memory. Now I want to move on and build an actual hierarchy with multiple separate TMs. I’m wondering how exactly to connect separate sections that sit at the same level of the hierarchy. Would it make more sense to add connections similar to the distal connections that allow cells to make predictions? Or connections between minicolumns, which add to the total overlap score of the proximal inputs, rather than between cells?

I appreciate any tips, or any pointers to relevant papers that can help! Thanks!

We’ve thought about this too.

There are a variety of ways you can do it. You can make the inter-module connections distal dendrites, which create the predictive state for neurons; proximal dendrites, which simply add to the active state; or apical dendrites, which would activate according to some other rule.

However, the big question is: what is the order of execution of these laterally connected modules? You now have a recurrent connection, and you need to figure out some way to step them in a stable manner.

In the neuroscience theory, laterally connected modules run until they settle into some stable state. Maybe that could work, but I don’t know whether a stable state is guaranteed.

Another thing you could do is put a delay on the interconnecting dendrites, so that module A at time t receives the distal inputs that module B produced at time t-1. This avoids synchronization issues, but the two modules are no longer guaranteed to be operating on the same time step’s data.
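To make the delay idea concrete, here is a minimal plain-Python sketch. The Module class and its update rule are made up for illustration; a real module would run SP/TM logic inside step():

```python
# Toy sketch of the t-1 delay: each module steps on its own feedforward
# input plus its partner's output from the PREVIOUS time step, so neither
# module has to wait on the other within a single step.

class Module:
    def __init__(self, name):
        self.name = name
        self.output = set()  # active cells from the last completed step

    def step(self, feedforward, lateral):
        # Toy update rule: union of feedforward bits and (offset) delayed
        # lateral context. A real module would run SP/TM logic here.
        self.output = set(feedforward) | {c + 1000 for c in lateral}
        return self.output

a, b = Module("A"), Module("B")
for in_a, in_b in [({1, 2}, {3, 4}), ({5}, {6})]:
    prev_a, prev_b = set(a.output), set(b.output)  # snapshot t-1 outputs
    a.step(in_a, prev_b)  # A at time t sees B's output from t-1
    b.step(in_b, prev_a)  # and vice versa, so step order doesn't matter
```

Snapshotting both outputs before either module steps is what makes the execution order within a time step irrelevant.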

It’s still an open question and something we’ve been experimenting on.

I’d definitely recommend looking at this module, and the repo containing it:

It demonstrates how they use their Network API to connect different regions within their larger cortical column module. There are different types of connections regions can have to each other. For instance, the output of one region can be used to depolarize another (input to its TM) or to activate another (input to its SP). These interdependencies between regions align with the roles each region is thought to play. For instance, Layer 6 is thought to represent location, and it has a depolarizing effect on Layer 4.

Also @jacobeverist is right, the order of execution is a huge part of it. That aspect is handled in the module using Phases.
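For intuition, the Phases mechanism boils down to giving each region a number and executing regions in ascending phase order on every network iteration. A toy mock (the region names and phase values here are hypothetical, not taken from the actual repo):

```python
# Toy mock of phase-ordered execution: regions tagged with lower phase
# numbers compute first each iteration, so a region (e.g. a location
# layer) can finish before the regions that consume its output.

regions = [
    ("L4_sensory", 2),   # depends on the location layer's output
    ("L6_location", 1),  # depends on the sensor
    ("sensor", 0),
]

execution_log = []

def run_one_iteration(regions):
    for name, _phase in sorted(regions, key=lambda r: r[1]):
        execution_log.append(name)  # a real region would compute() here

run_one_iteration(regions)
```

This sidesteps the recurrence question only partially: within one iteration the order is fixed, but lateral loops between regions at the same phase still need a convention like the t-1 delay.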

Thanks @jacobeverist and @sheiser1, that is helpful! I have a followup question.

Suppose I use proximal dendrites to influence activity levels across the two SPs. Setting aside the recurrence issue, my difficulty lies in determining how much influence the inter-SP proximal connections should have versus the external proximal connections. Does it make sense to simply add them together, so that a minicolumn becomes active if the total input is sufficient? Or would it be better to consider the external and inter-SP connections separately, each with its own overlap threshold, so that a minicolumn becomes active if either (A) its external proximal overlap score is high enough or (B) its inter-SP proximal overlap score is high enough?

The second option makes more sense to me in situations where you want an SP to be able to light up with activity despite getting no external input at all, as long as its partner received an input and became active.
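The two options can be sketched in a few lines (all thresholds and overlap counts below are hypothetical, just to show the difference in behavior):

```python
# Option A: pool the overlaps into one score with a single threshold.
def active_option_a(ext_overlap, inter_overlap, threshold):
    return ext_overlap + inter_overlap >= threshold

# Option B: separate thresholds, OR'd together, so a minicolumn can fire
# from partner-SP input alone even with zero external input.
def active_option_b(ext_overlap, inter_overlap, ext_thresh, inter_thresh):
    return ext_overlap >= ext_thresh or inter_overlap >= inter_thresh

# No external input, strong partner input: A fails with a high pooled
# threshold, while B still activates the column.
print(active_option_a(0, 8, 10))     # False
print(active_option_b(0, 8, 10, 5))  # True
```

Option A can mimic B by lowering its single threshold, but then weak external plus weak partner input can sum to an activation that neither pathway would trigger alone.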

Do you have any thoughts on this issue?

Thanks again!

Yes. :slight_smile:

Let us know how it works out.

I’m not sure what you mean here by “inter-SP connections” versus “external proximal connections”.

Hi, I have a very similar question, so I decided just to continue here instead of creating a new topic. Let me start by describing the high-level idea of what I’m trying to accomplish.

I’m working in an RL setting. There are two different variables, encoded as different SDRs: state s_t and action a_t.
So I have a state-action sequence [s_0, a_0, s_1, a_1, s_2, ...], and I want to build an agent that learns two things.

  1. To memorize the sequence of states [s_0, s_1, s_2, ...].
    There’s also an important relation between states and actions here: the next state s_{t+1} depends on both s_t and a_t. This means there’s a hidden prediction function f_s: (s_t, a_t) \rightarrow s_{t+1} that I want to learn.
    TM suits this problem perfectly, but it requires adding an external action context a_t to the depolarization phase, which I don’t know how to do.

  2. To memorize the sequence of actions [a_0, a_1, a_2, ...].
    Again, TM is an ideal candidate for this problem, but… the hidden prediction function f_a depends only on the state s_{t+1}, not on the action a_t, i.e. f_a: s_{t+1} \rightarrow a_{t+1}.
    This means the depolarization phase should depend only on the external context instead of the inner distal connections, which is another problem I don’t know how to solve with the htm.core TM.

Right now I have a solution with just one TM, encoding the sequence [(s_0, a_0), (s_1, a_1), ...], i.e. where each state-action pair is concatenated and treated as a single unit. It’s a close solution, but not exactly what I want; it has some limitations I want to get rid of.
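For illustration, the concatenation workaround looks roughly like this, with plain-Python sets standing in for SDRs (the sizes are hypothetical; real code would use htm.core’s SDR class):

```python
# Toy sketch of the single-TM workaround: represent the (s_t, a_t) pair
# as one SDR by offsetting the action bits past the end of the state
# space, then feed the merged SDR to one TM each step.

STATE_SIZE = 1024   # hypothetical sizes
ACTION_SIZE = 256

def concat_state_action(state_bits, action_bits):
    """Merge two sparse bit sets into one SDR over STATE_SIZE + ACTION_SIZE bits."""
    return set(state_bits) | {STATE_SIZE + b for b in action_bits}

pair = concat_state_action({3, 17, 900}, {0, 5})
# Distal segments grown on this merged input freely mix s->s, s->a,
# a->s and a->a synapses, which is exactly the entanglement described.
```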

The main problem with the current solution is that the distal synapses s \rightarrow s, s \rightarrow a, a \rightarrow s and a \rightarrow a are all mixed up within segments.

I want to split them into four different kinds of segments and then have fine-grained control over each kind: the max number of segments, the max segment size, and the activation threshold.

These problems could be solved if TM had the following specific properties:

  • you can specify an arbitrary number of external contexts for the TM
    • aka “distal external” context, but I’m not sure that’s the correct naming
  • each context is represented as an SDR
    • I guess this is probably implemented in TM, but I haven’t found any examples or guides on how to use this functionality.
  • each context also induces a different kind of depolarization, defined by its own activation threshold
    • i.e. you can specify unique activation thresholds for the inner distal connections and for each of the external contexts
    • all the different kinds of depolarization are then merged with an AND operation, i.e. a cell becomes depolarized only if it has every kind of depolarization
  • consequently, you can specify the max number of segments and the max size of each segment not only for the inner distal connections but also for each of the provided external contexts.
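As a sketch of the AND-merge rule proposed above (this is not an existing htm.core API, just an illustration of the wished-for behavior):

```python
# A cell is depolarized only if, for EVERY context, the overlap of its
# best matching segment of that kind reaches that context's threshold.

def depolarized(cell_overlaps, thresholds):
    """cell_overlaps: {context_name: best segment overlap for this cell};
    thresholds: {context_name: activation threshold for that context}."""
    return all(cell_overlaps.get(ctx, 0) >= th for ctx, th in thresholds.items())

# Hypothetical thresholds for inner distal vs. an external action context.
thresholds = {"inner_distal": 10, "action_context": 5}

# Both kinds of segment match, so the cell is depolarized.
print(depolarized({"inner_distal": 12, "action_context": 7}, thresholds))  # True
# The external context alone is not enough under the AND merge.
print(depolarized({"inner_distal": 3, "action_context": 9}, thresholds))   # False
```

Note that the second use case above (f_a depending only on external context) would correspond to a thresholds dict containing only the external context, with no inner-distal entry.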

So I’d like to know: is this already implemented in the current version of TM, and how do I use it? Or is it perhaps planned?
If not, then maybe you have seen similar cases with alternative solutions.

FYI: I use the Python bindings to htm.core and I haven’t dug into the C++ sources yet. Also, I know the details of the TM/SP implementations almost entirely through the BAMI chapters.

p.s. If anything is not clear in my post, I’ll be happy to edit it and add some clarifications. Thanks! :slight_smile:

I’d recommend you check out the Network API functionality. I’m not sure how it goes in htm.core as opposed to NuPIC, but I think the core functionality is relevant.

Rather than having a single TM region that takes state and action, you could have one region for each. This is done in some instances for Layers 4 and 6, which are believed to represent raw sensory input and location, respectively. Grid cell modules in Layer 6 are used to depolarize cells in Layer 4 (providing TM distal context to them). So Layer 4 cells are informed not only by their own prior activity but also by location from Layer 6. I think this could help with disentangling your state and action fields.

You can see this in how the sensors are linked to the different regions, and how the regions are linked to each other. These network.link calls use the srcOutput and destInput arguments to tell the regions how to influence each other.

Does anybody know about the htm.core Network API?
I am searching for a way to build a system with the Network API, but I couldn’t find any examples of it.
I did find this sentence in API_CHANGELOG.md in htm.core:
“The NetworkAPI is mostly unchanged, since it is a compatibility layer. Direct access to the algorithms APIs has changed:”
but I couldn’t do “from htm.engine import Network”.

I appreciate any information that can help! Thanks!

I’m not sure, but maybe this is the right one:

from htm.bindings.engine_internal import Network

Thanks for your information.

It might be right. I will check.

Hello everyone,
@twainlee
See https://github.com/htm-community/htm.core/blob/master/docs/NetworkAPI.md
@David_Keeney added this doc a few days ago.

These problems could be solved if TM had the following specific properties:

  • you can specify an arbitrary number of external contexts for the TM
    • aka “distal external” context, but I’m not sure that’s the correct naming
  • each context is represented as an SDR
    • I guess this is probably implemented in TM, but I haven’t found any examples or guides on how to use this functionality.
  • each context also induces a different kind of depolarization, defined by its own activation threshold
    • i.e. you can specify unique activation thresholds for the inner distal connections and for each of the external contexts
    • all the different kinds of depolarization are then merged with an AND operation, i.e. a cell becomes depolarized only if it has every kind of depolarization
  • consequently, you can specify the max number of segments and the max size of each segment not only for the inner distal connections but also for each of the provided external contexts.

So I’d like to know: is this already implemented in the current version of TM, and how do I use it? Or is it perhaps planned?

The functionality that you are talking about is not implemented in htm.core.
In order to proceed with the 2D object recognition project, we need this.
So I started this issue.
In htm.core we have the Thousand Brains Theory Python code implemented, and mainly the apical tiebreak TM,
which I think is the closest to what we need. I am digging into this code now to figure out how it works & what can be used.

Since it seems htm.core doesn’t have this functionality, according to @zbysekz, I’m working on building it within my own little HTM library to test the idea. If anyone is interested, I’ll post an update to report on how it goes.

How about this one? This is the kind of Network API functionality I was referring to:

Hmm, yes, you are right. I thought the code in htm/advanced was separate, but it uses the Network API and custom-created regions:

  • py.RawSensor
  • py.RawValues
  • py.ApicalTMPairRegion
  • py.GridCellLocationRegion

My point was that we don’t have a Temporal Memory with the capability of external-only distal connections. But this is incorporated somehow in these custom regions. I need to study them more; thanks for the link.

Update!

I’ve added functionality for training a spatial pooler not only on proximal inputs but also on the SDRs produced by another pooler. I can’t remember the proper term for these connections, so for now I’m calling them apical connections.

I’ve programmed it such that these connections are considered separately from the proximal connections and have their own activity threshold. A minicolumn can become active due to either type of connection. The synapses are trained in basically the same way as proximal synapses.

I’ve tested it on one (very basic) example, and it was a complete success! I took a pair of SPs, each with a different encoder. I trained SP1 on scalar values x in [0, pi], and trained SP2 on both sin(x) and the output of SP1 at the same time.

Then, after training some sklearn regressors on the column data, I ran another sequence of values through SP1, but this time I provided only apical inputs to SP2, with no proximal inputs. After collecting the output SDRs and translating them with the regressor, they perfectly matched the expected output!

My next step will be to test it on more complex data, like audiovisual stimuli or perhaps text. I’d love to make a pair of poolers that can ‘read’ or ‘hear’ a word and generate a representation of an image matching that word! It will be a while before I can do so, since I first have to trawl the forums and figure out how to encode these sorts of inputs. Any pointers to speed me on my way would be greatly appreciated!

edit: the apical connections code has been added to the pyhtm file on the GitHub repo if you’re curious about it

@Andrew_Stephan
Can I ask a few questions about your experiment?

Considering this is the scheme of the algorithm:

(modify it if you wish on the draw.io website)

  • I would not call the connections to SP2 “apical”; they have the same function as proximal inputs, except that they have their own threshold. Apical dendrites affect only depolarization, not activation; see Why Neurons Have Thousands of Synapses. In Numenta’s apical_tiebreak TM algorithm, apical inputs are used for further disambiguation when there is more than one winner in a column. But I guess that’s not the only explanation.

  • I am wondering what the benefit or effect of having SP1 there is at all. What can it learn from random data? There are no spatial patterns there, and there is no need to maintain a fixed sparsity if the data is already preprocessed by the scalar encoder.

  • About the results: I think SP2 learns, with its synapses, the relation between x and sin(x), so a value of x will activate the same columns as sin(x), right? I wonder if the result would be the same if you mixed x and sin(x) into one SDR, encoded by the same encoder and fed into the proximal inputs of a single SP. My guess is yes.

Very good questions! Before I answer them, let me explain the experiment I want to do, which will put this little test into context. I was interested in the idea that separate regions of the brain, which process different types of inputs (video, audio, language, etc.), can learn to relate their specific representations to one another. For example, I type the word ‘dog’, and as I do so my brain generates a mental image of a dog even though I can’t see one right now. That’s really cool! So I wondered if I could simulate that process using two separate SPs (or TMs; I’m not yet sure which I’ll need, I think it depends on the input), where one learns images and the other learns sounds or words. Perhaps if they are cross-linked in some way during learning, I can later show just one of them an input and get the other one to light up as if it had been presented with the matching input. This little experiment is my first step toward that goal.

You’re probably right; I don’t think apical is a good term for it. Is there a term that distinguishes inputs that behave like proximal connections but come from another SP instead of from a sensor (encoder)?

That’s true, SP1 isn’t necessary to the experiment, since the scalar encoder with a single input already maintains a fixed sparsity and imposes a clear pattern on the inputs. I used SP1 there because I wanted to make sure my inter-SP connection code was running properly. This was my first test of that code, so I used really simple inputs that probably rendered SP1 redundant.

I think that’s probably true, and it would be an interesting experiment! However, my goal with this experiment is to train on two inputs and let SP2 learn that those inputs are related, then take away one of the inputs and see whether SP2 can still represent the related, missing input, rather than just testing the SP’s ability to learn a function y = f(x). I now realize that is how I made it sound in the comments on the script, so I apologize for explaining my intent badly.

I hope that answers your questions! If I misunderstood, let me know. It’s very possible, since I’m still new.

I decided to go with the first option: there are multiple inputs, and the total overlap determines the resulting activity, but the threshold is low enough that only one of the inputs is needed to produce activity (e.g. if one of the senses is unavailable or the corresponding encoder is not used for a given iteration).

So far it works well for input correlation in my tests.