Questions on layer stacking, real-time prediction, and real-time learning

I’ve gotten caught up on HTM School, and I’ve begun working with the HTM framework - so far I’ve just been using it to predict scalars for the fun of it. MUCH faster to code and train than traditional deep learning systems. My next step is twofold: real-time prediction and real-time learning. What I mean by this is: suppose I have a rudimentary interactive shell. I feed my network a few numbers, or a string, get it parsed and sent through the network, and I’m given a prediction value, all interactively. Second, and at the same time, I’d want the SP and TM to keep learning as I enter and feed it more data. Let’s just assume it’s a scalar or two, for now, for simplicity. How could I accomplish this?
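To make it concrete, here’s roughly the kind of interactive loop I have in mind - an untested sketch against NuPIC’s OPF, where "value", MODEL_PARAMS, and the one-step multiStep prediction are all stand-ins for whatever field name and model params I actually end up using:

```python
from nupic.frameworks.opf.model_factory import ModelFactory

# MODEL_PARAMS stands in for the params dict/YAML I already train with;
# it would need to request one-step "multiStep" temporal predictions.
model = ModelFactory.create(MODEL_PARAMS)
model.enableInference({"predictedField": "value"})
model.enableLearning()  # on by default: each run() both learns and infers

while True:
    raw = raw_input("value> ")   # NuPIC is Python 2
    if not raw:
        break
    result = model.run({"value": float(raw)})
    # prediction is available immediately and keeps improving as it learns
    print result.inferences["multiStepBestPredictions"][1]
```

The key question for me is whether calling model.run() one record at a time like this is the intended way to get learning and inference happening together.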

Furthermore, where can I find some tips or documentation on stacking layers? By that I mean I want to create a… pipeline, I guess, in the form:

SP1 -> TM1 -> SP2 -> TM2 -> (and so on)

I’m not really sure whether I’d need spatial poolers between each layer, or if there’s anything else I’d need to set up in my network or model YAML file in order to accomplish this.
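For the stacking part, this is roughly the wiring I’m imagining - an untested sketch using the htm.core community fork’s Python bindings (sizes and parameters are made up), where layer 2’s SP simply reads layer 1’s active TM cells as its input:

```python
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler, TemporalMemory

COLS, CELLS = 1024, 8      # made-up layer sizes
ENCODER_BITS = 2048        # made-up encoder width

def make_layer(input_size):
    sp = SpatialPooler(inputDimensions=(input_size,),
                       columnDimensions=(COLS,),
                       potentialRadius=input_size,
                       globalInhibition=True,
                       localAreaDensity=0.02)
    tm = TemporalMemory(columnDimensions=(COLS,), cellsPerColumn=CELLS)
    return sp, tm

sp1, tm1 = make_layer(ENCODER_BITS)     # layer 1 reads the encoder output
sp2, tm2 = make_layer(COLS * CELLS)     # layer 2 reads TM1's cells

def step(encoded, learn=True):
    # layer 1: SP1 -> TM1
    active1 = SDR((COLS,))
    sp1.compute(encoded, learn, active1)
    tm1.compute(active1, learn=learn)

    # flatten TM1's active cells into the input SDR for SP2
    layer2_in = SDR((COLS * CELLS,))
    layer2_in.sparse = tm1.getActiveCells().sparse

    # layer 2: SP2 -> TM2
    active2 = SDR((COLS,))
    sp2.compute(layer2_in, learn, active2)
    tm2.compute(active2, learn=learn)
```

Whether SP2 should see TM1’s active cells, TM1’s active columns, or only its correctly-predicted cells is exactly the part I’m unsure about.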

Thanks a ton for helping me! I’m new to HTM systems but quite excited to learn and work with them!


Hi @roz303, welcome to the forums!

@marty1885 has been experimenting with this sort of thing lately, in his Etaler implementation. See Experimenting with stacking Spatial Poolers for some of his results.


From the sound of it, stacking spatial poolers alone sounds like it would add some “depth” to the semantic structures the SP layers provide. It sounds to me like it’d be able to form deeper relations between larger structures - as in, perhaps, forming semantic associations between semantically similar English sentences… or something. Still new to this, still way off. But I didn’t think about stacking JUST spatial poolers! That’s a neat concept. Sometimes I forget that an HTM could have multiples of the same components. Thanks for the lead!


Also: Correct me if I’m wrong - but would I need to use multiple scalar encoders if I wanted to input a “vector” at each timestep? As in:

T0: 3,4,5,2
T1: 2,5,2,6
Tn: 2,3,6,2

Or is there another way?

It would be great if someone could come up with an N-dimensional encoder, but unfortunately no one has done it yet. However, I suggest using a 2D grid cell encoder when you’re dealing with two values that are related.
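To give you the flavor of it, here’s a toy numpy sketch of the grid-cell idea (illustrative only - not Numenta’s or Etaler’s actual implementation): each module tiles the plane at its own scale and orientation and contributes one active bit, so nearby (x, y) points share many bits and distant ones share few.

```python
import numpy as np

def grid_cell_2d_encode(x, y, n_modules=16, cells_per_axis=8, seed=42):
    """Toy 2D grid-cell-style encoder: n_modules active bits out of
    n_modules * cells_per_axis**2 total bits."""
    rng = np.random.RandomState(seed)   # same seed -> same modules every call
    out = np.zeros(n_modules * cells_per_axis ** 2, dtype=np.uint8)
    for m in range(n_modules):
        scale = 10.0 * (1.4 ** m)           # each module has its own period...
        theta = rng.uniform(0, 2 * np.pi)   # ...and its own orientation
        c, s = np.cos(theta), np.sin(theta)
        # rotate the point, then take its phase within this module's tile
        u = ((c * x - s * y) / scale) % 1.0
        v = ((s * x + c * y) / scale) % 1.0
        row, col = int(u * cells_per_axis), int(v * cells_per_axis)
        out[m * cells_per_axis ** 2 + row * cells_per_axis + col] = 1
    return out

# nearby points overlap heavily, distant points barely at all
a = grid_cell_2d_encode(3.0, 4.0)
b = grid_cell_2d_encode(3.1, 4.1)
c = grid_cell_2d_encode(50.0, -20.0)
print(np.sum(a & b))   # high overlap
print(np.sum(a & c))   # low overlap
```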

I was considering something like word2vec but all I’d need it for is to convert words to numbers.

Could you link me to grid cell docs / code? The last episode of HTM school was a real cliffhanger for grid cells!


I had the same idea too… Well… this sure sounds like a cool research topic! How to encode an N-dimensional vector into SDRs. Anyone?

I don’t know where it lives within NuPIC. But I can show you the implementation in Etaler.

and

I’ll have to review docs on SDRs, but in an SDR with, say, 2049 bits (keeping it odd to avoid centering problems), and still maintaining the two-percent sparsity, you’d only be able to encode roughly two sixteen-bit integers, or one 32-bit floating-point value.

So I think one would have to work with some rather large SDRs for N-dimensional vectors, something as large as 2049 * N dimensions.

Not that I’m complaining, though! If I’m on any sort of reasonable train of thought, it’s just a call for more computer hardware in the name of real AI research!

As it stands now, though, I think it’d be possible to define an N-dim vector as a series of scalar encoders referenced under the same “name”, so to speak - as in, each encoder would be named “vector_1”, “vector_2”, … “vector_n” and share the same references. But this could be grossly unscalable, let alone practical.
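Something along these lines, I mean - a sketch using NuPIC’s ScalarEncoder with made-up parameters, just concatenating one encoding per component into a single input for the SP each timestep:

```python
import numpy as np
from nupic.encoders.scalar import ScalarEncoder

# one encoder per vector component (made-up ranges; tune for the data)
encoders = [ScalarEncoder(w=21, minval=0, maxval=10, n=400, forced=True)
            for _ in range(4)]

def encode_vector(vec):
    """Concatenate per-component encodings into one input bit array."""
    return np.concatenate([enc.encode(v) for enc, v in zip(encoders, vec)])

bits = encode_vector([3, 4, 5, 2])   # feed this to the SP at each timestep
# 4 * 400 = 1600 total bits, 4 * 21 = 84 active
```

As far as I understand, the SP shouldn’t care that the bits came from four encoders, as long as the concatenation order stays fixed.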

I am not really aware of anyone who expects a neuron to do much better than a few percent resolution.
Have you considered a patch of a few bits positioned somewhere along the SDR, with a resolution of a few percent, as a match to the signalling method?

I don’t think that trying to code floating point into SDRs is a good match to the technology. I see it more as being all about patterns and sequences.

Now groups of neurons …

Maybe I am wrong here - if so - the community will surely set me straight.


You don’t. You encode the vector as a bit pattern any way you like, then you feed it into an SP. This will recognise recurring patterns (locations) and assign them SDRs. Recurring locations in your input data will get SDRs, but the SDR does not ‘encode’ the location.

If you feed the resulting SDRs into a TM you will get recognition of spatial sequences, IOW paths through XY space. You still won’t know where the path goes, just that it’s a popular choice.


Grid cell mechanisms are used to encode location information in SDRs. In HTM theory we think that distal input to an SP (instead of self-referential input as in the TM) can represent location. We have written papers on this subject. See A Theory of How Columns in the Neocortex Enable Learning the Structure of the World and Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells.
