ColumnPoolerRegion vs TemporalPoolerRegion

I see. What would an alternative be, then? As I understand it, the NuPIC Network API, as it currently stands, does not allow implementing apical feedback. Is that correct? If so, when are you planning to include it in the Network API? It is a very useful feature. It seems to me that complex sequence prediction problems would require a column pooler to combine various data types.

The Network API has no problem with feedback; it's just another link. Our L2/L4 networks use the API to implement feedback. You can see an example of the code here:
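For anyone skimming, here is a minimal sketch of what "feedback is just another link" looks like. The region types, parameters, and input/output names below are assumptions based on htmresearch, not the exact ones from the L2/L4 experiment scripts:

```python
from nupic.engine import Network

# htmresearch regions (ColumnPoolerRegion, etc.) must be registered with
# the engine before use. Names below are illustrative placeholders.
from htmresearch.support.register_regions import registerAllResearchRegions
registerAllResearchRegions()

net = Network()
# Real regions need real parameters here; "{}" is just for the sketch.
net.addRegion("L4", "py.ApicalTMPairRegion", "{}")
net.addRegion("L2", "py.ColumnPoolerRegion", "{}")

# Feedforward: L4's active cells drive L2.
net.link("L4", "L2", "UniformLink", "",
         srcOutput="activeCells", destInput="feedforwardInput")

# Feedback really is just another link, pointing the other way.
net.link("L2", "L4", "UniformLink", "",
         srcOutput="feedForwardOutput", destInput="apicalInput")
```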

Hmm… I suppose so, in the sense that the pooling layer receives proximal input from active cells in another layer over multiple timesteps. But the implementation is quite different (note, however, that I last looked at the Union Pooler algorithm a few months ago, so possibly it has evolved since then). I'll look at the latest implementation to see what has changed.


Well, I think you want objects with a lot of shared features to have a lot of overlapping bits in their representations. Ideally, if two objects share 90% of the same features, then their representations should share 90% of the same bits, and likewise for any other percentage.
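A toy sketch of that property (illustrative only, not the pooler's actual mechanism): if each feature contributes its own small set of bits, two objects sharing 9 of 10 features end up with representations sharing roughly 90% of their bits:

```python
import random

random.seed(42)
N_BITS = 4096

def feature_bits():
    """A random handful of bits standing in for one feature's contribution."""
    return set(random.sample(range(N_BITS), 4))

shared = [feature_bits() for _ in range(9)]   # 9 features in common
repA = set.union(*shared) | feature_bits()    # plus one unique feature each
repB = set.union(*shared) | feature_bits()

print("fraction of repA's bits also in repB: %.0f%%"
      % (100.0 * len(repA & repB) / len(repA)))  # ~90%, matching feature overlap
```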

The main problem that I have experienced with unions over multiple timesteps has less to do with false positives than with knowing when one sequence or object ends and a new one begins. It seems like current research has simply punted on the issue, using reset and random SDR generation for each object during training. This allows them to focus on other facets of the sensory-motor system, of course, so I don't mean that as a criticism. I'm just pointing out that this is a prime area for us to explore.
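To make the "punting" concrete, the training convention looks roughly like this (a runnable paraphrase with a stand-in class and illustrative names, not the actual experiment code):

```python
import random

class ToyPooler(object):
    """Stand-in for ColumnPooler, just to show the reset convention."""
    def __init__(self, num_cells=4096, num_active=40):
        self.num_cells, self.num_active = num_cells, num_active
        self.active_cells = set()

    def reset(self):
        # On reset during learning, a fresh random SDR is chosen for the
        # upcoming object -- the boundary is supplied, not detected.
        self.active_cells = set(random.sample(range(self.num_cells),
                                              self.num_active))

    def compute(self, sensation, learn=True):
        pass  # associating `sensation` with self.active_cells goes here

pooler = ToyPooler()
objects = {"objA": ["s1", "s2"], "objB": ["s3", "s4"]}
for name, sensations in objects.items():
    pooler.reset()                      # explicit object boundary
    for sensation in sensations:
        pooler.compute(sensation, learn=True)
```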


In my experiments with an L4/L2 network on a very simple sine-wave input, I found that the active cells of L2 are generated once and then remain unchanged, even with online learning over time. When I change the input type, e.g. to a sawtooth signal, a new set of active cells is generated accordingly. Those cells then remain unchanged until I switch the input back to the sine wave, 100% identical to the sine wave at the beginning. To my surprise, the L2 representation at that point is completely different from the one at the beginning. That means we do not have the same context in L2 for the same input situation!
Any comments or experience?

That's exactly what I'm talking about: in a dozen steps you can have twice the coverage of the representation, and a much higher chance of "recognizing" a wrong pattern.
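A back-of-the-envelope sketch of the effect (assumed numbers: 40 active bits in a 4096-cell space; per-step SDRs are drawn independently here, whereas real consecutive steps overlap heavily, so real coverage grows more slowly):

```python
import random

random.seed(0)
N, W, STEPS = 4096, 40, 12

single = set(random.sample(range(N), W))
union = set(single)
for _ in range(STEPS - 1):
    union |= set(random.sample(range(N), W))

def mean_noise_overlap(target, trials=5000):
    """Average overlap between `target` and random unrelated W-bit patterns."""
    return sum(len(target & set(random.sample(range(N), W)))
               for _ in range(trials)) / float(trials)

print("coverage: single %.1f%%, union %.1f%%"
      % (100.0 * len(single) / N, 100.0 * len(union) / N))
print("mean overlap with an unrelated pattern: single %.2f, union %.2f bits"
      % (mean_noise_overlap(single), mean_noise_overlap(union)))
# As the union covers more of the space, any fixed match threshold becomes
# easier for a wrong pattern to cross -- the false-positive risk grows.
```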

Could you elaborate on this issue? Why can't you use the same set of unions for all sequences?

Now that I think about it, these are two perspectives on the same problem. The problem is that a complete SMI model needs two things: what you are encountering (the focus of current theory) and where it is located in relation to you. The latter implies some model of the space around you. Without it, how do you know when you have put down one object and are now reaching for another?

(EDIT) BTW, to clarify what I meant by “two perspectives of the same problem”: your observations appear to be associated with object prediction (“false positives”), whereas my focus has been centered on object learning.

To clarify, this is “egocentric” location (objects defined in relation to oneself) as opposed to “allocentric” location, which refers to objects defined in relation only to themselves.


Yep, I'm looking forward to this part of the research when Numenta is far enough along to share it. I took a crack at coordinate conversions with neurons myself some time ago, and only gave myself a headache 🙂


@subutai Currently, ColumnPooler (implemented in ColumnPooler.py) is often used in your various experiments as the L2 layer for object representation. For each object, we randomly generate a fixed SDR of active cells and learn the connections from the inputs to those active cells.
Please correct me if I am wrong.
After successfully learning two objects, say first A and then B, we have two SDRs of active cells, say sdrA and sdrB; they are the sparse representations of objects A and B.

sdrA = 48 126 274 309 403 493 1086 1233 1268 1275 1424 1505 1682 1833 1969 2052 2085 2168 2200 2278 2321 2387 2594 2692 2719 2880 2952 2961 3164 3209 3231 3258 3408 3474 3663 3676 3860 3910 4015 4018

sdrB = 164 285 447 538 550 700 916 956 1186 1338 1405 1562 1637 1642 1681 1795 1838 1879 1967 1985 2065 2285 2306 2499 2557 2599 2622 2654 2778 2782 2919 2984 3024 3033 3051 3212 3226 3311 3667 3726

Now I switch into inference mode to infer object A. This again uses the active cells from the previous step, here from the last learning step: sdrB.
In my observation, _computeInferenceMode() provides
chosenCells = 5 6 7 8 9 10 11 12 13 14 15 16 17 18
activeCells = chosenCells

So the current object representation, i.e. the content of activeCells, does NOT have any overlap with sdrA or sdrB.
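For reference, the overlap check itself is trivial (a sketch with the SDRs above pasted in as Python sets):

```python
sdrA = set(map(int, """
    48 126 274 309 403 493 1086 1233 1268 1275 1424 1505 1682 1833 1969
    2052 2085 2168 2200 2278 2321 2387 2594 2692 2719 2880 2952 2961 3164
    3209 3231 3258 3408 3474 3663 3676 3860 3910 4015 4018""".split()))
sdrB = set(map(int, """
    164 285 447 538 550 700 916 956 1186 1338 1405 1562 1637 1642 1681
    1795 1838 1879 1967 1985 2065 2285 2306 2499 2557 2599 2622 2654 2778
    2782 2919 2984 3024 3033 3051 3212 3226 3311 3667 3726""".split()))

activeCells = set(range(5, 19))  # chosenCells = 5 .. 18 from _computeInferenceMode()

print(len(activeCells & sdrA))   # 0 -- no overlap with object A
print(len(activeCells & sdrB))   # 0 -- no overlap with object B
```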

My questions:

  1. What is wrong here?
  2. Should we reinitialize activeCells after learning?

Could you please help me?
Thanks