The "delta" problem!

I had for some time been thinking about the “delta” problem (this is how I dubbed it; I don’t know if it is a known problem in ML).
It is easier to illustrate with an example. Let’s say we have the following sequence:

10, 15, 12, 10, 20

Later we get:

12, 17, 14, 12, 22

If you look at those two sequences they look different, but if you look at the deltas they are the same:

deltas/velocity: 5, -3, -2, 10

So the moral of the story is that if you record the deltas instead of the exact values, you can encode
multiple similar sequences with a fixed amount of memory.
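
A minimal sketch of the point in plain Python (no HTM machinery assumed): both sequences collapse to the same delta pattern, so one stored pattern covers them all.

```python
def deltas(seq):
    """First differences (the "velocity") of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

a = [10, 15, 12, 10, 20]
b = [12, 17, 14, 12, 22]

print(deltas(a))               # [5, -3, -2, 10]
print(deltas(b))               # [5, -3, -2, 10]
print(deltas(a) == deltas(b))  # True
```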

How would you approach this problem? Using a clever encoder? A change in the SDR format, maybe making it 2D? A change in the TM?
Or just expect the hierarchy to solve this problem!
Ideas …

PS> I can also think of a “scale” problem … so if we talk more abstractly, we can call it the “transformation” problem.

I don’t get it. Couldn’t you just encode the original and delta sequences as two different scalar inputs? Then the input would use both original and derived information.
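
Something like this sketch, where `encode_scalar` is a toy stand-in for whatever scalar encoder you actually use (the function and its parameters are hypothetical, not a real NuPIC API), and the two encodings are simply concatenated:

```python
def encode_scalar(value, minval, maxval, n=64, w=8):
    """Toy scalar encoder: a block of w active bits whose position
    tracks value within [minval, maxval]."""
    value = min(max(value, minval), maxval)
    start = int((value - minval) / (maxval - minval) * (n - w))
    return [1 if start <= i < start + w else 0 for i in range(n)]

def encode_value_and_delta(curr, prev):
    """Concatenate an encoding of the raw value with one of its delta."""
    value_bits = encode_scalar(curr, minval=0, maxval=50)
    delta_bits = encode_scalar(curr - prev, minval=-25, maxval=25)
    return value_bits + delta_bits

sdr = encode_value_and_delta(curr=15, prev=10)
print(len(sdr), sum(sdr))  # 128 bits total, 16 active
```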

Oh, are you saying the machine should somehow be able to figure out that the data sets share that similar quality once transformed? If so, I don’t believe that’s possible without designing it specifically to do that.

My reasoning for that is this: There is a lot of data in videos that we don’t perceive [example], that could be useful in some cases. There is also a lot of data that we do perceive, sometimes whether we want to or not, that isn’t actually in videos either [example]. This implies our perception is skewed in some ways but not others, likely because it helps us figure out and react to our environment in ways that help us survive better. [I might as well link [this guy](https://www.ted.com/talks/donald_hoffman_do_we_see_reality_as_it_is) while I’m at it.] So, our brains are specifically built to notice patterns in some transformed versions of our input, but not others.

If we’re able to see some transformations, but not others, then I think any machines we design would have to be designed to see the transformations we want it to see. (Though, that doesn’t mean it can’t modify itself to see more transformations if it ever notices them, like how we modify ourselves with glasses, 3D glasses, etc.)

I mean both sequences should be stored in the same memory space … as a single pattern.
The machine will have to discover the pattern, not the specific values.

For example, if you provide a 3rd sequence which starts like this:

7, 12 … it will predict 9, 7, 17
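
A rough sketch of that behavior in plain Python, assuming the delta pattern [5, -3, -2, 10] has already been learned from the first two sequences (the function name is just for illustration):

```python
def continue_from_deltas(start, learned_deltas):
    """Extend a partial sequence by replaying a learned delta pattern.
    Assumes the deltas observed so far match a prefix of the pattern."""
    seen = [b - a for a, b in zip(start, start[1:])]
    assert learned_deltas[:len(seen)] == seen, "prefix does not match the pattern"
    seq = list(start)
    for d in learned_deltas[len(seen):]:
        seq.append(seq[-1] + d)
    return seq

# Deltas shared by 10,15,12,10,20 and 12,17,14,12,22
learned = [5, -3, -2, 10]
print(continue_from_deltas([7, 12], learned))  # [7, 12, 9, 7, 17]
```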

Oh, I get it.

Yeah, to predict future position information given velocity, you need both position and velocity information. Also, given a third, unconnected object and knowing two other objects’ velocities, I would have little reason to think the third object would act exactly the same as the other two and not just have constant velocity (7, 12, 17, 22…), so I’d say you have to manually set up the machine to store velocities, recognize specific patterns, etc.

As for how you’d get the machine to recognize patterns, that seems like it has more to do with cryptology than machine learning. You’d input a set of encrypted codes, apply various transforms, and output the original, or vice versa. To figure out the exact encrypting function applied to the original data… that’s not always possible.

Have a look at this?

I have said this before, and I’ll keep saying it. Building out encoders is exciting business. I think a big reason HTM is still untapped is because we are amateurs at encoding data into semantic SDRs. There is a huge opportunity to innovate here. If anyone is working on an encoder and wants to share ideas about them, please post it to #htm-hackers.

If I weren’t so busy at Numenta, I would be working on encoders myself. Turning concepts into semantic data structures is interesting stuff. :nerd:

Sorry for reviving an old topic,

I think a useful approach might be to encode the distance between each point and the average of its nearby points, “nearby” meaning in the time dimension.
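
For example, a sketch of that transform with a centered moving average (the window size here is an arbitrary assumption):

```python
def relative_to_local_mean(seq, window=2):
    """Re-express each point as its distance from the average of its
    temporal neighbours (up to `window` points on each side)."""
    out = []
    for i, x in enumerate(seq):
        lo, hi = max(0, i - window), min(len(seq), i + window + 1)
        neighbours = [seq[j] for j in range(lo, hi) if j != i]
        out.append(x - sum(neighbours) / len(neighbours))
    return out

a = [10, 15, 12, 10, 20]
b = [12, 17, 14, 12, 22]
print(relative_to_local_mean(a))  # identical outputs: the constant
print(relative_to_local_mean(b))  # offset between a and b cancels out
```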
