It reminds me of displacement cells, which can indicate a location relative to a point.
Thanks, @rhyolight. Your suggestion had me go back and read the Thousand Brains papers again (a worthwhile endeavour). My understanding (and please correct me if I am wrong) is that grid cells, and therefore displacement cells, represent “a location relative to the object being sensed, not relative to the person sensing the object”. Grid cells provide where on the object the sensor’s data is coming from, and displacement cells provide the relative location of features (or of the same feature after movement). All of this is in the context of object recognition and/or composition.
This seems different to me than the traffic flow example in the Geospatial Coordinate Encoder video. Given a fixed, albeit infinite, coordinate system like the city map, and a sequence of drives to and from work, one can predict the likely pattern and, therefore, anomalies. If it took me 5 extra minutes to get from the McDonald’s to the bank, then there is unusual traffic in that part of the map. I am not trying to build an object model of the car or the road segment. If I used grid/displacement cells, am I modeling the concept of “unusual traffic”? Is the “object” I am trying to recognise a specific traffic pattern? Does the fact that the feedforward data are tied to objective locations (not relative to the observer) impact the predictions?
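To make the kind of anomaly I mean concrete, here is a toy sketch in plain Python (nothing HTM-specific; the segment name, history, and threshold are all invented for illustration):

```python
from statistics import mean, stdev

# Historical drive times (minutes) for one map segment, e.g. McDonald's -> bank.
history = {"mcdonalds->bank": [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]}

def is_anomalous(segment, observed_minutes, threshold=3.0):
    """Flag a travel time more than `threshold` standard deviations above the mean."""
    times = history[segment]
    mu, sigma = mean(times), stdev(times)
    return (observed_minutes - mu) / sigma > threshold

# Taking 5 extra minutes today -> unusual traffic in that part of the map.
print(is_anomalous("mcdonalds->bank", 10.0))  # True
```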
Perhaps a different example would help illustrate my question. Instead of a pet relative to the pet owner, what about a boat at sea? In a simple construct, given the wind direction and speed, and the boat and target locations, one can establish sequences from prior voyages and begin to see patterns of getting from point A to point B. This is all seen from outside the space on a single grid system. A is always at X1, Y1; B, always at X2, Y2. Given a wind coming from X3, Y3 at K knots, we can record the path of the boat at intervals. Based on previous crossings, if the boat is not making adequate progress from A to B, then we might identify an anomaly (maybe the sail is damaged?).
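In code, the “adequate progress” check might look something like this (again just a sketch; the positions, interval, and expected-progress figure are made up):

```python
import math

B = (10.0, 10.0)  # destination, fixed on the global grid

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def progress_per_interval(track):
    """Reduction in distance-to-B between consecutive position fixes."""
    d = [distance(p, B) for p in track]
    return [d[i] - d[i + 1] for i in range(len(d) - 1)]

# Positions recorded at fixed intervals on today's crossing from A.
track = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (2.2, 2.2)]  # last leg is slow

expected = 1.4  # typical per-interval progress from prior voyages in this wind
for i, made_good in enumerate(progress_per_interval(track)):
    if made_good < 0.5 * expected:
        print(f"interval {i}: only {made_good:.2f} vs ~{expected} -- sail damaged?")
```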
What if the boat is going from point C to point B? Point C is at X4, Y4, which is the same distance from B as A is. Let’s say the wind angle and speed, relative to the boat, are the same as on the previous voyage when it was leaving from A. Given the distance and wind direction/speed, we’d expect the same travel pattern from C to B as from A to B. But the coordinate encoder would produce a very different encoding for a boat at A than it would for one at C, as the starting location and wind direction would be wholly different values. To the HTM network, A->B would be a different sequence than C->B, correct? If so, wouldn’t this mean that the HTM network would not pick up on the similarities of the pattern?
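Here is a much-simplified stand-in for the coordinate encoder idea (not the actual NuPIC implementation, but, like it, it hashes nearby integer grid cells to pick active bits deterministically), showing that the encodings for A and C share essentially no bits:

```python
import hashlib

N, W = 1024, 21  # SDR size and number of active bits

def encode_point(x, y, radius=2):
    """Toy coordinate encoder: hash each integer grid cell in a
    neighbourhood, keep the 'top' W by hash, and map each to a bit."""
    cells = [(x + dx, y + dy) for dx in range(-radius, radius + 1)
                              for dy in range(-radius, radius + 1)]
    ranked = sorted(cells, key=lambda c: hashlib.md5(str(c).encode()).hexdigest())
    return {int(hashlib.md5(str(c).encode() + b"bit").hexdigest(), 16) % N
            for c in ranked[:W]}

A, C = (0, 0), (20, 0)          # both the same distance from B = (10, 0)
overlap = encode_point(*A) & encode_point(*C)
print(len(overlap))             # ~0: wholly different inputs to the HTM
```

Nearby coordinates share bits; distant ones share essentially none, so A->B and C->B really would look like unrelated sequences.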
But … if I am standing at B, with the wind at my back, and I see a boat directly in front of me, then I can estimate its arrival time as well as whether something is wrong (coming in too slowly or too quickly). I do not care whether I am currently looking in the direction of A or C. My “sensor” data doesn’t have (or need) the absolute position of the boat on the grid, just the positions of the boat and the wind relative to me. A relative encoding would put the boat at an X, Y that is an offset from my position at B. The wind direction and speed would likewise be relative to B.
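A sketch of what I mean by a relative encoding, assuming I translate and rotate everything into an observer-centric frame at B: departures from A and from C then collapse onto the same input, so the HTM would see one sequence, not two.

```python
import math

def to_observer_frame(observer, heading_deg, point):
    """Translate and rotate `point` into a frame centred on `observer`,
    with the direction the observer faces as the +x axis."""
    dx, dy = point[0] - observer[0], point[1] - observer[1]
    t = math.radians(-heading_deg)
    return (round(dx * math.cos(t) - dy * math.sin(t), 6),
            round(dx * math.sin(t) + dy * math.cos(t), 6))

B = (10.0, 0.0)
A, C = (0.0, 0.0), (20.0, 0.0)   # both 10 units from B

# Standing at B, facing the incoming boat in each case:
print(to_observer_frame(B, 180.0, A))  # (10.0, 0.0) -- boat dead ahead
print(to_observer_frame(B, 0.0, C))    # (10.0, 0.0) -- identical input
```

The wind vector would get the same treatment, so “wind at my back at K knots” encodes identically for both voyages.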
NOTE: I am not trying to solve this specific problem, just thinking generically. If there is a flaw in the example, feel free to ignore it.