In Jeff’s book, the observation was made that when we enter a room, we notice changes in it without thinking about the room itself.
I have a control system to which I would like to impart the same property. After watching HTM School, I still have not figured out how to accomplish this.
Imagine that every room you’ve ever been in is represented as SDRs. You should be able to take a union of the current sensory features in a space and match it against all the features you’ve sensed in those places in the past to narrow down the room. Read A Theory of How Columns in the Neocortex Enable Learning the Structure of the World and think of objects as being the same thing as rooms.
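A minimal sketch of that union idea, with SDRs as sets of active bit indices. Everything here is made up for illustration (the feature names, the `rooms` memory, the `narrow_down` helper); a real HTM system would learn these representations rather than build them by hand:

```python
import random

random.seed(42)

SDR_SIZE = 2048    # total bit positions
ACTIVE_BITS = 40   # active bits per feature SDR (~2% sparsity)

def random_sdr():
    """A feature SDR as a set of active bit indices."""
    return frozenset(random.sample(range(SDR_SIZE), ACTIVE_BITS))

# Hypothetical memory: each room is the union of every feature SDR
# ever sensed while in that room.
features = {name: random_sdr() for name in
            ["door", "couch", "lamp", "sink", "stove", "bed"]}
rooms = {
    "living_room": features["door"] | features["couch"] | features["lamp"],
    "kitchen":     features["door"] | features["sink"] | features["stove"],
    "bedroom":     features["door"] | features["bed"]  | features["lamp"],
}

def narrow_down(sensed_features):
    """Score each room by overlap with the union of currently sensed features."""
    current = frozenset().union(*sensed_features)
    return max(rooms, key=lambda r: len(rooms[r] & current))

print(narrow_down([features["sink"], features["stove"]]))  # kitchen
```

Because SDRs are sparse, two random feature SDRs barely overlap, so even a couple of sensed features is usually enough to pick out the right room from the stored unions.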
People actually don’t notice changes a lot of the time. Worse yet, when someone doesn’t consciously notice a change, they immediately start learning the new version of their world, and their chance to notice the change is lost. Animal brains may not be the best inspiration for your system.
Here is a funny demonstration of change blindness and inattentional blindness:
I think part of this can be attributed to generalization: we hold a mental model of the world around us at any given time, and many parts of that model are novel and unfamiliar. The gaps are filled in with semantics from past experiences. The less familiar we are with a particular detail, the more likely we are not to notice when it changes (and that can include even very large and obvious details, like the person swaps).
However, with something we are very familiar with and encounter frequently (for example, the handle on the front door I open every day when I get home from work), we would definitely notice even small changes (say, shifted an inch lower than normal). I would expect a system built to identify changes to behave similarly.
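That familiarity effect can be sketched as a toy detector (my own framing, not an API from any HTM library): each feature keeps a count of how often it has been observed, and the tolerance for deviation shrinks as familiarity grows. A well-known door handle trips the detector on a one-inch shift, while an unfamiliar detail can drift unnoticed, and unnoticed deviations are absorbed into the model, exactly the "learning the new version of the world" effect described above:

```python
class ChangeDetector:
    def __init__(self, base_tolerance=10.0):
        self.base_tolerance = base_tolerance
        self.memory = {}  # feature name -> (expected value, observation count)

    def observe(self, name, value):
        """Return True if this observation counts as a noticed change."""
        if name not in self.memory:
            self.memory[name] = (value, 1)  # novel: just learn it, notice nothing
            return False
        expected, count = self.memory[name]
        tolerance = self.base_tolerance / count  # familiarity tightens tolerance
        changed = abs(value - expected) > tolerance
        if not changed:
            # Unnoticed deviations are quietly merged into the expectation.
            expected = (expected * count + value) / (count + 1)
        self.memory[name] = (expected, count + 1)
        return changed

d = ChangeDetector()
for _ in range(50):
    d.observe("door_handle_height_in", 36.0)          # daily encounters
print(d.observe("door_handle_height_in", 35.0))       # 1-inch shift -> True
print(d.observe("strangers_shirt_color_hue", 120))    # first sighting -> False
print(d.observe("strangers_shirt_color_hue", 128))    # unfamiliar, absorbed -> False
```

The `1/count` tolerance schedule is an arbitrary choice; the point is only that the detection threshold is a function of prior exposure.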
If you start from the naive idea that the world is perceived as an “image” to be analyzed, then change blindness may be puzzling.
When you consider the vision system as it is and what it has to do, change blindness is almost predictable from first principles.
I have posted elsewhere on the nature of saccades and the way this layers one small snapshot of the world at a time onto the visual processing stream as a collection of 2-degree-wide foveal features. [1]
Our relation to other objects (including people, predators, and prey) presents a constantly changing array of features as the objects turn and move. As this 2-degree view of an object transforms due to motion, the scale of its features is an ever-changing mix of feature sizes and visual-angle separations: this moment it is a cube and the next, a corner of a cube.
Our “primitive view” processing (our sub-cortical structures) can track the center of mass and rough outlines, but this primal sketch must be populated with a stream of features for recognition. Our internal representation is just this collection of features and their relative positions.
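A toy encoding of "a collection of features and relative positions" (my own illustration, not how the brain actually stores it): each known object is a set of (relative location, feature) pairs, each saccade contributes one such pair, and recognition is just the overlap between the accumulated pairs and each stored object:

```python
# Hypothetical stored objects: sets of (relative_location, feature) pairs.
objects = {
    "cube":   {((0, 0), "corner"), ((1, 0), "edge"),
               ((1, 1), "corner"), ((0, 1), "edge")},
    "sphere": {((0, 0), "curve"), ((1, 0), "curve"), ((1, 1), "curve")},
}

def recognize(saccade_samples):
    """Score each object by how many (location, feature) pairs match."""
    samples = set(saccade_samples)
    scores = {name: len(pairs & samples) for name, pairs in objects.items()}
    return max(scores, key=scores.get)

# Two foveal snapshots: a corner here, an edge one step to the right.
print(recognize([((0, 0), "corner"), ((1, 0), "edge")]))  # cube
```

Notice that no single snapshot identifies the object; it is the accumulation of features at relative positions across saccades that disambiguates, which is also why a feature that changes between saccades can simply be overwritten without ever being flagged.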
The example video offered above presents the “victim” with as much change as if the person had turned in relation to us, and for most people this is not significant. If they are not looking directly at the person, even details such as gender may not register, although I believe gender to be one of the primitives recognized by sub-cortical structures.