A Sensorimotor Machine - proof of concept with existing tech

A great ambition I’d say, and very well described @jordan.kay.

The first order of business IMO is to design validation scenarios for such a system.
What would it look like if this were actually working?
How do we know it's not overfitting to the task or just getting lucky?
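One way to make "is it actually working?" concrete is to hold out novel scenarios and compare the system against a trivial baseline. Everything here is a hypothetical sketch (the `controller` and `make_scenario` callables are placeholders, not part of any real library):

```python
import random

def evaluate(controller, make_scenario, n_trials=100, seed=0):
    """Fraction of trials the controller survives.

    controller(scenario) -> True if it operated without failure;
    make_scenario(rng)   -> a freshly generated, previously unseen setup.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        scenario = make_scenario(rng)  # novel configuration each trial
        if controller(scenario):
            successes += 1
    return successes / n_trials
```

If the score on held-out scenarios tracks the score on training scenarios, that's evidence against overfitting; if it collapses, the system was memorizing, not learning.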

My sense is that it'd be easier to start with touch-based sensors rather than visual ones, since visual data is (as I understand it) much harder to encode into SDRs.
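For touch, a scalar reading (pressure, say) encodes pretty naturally. A minimal sketch of a sliding-window scalar encoder, assuming nothing beyond the standard library (this is illustrative, not any particular library's API):

```python
def encode_scalar(value, min_val=0.0, max_val=1.0, n_bits=400, n_active=21):
    """Map a scalar in [min_val, max_val] to a sparse binary vector.

    Nearby values share active bits, so similar sensor readings
    produce overlapping SDRs (semantic similarity).
    """
    value = max(min_val, min(max_val, value))          # clamp to range
    n_buckets = n_bits - n_active + 1
    bucket = int((value - min_val) / (max_val - min_val) * (n_buckets - 1))
    sdr = [0] * n_bits
    for i in range(bucket, bucket + n_active):         # contiguous window
        sdr[i] = 1
    return sdr

a = encode_scalar(0.50)
b = encode_scalar(0.52)
overlap = sum(x & y for x, y in zip(a, b))  # nearby readings overlap heavily
```

The point is just that a single touch channel needs one small encoder like this, whereas a camera frame needs something far more involved.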

I find myself gravitating toward control task(s) of some kind, where the system learns to operate something on its own.

The task(s) should be complex enough to demonstrate non-trivial competency, which to me at least means:

  • multiple control movements at the system’s disposal

  • multiple and interdependent moving parts in the controlled environment (“plant”)
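To make those two criteria concrete, here's a hypothetical toy "plant" sketch: two coupled tanks, where each pump action also leaks into the other tank, so the system has multiple control movements and interdependent moving parts. The class and its dynamics are my invention, just to illustrate the shape of the environment:

```python
class ToyPlant:
    """Two coupled tanks. Pumping into one raises it directly and
    leaks toward the other, so actions have interdependent effects."""

    def __init__(self):
        self.levels = [0.5, 0.5]  # tank fill levels in [0, 1]

    def step(self, pump_a, pump_b):
        """Apply two control movements and advance one time step."""
        leak = 0.1 * (self.levels[0] - self.levels[1])  # coupling term
        self.levels[0] += pump_a - leak - 0.05          # constant drain
        self.levels[1] += pump_b + leak - 0.05
        self.levels = [min(1.0, max(0.0, lv)) for lv in self.levels]
        return self.levels

    def failed(self):
        # "screwing up" = overflowing or emptying either tank
        return any(lv in (0.0, 1.0) for lv in self.levels)
```

Even this tiny plant has the key property: no single control movement can be reasoned about in isolation, because of the coupling.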

The learning process here reminds me of a new human trainee, learning to become an air traffic controller for instance. They have to learn the dynamics of planes and crews: what they need to do, and how much time and space they need to do it safely for all involved.

I think this objective (controlling a somewhat complex system) can potentially demonstrate real robustness in the system, since there's so much to learn and so many ways to screw up.

Early in the learning process the system would be clueless and dangerous, like a 16-year-old new driver who brakes abruptly, misses stop signs, cuts people off and bumps other cars when parking.
But when you get in the car with them 2 years later all those incompetencies have receded, and you feel they're a relatively "safe driver" on the whole. You have confidence that they could operate safely in novel scenarios, with unknown streets and traffic patterns etc.

This kind of general confidence in system competency should be our holy grail IMO.
And I think this test environment design is the best place to start, since it makes us put the rubber to the road.
