Are there any companies currently applying HTM tech for drone / robotics control?
I would imagine the real-time adaptation to changing conditions would be a big advantage over pre-trained AI navigation/operation systems, along with the lightweight computation and noise robustness that come from working with SDRs.
The biggest hurdle I've read about is real-time object detection/classification feeding the obstacle-avoidance and navigation systems, and my immediate thought is "images are still a little problematic; Retina isn't quite finished".
But drones have a variety of sensors beyond cameras: laser rangefinders, inertial measurement units (IMUs), etc., to track self-position and objects. I'm sure there are plenty of currently-encodable data streams available.
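To make "currently-encodable" concrete, here's a toy scalar-to-SDR encoder for a single sensor reading (say, laser distance in metres). This is a simplified sketch of the idea, not htm.core's actual encoder API; real encoders like the RDSE are more robust, but the principle is the same: similar values share active bits.

```python
# Toy scalar-to-SDR encoder: maps a sensor reading to a sparse binary
# vector by activating a contiguous run of bits. Nearby values overlap.

def encode_scalar(value, min_val, max_val, size=400, active_bits=21):
    """Return a list of `size` bits with `active_bits` consecutive 1s."""
    value = max(min_val, min(max_val, value))      # clamp to the range
    span = size - active_bits                      # valid start positions
    start = int(round((value - min_val) / (max_val - min_val) * span))
    sdr = [0] * size
    for i in range(start, start + active_bits):
        sdr[i] = 1
    return sdr

# Nearby distances produce overlapping SDRs; far-apart ones do not.
a = encode_scalar(2.0, 0.0, 10.0)
b = encode_scalar(2.1, 0.0, 10.0)
c = encode_scalar(9.0, 0.0, 10.0)
overlap_ab = sum(x & y for x, y in zip(a, b))   # large overlap
overlap_ac = sum(x & y for x, y in zip(a, c))   # no overlap
```

That overlap property is what gives the noise robustness: a slightly jittery laser reading still lands on mostly the same bits.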
My plan so far is something like “Step 1: gather ingredients, step 2: bake cake”:
- Get drone-typical input data stream (are there drone flight datasets?)
- Get drone simulation software, hook up HTM to controls for flight similar to the above video
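Step 2 is really a sense-encode-predict-act loop. A minimal sketch of that loop, with hypothetical stand-ins: `ToySim` is a fake 1-D "drone" and `naive_controller` is a proportional controller sitting where the HTM model would go (a real setup would use a simulator like Gazebo or AirSim and an htm.core model instead).

```python
# Skeleton of the simulate-and-control loop: simulator in, encoder + HTM
# (stubbed here) in the middle, motor commands out.

class ToySim:
    """1-D 'drone': position drifts wherever thrust pushes it."""
    def __init__(self, position=0.0):
        self.position = position

    def step(self, thrust, dt=0.1):
        self.position += thrust * dt
        return self.position          # sensor reading fed back to the loop

def naive_controller(position, target):
    """Placeholder for the HTM: proportional push toward the target."""
    return 2.0 * (target - position)

sim = ToySim()
target = 5.0
for _ in range(200):                  # 20 simulated seconds
    reading = sim.position            # here you'd encode this to an SDR
    thrust = naive_controller(reading, target)
    sim.step(thrust)
```

The point is just the plumbing: whatever replaces `naive_controller` only ever sees the sensor stream and emits actuator values, so the HTM can be swapped in without touching the simulator side.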
The above quadcopter neural network was trained with an added "policy" net that rewarded the quadcopter for getting closer (in 3D space) to its designated destination. In simulation it's easy to compare the XYZ positions of the drone and the endpoint, and in the live experiment they tracked it via cameras and constantly fed the updated position to the neural net controller.
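That reward signal is cheap to compute; my guess at its simplest form would be the 3-D Euclidean distance to the goal, with the per-tick reward being how much that distance shrank (the names here are my own, not from the paper):

```python
import math

def distance_to_goal(drone_xyz, goal_xyz):
    """Euclidean distance between drone and goal in 3-D."""
    return math.sqrt(sum((d - g) ** 2 for d, g in zip(drone_xyz, goal_xyz)))

def step_reward(prev_xyz, curr_xyz, goal_xyz):
    """Positive when the drone moved closer to the goal this tick."""
    return distance_to_goal(prev_xyz, goal_xyz) - distance_to_goal(curr_xyz, goal_xyz)
```

In simulation the XYZ inputs come straight from the physics engine; in the live experiment they'd come from the camera tracking.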
I think the most common scenarios for drone navigation would be:
- From GPS coordinates X1Y1, go to coordinates X2Y2
- From starting position, go direction D, avoiding obstacles, performing certain task (until stop?)
These feel quite similar, and I suppose the only difference is that in the second case there are no set end coordinates, just a direction to travel. In both cases the drone would have to read its height/depth with… a laser, I guess?
I may be overlooking the biggest question: HTMs are about predicting the next datapoint by looking for patterns in the last N datapoints. I figure this would lend them strength in object classification (a bird and a plane may look vaguely similar but have quite different movement patterns, and a video feed is a series of previous images), but again, we're not exactly there yet, and most systems would use something like YOLO instead.
So what is the HTM supposed to predict? It depends on the task, I suppose: the next best movement for the drone, for example predicting the optimal output of rotor #2 to guide it to the GPS coordinates (based on the last N GPS position ticks).
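To make "predict the next datapoint from the last N ticks" concrete, here's a trivial stand-in: a buffer of the last N positions with constant-velocity extrapolation. An HTM's temporal memory would learn such patterns from the SDR stream rather than having them hard-coded, but the input/output shape of the problem is the same.

```python
from collections import deque

class TickPredictor:
    """Keep the last N position ticks; predict the next one."""
    def __init__(self, n=5):
        self.history = deque(maxlen=n)

    def observe(self, position):
        self.history.append(position)

    def predict_next(self):
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        # Assume the last step repeats (constant velocity) -- the part
        # an HTM would instead learn from the sequence.
        return self.history[-1] + (self.history[-1] - self.history[-2])

pred = TickPredictor()
for x in [0.0, 1.0, 2.0, 3.0]:   # drone moving at 1 unit per tick
    pred.observe(x)
```

The controller's job would then be choosing rotor outputs that steer the predicted next position toward the goal.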