Self-Driving Robot

How can I use NuPIC to develop a self-driving robot?

That is a very broad question. :sweat_smile:

Well, if you want HTM to be the driver of the vehicle, I don’t think we can do that today without help from something else. But we’re researching sensorimotor inference, and we have released a lot about the current theory on our forums and in videos.

Below are pictures of the lane my robot will be running on.

[image: lane frame for driving forward]

When the robot sees this frame, it will move in the forward direction.

[image: lane frame for turning left]

And when the robot sees this frame, it will move in the left direction.
How would HTM help me in classifying these directions?

There are, of course, more efficient (unintelligent) methods for making a robot follow a black strip, but I agree it would be fun to try to incorporate HTM. A couple of ideas come to mind.

  1. You could train the robot using an unintelligent method, having HTM remember the sequences of movements (perhaps with a time encoder). Then take away the black strip and have the robot drive the pattern from memory. This could probably be done by adding some external logic around existing NuPIC functions (see the sketch after this list).

  2. You could use sensory-motor integration to train a simple sequence memory for “when I see this, do this”. Sensory-motor integration is still in research, so depending on the approach, NuPIC might not have all the functions that would be needed to do this.

  3. You could use sensory-motor integration plus reinforcement learning to train the robot to stay on the black strip. This could, for example, be done with reward/punishment buttons that you press any time the robot does or doesn’t do what you want, until it learns to stay on the black strip. Again, NuPIC doesn’t currently have all the functions that would be needed for this. A few folks on the forum are exploring reinforcement learning in HTM. At a high level, my approach is to use a modified spatial pooler to score action columns by their predicted reward. I’m still working out the details of this concept, though, so it may not actually be the best approach.
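To make idea 1 a bit more concrete, here is a minimal sketch of that external logic using NuPIC’s `CategoryEncoder`, `SpatialPooler`, and `TemporalMemory`. The `COMMANDS` vocabulary, the `recordedRun` sequence, and the decoding of predictions back to a command are all my own illustrative assumptions, not NuPIC built-ins:

```python
# Minimal sketch of idea 1: memorize a recorded sequence of motor commands,
# then replay it from Temporal Memory predictions. Assumes NuPIC (Python 2).
import numpy

from nupic.encoders.category import CategoryEncoder
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory

COMMANDS = ["forward", "left", "right"]  # hypothetical motor vocabulary
NUM_COLUMNS = 1024

encoder = CategoryEncoder(w=21, categoryList=COMMANDS, forced=True)
sp = SpatialPooler(inputDimensions=(encoder.getWidth(),),
                   columnDimensions=(NUM_COLUMNS,),
                   globalInhibition=True)
tm = TemporalMemory(columnDimensions=(NUM_COLUMNS,))

def activeColumnsFor(command, learn):
  """Encode a motor command and pool it into sorted active column indices."""
  activeArray = numpy.zeros(NUM_COLUMNS, dtype=numpy.uint32)
  sp.compute(encoder.encode(command), learn, activeArray)
  return sorted(activeArray.nonzero()[0])

# Training: replay the command sequence recorded while the unintelligent
# line-follower was driving, so the TM memorizes the transitions. (A time
# encoder could be concatenated into the input here, as mentioned above.)
recordedRun = ["forward", "forward", "left", "forward", "right"]  # assumed log
for _ in range(10):
  for command in recordedRun:
    tm.compute(activeColumnsFor(command, learn=True), learn=True)
  tm.reset()

def predictedCommand():
  """Decode the TM's predictive cells back to the closest-matching command."""
  predicted = set(tm.columnForCell(c) for c in tm.getPredictiveCells())
  overlaps = [(len(predicted & set(activeColumnsFor(c, learn=False))), c)
              for c in COMMANDS]
  return max(overlaps)[1]
```

With the strip removed, you would feed each executed command back through `tm.compute()` and drive the motors with `predictedCommand()` at every step.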


As for writing an encoder for the track: as long as you have good contrast, a naive approach might be to convert the image to boolean monochrome, then sample the bits with a spatial pooler (a rough sketch is below). That would of course lead to multiple different encodings for “forward” and “turn left” (depending on the angle and position within the frame), so others here might have more intelligent ideas for encoding these. There are other image AI algorithms that I think fit this problem better.
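For what it’s worth, here is roughly what I mean, sketched with NuPIC’s `SpatialPooler`. The frame size and grayscale threshold are assumptions you would tune for your camera:

```python
# Naive track encoder: threshold the frame to boolean monochrome, flatten
# it to a bit array, and let a spatial pooler produce the SDR.
import numpy

from nupic.algorithms.spatial_pooler import SpatialPooler

WIDTH, HEIGHT = 32, 32  # downsampled frame size (assumed)
THRESHOLD = 128         # grayscale cutoff separating strip from background
NUM_COLUMNS = 1024

sp = SpatialPooler(inputDimensions=(WIDTH * HEIGHT,),
                   columnDimensions=(NUM_COLUMNS,),
                   globalInhibition=True)

def encodeFrame(grayFrame, learn=True):
  """grayFrame: HEIGHT x WIDTH numpy array of 0-255 grayscale values."""
  bits = (grayFrame < THRESHOLD).astype(numpy.uint32).flatten()  # dark -> 1
  activeColumns = numpy.zeros(NUM_COLUMNS, dtype=numpy.uint32)
  sp.compute(bits, learn, activeColumns)
  return activeColumns.nonzero()[0]  # SDR as sorted active column indices
```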


Thanks, Paul, for your suggestions.
One more question: if I want to compare the performance of this learning/memorising technique, what would be the closest existing technique to compare it against?