Anomaly detection for autonomous driving AI

Hi, first time posting here.

One important current approach to achieving autonomous driving is to collect massive amounts of driving data - video streams together with inertial, GPS, and steering/pedal states - and then train neural networks to… learn how to drive.
I learned that at least Tesla and comma.ai are doing this. The problem they face with this approach is that they have massive amounts of data from human drivers (George Hotz mentions 8 million miles collected in an interview; Tesla has more).
They need to extract a much smaller subset (1-2%?) of examples that are most relevant or useful for training their NNs.

An anomaly detector would be useful to spot these relevant parts from an otherwise boring, very large data stream.

Is it possible Numenta’s detector would be able to spot these significant moments from inertial, GPS, and driving-state sensors only - brakes, speed, accelerator/brake pedals, steering-wheel dynamics, etc.? I assume this is true because the driver needs to react somehow to anomalies from their driving point of view, and that reaction shows up in this low-bandwidth, non-visual data stream as different from the majority of uneventful driving.
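To make the idea concrete, here is a minimal non-HTM baseline sketch: a rolling z-score detector over a single low-bandwidth channel (steering angle is a made-up example signal). The class name, window sizes, and threshold are all my own assumptions, just to illustrate what "spotting significant moments in an otherwise boring stream" could look like before bringing in HTM.

```python
from collections import deque
import math

class RollingZScoreDetector:
    """Flags samples that deviate strongly from a rolling-window mean.
    A crude statistical baseline, not HTM, but it illustrates picking
    'interesting' moments out of a low-bandwidth driving signal."""

    def __init__(self, window=200, threshold=4.0):
        self.window = deque(maxlen=window)  # recent history of the signal
        self.threshold = threshold          # how many std-devs counts as anomalous

    def score(self, value):
        anomalous = False
        if len(self.window) >= 30:  # wait for a minimal history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

# Hypothetical steering-angle stream: smooth swaying, then a sudden jerk.
detector = RollingZScoreDetector()
stream = [0.1 * math.sin(t / 10.0) for t in range(300)] + [2.5]
flags = [detector.score(v) for v in stream]  # only the jerk should be flagged
```

An HTM anomaly detector would replace the z-score with a learned sequence model, which should catch anomalies this simple statistic misses (e.g. individually normal values arriving in an abnormal order).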


It depends on the data. I would have to see the data stream definitions with labelled anomalies to make a decision on whether HTM would work.


A promising and exciting idea! I only speak for myself here, but I strongly suspect the answer is YES.

I think there are certain common control sequences made by the driver under normal circumstances. For instance when I’m turning there’s a familiar sequence of:

deceleration --> turn signal (almost always :sweat_smile:) --> wheel turning --> acceleration --> wheel straightening out

Of course, not every deceleration is followed by a turn, so HTM’s ability to learn many sequences that share sub-sequences - and to narrow down quickly which one it’s in - should shine here.
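The shared-sub-sequence point can be shown with a toy model (nothing to do with HTM internals; the token names are invented). A first-order model keyed only on the previous token can’t tell what follows "decel", while a model that keeps a little more context disambiguates immediately:

```python
from collections import defaultdict

def train(sequences, order):
    """Map each context of `order` preceding tokens to the set of
    next tokens observed after that context."""
    model = defaultdict(set)
    for seq in sequences:
        for i in range(order, len(seq)):
            model[tuple(seq[i - order:i])].add(seq[i])
    return model

# Two toy driving sequences sharing the sub-sequence "decel":
turn = ["exit_ahead", "decel", "signal", "steer"]
stop = ["red_light",  "decel", "brake",  "halt"]

first_order  = train([turn, stop], order=1)
second_order = train([turn, stop], order=2)

first_order[("decel",)]                # ambiguous: both 'signal' and 'brake'
second_order[("exit_ahead", "decel")]  # context resolves it to 'signal' only
```

HTM’s temporal memory does something loosely analogous with variable-length learned contexts, which is why it can keep many overlapping driver-control sequences apart.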

That’s just one little thought, but with some sample data and well-defined purpose(s) for anomaly detection I’d gladly do some experimenting. I don’t suppose you have any or would know where I might find some?

I’m no expert on the state of the art in self-driving cars, but knowing the limitations of MLP-based algorithms (not designed for real-time operation, prone to overfitting by nature IMO), I’d honestly be nervous to ride in one without a robust real-time monitoring system like HTM offers, at the very least.

Also, on the Numenta research side, I think self-driving cars are an AWESOME arena in which to apply the latest HTM sensorimotor theory.

Might be worth taking a look at this fella’s walk through. He’s trying to use deep learning, but the environment might suit what this thread is needing, and it would allow results to be more accessible to a wider AI community.


Unfortunately I do not have any real-life data, labelled or not. I just thought this is a hot domain in which Numenta could find an interested partner.

It just came to me while watching what comma.ai is doing: they sell a lane-tracking device running their open-sourced software. It is just a phone-sized computer with two cameras, inertial sensors, GPS, and an interface to the car’s computer. They support a few dozen car models - ones with drive-by-wire capability and a radar collision sensor.

Through their already-sold devices they have collected a lot of driving data, which they use to improve/train the software/AI model.
The data is collected in both autonomous and human-driver mode.
They are also attempting to develop a fully autonomous driving AI, and the previously collected data will be very useful for that.

What George Hotz mentioned is that even though they have collected millions of miles, the actual training set is much smaller, and I can only assume they’re using some statistics to cherry-pick this smaller training set.

Other companies attempting to make autonomous vehicles are probably applying the same strategy.

@rhyolight, from all I’ve read I understand the anomaly detector is unsupervised; I assume the labeled data you mention is only needed to estimate its capabilities?

@sheiser1 yes, that’s the kind of stuff I had in mind. There’s a certain “rhythm” when a driver does a normal lane change and a different one when they react to an unexpected event… and they probably skip the turn signal.

I do not have any data myself. I wonder if a phone placed as a dashcam - recording video together with accelerometer/gyro data over a few weeks of commuting - would capture some relevant information.
If an anomaly detector running on the accelerometer/gyro/GPS stream picks out certain moments in time, one can check the video record at those moments to see whether it was a useful insight or not.
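The cross-checking step could be as simple as converting the detector’s absolute timestamps into offsets within the dashcam recording and reviewing a short clip around each one. A small sketch (the function name and the 5-second margins are my own choices):

```python
def clip_windows(anomaly_times, start_time, pre=5.0, post=5.0):
    """Convert absolute anomaly timestamps (seconds) into (start, end)
    offsets within a video that began recording at start_time,
    padded a few seconds either side for manual review."""
    windows = []
    for t in anomaly_times:
        offset = t - start_time
        windows.append((max(0.0, offset - pre), offset + post))
    return windows

# Hypothetical: recording started at t=1000 s; the detector fired twice.
clips = clip_windows([1042.5, 1300.0], start_time=1000.0)
# clips -> [(37.5, 47.5), (295.0, 305.0)]
```

Reviewing only those clips, instead of weeks of footage, is exactly the data-reduction effect the thread is after.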

Of course, it would be easier to convince a developer like comma.ai to provide a few hundred miles’ worth of sample data for a test.

@MaxLee - yeah, that is a great simulator. I like that many parts are bundled together, and it has a really good tutorial series. I think it could be a good training arena/environment for an embodied agent - yeah, the shape and “muscles” of a car. Not necessarily to achieve autonomous driving, but to have agents evolve/train/play together in a simulated world and see how they interact with each other.

Who knows, animal-type cognition might be the solution for autonomous driving.


We would not feed these labels into the HTM, no. They would be needed for benchmarking performance of the algorithm.

My point with this comment is that we’ve known for a long time that HTM provides value on lots of real temporal data streams if you can get the data in the right format, at the right interval, with the right semantics. This has always been the secret black magic of HTM, much like parameter tuning (and hyperparameter tuning) is the black magic of Deep Learning.

I have no doubt that HTM would provide meaningful and valuable anomaly indications from the data streams available to self-driving cars, if the streams were set up optimally for a properly tuned HTM system.
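For benchmarking an unsupervised detector against hand-labeled anomalies, one common-sense scoring scheme (loosely in the spirit of Numenta’s NAB benchmark, though this sketch is my own simplification) treats each label as a time window: a detection inside any window is a true positive, and a window is "caught" if at least one detection lands in it.

```python
def window_precision_recall(detections, labeled_windows):
    """Score an unsupervised detector against labeled anomaly windows.
    detections: list of detection timestamps (seconds).
    labeled_windows: list of (start, end) tuples marking true anomalies."""
    def in_window(t, w):
        return w[0] <= t <= w[1]

    # Precision: fraction of detections that fall inside some labeled window.
    tp = [t for t in detections
          if any(in_window(t, w) for w in labeled_windows)]
    # Recall: fraction of labeled windows hit by at least one detection.
    caught = [w for w in labeled_windows
              if any(in_window(t, w) for t in detections)]

    precision = len(tp) / len(detections) if detections else 0.0
    recall = len(caught) / len(labeled_windows) if labeled_windows else 0.0
    return precision, recall

# Hypothetical: two labeled windows, three detections (one false alarm).
p, r = window_precision_recall([12.0, 45.0, 80.0],
                               [(10.0, 15.0), (40.0, 50.0)])
# p = 2/3, r = 1.0
```

This is exactly the sense in which the labels are used only for evaluation: nothing here is fed back into the detector.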


If all we want is to validate this experiment and collect results, then we do not need real-world data; we can use the Udacity simulator for the whole experiment.
