Misbehaviour Prediction for Autonomous Driving Systems
Deep Neural Networks (DNNs) are the core component of modern autonomous driving systems. To date, it is still unrealistic to expect that a DNN will generalize correctly in all driving conditions. Current testing techniques consist of offline solutions that identify adversarial or corner cases for improving the training phase, and little has been done to enable online healing of DNN-based vehicles. In this paper, we address the problem of estimating the confidence of DNNs in response to unexpected execution contexts, with the purpose of predicting potential safety-critical misbehaviours such as out-of-bound episodes or collisions. Our approach, SelfOracle, is based on a novel concept of self-assessment oracle that monitors the DNN confidence at runtime to predict unsupported driving scenarios in advance. SelfOracle uses autoencoder- and time-series-based anomaly detection to reconstruct the driving scenarios seen by the car and to determine the confidence boundary between normal and unsupported conditions. In our empirical assessment, we evaluated the effectiveness of different variants of SelfOracle at predicting injected anomalous driving contexts, using DNN models and the simulation environment from Udacity. Results show that, overall, SelfOracle can predict 77% of misbehaviours up to 6 seconds in advance, outperforming the online input validation approach of DeepRoad by a factor of almost 3.
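To make the autoencoder-based confidence monitoring more concrete, the sketch below shows one possible way to flag unsupported driving frames via reconstruction error. It is a minimal illustration, not the paper's actual implementation: the dense autoencoder architecture, the percentile-based threshold, and the function names (build_autoencoder, fit_oracle, is_unsupported) are assumptions made here for clarity; SelfOracle's real models, thresholding procedure, and time-series component may differ.

```python
# Minimal sketch of autoencoder-based anomaly detection for driving frames.
# Assumptions (not from the paper): a small dense autoencoder over flattened,
# normalized frames and a percentile-based threshold on reconstruction error.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_autoencoder(input_dim: int, latent_dim: int = 64) -> Model:
    inputs = layers.Input(shape=(input_dim,))
    encoded = layers.Dense(256, activation="relu")(inputs)
    encoded = layers.Dense(latent_dim, activation="relu")(encoded)
    decoded = layers.Dense(256, activation="relu")(encoded)
    decoded = layers.Dense(input_dim, activation="sigmoid")(decoded)
    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder


def reconstruction_errors(model: Model, frames: np.ndarray) -> np.ndarray:
    # Per-frame mean squared reconstruction error: the confidence signal.
    recon = model.predict(frames, verbose=0)
    return np.mean((frames - recon) ** 2, axis=1)


def fit_oracle(nominal_frames: np.ndarray, false_alarm_rate: float = 0.05):
    # Train on nominal driving frames only (shape: (n_frames, input_dim),
    # pixel values scaled to [0, 1]), then pick a threshold so that roughly
    # false_alarm_rate of nominal frames would be flagged.
    model = build_autoencoder(nominal_frames.shape[1])
    model.fit(nominal_frames, nominal_frames,
              epochs=10, batch_size=128, verbose=0)
    errors = reconstruction_errors(model, nominal_frames)
    threshold = np.quantile(errors, 1.0 - false_alarm_rate)
    return model, threshold


def is_unsupported(model: Model, threshold: float, frame: np.ndarray) -> bool:
    # At runtime, frames whose reconstruction error exceeds the nominal
    # boundary are treated as unsupported driving conditions.
    return reconstruction_errors(model, frame[None, :])[0] > threshold
```

The key design point illustrated here is that the monitor is trained only on nominal driving data, so high reconstruction error serves as a proxy for low DNN confidence in unseen contexts; the thresholding step is what turns the continuous error signal into the normal/unsupported boundary mentioned above.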