PaRoT: A Practical Framework for Robust Deep Neural Network Training

01/07/2020
by Edward Ayers, et al.

Deep Neural Networks (DNNs) are finding important applications in safety-critical systems such as Autonomous Vehicles (AVs), where perceiving the environment correctly and robustly is necessary for safe operation. Their black-box nature, however, raises unique challenges for assurance and poses a fundamental problem for regulatory acceptance of such systems. Robust training, i.e. training that minimizes excessive sensitivity to small changes in input, has emerged as one promising technique to address this challenge. However, existing robust training tools are inconvenient to use or to apply to existing codebases and models: they typically support only a small subset of model elements and require users to rewrite the training code extensively. In this paper we introduce PaRoT, a novel framework developed on the popular TensorFlow platform, that greatly reduces this barrier to entry. Our framework enables robust training to be performed on arbitrary DNNs without any rewrites to the model. We demonstrate that our framework's performance is comparable to prior art, and exemplify its ease of use on off-the-shelf, trained models and on a real-world industrial application: training a robust traffic light detection network.
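PaRoT's actual API is not shown on this page, so as a purely illustrative sketch of the idea behind robust training, the following minimal NumPy example propagates an interval around an input through a toy two-layer ReLU network, yielding sound bounds on the output for every perturbation within epsilon. All names, weights, and shapes here are hypothetical and are not taken from the paper.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an input interval [lo, hi] through an affine layer W @ x + b.
    # Center/radius form: |W| @ r bounds how far the output can move.
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def interval_relu(lo, hi):
    # ReLU is monotone, so applying it to the endpoints is exact.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer network: bound the output for all inputs within eps of x.
x = np.array([0.5, -0.2])
eps = 0.1
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, 0.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([-0.3])

lo, hi = x - eps, x + eps
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # sound lower/upper bounds on the network output
```

A robust training loop would then penalize the width of these bounds (or a worst-case loss derived from them) instead of only the nominal prediction, which is the kind of objective a framework like this automates over an existing model.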

Related research:

- Using Quantifier Elimination to Enhance the Safety Assurance of Deep Neural Networks (09/18/2019)
- DORA: Exploring outlier representations in Deep Neural Networks (06/09/2022)
- Poster: Link between Bias, Node Sensitivity and Long-Tail Distribution in trained DNNs (03/29/2023)
- TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks (10/16/2021)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering (01/13/2022)
- Safe Reinforcement Learning with Model Uncertainty Estimates (10/19/2018)
