Reliability Validation of Learning Enabled Vehicle Tracking

02/06/2020
by Youcheng Sun, et al.

This paper studies the reliability of a real-world learning-enabled system that performs dynamic vehicle tracking from high-resolution wide-area motion imagery. The system consists of multiple neural network components, which process the imagery inputs, and multiple symbolic (Kalman filter) components, which analyse the processed information for vehicle tracking. Neural networks are known to suffer from adversarial examples, which undermine their robustness. However, it is unclear whether, and how, adversarial examples against the learning components affect the overall system-level reliability. By integrating a coverage-guided neural network testing tool, DeepConcolic, with the vehicle tracking system, we found that (1) the overall system can be resilient to some adversarial examples thanks to the existence of other components, and (2) the overall system exhibits an extra level of uncertainty that cannot be determined by analysing the deep learning components alone. This research suggests the need for novel verification and validation methods for learning-enabled systems.
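The architecture described above pairs learned perception with a classical state estimator. The following minimal sketch is an illustration under assumptions, not the authors' implementation: the names neural_detector and KalmanTracker are hypothetical, the detector is a stub, and the filter is a plain constant-velocity Kalman filter. It shows how noisy per-frame detections from a neural component can be smoothed by a symbolic tracking component, which is why small adversarial perturbations to individual detections do not necessarily break the overall track.

```python
# Minimal sketch (assumption, not the paper's code) of a neural-detector +
# Kalman-filter tracking pipeline. The detector stub stands in for the
# learned component that DeepConcolic would probe in the real system.
import numpy as np

def neural_detector(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the learned component: returns a noisy (x, y) detection."""
    true_xy = frame[:2]                       # pretend the frame encodes the true position
    return true_xy + np.random.normal(0.0, 2.0, size=2)

class KalmanTracker:
    """Constant-velocity Kalman filter over the state [x, y, vx, vy]."""
    def __init__(self, dt: float = 1.0):
        self.x = np.zeros(4)                  # state estimate
        self.P = np.eye(4) * 500.0            # state covariance (high initial uncertainty)
        self.F = np.array([[1, 0, dt, 0],     # state transition (constant velocity)
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],      # only the position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01             # process noise
        self.R = np.eye(2) * 4.0              # measurement noise (detector variance)

    def step(self, z: np.ndarray) -> np.ndarray:
        # Predict the next state from the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the detector's measurement z.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                     # smoothed position estimate

if __name__ == "__main__":
    tracker = KalmanTracker()
    for t in range(10):
        frame = np.array([t * 3.0, t * 1.5])  # synthetic "imagery" carrying the ground truth
        detection = neural_detector(frame)    # possibly perturbed by an adversarial input
        estimate = tracker.step(detection)
        print(f"t={t} detection={detection.round(2)} estimate={estimate.round(2)}")
```

In this toy setup a corrupted detection in a single frame shifts the estimate only slightly, since the filter weighs it against the motion model; this mirrors the paper's first finding, while the system-level uncertainty it reports is exactly what such a component-by-component view cannot capture.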


Related research

Formal Verification of Robustness and Resilience of Learning-Enabled State Estimation Systems for Robotics (10/16/2020)
This paper presents a formal verification guided approach for a principl...

Input Validation for Neural Networks via Runtime Local Robustness Verification (02/09/2020)
Local robustness verification can verify that a neural network is robust...

NEUROSPF: A tool for the Symbolic Analysis of Neural Networks (02/27/2021)
This paper presents NEUROSPF, a tool for the symbolic analysis of neural...

Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression (03/21/2020)
Learning-enabled components (LECs) are widely used in cyber-physical sys...

Scalable Inference of Symbolic Adversarial Examples (07/23/2020)
We present a novel method for generating symbolic adversarial examples: ...

Detecting Operational Adversarial Examples for Reliable Deep Learning (04/13/2021)
The utilisation of Deep Learning (DL) raises new challenges regarding it...
