
Guiding Deep Learning System Testing using Surprise Adequacy

08/25/2018
by Jinhan Kim et al.
KAIST, Department of Mathematical Sciences
Chalmers University of Technology

Deep Learning (DL) systems are rapidly being adopted in safety- and security-critical domains, urgently calling for ways to test their correctness and robustness. Testing of DL systems has traditionally relied on manual collection and labelling of data. Recently, a number of coverage criteria based on neuron activation values have been proposed. These criteria essentially count the number of neurons whose activation during the execution of a DL system satisfies certain properties, such as being above predefined thresholds. However, existing coverage criteria are not sufficiently fine-grained to capture subtle behaviours exhibited by DL systems. Moreover, evaluations have focused on showing correlation between adversarial examples and proposed criteria rather than evaluating and guiding their use for actual testing of DL systems. We propose a novel test adequacy criterion for testing of DL systems, called Surprise Adequacy for Deep Learning Systems (SADL), which is based on the behaviour of DL systems with respect to their training data. We measure the surprise of an input as the difference in the DL system's behaviour between the input and the training data (i.e., what was learnt during training), and subsequently develop this as an adequacy criterion: a good test input should be sufficiently but not overtly surprising compared to the training data. Empirical evaluation using a range of DL systems, from simple image classifiers to autonomous driving car platforms, shows that systematic sampling of inputs based on their surprise can improve the classification accuracy of DL systems against adversarial examples by up to 77.5%.
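The abstract defines surprise as the difference between a new input's behaviour (its neuron activation trace) and the behaviour seen on the training data. One concrete instantiation from the paper is distance-based Surprise Adequacy (DSA), which compares the distance to the nearest same-class training trace against the distance from that neighbour to the nearest trace of a different class. The sketch below is a minimal, hedged illustration of that idea using NumPy; the function name and array layout are our own, not the authors' implementation.

```python
import numpy as np

def distance_based_sa(at_train, labels_train, at_x, cls_x):
    """Sketch of Distance-based Surprise Adequacy (DSA) for one input.

    at_train:     (N, D) activation traces collected from the training set
    labels_train: (N,)   classes predicted for those training inputs
    at_x:         (D,)   activation trace of the new input
    cls_x:        class predicted for the new input

    Returns dist_a / dist_b, where dist_a is the distance to the nearest
    same-class training trace, and dist_b is the distance from that
    neighbour to the nearest trace of any other class. Higher values
    indicate a more surprising input.
    """
    same = at_train[labels_train == cls_x]
    other = at_train[labels_train != cls_x]

    # Nearest training trace with the same predicted class.
    d_same = np.linalg.norm(same - at_x, axis=1)
    nearest_same = same[np.argmin(d_same)]
    dist_a = d_same.min()

    # Distance from that neighbour to the closest other-class trace.
    dist_b = np.linalg.norm(other - nearest_same, axis=1).min()
    return dist_a / dist_b

# Toy usage: class-0 traces cluster near the origin, class-1 traces near (1, 1).
at_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
labels = np.array([0, 0, 1, 1])
sa_typical = distance_based_sa(at_train, labels, np.array([0.05, 0.0]), 0)
sa_boundary = distance_based_sa(at_train, labels, np.array([0.6, 0.6]), 0)
print(sa_typical, sa_boundary)
```

An input near its own class cluster yields a small DSA, while an input drifting toward another class yields a larger one, matching the criterion that good test inputs should be sufficiently, but not overtly, surprising.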

08/28/2018

DLFuzz: Differential Fuzzing Testing of Deep Learning Systems

Deep learning (DL) systems are increasingly applied to safety-critical d...
03/10/2021

A Review and Refinement of Surprise Adequacy

Surprise Adequacy (SA) is one of the emerging and most promising adequac...
02/09/2020

Importance-Driven Deep Learning System Testing

Deep Learning (DL) systems are key enablers for engineering intelligent ...
03/20/2018

DeepGauge: Comprehensive and Multi-Granularity Testing Criteria for Gauging the Robustness of Deep Learning Systems

Deep learning defines a new data-driven programming paradigm that constr...
07/06/2020

Model-based Exploration of the Frontier of Behaviours for Deep Learning System Testing

With the increasing adoption of Deep Learning (DL) for critical tasks, s...
05/17/2022

Hierarchical Distribution-Aware Testing of Deep Learning

With its growing use in safety/security-critical applications, Deep Lear...
12/31/2019

Automated Testing for Deep Learning Systems with Differential Behavior Criteria

In this work, we conducted a study on building an automated testing syst...