Guiding the retraining of convolutional neural networks against adversarial inputs

07/08/2022
by Francisco Durán López, et al.

Background: Deep learning models have many possible vulnerabilities, and among the most worrying are adversarial inputs, which can cause wrong decisions with minor perturbations. Models therefore need to be retrained against adversarial inputs as part of a software testing process that addresses this vulnerability. Furthermore, for energy-efficient testing and retraining, data scientists need support in choosing the best guidance metrics and optimal dataset configurations.

Aims: We examined four guidance metrics for retraining convolutional neural networks and three retraining configurations. Our goal is to improve the models against adversarial inputs with respect to accuracy, resource utilization and time, from the point of view of a data scientist in the context of image classification.

Method: We conducted an empirical study on two image classification datasets. We explore (a) the accuracy, resource utilization and time of retraining convolutional neural networks by ordering the new training set with four different guidance metrics (neuron coverage, likelihood-based surprise adequacy, distance-based surprise adequacy and random), and (b) the accuracy and resource utilization of retraining convolutional neural networks with three different configurations (from scratch with the augmented dataset, from the original weights with the augmented dataset, and from the original weights with only the adversarial inputs).

Results: Retraining from the original weights with only the adversarial inputs, ordered by a surprise adequacy metric, gives the best model with respect to the metrics used.

Conclusions: Although more studies are necessary, we recommend that data scientists use the above configuration and metrics to deal with the vulnerability of deep learning models to adversarial inputs, as they can improve their models against adversarial inputs without using many inputs.


Related research

11/11/2019 · RNN-Test: Adversarial Testing Framework for Recurrent Neural Network Systems
While huge efforts have been investigated in the adversarial testing of ...

04/30/2019 · Test Selection for Deep Learning Systems
Testing of deep learning models is challenging due to the excessive numb...

01/08/2020 · The Effect of Data Ordering in Image Classification
The success stories from deep learning models increase every day spannin...

02/11/2022 · Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models
Attacks on deep learning models are often difficult to identify and ther...

05/29/2020 · Applying the Decisiveness and Robustness Metrics to Convolutional Neural Networks
We review three recently-proposed classifier quality metrics and conside...

03/30/2021 · A Convolutional Neural Network Approach to the Classification of Engineering Models
This paper presents a deep learning approach for the classification of E...

09/30/2022 · Understanding Pure CLIP Guidance for Voxel Grid NeRF Models
We explore the task of text to 3D object generation using CLIP. Specific...
