CGDTest: A Constrained Gradient Descent Algorithm for Testing Neural Networks

04/04/2023
by Vineel Nagisetty, et al.

In this paper, we propose a new Deep Neural Network (DNN) testing algorithm called the Constrained Gradient Descent (CGD) method, and an implementation we call CGDTest, aimed at exposing security and robustness issues such as adversarial vulnerability and bias in DNNs. Our CGD algorithm is a gradient-descent (GD) method, with the twist that the user can also specify logical properties that characterize the kinds of inputs the user wants. This functionality sets CGDTest apart from other similar DNN testing tools, since it allows users to specify logical constraints to test DNNs not only for ℓ_p ball-based adversarial robustness but, more importantly, for richer properties such as disguised and flow adversarial constraints, as well as adversarial robustness in the NLP domain. We showcase the utility and power of CGDTest via extensive experimentation in the vision and NLP domains, comparing against 32 state-of-the-art methods across these diverse settings. Our results indicate that CGDTest outperforms state-of-the-art testing tools for ℓ_p ball-based adversarial robustness, and is significantly superior in testing for other classes of adversarial robustness, with improvements in PAR2 scores of over 1500%. Further, our evaluation shows that our CGD method outperforms the competing methods we compared against in terms of expressibility (i.e., a rich constraint language and concomitant tool support to express a wide variety of properties), scalability (i.e., it can be applied to very large real-world models with up to 138 million parameters), and generality (i.e., it can be used to test a plethora of model architectures).
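
To give a concrete sense of the core idea, the sketch below shows a generic constrained gradient-descent search in PyTorch: perturb an input to maximize the classification loss while penalizing violation of a user-specified constraint and projecting back onto an ℓ_∞ ball. This is a minimal illustration under assumed hyperparameters and helper names (`cgd_search`, `brightness_penalty`), not the authors' CGDTest implementation.

```python
# Minimal sketch of constrained gradient descent for DNN testing. Illustrative
# only and NOT the authors' CGDTest implementation: the soft-penalty formulation,
# hyperparameters, and helper names below are assumptions for exposition.
import torch
import torch.nn.functional as F

def cgd_search(model, x0, y, constraint_penalty, eps=0.03, lr=0.01, steps=100, lam=10.0):
    """Search for an input near x0 that the model misclassifies while keeping a
    user-supplied, differentiable constraint penalty small (soft constraint)."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x)
        # Descend on: negative classification loss (to induce misclassification)
        # plus a weighted penalty for violating the user-specified constraint.
        loss = -F.cross_entropy(logits, y) + lam * constraint_penalty(x)
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x = x - lr * grad                                # gradient step
            x = torch.min(torch.max(x, x0 - eps), x0 + eps)  # project onto the l_inf ball
            x = x.clamp(0.0, 1.0)                            # keep inputs in a valid range
        x.requires_grad_(True)
    return x.detach()

# Hypothetical user constraint: keep mean brightness close to the original image,
# a toy stand-in for richer properties such as disguised perturbations.
def brightness_penalty(x0, tol=0.01):
    target = x0.mean().detach()
    return lambda x: F.relu((x.mean() - target).abs() - tol)
```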
