Structural Robustness for Deep Learning Architectures

09/11/2019
by Carlos Lassance, et al.

Deep networks have been shown to provide state-of-the-art performance in many machine learning challenges. Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs. In this work, we introduce a formal definition of robustness that can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified. We compare this notion of robustness to existing ones and study its connections with methods in the literature. We evaluate this metric through experiments on several competitive vision datasets.
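To make the notion of a localized Lipschitz constant "quantified in the domain of the data" concrete, the sketch below estimates, for each data sample, the largest output-to-input distortion ratio of a network over random perturbations in a small ball around that sample, and then averages these local estimates over the data. This is a minimal illustrative sketch under assumed conventions (the function names, perturbation radius, and sampling scheme are my own choices), not the exact definition or metric used in the paper.

# Illustrative sketch: empirical localized Lipschitz estimate of a network f
# around individual data points, averaged over a small dataset.
# All names and hyperparameters here are assumptions for illustration only.

import torch
import torch.nn as nn

def local_lipschitz_estimate(f, x, radius=0.1, n_samples=32):
    """Estimate a localized Lipschitz constant of f around a single input x
    by sampling random perturbations of norm `radius` and taking the largest
    ratio ||f(x') - f(x)|| / ||x' - x||."""
    x = x.unsqueeze(0)                       # shape (1, d)
    fx = f(x)                                # reference output
    # Random directions rescaled to lie on a sphere of the given radius.
    noise = torch.randn(n_samples, *x.shape[1:])
    norms = noise.flatten(1).norm(dim=1).clamp_min(1e-12)
    noise = radius * noise / norms.view(-1, *([1] * (noise.dim() - 1)))
    xp = x + noise                           # perturbed inputs around x
    ratios = (f(xp) - fx).flatten(1).norm(dim=1) / noise.flatten(1).norm(dim=1)
    return ratios.max().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    # "Quantified in the domain of the data": average the local estimates
    # over data samples rather than bounding the whole input space.
    data = torch.randn(8, 16)
    estimates = [local_lipschitz_estimate(net, xi) for xi in data]
    print("mean localized Lipschitz estimate:", sum(estimates) / len(estimates))

In contrast to a global Lipschitz bound, which must hold everywhere in input space and is typically very loose for deep networks, this kind of data-localized estimate only reflects the network's sensitivity near the inputs it is expected to classify.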


