Holistic Adversarial Robustness of Deep Learning Models

02/15/2022
by Pin-Yu Chen, et al.

Adversarial robustness studies the worst-case performance of a machine learning model to ensure its safety and reliability. With the proliferation of deep learning-based technology, the potential risks associated with model development and deployment can be amplified into serious vulnerabilities. This paper provides a comprehensive overview of research topics and foundational principles of research methods for the adversarial robustness of deep learning models, including attacks, defenses, verification, and novel applications.
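To make the notion of a worst-case attack concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks, applied to a toy logistic-regression model. All model weights, inputs, and the `fgsm_perturb` helper are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM sketch: nudge x in the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w, where p is the
    predicted probability of class 1.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0  # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)

# The small perturbation lowers the model's confidence in the true class.
print(sigmoid(w @ x + b) > sigmoid(w @ x_adv + b))  # True
```

The same one-step idea underlies stronger iterative attacks (e.g., projected gradient descent), which repeat this update while projecting back into an epsilon-ball around the original input.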


Related research

- 08/24/2021: Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications
- 04/01/2019: Robustness of 3D Deep Learning in an Adversarial Setting
- 10/29/2021: Holistic Deep Learning
- 07/06/2022: Adversarial Robustness of Visual Dialog
- 07/01/2023: SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency
- 11/19/2020: An Experimental Study of Semantic Continuity for Deep Learning Models
- 10/23/2019: A Useful Taxonomy for Adversarial Robustness of Neural Networks
