SoK: Certified Robustness for Deep Neural Networks

by Linyi Li, et al.

Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises serious concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed against adversarial attacks, including: 1) empirical defenses, which can usually be adaptively attacked again and provide no robustness certification; and 2) certifiably robust approaches, which consist of robustness verification, providing a lower bound on robust accuracy against any attack under certain conditions, and corresponding robust training approaches. In this paper, we focus on these certifiably robust approaches and provide the first large-scale systematic analysis of different robustness verification and training approaches. In particular, we 1) provide a taxonomy of robustness verification and training approaches and discuss the methodologies of representative algorithms in detail, 2) reveal the fundamental connections among these approaches, 3) discuss current research progress, theoretical barriers, main challenges, and several promising future directions for certified defenses for DNNs, and 4) provide an open-source unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.
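To make the verification side concrete: one of the simplest verification approaches in this family is interval bound propagation (IBP), which pushes an L-infinity input box through each layer and certifies a sample whenever the true logit's lower bound exceeds every other logit's upper bound. The sketch below is an illustrative NumPy toy (the function names `interval_affine` and `certify_ibp` are invented for this example, not from the paper's platform):

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate the box [l, u] through the affine map x -> W x + b."""
    mu, r = (u + l) / 2, (u - l) / 2   # center and radius of the box
    mu2 = W @ mu + b                   # center maps through the affine layer
    r2 = np.abs(W) @ r                 # radius grows by the absolute weights
    return mu2 - r2, mu2 + r2

def certify_ibp(x, eps, layers, true_class):
    """Return True iff IBP proves the prediction is constant on the
    L-infinity ball of radius eps around x.  `layers` is a list of
    (W, b) pairs with ReLU between all but the last layer."""
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = interval_affine(l, u, W, b)
        if i < len(layers) - 1:                     # hidden-layer ReLU
            l, u = np.maximum(l, 0), np.maximum(u, 0)
    # Certified iff the true logit's lower bound beats the upper
    # bound of every other logit.
    margins = l[true_class] - np.delete(u, true_class)
    return bool(np.all(margins > 0))
```

Because the interval relaxation is sound but loose, `certify_ibp` returning `False` does not mean an attack exists; tighter relaxations (and the robust training methods surveyed here) exist precisely to shrink that gap.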




