- Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification
  We present Survival-OPT, a physical adversarial example algorithm in the...
- AI-GAN: Attack-Inspired Generation of Adversarial Examples
  Adversarial examples that can fool deep models are mainly crafted by add...
- Robust Attribution Regularization
  An emerging problem in trustworthy machine learning is to train models t...
- Improving Adversarial Robustness by Data-Specific Discretization
  A recent line of research proposed (either implicitly or explicitly) gra...
- ReabsNet: Detecting and Revising Adversarial Examples
  Though deep neural networks have achieved great success in recent studies and ...

Jiefeng Chen

I am a PhD student at UW-Madison, co-advised by Yingyu Liang and Somesh Jha, working on trustworthy machine learning.