Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

02/22/2020
by Chen Zhu, et al.

Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certified and empirical robustness. In principle, a convex relaxation yields a tight bound whenever the solution to the relaxed problem is feasible for the original non-convex problem. We propose two regularizers that can be used to train neural networks that yield tighter convex relaxation bounds for robustness. In all of our experiments, the proposed regularizers result in higher certified accuracy than non-regularized baselines.
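The full paper defines the two regularizers precisely; as a rough, illustrative sketch of the underlying idea, the PyTorch snippet below propagates interval bounds (one of the simplest convex relaxations) through a linear layer and adds a hypothetical penalty on the width of the relaxed output interval, nudging training toward networks whose relaxation is tight. The names `interval_bounds` and `tightness_penalty`, and the penalty itself, are assumptions made for illustration, not the paper's actual regularizers.

```python
import torch
import torch.nn as nn

def interval_bounds(layer: nn.Linear, lower: torch.Tensor, upper: torch.Tensor):
    """Propagate elementwise lower/upper bounds through a linear layer
    (interval bound propagation, a simple convex relaxation)."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    new_center = layer(center)                    # W @ c + b
    new_radius = radius @ layer.weight.abs().t()  # |W| @ r
    return new_center - new_radius, new_center + new_radius

def tightness_penalty(lower: torch.Tensor, upper: torch.Tensor) -> torch.Tensor:
    """Hypothetical regularizer (not from the paper): penalize the mean
    width of the relaxed output interval, encouraging tighter bounds."""
    return (upper - lower).mean()

# Toy usage: bounds for inputs within an L-infinity ball of radius eps.
layer = nn.Linear(4, 3)
x = torch.randn(8, 4)
eps = 0.1
lb, ub = interval_bounds(layer, x - eps, x + eps)
loss = tightness_penalty(lb, ub)  # would be added to the training objective
loss.backward()
```

In this sketch the penalty shrinks the over-approximation directly; the paper's regularizers instead target the gap between the relaxed solution and a feasible point of the original non-convex problem, which is the property the abstract identifies as sufficient for tightness.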
