Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training

12/06/2018
by Gavin Weiguang Ding, et al.

We propose Max-Margin Adversarial (MMA) training for directly maximizing the input-space margin. This margin maximization is direct in the sense that the gradient of the margin w.r.t. model parameters can be shown to be parallel to the gradient of the loss at the minimal-length perturbation, so gradient ascent on margins can be performed by gradient descent on losses. We further propose a specific formulation of MMA training that maximizes the average margin of training examples in order to train models that are robust to adversarial perturbations. It is implemented by performing adversarial training with a novel adaptive-norm projected gradient descent (AN-PGD) attack. Preliminary experimental results demonstrate that our method outperforms existing state-of-the-art methods. In particular, tested against both white-box and transfer projected gradient descent attacks on MNIST, our trained model improves the SOTA ℓ_∞ ϵ=0.3 robust accuracy by 2% while maintaining the SOTA clean accuracy. Furthermore, the same model is, to the best of our knowledge, the first to be robust at ℓ_∞ ϵ=0.4, achieving a robust accuracy of 86.51%.
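The abstract's core idea can be illustrated with a simplified PyTorch sketch: per example, a PGD attack is run at an adaptively rescaled ℓ_∞ radius to approximate the minimal-length perturbation (the input-space margin), and the training step then performs gradient descent on the loss at that perturbation. The function and parameter names here (pgd_attack, estimate_margin_perturbation, eps_init, grow, shrink, ...) are illustrative assumptions; this is a sketch of the idea, not the paper's exact AN-PGD procedure or full MMA objective.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, steps=10):
    """Standard l_inf PGD attack within a (possibly per-example) radius eps.

    Assumes x is an image batch of shape (B, C, H, W) with values in [0, 1].
    """
    step_size = 2.5 * eps / steps
    x_adv = (x + eps * (2 * torch.rand_like(x) - 1)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step_size * grad.sign()
        # Project back into the l_inf ball of radius eps and the valid range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def estimate_margin_perturbation(model, x, y, eps_init=0.1, eps_max=0.5,
                                 grow=1.5, shrink=0.75, rounds=4):
    """Adaptively rescale the attack radius per example to approximate the
    minimal perturbation that changes the prediction (the input-space margin)."""
    eps = torch.full((x.size(0), 1, 1, 1), eps_init, device=x.device)
    x_adv = x.clone()
    for _ in range(rounds):
        cand = pgd_attack(model, x, y, eps)
        with torch.no_grad():
            fooled = (model(cand).argmax(1) != y).view(-1, 1, 1, 1)
        # Keep perturbations that succeed and shrink their radius to tighten
        # the margin estimate; grow the radius where the attack failed.
        x_adv = torch.where(fooled, cand, x_adv)
        eps = torch.where(fooled, eps * shrink, (eps * grow).clamp(max=eps_max))
    return x_adv


def mma_style_step(model, optimizer, x, y):
    """One training step: gradient descent on the loss at the (approximately)
    minimal-length perturbation, which ascends the input-space margin."""
    x_adv = estimate_margin_perturbation(model, x, y)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```

Shrinking the radius when the attack succeeds and growing it when it fails keeps the perturbation near the decision boundary, so the loss gradient used in the update is taken near the minimal-length perturbation described in the abstract; the full method in the paper includes details (such as handling already-misclassified examples) that this sketch omits.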
