Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness

09/21/2020 · by Anh Bui, et al.

Ensemble-based adversarial training is a principled approach to achieving robustness against adversarial attacks. A key technique in this approach is controlling the transferability of adversarial examples among ensemble members. In this work, we propose a simple yet effective strategy for collaboration among the committee models of an ensemble. It is built on secure and insecure sets defined for each member model on a given sample, which let us quantify and regularize transferability. Our framework thus provides the flexibility both to reduce adversarial transferability and to promote the diversity of ensemble members, two crucial factors for better robustness in our ensemble approach. Extensive and comprehensive experiments demonstrate that our method outperforms state-of-the-art ensemble baselines while detecting a wide range of adversarial examples with nearly perfect accuracy.
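As a rough illustration of the idea of per-member secure/insecure sets, here is a minimal Python sketch. The function names and the exact partition rule (a member is "secure" on an adversarial example if it still predicts the true label) are our assumptions for illustration, not the paper's definitions:

```python
def secure_insecure_sets(member_preds, true_label):
    """Partition ensemble members by whether they survive a given
    adversarial example (hypothetical rule: 'secure' = still predicts
    the true label)."""
    secure = {i for i, p in enumerate(member_preds) if p == true_label}
    insecure = {i for i, p in enumerate(member_preds) if p != true_label}
    return secure, insecure


def transferability(member_preds, true_label):
    """Fraction of members fooled by the adversarial example -- a simple
    proxy for transferability that a training objective could penalize."""
    _, insecure = secure_insecure_sets(member_preds, true_label)
    return len(insecure) / len(member_preds)
```

For example, if three of four members predict the true label 3 on an adversarial input, `secure_insecure_sets([3, 3, 7, 3], 3)` yields secure set `{0, 1, 3}` and insecure set `{2}`, giving a transferability proxy of 0.25; driving such a quantity down during training is the spirit of limiting cross-member transfer.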


Code Repositories

Crossing-Collaborative-Ensemble

TensorFlow implementation of the AAAI-21 paper "Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness"
