Generalist: Decoupling Natural and Robust Generalization

03/24/2023
by Hongjun Wang, et al.

Deep neural networks obtained by standard training are constantly plagued by adversarial examples. Although adversarial training can defend against adversarial examples, it unfortunately leads to an inevitable drop in natural generalization. To address this issue, we decouple natural generalization and robust generalization from joint training and formulate a different training strategy for each. Specifically, instead of minimizing a global loss on the expectation over these two generalization errors, we propose a bi-expert framework called Generalist, in which we simultaneously train base learners with task-aware strategies so that each can specialize in its own field. The parameters of the base learners are collected and combined to form a global learner at intervals during training. The global learner is then distributed back to the base learners as the initialization for continued training. Theoretically, we prove that the risk of Generalist decreases once the base learners are well trained. Extensive experiments verify that Generalist achieves high accuracy on natural examples while maintaining considerable robustness to adversarial ones. Code is available at https://github.com/PKU-ML/Generalist.
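
The abstract describes the training procedure only at a high level: a natural expert and a robust expert are trained with their own strategies, and their parameters are periodically combined into a global learner that is redistributed back to both. The sketch below is a minimal illustration of that loop, assuming a PyTorch setup; the PGD adversary, the simple convex parameter mix in `mix_parameters`, and the hyperparameters `gamma` and `sync_every` are illustrative assumptions rather than the paper's actual choices (see the linked repository for the real implementation).

```python
# Minimal sketch of the bi-expert training loop described in the abstract.
# Assumptions (not taken from the paper's code): parameters are merged by a
# simple convex combination, the robust expert uses a generic PGD adversary,
# and `sync_every` / `gamma` are made-up hyperparameters.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generic PGD attack used as a stand-in adversary (assumption)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def mix_parameters(nat_model, rob_model, global_model, gamma=0.5):
    """Combine the base learners into the global learner (convex mix; assumption)."""
    with torch.no_grad():
        for g, n, r in zip(global_model.parameters(),
                           nat_model.parameters(),
                           rob_model.parameters()):
            g.copy_(gamma * n + (1 - gamma) * r)

def train_generalist(nat_model, rob_model, global_model, loader,
                     nat_opt, rob_opt, epochs=100, sync_every=1):
    """All three models share the same architecture."""
    for epoch in range(epochs):
        for x, y in loader:
            # Natural expert: standard cross-entropy on clean examples.
            nat_opt.zero_grad()
            F.cross_entropy(nat_model(x), y).backward()
            nat_opt.step()

            # Robust expert: adversarial training on perturbed examples.
            x_adv = pgd_attack(rob_model, x, y)
            rob_opt.zero_grad()
            F.cross_entropy(rob_model(x_adv), y).backward()
            rob_opt.step()

        if (epoch + 1) % sync_every == 0:
            # Collect and combine the base learners into the global learner ...
            mix_parameters(nat_model, rob_model, global_model)
            # ... then redistribute it as the initialization for continued training.
            nat_model.load_state_dict(global_model.state_dict())
            rob_model.load_state_dict(global_model.state_dict())
    return global_model
```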

Related research

09/23/2019 · Robust Local Features for Improving the Generalization of Adversarial Training
Adversarial training has been demonstrated as one of the most effective ...

03/19/2023 · Randomized Adversarial Training via Taylor Expansion
In recent years, there has been an explosion of research into developing...

12/01/2021 · Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness
In response to the threat of adversarial examples, adversarial training ...

04/26/2023 · Generating Adversarial Examples with Task Oriented Multi-Objective Optimization
Deep learning models, even the state-of-the-art ones, are highly vulnera...

05/30/2021 · Robust Dynamic Network Embedding via Ensembles
Dynamic Network Embedding (DNE) has recently attracted considerable atte...

03/29/2023 · Latent Feature Relation Consistency for Adversarial Robustness
Deep neural networks have been applied in many computer vision tasks and...

10/21/2022 · Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
An off-the-shelf model as a commercial service could be stolen by model ...
