Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning

03/02/2023
by Rundong Luo, et al.

Recent works have shown that self-supervised learning can achieve remarkable robustness when integrated with adversarial training (AT). However, the robustness gap between supervised AT (sup-AT) and self-supervised AT (self-AT) remains significant. Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: both overly strong and overly weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap. To resolve this dilemma, we propose a simple remedy named DYNACL (Dynamic Adversarial Contrastive Learning). In particular, we propose an augmentation schedule that gradually anneals from a strong augmentation to a weak one, benefiting from both extreme cases. Besides, we adopt a fast post-processing stage to adapt the learned model to downstream tasks. Through extensive experiments, we show that DYNACL can improve state-of-the-art self-AT robustness by 8.84%, and can even outperform vanilla supervised adversarial training for the first time. Our code is available at <https://github.com/PKU-ML/DYNACL>.
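The core idea of DYNACL is the strength-annealing augmentation schedule. Below is a minimal PyTorch sketch of what such a schedule could look like; the linear decay, the SimCLR-style transform set, and all hyperparameters are illustrative assumptions, not the paper's exact recipe.

```python
import torchvision.transforms as T

def augmentation_at_epoch(epoch, anneal_epochs=80, image_size=32):
    """Build an augmentation pipeline whose strength decays with the epoch.

    Assumption: strength anneals linearly from 1.0 (strong) to 0.0 (weak)
    over `anneal_epochs`; the specific transforms and magnitudes below are
    hypothetical, chosen to mimic a standard contrastive-learning pipeline.
    """
    strength = max(0.0, 1.0 - epoch / anneal_epochs)
    return T.Compose([
        # Aggressive cropping at full strength, near-identity at zero strength.
        T.RandomResizedCrop(image_size, scale=(1.0 - 0.9 * strength, 1.0)),
        T.RandomHorizontalFlip(),
        # Color jitter applied with probability and magnitude scaled by strength.
        T.RandomApply(
            [T.ColorJitter(0.4 * strength, 0.4 * strength,
                           0.4 * strength, 0.1 * strength)],
            p=0.8 * strength),
        T.RandomGrayscale(p=0.2 * strength),
        T.ToTensor(),
    ])
```

At epoch 0 this reproduces a typical strong contrastive augmentation (aggressive random cropping plus color jitter), and by the end of the schedule it degenerates to a near-identity transform, matching the strong-to-weak annealing the abstract describes.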


