Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness

07/22/2022
by Chaoning Zhang, et al.

Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields. By integrating AT into SSL, multiple prior works have accomplished a highly significant yet challenging task: learning robust representations without labels. A widely used framework is adversarial contrastive learning, which couples AT and SSL and thus constitutes a very complex optimization problem. Inspired by the divide-and-conquer philosophy, we conjecture that this coupled problem can be simplified as well as improved by solving two sub-problems: non-robust SSL and pseudo-supervised AT. This shifts the focus from seeking an optimal integration strategy for a coupled problem to finding sub-solutions for the sub-problems. To this end, this work discards the prior practice of directly introducing AT into SSL frameworks and proposes a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL). Extensive experimental results demonstrate that DeACL achieves SOTA self-supervised adversarial robustness while significantly reducing training time, which validates its effectiveness and efficiency. Moreover, DeACL constitutes a more explainable solution, and its success also bridges the gap with semi-supervised AT for exploiting unlabeled samples for robust representation learning. The code is publicly accessible at https://github.com/pantheon5100/DeACL.
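To make the two-stage recipe concrete, below is a minimal PyTorch-style sketch of the second stage as described in the abstract: a frozen encoder from standard (non-robust) SSL pretraining provides pseudo-targets, and a student encoder is adversarially trained to match them. This sketch is based only on the abstract; the helper names (pgd_attack, deacl_stage2_step), the cosine-similarity objective, and the PGD hyperparameters are illustrative assumptions rather than the authors' released implementation (see the GitHub link above for that).

```python
import torch
import torch.nn.functional as F


def pgd_attack(student, target_feat, x, eps=8 / 255, alpha=2 / 255, steps=5):
    """Craft an L-inf PGD perturbation that pushes the student's
    representation of x away from the frozen teacher's pseudo-target."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Attack objective: minimize cosine similarity to the pseudo-target.
        loss = -F.cosine_similarity(student(x_adv), target_feat, dim=-1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def deacl_stage2_step(student, teacher, x, optimizer):
    """One pseudo-supervised AT step: the frozen stage-1 SSL encoder (teacher)
    supplies pseudo-targets; the student is trained so that both clean and
    adversarial views of x stay close to those targets."""
    with torch.no_grad():
        target = teacher(x)  # pseudo-targets from the stage-1 SSL model
    x_adv = pgd_attack(student, target, x)
    loss = (-F.cosine_similarity(student(x_adv), target, dim=-1).mean()
            - F.cosine_similarity(student(x), target, dim=-1).mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, stage 1 can reuse any off-the-shelf contrastive method to produce the teacher, and only the student is updated in stage 2, so the adversarial optimization no longer interacts with the contrastive instance-discrimination objective.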
