Watermarking Pre-trained Encoders in Contrastive Learning

01/20/2022
by Yutong Wu et al.

Contrastive learning has become a popular technique for pre-training image encoders, which can then be used to build various downstream classification models efficiently. Pre-training requires large amounts of data and computation, so the resulting encoders are valuable intellectual property that needs careful protection. Migrating existing watermarking techniques from classification tasks to the contrastive learning scenario is challenging, because the encoder's owner does not know which downstream tasks will later be built on top of the encoder. We propose the first watermarking methodology for pre-trained encoders. We introduce a task-agnostic loss function that embeds a backdoor into the encoder as the watermark; this backdoor persists in any downstream model transferred from the encoder. Extensive evaluations over different contrastive learning algorithms, datasets, and downstream tasks show that our watermarks are highly effective and robust against different adversarial operations.
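To make the idea of a task-agnostic watermark loss concrete, below is a minimal sketch of how a backdoor-based watermark could be embedded into an encoder using only unlabeled data. The function name, the trigger_fn stamping routine, the cosine-similarity formulation, and the lam_wm / lam_utility weights are illustrative assumptions, not the paper's exact objective.

```python
# Hypothetical sketch: embed a backdoor watermark into a pre-trained encoder
# without any knowledge of downstream tasks or labels.
import torch
import torch.nn.functional as F

def watermark_loss(encoder, frozen_encoder, clean_batch, trigger_fn,
                   target_embedding, lam_wm=1.0, lam_utility=1.0):
    """Task-agnostic watermark-embedding loss (illustrative).

    encoder          -- the encoder being watermarked (trainable)
    frozen_encoder   -- a frozen copy of the original encoder (reference)
    clean_batch      -- unlabeled images, shape (B, C, H, W)
    trigger_fn       -- stamps the owner's secret trigger onto images
    target_embedding -- embedding the watermarked encoder should output
                        for any triggered input, shape (D,)
    """
    # Watermark term: triggered inputs are pulled toward the target embedding,
    # so any downstream classifier built on the encoder inherits the backdoor.
    triggered_batch = trigger_fn(clean_batch)
    z_trigger = encoder(triggered_batch)                          # (B, D)
    target = target_embedding.unsqueeze(0).expand_as(z_trigger)   # (B, D)
    loss_wm = 1.0 - F.cosine_similarity(z_trigger, target, dim=-1).mean()

    # Utility term: clean inputs keep (approximately) their original
    # embeddings, so downstream accuracy is preserved.
    with torch.no_grad():
        z_ref = frozen_encoder(clean_batch)                       # (B, D)
    z_clean = encoder(clean_batch)
    loss_utility = 1.0 - F.cosine_similarity(z_clean, z_ref, dim=-1).mean()

    return lam_wm * loss_wm + lam_utility * loss_utility
```

In such a setup, the owner would minimize this loss over the encoder's parameters on an unlabeled dataset, then later verify ownership by checking whether a suspect downstream model behaves abnormally on trigger-stamped inputs.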


