Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors

11/22/2022
by Sizhe Chen, et al.

As data become increasingly vital for deep learning, a company would be very cautious about releasing data, because competitors could use the released data to train high-performance models, thereby posing a tremendous threat to the company's commercial competitiveness. To prevent good models from being trained on the data, imperceptible perturbations could be added to it. Since such perturbations aim at hurting the entire training process, they should reflect the vulnerability of DNN training rather than that of a single model. Based on this new idea, we seek adversarial examples that are always unrecognized (never correctly classified) throughout training. In this paper, we uncover them by modeling checkpoints' gradients, forming the proposed self-ensemble protection (SEP), which is very effective because (1) learning on examples ignored during normal training tends to yield DNNs that ignore normal examples; (2) checkpoints' cross-model gradients are close to orthogonal, meaning the checkpoints are as diverse as DNNs with different architectures in a conventional ensemble. That is, the strong performance of an ensemble is obtained at the computational cost of training a single model. In extensive experiments with 9 baselines on 3 datasets and 5 architectures, SEP is verified to be a new state of the art; e.g., our small ℓ_∞=2/255 perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% for the best-known prior method. Code is available at https://github.com/Sizhe-Chen/SEP.
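To make the core mechanism concrete, below is a minimal, unofficial PyTorch sketch of self-ensemble perturbation crafting: PGD-style gradient ascent on the average loss of checkpoints saved during a single training run. This is not the released implementation; make_model, checkpoint_paths, and all hyperparameter values are illustrative assumptions, and the paper's exact objective and schedule may differ.

```python
import torch
import torch.nn.functional as F

def sep_perturb(make_model, checkpoint_paths, x, y,
                eps=2/255, alpha=0.5/255, steps=30, device="cpu"):
    """Sketch: craft protective perturbations against a self-ensemble
    of training checkpoints (hypothetical helper, not the paper's code)."""
    # Load each saved checkpoint of the *same* architecture/run once,
    # so the ensemble costs only one training run to obtain.
    models = []
    for path in checkpoint_paths:
        model = make_model().to(device)
        model.load_state_dict(torch.load(path, map_location=device))
        model.eval()
        models.append(model)

    x, y = x.to(device), y.to(device)
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        # Average the cross-entropy over checkpoints: their gradients
        # are close to orthogonal, so the average behaves like a
        # diverse conventional ensemble.
        loss = torch.stack(
            [F.cross_entropy(m(x + delta), y) for m in models]).mean()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Ascend the ensemble loss so the examples stay unrecognized
            # (never correctly classified) across training, then project
            # back into the l_inf ball of radius eps and valid pixel range.
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x
    return (x + delta).detach()
```

A hypothetical call would pass, e.g., make_model=lambda: torchvision.models.resnet18(num_classes=10) and a list of checkpoint files saved at intervals during one training run; the eps=2/255 budget mirrors the abstract's reported setting.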


