Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free

10/22/2020 ∙ by Haotao Wang, et al.

Adversarial training and its many variants substantially improve deep network robustness, yet at the cost of compromising standard accuracy. Moreover, the training process is heavy and hence it becomes impractical to thoroughly explore the trade-off between accuracy and robustness. This paper asks this new question: how to quickly calibrate a trained model in-situ, to examine the achievable trade-offs between its standard and robust accuracies, without (re-)training it many times? Our proposed framework, Once-for-all Adversarial Training (OAT), is built on an innovative model-conditional training framework, with a controlling hyper-parameter as the input. The trained model could be adjusted among different standard and robust accuracies "for free" at testing time. As an important knob, we exploit dual batch normalization to separate standard and adversarial feature statistics, so that they can be learned in one model without degrading performance. We further extend OAT to a Once-for-all Adversarial Training and Slimming (OATS) framework, that allows for the joint trade-off among accuracy, robustness and runtime efficiency. Experiments show that, without any re-training nor ensembling, OAT/OATS achieve similar or even superior performance compared to dedicatedly trained models at various configurations. Our codes and pretrained models are available at: https://github.com/VITA-Group/Once-for-All-Adversarial-Training.


1 Motivation and background

Deep neural networks (DNNs) are nowadays well-known to be vulnerable to adversarial examples szegedy2013intriguing; goodfellow2014explaining. With the growing usage of DNNs on security-sensitive applications, such as self-driving bojarski2016end and bio-metrics hu2015face, a critical concern has been raised to carefully examine the worst-case accuracy of deployed DNNs on crafted attacks (denoted as robust accuracy, or robustness for short, following zhang2019theoretically), in addition to their average accuracy on standard inputs (denoted as standard accuracy, or accuracy for short). Among the variety of adversarial defense methods proposed to enhance DNN robustness, adversarial training (AT) based methods zhang2019theoretically; madry2017towards; chen2020adversarial are consistently top performers.

While adversarial defense methods are gaining increasing attention and popularity in safety/security-critical applications, their downsides are also noteworthy. Firstly, most adversarial defense methods, including adversarial training, come at the price of compromising standard accuracy tsipras2018robustness. That inherent accuracy-robustness trade-off has been established theoretically and observed experimentally by many works zhang2019theoretically; schmidt2018adversarially; sun2019towards; nakkiran2019adversarial; stutz2019disentangling; raghunathan2019adversarial. Practically, most defense methods determine their accuracy-robustness trade-off via some pre-chosen hyper-parameter. Taking adversarial training for example, the training objective is often a weighted summation of a standard classification loss and a robustness loss, where the trade-off coefficient is typically set to an empirical value by default. Different models are then trained under this same setting to compare their achievable standard and robust accuracies.

However, in a practical machine learning system, the standard and robust accuracy requirements may each have their own specified bars, which are often not naturally met by the “default” settings. While a standard-trained model might have unacceptable robust accuracy (i.e., poor in the “worst case”), an adversarially trained model might compromise the standard accuracy too much (i.e., not good enough in the “average case”). Moreover, such standard/robust accuracy requirements can vary in-situ over contexts and time. For example, an autonomous agent might choose to perceive and behave more cautiously, i.e., to prioritize improving its robustness, when it is placed in less confident or adverse environments. Besides, more evaluation points across the full spectrum of the accuracy-robustness trade-off would also provide a more comprehensive view of model behavior, rather than just two evaluation points (i.e., standard training and adversarial training with default settings).

Therefore, practitioners look for convenient means to explore and flexibly calibrate the accuracy-robustness trade-off. Unfortunately, most defense methods are intensive to train, making it tedious or infeasible to naively train many different configurations and then pick the best. For instance, adversarial training is notoriously time-consuming zhang2019you, despite a few acceleration strategies (yet with some performance degradation) being proposed recently zhang2019you; shafahi2019adversarial.

1.1 Our contributions

Motivated by the above, the core problem we raise and address in this paper is:

How to quickly calibrate a trained model in-situ, to examine the achievable trade-offs between its standard and robust accuracies, without (re-)training it many times?

We present a novel Once-for-all Adversarial Training (OAT) framework that achieves this goal for the first time. OAT is built on top of adversarial training, yet augments it with a new model-conditional training approach. OAT views the weight hyper-parameter of the robust loss term as a user-specified input to the model. During training, it samples not only data points, but also model instances from the objective family, parameterized by different loss term weights. As a result, the model learns to condition its behavior and output on this specified hyper-parameter. Therefore, at testing time, we can adjust between different standard and robust accuracies “for free” in the same model, by simply switching the hyper-parameter input.

When developing OAT, one technical obstacle we discovered is that the resultant model would not achieve the same high standard/robust accuracy as it could when trained dedicatedly at a specific loss weight. We investigate the problem and find it to be caused by the unaligned and somewhat conflicting statistics between standard and adversarially learned features ilyas2019adversarial. That makes it challenging for OAT to implicitly pack different standard/adversarially trained models into one. In view of that, we customize a recent tool called dual batch normalization, originally proposed by xie2019adversarial for improving (standard) image recognition, to separate the standard and adversarial feature statistics in training. It proves to be an important building block for the success of OAT.

A further extension we study for OAT is to integrate it with the recently rising field of robust model compression or acceleration ye2019adversarial; gui2019adversarially; hu2020triple, for more flexible and controllable deployment. We augment the model-conditional training to further condition on another model compactness hyper-parameter, implemented as the channel width factor in yu2018slimmable. We call this augmented framework Once-for-all Adversarial Training and Slimming (OATS). OATS leads to trained models that can achieve in-situ trade-offs among accuracy, robustness and model complexity altogether, by adjusting the two hyper-parameter inputs (robustness weight and width factor). It is thus desirable for many resource-constrained, yet high-stakes applications.

Our contributions can be briefly summarized as below:


  • A novel OAT framework that addresses a new and important goal: in-situ “free” trade-off between robustness and accuracy at testing time. In particular, we demonstrate the importance of separating standard and adversarial feature statistics, when trying to pack their learning in one model.

  • An extension from OAT to OATS, that enables a joint in-situ trade-off among robustness, accuracy, and the computational budget.

  • Experimental results show that OAT/OATS achieve similar or even superior performance, when compared to dedicatedly trained models. Our approaches meanwhile cost only one model and no re-training. In other words, they are free but no worse. Ablations and visualizations are provided for more insights.

2 Preliminaries

Given the data distribution $\mathcal{D}$ over images $x$ and their labels $y$, a standard classifier (e.g., a DNN) $f$ with parameter $\theta$ maps an input image to classification probabilities, learned by empirical risk minimization (ERM):

$$\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\, \mathcal{L}\big(f(x;\theta),\, y\big),$$

where $\mathcal{L}$ is the cross-entropy loss by default: $\mathcal{L}\big(f(x;\theta), y\big) = -y^{\top}\log f(x;\theta)$ (with $y$ one-hot).

Adversarial training

Numerous methods have been proposed to enhance DNN adversarial robustness, among which adversarial training (AT) based methods madry2017towards are arguably some of the most successful ones. Most state-of-the-art AT algorithms zhang2019theoretically; madry2017towards; sinha2017certifying; rony2019decoupling; ding2020mma optimize a hybrid loss consisting of a standard classification loss and a robustness loss term:

$$\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[(1-\lambda)\,\mathcal{L}_c(\theta;\,x,y) + \lambda\,\mathcal{L}_a(\theta;\,x,y)\Big] \qquad (1)$$

where $\mathcal{L}_c$ denotes the classification loss over standard (or clean) images, while $\mathcal{L}_a$ is the loss encouraging robustness against adversarial examples; $\lambda$ is a fixed weight hyper-parameter. For example, in a popular form of AT called PGD-AT madry2017towards and its variants tsipras2018robustness:

$$\mathcal{L}_c = \mathcal{L}\big(f(x;\theta), y\big), \qquad \mathcal{L}_a = \max_{\delta\in\Delta}\, \mathcal{L}\big(f(x+\delta;\theta), y\big) \qquad (2)$$

where $\Delta$ is the allowed perturbation set. TRADES zhang2019theoretically uses the same $\mathcal{L}_c$ as PGD-AT, but replaces $\mathcal{L}_a$ from cross-entropy to a soft logits-pairing term. In MMA training ding2020mma, $\mathcal{L}_a$ is designed to maximize the margins of correctly classified images.

For all these state-of-the-art AT-type methods, the weight $\lambda$ has to be set to a fixed value before training, to determine the relative importance between standard and robust accuracies. Indeed, as many works have revealed tsipras2018robustness; zhang2019theoretically; schmidt2018adversarially, there seems to exist a potential trade-off between the standard and robust accuracies that a model can achieve. For example, in PGD-AT, $\lambda = 1$ is the default setting. As a result, the trained model will only demonstrate one specific combination of the two accuracies. If one wishes to trade some robustness for more accuracy gains (or vice versa), there seems to be no better option than re-training with another $\lambda$ from scratch.
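As a concrete reference, below is a minimal PyTorch-style sketch of the hybrid objective in Eqs. (1)-(2) with a PGD inner maximization. The helper names (pgd_attack, at_loss) and the default attack hyper-parameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Inner maximization of Eq. (2): l_inf-bounded PGD around the clean input.
    eps/alpha/steps are placeholder values; inputs are assumed to lie in [0, 1]."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def at_loss(model, x, y, lam):
    """Hybrid objective of Eq. (1): (1 - lambda) * clean loss + lambda * robust loss."""
    x_adv = pgd_attack(model, x, y)
    loss_clean = F.cross_entropy(model(x), y)     # L_c on clean images
    loss_adv = F.cross_entropy(model(x_adv), y)   # L_a on adversarial images
    return (1 - lam) * loss_clean + lam * loss_adv
```

With a fixed lam, this reduces to standard training (lam = 0) or common PGD-AT (lam = 1); OAT instead treats lam as a sampled, per-example model input, as described in Section 3.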

Conditional learning and inference

Several fields have explored the idea of conditioning a trained model’s inference on each testing input, for more controllability. Examples can first be found in the dynamic inference literature, whose main idea is to dynamically adjust the computational path per input. For example, teerapittayanon2016branchynet; huang2018multiscale; kaya2019shallow augment DNNs with multiple side branches, through which early predictions can be routed. figurnov2017spatially; wang2017skipnet allow an input to choose between passing through or skipping each layer. One prior work particularly relevant to ours, the slimmable network yu2018slimmable, aims at training a DNN with adjustable channel width at runtime. The authors replaced batch normalization (BN) by switchable BN (S-BN), which employs independent BNs for each different-width subnetwork. The new network is then trained by optimizing the average loss across all different-width subnetworks. Besides, many generative models mirza2014conditional; karras2019style are able to synthesize outputs conditioned on input class labels. Other fields where conditional inference is popular include visual question answering de2017modulating, visual reasoning perez2018film, and style transfer huang2017arbitrary; babaeizadeh2018adjustable; yang2019controllable; wang2019deep. All those methods take advantage of certain side information, transforming it to modulate some intermediate DNN features. For example, the feature-wise linear modulation (FiLM) perez2018film layers influence the network computation via a feature-wise affine transformation based on conditioning information.

3 Technical approach

3.1 Main idea: model-conditional training by joint model-data sampling

The goal of OAT is to learn a distribution of DNNs $f(\cdot;\theta,\lambda)$, conditioned on $\lambda$, so that different DNNs sampled from this learned distribution, while sharing the set of parameters $\theta$, could have different accuracy-robustness trade-offs depending on the input $\lambda$.

While standard DNN training samples data, OAT proposes to also sample one $\lambda$ per data point. Each time, the model is set to be conditioned on $\lambda$: concretely, it takes the hyperparameter $\lambda$ as an input, while using this same $\lambda$ to modulate the current AT loss function:

$$\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D},\;\lambda\sim P_{\lambda}} \Big[(1-\lambda)\,\mathcal{L}_c\big(f(x;\theta,\lambda),\,y\big) + \lambda\,\mathcal{L}_a\big(\theta,\lambda;\,x,y\big)\Big] \qquad (3)$$

where $P_{\lambda}$ is the distribution from which $\lambda$ is sampled during training. The gradient w.r.t. $\theta$ for each sample is generated from the loss parameterized by that sample's current $\lambda$. Therefore, OAT essentially optimizes a dynamic loss function, with $\lambda$ varying per sample, and forces weight sharing across iterations. We call it model-conditional training. Without loss of generality, we use the cross-entropy loss for $\mathcal{L}_c$ and $\mathcal{L}_a$ as PGD-AT did in Eq. (2): $\mathcal{L}_c = \mathcal{L}\big(f(x;\theta,\lambda), y\big)$ and $\mathcal{L}_a = \max_{\delta\in\Delta}\, \mathcal{L}\big(f(x+\delta;\theta,\lambda), y\big)$. Notice that OAT is also compatible with other forms of $\mathcal{L}_c$ and $\mathcal{L}_a$, such as those in TRADES and MMA training. The OAT algorithm is outlined in Algo. 1.

Figure 1: The overall OAT framework illustration. The hyperparameter $\lambda$ is used both as a model input and as the loss function hyperparameter. It is varied during training and can be specified by the user at testing time.

Our model sampling idea is inspired by, and may be intimately linked with, dropout srivastava2014dropout, although the two serve completely different purposes. Applying dropout to a DNN amounts to sampling a sub-network from the original one at each iteration, consisting of all units surviving dropout. So training a DNN with dropout can be seen as training a collection of exponentially many subnetworks, with extensive weight sharing, where each thinned network gets sparsely (or rarely) trained. The trained DNN then behaves approximately like an ensemble of those subnetworks, enhancing its generalization. Similarly, OAT also samples a model per data point: yet each sampled model is parameterized by a different loss function, rather than a different architecture. Further, we could intuitively interpret the OAT-trained model as an ensemble of those sampled models; however, the model output at run time is conditioned on the specified input $\lambda$, which could be considered as “model selection”, in contrast to the static “model averaging” interpretation in dropout.

How to encode and condition on $\lambda$

We first embed $\lambda$ from a scalar to a high-dimensional vector. While a one-hot label is a naive option, we use (nearly) orthogonal random vectors to encode different $\lambda$s, for better scalability and empirically better performance.

We adopt the module of Feature-wise Linear Modulation (FiLM) perez2018film to implement our conditioning of the model on $\lambda$. Suppose a layer's output feature map is $h$ with $C$ channels; FiLM performs a channel-wise affine transformation on $h$, with parameters $\gamma$ and $\beta$ dependent on the input $\lambda$:

$$\mathrm{FiLM}(h_c;\,\gamma_c,\beta_c) = \gamma_c(\lambda)\, h_c + \beta_c(\lambda),$$

where the subscript $c$ refers to the $c$-th feature map of $h$ and the $c$-th element of $\gamma$ and $\beta$. $\gamma(\cdot)$ and $\beta(\cdot)$ are two multi-layer perceptrons (MLPs) with Leaky ReLU. The output of FiLM is then passed on to the next layers in the network. In practice, we perform this affine transformation after every batch normalization (BN) layer, and each MLP has two fully-connected layers.
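A minimal sketch of such a FiLM-style conditioning module, in PyTorch. The embedding dimension, hidden width, and module name are placeholders rather than the authors' exact configuration.

```python
import torch.nn as nn

class FiLM(nn.Module):
    """Channel-wise affine modulation of a feature map by gamma(lambda) and beta(lambda)."""
    def __init__(self, emb_dim, num_channels, hidden_dim=128):  # sizes are placeholders
        super().__init__()
        def make_mlp():
            return nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.LeakyReLU(),
                                 nn.Linear(hidden_dim, num_channels))
        self.gamma_mlp, self.beta_mlp = make_mlp(), make_mlp()

    def forward(self, h, lam_emb):
        # h: (N, C, H, W) feature map (e.g., a BN output); lam_emb: (N, emb_dim) lambda embedding
        gamma = self.gamma_mlp(lam_emb).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        beta = self.beta_mlp(lam_emb).unsqueeze(-1).unsqueeze(-1)
        return gamma * h + beta
```

In the paper's setting this modulation is applied right after every BN layer, so each FiLM instance is paired with the BN it follows.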

3.2 Overcoming a unique bottleneck: standard and adversarial feature statistics

After implementing the preliminary OAT framework in Section 3.1, we made an intriguing observation: while varying $\lambda$ does produce different standard and robust accuracies, those achieved numbers are significantly degraded compared to dedicatedly trained models with fixed $\lambda$. Further investigation reveals that the “conflict” arising from packing different standard and adversarial features into one model seems to cause this bottleneck, and that splitting the BN layers can efficiently fix it.

Figure 2: Running mean (x-axis) and variance (y-axis) of the last BN layer from ResNet34 trained with PGD-AT and fixed $\lambda$ values on CIFAR-10. Each model trained with a different $\lambda$ corresponds to 512 elements (the BN mean/variance tensors), colored differently (see the legend).

To illustrate this conflict, we train ResNet34 using PGD-AT, with $\lambda$ in the objective (1) varying from 0 (standard training) to 1 (common setting for PGD-AT), on the CIFAR-10 dataset. We then visualize the statistics of the last BN layer, i.e., the running mean and running variance, as shown in Fig. 2. While smoothly changing $\lambda$ leads to continuous feature statistics transitions/shifts as expected, the feature statistics of $\lambda = 0$ (black dots) appear to be particularly isolated from those of all nonzero $\lambda$s; the gap is large even between $\lambda = 0$ and the smallest nonzero $\lambda$. Meanwhile, all nonzero $\lambda$s lead to largely overlapping feature statistics. That is also aligned with our experiments showing that the preliminary OAT tends to fit either only the clean images (resulting in degraded robust accuracy) or only the adversarial images (resulting in degraded standard accuracy). Therefore, we conjecture that the difference between standard ($\lambda = 0$) and adversarial ($\lambda \neq 0$) feature statistics might account for the challenge when we try to pack their learning together.
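For reference, the BN running statistics plotted in Fig. 2 can be read directly off a trained PyTorch model; a minimal sketch (assuming a standard ResNet built from BatchNorm2d layers):

```python
import torch.nn as nn

def last_bn_stats(model):
    """Return (running_mean, running_var) of the last BatchNorm2d layer in the model."""
    last_bn = None
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            last_bn = m  # modules() iterates in definition order, so this ends at the last BN
    return last_bn.running_mean.detach().cpu(), last_bn.running_var.detach().cpu()

# Scatter-plotting these (mean, variance) pairs for models trained at different
# lambda values reproduces the kind of comparison shown in Fig. 2.
```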

We notice a line of very recent works xie2019adversarial; xie2019intriguingl: despite their problem setting and goal being very different from ours, the authors also reported a similar discrepancy between standard and adversarial feature statistics. They crafted dual batch normalization (dual BN) as a solution, by separating the standard and adversarial feature statistics in two dedicated BNs. That was shown to improve standard accuracy for image recognition, with adversarial examples acting as a special data augmentation in their application setting.

  Input: Training set $\mathcal{D}$, model $f$, sampling distribution $P_\lambda$, maximal steps $T$.
  Output: Model parameter $\theta$.
  for $t = 1$ to $T$ do
     Sample a minibatch of $(x, y)$ from $\mathcal{D}$;
     Sample a minibatch of $\lambda$ from $P_\lambda$;
     Generate adversarial images $x_{\mathrm{adv}}$ (using PGD);
     Update network parameter $\theta$ by Eq. (3)
  end for
Algorithm 1 OAT Algorithm Outline
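A compact sketch of one OAT training step following Algo. 1, in PyTorch. It assumes a model whose forward pass accepts the $\lambda$ embedding as a second argument (and internally handles the dual-BN routing of Section 3.2), an encode function mapping $\lambda$ to its embedding, and a pgd_attack helper that also forwards the embedding; these names and the sampling set below are illustrative assumptions, not the released code.

```python
import torch
import torch.nn.functional as F

LAMBDA_SET = [0.0, 0.1, 0.2, 0.3, 0.4, 1.0]  # illustrative sampling set for lambda

def oat_step(model, encode, optimizer, x, y, pgd_attack):
    # Sample one lambda per example (element-wise, i.i.d.), then embed it.
    lambdas = torch.tensor(LAMBDA_SET, device=x.device)
    lam = lambdas[torch.randint(len(LAMBDA_SET), (x.size(0),), device=x.device)]
    lam_emb = encode(lam)                          # scalar -> high-dimensional code
    x_adv = pgd_attack(model, x, y, lam_emb)       # adversarial counterparts
    # The model conditions on lam_emb via FiLM and selects BN_c / BN_a internally,
    # following the routing policy described in Section 3.2.
    loss_clean = F.cross_entropy(model(x, lam_emb), y, reduction='none')
    loss_adv = F.cross_entropy(model(x_adv, lam_emb), y, reduction='none')
    loss = ((1 - lam) * loss_clean + lam * loss_adv).mean()  # Eq. (3), per-sample lambda
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```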

We hereby customize dual BN for our setting. We replace all BN layers with dual BN layers in the network. A dual BN consists of two independent BNs, $BN_c$ and $BN_a$, accounting for the standard and adversarial features, respectively. A switch layer follows to select one of the two BNs to be activated for the current sample. In xie2019adversarial; xie2019intriguingl, the authors only handle clean examples (i.e., $\lambda = 0$) as well as adversarial examples generated by “common” PGD-AT, i.e., $\lambda = 1$. Also, they only aim at improving standard accuracy at testing time. The switching policy is thus straightforward for their training and testing. However, OAT needs to take a variety of adversarial examples generated with all $\lambda$s between [0, 1], for both training and testing. Based on our observations from Fig. 2, at both training and testing time, we route $\lambda = 0$ cases through $BN_c$, while all $\lambda \neq 0$ cases go to $BN_a$. At testing time, the value of $\lambda$ is specified by the user, per his/her requirement or preference between the standard and robust accuracies: a larger $\lambda$ emphasizes robustness more, while a smaller $\lambda$ favors accuracy. As demonstrated in our experiments, such modified dual BN is an important contributor to the success of OAT.
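A minimal sketch of such a dual BN layer with its switch, using our own module and flag names:

```python
import torch.nn as nn

class DualBN2d(nn.Module):
    """Two independent BNs: BN_c for standard features, BN_a for adversarial features."""
    def __init__(self, num_channels):
        super().__init__()
        self.bn_c = nn.BatchNorm2d(num_channels)  # route for lambda == 0
        self.bn_a = nn.BatchNorm2d(num_channels)  # route for lambda != 0

    def forward(self, h, use_adv_bn):
        # Switch per Section 3.2: lambda = 0 cases go through BN_c, all others through BN_a.
        return self.bn_a(h) if use_adv_bn else self.bn_c(h)
```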

3.3 Extending from OAT to OATS: joint trade-off with model efficiency

Figure 3: The OATS framework illustration, with both hyperparameters $\lambda$ and the width factor as inputs. All superscripts indicate the width factor of the corresponding subnetwork. $\lambda$ controls whether to use $BN_c$ or $BN_a$. The width factor controls which subnetwork to use. For example, for a given width factor, adversarial images are generated using the subnetwork with that channel width, and both the clean and adversarial images are forwarded through that same-width subnetwork.
  Input: Training set $\mathcal{D}$, sampling distribution $P_\lambda$, model $f$, maximal steps $T$, a list of pre-defined width factors.
  Output: Network parameter $\theta$.
  for $t = 1$ to $T$ do
     Sample a minibatch of $(x, y)$ from $\mathcal{D}$;
     Sample a minibatch of $\lambda$'s from $P_\lambda$;
     Clear gradients: optimizer.zero_grad();
     for width factor in width factor list do
        Switch the S-BN to the current width factor on the network and extract the corresponding sub-network;
        Generate adversarial images $x_{\mathrm{adv}}$ (using PGD);
        Compute the loss in Eq. (3);
        Accumulate gradients: loss.backward();
     end for
     Update $\theta$ by Eq. (3): optimizer.step();
  end for
Algorithm 2 OATS Algorithm Outline

The increasing popularity of deploying DNNs onto resource-constrained devices, such as mobile phones, IoT cameras and outdoor robots, has raised higher demands for not only DNNs' performance but also their efficiency. Recent works ye2019adversarial; gui2019adversarially have emerged to compress models or reduce inference latency while minimally affecting their accuracy and robustness, pointing to the appealing goal of “co-designing” a model to be accurate, trustworthy and resource-friendly. However, adding the new efficiency dimension further complicates the design space, and makes it even harder to manually examine the possible combinations of model capacities and training strategies.

We extend the OAT methodology to a new framework called Once-for-all Adversarial Training and Slimming (OATS). The main idea is to augment the model-conditional training to further condition on another model compactness hyper-parameter. By adjusting the two hyper-parameter inputs (loss weight $\lambda$ and model compactness), models trained by OATS can simultaneously explore the spectrum along the accuracy, robustness and model efficiency dimensions, yielding an in-situ trade-off among all three.

Inspired by the state-of-the-art in-situ channel pruning method, the slimmable network yu2018slimmable, we implement the model compactness parameter as the channel width factor defined in yu2018slimmable. Each width factor corresponds to a subnetwork of the full network. For example, the subnetwork with width factor 0.5 only owns the (first) half of the channels in each layer. We could pre-define a set of high-to-low allowable widths, in order to meet from loose to stringent resource constraints. The general workflow for OATS, as shown in Fig. 3 and summarized in Algo. 2, is then similar to OAT, with the main difference that each BN (either $BN_c$ or $BN_a$) in OAT is replaced with a switchable batch normalization (S-BN) yu2018slimmable, which leads to independent BNs for the different-width sub-networks. As a result, each subnetwork uses its own $BN_c$ and $BN_a$. For example, assuming three different widths are used, every BN in the original backbone becomes two S-BNs in OATS, and hence a total of six BNs. To train the network, we minimize the average of the Eq. (3) loss over all different-width sub-networks.
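The following sketch shows how the dual BN and switchable BN ideas compose in OATS, again with our own naming and the three example widths used in the experiments:

```python
import torch.nn as nn

class SwitchableDualBN2d(nn.Module):
    """One (BN_c, BN_a) pair per width factor: with 3 widths, 6 BN layers in total."""
    def __init__(self, max_channels, width_factors=(0.5, 0.75, 1.0)):
        super().__init__()
        self.width_factors = list(width_factors)
        self.bn_c = nn.ModuleList([nn.BatchNorm2d(int(max_channels * w)) for w in width_factors])
        self.bn_a = nn.ModuleList([nn.BatchNorm2d(int(max_channels * w)) for w in width_factors])

    def forward(self, h, width_factor, use_adv_bn):
        # h already carries int(max_channels * width_factor) channels for the active sub-network.
        i = self.width_factors.index(width_factor)
        bn = self.bn_a[i] if use_adv_bn else self.bn_c[i]
        return bn(h)
```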

4 Experiments

4.1 Experimental setup

Datasets and models

We evaluate our proposed method on WRN-16-8 zagoruyko2016wide using SVHN netzer2011reading, and ResNet34 he2016deep using CIFAR-10 krizhevsky2009learning. Following chan2020jacobian, we also include the STL-10 dataset coates2011analysis, which has fewer training images but higher resolution, using WRN-40-2. All images are normalized to .

Adversarial training and evaluation

We utilize a 7-step PGD attack madry2017towards for both adversarial training and evaluation, keeping the same perturbation magnitude and attack step size (following madry2017towards) for all experiments. Other hyper-parameters (e.g., learning rates) can be found in Appendix A. Evaluation results on three other attacks are shown in Section 4.4.

Evaluation metrics

Standard Accuracy (SA): classification accuracy on the original clean test set. SA denotes the (default) accuracy. Robust Accuracy (RA): classification accuracy on adversarial images generated from the original test set. RA measures the robustness of the model. To more directly evaluate the trade-off between SA and RA, we define the SA-RA frontier, an empirical Pareto frontier between a model's achievable accuracy and robustness, by measuring the SAs and RAs of models dedicatedly trained by PGD-AT with different (fixed) $\lambda$ values. We could also vary the input $\lambda$ in OAT-trained models, and ideally we hope the resulting SA-RA trade-off curve to be as close to the SA-RA frontier as possible.

Sampling

Unless otherwise stated, $\lambda$s are uniformly sampled from a pre-chosen discrete set during training in an element-wise manner, i.e., all $\lambda$s in a training batch are i.i.d. sampled from that set. During testing, $\lambda$ could be any seen or unseen value in [0, 1], depending on the standard/robust accuracy requirements, as discussed in Section 3.2.

Figure 4: OAT: SA-RA trade-off of different methods on three datasets (panels: SVHN WRN16-8, CIFAR-10 ResNet34, STL-10 WRN40-2). $\lambda$ varies from the largest to the smallest value in the sampling set for the points from top-left to bottom-right on each curve. Results are also shown in a different form in Fig. 8 ($\lambda$-SA/RA curves) in Appendix B for the readers' reference.

4.2 Evaluation results for OAT

Evaluation results on three datasets are shown in Fig. 4. We compare three methods: the classical PGD-AT, our OAT with normal BN, and OAT with dual BN.

(a) SA
(b) RA
Figure 5: Generalization to unseen $\lambda$s of OAT models. Green and yellow circles are OAT models evaluated at $\lambda$s from their training sampling sets. Green and yellow stars are OAT models generalized to $\lambda$s outside their training sets. Red circles are PGD-AT models trained with fixed $\lambda$s.

Comparison with PGD-AT (SA-RA frontier)

As shown in Fig. 4, OAT (dual BN) models achieve a wide range of in-situ trade-offs between accuracy and robustness, very close to or even surpassing the SA-RA frontier (red curves in Fig. 4). For example, on the SVHN dataset, a single OAT (dual BN) model can be smoothly adjusted from the most robust state with 94.43% SA and 62.16% RA, to the most accurate state with 97.07% SA and 37.32% RA, by simply varying the input $\lambda$ value at testing time.

Effectiveness of dual BN for OAT

As shown in Fig. 4, OAT (dual BN) generally achieves a much better SA-RA trade-off than OAT (normal BN), showing the necessity of dual BN in model-conditional adversarial learning. One interesting observation is that OAT (normal BN) models can easily collapse to fitting only clean or only adversarial images, which is in alignment with our observations on the difference between standard and robust feature statistics shown in Fig. 2. For example, on the SVHN dataset, across the considered range of $\lambda$, OAT (normal BN) collapses to almost identical RA values (ranging from 61.81% to 62.62%) with low SA at all times (blue curve in the first column of Fig. 4). In other words, the model “collapses” to fitting adversarial images and overlooking clean images regardless of the $\lambda$ value. In contrast, OAT with dual BN can successfully fit different SA-RA trade-offs given different $\lambda$ inputs. Similar observations are also drawn from the other two datasets.

Sampling set of $\lambda$ in OAT training

Our default sampling set is chosen to be sparse for training efficiency. To investigate the influence of the sampling set on OAT, we also try a denser set. Results on the SVHN dataset are shown in Fig. 5. As we can see, OAT trained on the sparse set and on the denser set have almost identical performance (one has slightly better SA while the other has slightly better RA), and both have similar or superior performance compared with fixed-$\lambda$ PGD-AT. Hence, it is feasible to train OAT with more $\lambda$ samples, even sampling continuously, although the training time will inevitably grow (see Appendix C for more discussions). On the other hand, we find that training on the sparse set already yields good SA-RA performance, not only at the several considered $\lambda$ values, but also at unseen $\lambda$s: see the next paragraph.

Generalization to unseen $\lambda$

OAT samples $\lambda$ from a discrete set of values and embeds them into a high-dimensional vector space. One interesting question is whether the OAT model has good generalization ability to unseen $\lambda$ values within a similar range. If yes, we can let the OAT model explore an infinite continuous set (e.g., the whole interval [0, 1]) at inference time, allowing for a more fine-grained adjustable trade-off between accuracy and robustness. Experimental results on SVHN are shown in Fig. 5. The two OAT models are trained on the sparse and the denser sampling sets respectively, and we test their generalization to $\lambda$ values outside both training sets. We also compare their generalization performance with PGD-AT models dedicatedly trained at those held-out $\lambda$ values. As we can see, the two OAT models have similar generalization abilities on the held-out $\lambda$s, and both achieve similar or superior performance compared with the dedicatedly trained PGD-AT models, showing that sampling from the sparser set is already enough to achieve good generalization on unseen $\lambda$s.

More visualization and analysis can be found in the appendices (e.g., Jacobian saliency etmann2019connection, discussed in Section 4.5 with figures in Appendix D).

Figure 6: OATS: SA-RA trade-off of different methods on CIFAR-10 ResNet34 with different widths (panels: width=1.0, width=0.75, width=0.5). Left, middle, right columns are the full network, 0.75-width, and 0.5-width sub-networks respectively. $\lambda$ varies from the largest to the smallest value in the sampling set for the points from top-left to bottom-right on each curve. The same results are also shown in a different form in Fig. 9 ($\lambda$-SA/RA curves) in Appendix B for the readers' reference.

4.3 Evaluation results for OATS

Baseline method

We first design the baseline method, named PGD adversarial training and slimming (PGD-ATS), by replacing the classification loss in the original slimmable network with the adversarial training loss function in Eq. (1). Models trained by PGD-ATS have a fixed accuracy-robustness trade-off but can adjust the model width “for free” at test time. In contrast, OATS has in-situ trade-off among accuracy, robustness and model complexity altogether. (OATS models, at 1.18 GFLOPs, only have a tiny FLOPs overhead of around 1.7% on ResNet34, brought by the FiLM layers and MLPs, compared with PGD-ATS baseline models at 1.16 GFLOPs.) Three width factors, 0.5, 0.75 and 1.0, are used for both PGD-ATS and OATS for a fair comparison. (The original paper yu2018slimmable sets four widths, including a smallest factor of 0.25. However, we find that while the 0.25 width can still yield reasonable SA, its RA is too degraded to be meaningful, due to the overly small model capacity, as analyzed by schmidt2018adversarially. Therefore, we only discuss the three width factors 0.5, 0.75, and 1 in our setting.)

The results on CIFAR-10 are compared in Fig. 6. (Results of OATS with normal S-BN are not shown here since its performance is not comparable with the other methods and the curve would be totally off range; we include those results in Fig. 9 for the readers' reference.) As we can see, OATS achieves a very close SA-RA trade-off compared with PGD-ATS, under all channel widths. For example, the most robust model achievable by OATS at a given width (top-left corner point of the green curve) has even higher SA and RA than the most robust model achievable by PGD-ATS (top-left corner point of the red curve).

4.4 Evaluation on more attacks

In this section, we show that the advantage of OAT over the PGD-AT baseline holds across multiple different types of adversarial attacks. More specifically, we evaluate model robustness on three new attacks, besides the one (PGD-7) used in the main text: PGD-20 (we denote the k-step PGD attack as PGD-k for simplicity), MI-FGSM dong2018boosting, and FGSM goodfellow2014explaining. For the PGD-20 attack, we use the same hyper-parameters as PGD-7, except increasing the number of steps from 7 to 20, resulting in a stronger attack than PGD-7. For MI-FGSM, we set the perturbation level, iteration number and decay factor following the original paper dong2018boosting. For FGSM, we use a single-step attack at a fixed perturbation level. Results on the SVHN dataset are shown in Figure 7. Under all three attacks, OAT (dual BN) models can still achieve an in-situ trade-off between accuracy and robustness over a wide range, which remains close to or even surpasses the SA-RA frontier (PGD-AT baseline).

Figure 7: SA-RA trade-off under three different attacks (panels: PGD-20, MI-FGSM, FGSM) on the SVHN dataset. $\lambda$ varies from the largest to the smallest value in the sampling set for the points from top-left to bottom-right on each curve.

4.5 Visual interpretation by Jacobian saliency

Jacobian saliency, i.e., the alignment between the Jacobian matrix (the loss gradient with respect to the input) and the input image, is a property desired of robust DNNs both empirically tsipras2018robustness and theoretically etmann2019connection. We visualize and compare the Jacobian saliency of OAT and PGD-AT on SVHN, CIFAR-10 and STL-10 in Figures 10, 11 and 12 (in Appendix D), respectively. As we can see, the Jacobian matrices of OAT models align with the original images better as $\lambda$ (and hence the model robustness) increases. This phenomenon provides additional evidence that our OAT model has learned a smooth transformation from the most accurate model to the most robust model. We also find that, under the same $\lambda$, the OAT model has better or comparable Jacobian saliency compared with PGD-AT models.

5 Conclusion

This paper aims to address the new problem of achieving in-situ trade-offs between accuracy and robustness at testing time. Our proposed method, once-for-all adversarial training (OAT), is built on an innovative model-conditional adversarial training framework and applies a dual batch normalization structure to pack the conflicting standard and adversarial features into one model. We further generalize OAT to OATS, achieving an in-situ trade-off among accuracy, robustness and model complexity altogether. Extensive experiments show the effectiveness of our methods.

Broader Impact

Deep neural networks are notoriously vulnerable to adversarial attacks szegedy2013intriguing. With the growing usage of DNNs on security-sensitive applications, such as self-driving bojarski2016end and bio-metrics hu2015face, a critical concern has been raised to carefully examine model robustness against adversarial attacks, in addition to average accuracy on standard inputs. In this paper, we propose to tackle the new and challenging problem of how to quickly calibrate a trained model in-situ, in order to examine the achievable trade-offs between its standard and robust accuracies, without (re-)training it many times. Our proposed method is motivated by difficulties commonly met in real-world self-driving applications: how to adjust an autonomous agent in-situ, in order to meet standard/robust accuracy requirements that vary over contexts and time. For example, we may expect an autonomous agent to perceive and behave more cautiously (i.e., to prioritize improving its robustness) when it is placed in less confident or adverse environments. Also, our method provides a novel way to efficiently traverse the full accuracy-robustness spectrum, which would help more comprehensively and fairly compare models' behaviors under different trade-off hyper-parameters, without having to retrain. Our proposed methods can be applied to many high-stakes real-world applications, such as self-driving bojarski2016end, bio-metrics hu2015face, medical image analysis ronneberger2015u and computer-aided diagnosis mansoor2015segmentation.

References

Appendix A More detailed hyper-parameter settings

In this section, we provide more detailed hyper-parameter settings as a supplement to Section 4.1. All models on SVHN, CIFAR-10, and STL-10 are trained for 80, 200, and 200 epochs respectively. An SGD-with-momentum optimizer and a cosine annealing [loshchilov2016sgdr] learning rate scheduler are used for all experiments. The momentum and weight decay parameters are fixed across all experiments. We try a set of candidate learning rates for all experiments and report the results of the best performing hyper-parameter setting for each experiment.

Appendix B $\lambda$-accuracy plots

In this section, we provide a different way to present the results shown in Figures 4 and 6, by comparing the SA/RA of different methods under different $\lambda$s in Figures 8 and 9, for the readers' reference.

Figure 8: Comparison of the trade-off between accuracy and robustness of different methods on three datasets (columns: SVHN WRN16-8, CIFAR-10 ResNet34, STL-10 WRN40-2). Top and bottom rows show SA and RA under different $\lambda$s respectively.
Figure 9: Comparison of OATS with the baseline PGD-ATS on CIFAR-10 with the ResNet34 backbone (columns: width=1.0, width=0.75, width=0.5). Top and bottom rows show SA and RA under different $\lambda$s respectively. Left, middle, right columns are the full network, 0.75-width, and 0.5-width sub-networks respectively.

Appendix C More discussions on sampling set

Discrete vs. continuous sampling

Uniformly sampling $\lambda$ from the continuous interval [0, 1] achieves results similar to sampling from the discrete, sparse set (SA/RA differences are small on SVHN), but requires more epochs to converge. We also empirically find that sampling small $\lambda$s more densely converges faster.

OAT (normal BN) trained without $\lambda = 0$

As discussed in Section 3.2, standard ($\lambda = 0$) and adversarial ($\lambda \neq 0$) features have very different BN statistics, which accounts for the failure of OAT with normal BN (when trained on both $\lambda = 0$ and $\lambda \neq 0$) and motivates our dual BN structure. One natural question to ask is: will OAT (normal BN) achieve good performance when it is trained only on $\lambda$s unequal to 0? Experimental results show that OAT (normal BN) trained without $\lambda = 0$ achieves performance similar to the PGD-AT baselines on CIFAR-10 at nonzero $\lambda$s. But its best achievable SA is much lower than that of OAT with dual BN on CIFAR-10.

Appendix D Visual interpretation by Jacobian saliency

In this section, we compare Jacobian saliency of OAT with PGD-AT, as discussed in Section 4.5. Visualization results on SVHN, CIFAR10 and STL10 are shown in Figures 10, 11 and 12, respectively.

(a) Original images in SVHN test set
(b) Jacobian saliency maps of OAT models
(c) Jacobian saliency maps of PGD-AT models
Figure 10: Jacobian saliency maps of OAT and PGD-AT models on SVHN. For (b) and (c), each column shows the saliency maps of the corresponding image in the same column of (a); each row shows the saliency maps of models under a different $\lambda$ (varying from top row to bottom row).
(a) Original images in CIFAR-10 test set
(b) Jacobian saliency maps of OAT models
(c) Jacobian saliency maps of PGD-AT models
Figure 11: Jacobian saliency maps of OAT and PGD-AT models on CIFAR-10. For (b) and (c), each column shows the saliency maps of the corresponding image in the same column of (a); each row shows the saliency maps of models under a different $\lambda$ (varying from top row to bottom row).
(a) Original images in STL-10 test set
(b) Jacobian saliency maps of OAT models
(c) Jacobian saliency maps of PGD-AT models
Figure 12: Jacobian saliency maps of OAT and PGD-AT models on STL-10. For (b) and (c), each column shows the saliency maps of the corresponding image in the same column of (a); each row shows the saliency maps of models under a different $\lambda$ (varying from top row to bottom row).

Appendix E Ablation on encoding of $\lambda$

In this section, we investigate the influence of three different encoding schemes on OAT:

  • No encoding (None). $\lambda$ is fed to the model directly as a scalar, e.g., 0.1, 0.2, etc.

  • DCT encoding (DCT-d). The i-th $\lambda$ value in the sampling set is mapped to the i-th column of the d-dimensional DCT matrix [ahmed1974discrete]. For example, $\lambda = 0$ is mapped to the first column of the DCT matrix.

  • Random orthogonal encoding (RO-d). Similar to DCT encoding, the i-th $\lambda$ value is mapped to the i-th column of a d-dimensional random orthogonal matrix (a minimal sketch is given after this list).
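As a reference for the RO scheme, the sketch below builds such an encoder from a random orthogonal matrix; the dimension default and the example $\lambda$ set are placeholders (RO-128 is the default encoding used in our experiments).

```python
import numpy as np

def make_ro_encoder(lambdas, dim=128, seed=0):
    """Map each lambda in `lambdas` to a column of a random dim-dimensional orthogonal matrix."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # q has orthonormal columns
    return {lam: q[:, i].astype(np.float32) for i, lam in enumerate(lambdas)}

# Example (illustrative lambda set):
# codes = make_ro_encoder([0.0, 0.1, 0.2, 0.3, 0.4, 1.0])
# emb = codes[0.2]   # 128-dimensional encoding fed to the FiLM MLPs
```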

Results of OAT with different encoding schemes on CIFAR-10 are shown in Figure 13. As we can see, using an encoding generally achieves better SA and RA compared with no encoding. For example, the best SA achievable using RO-16 and RO-128 encodings are 93.16% and 93.68% respectively, both much higher than the no-encoding counterpart at 92.53%. We empirically find that RO-128 encoding achieves good performance and use it as the default encoding scheme in all our experiments.

(a) SA
(b) RA
Figure 13: Results of OAT with different encoding schemes on CIFAR-10.