Unlabeled Data Improves Adversarial Robustness

05/31/2019 ∙ by Yair Carmon, et al. ∙ Stanford University

We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semisupervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al. that shows a sample complexity gap between standard and robust classification. We prove that this gap does not pertain to labels: a simple semisupervised learning procedure (self-training) achieves robust accuracy using the same number of labels required for standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies by over 5 points in (i) ℓ_∞ robustness against several strong attacks via adversarial training and (ii) certified ℓ_2 and ℓ_∞ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels as well.


1 Introduction

The past few years have seen intense research interest in making models robust to adversarial examples [36]. Yet despite a wide range of proposed defenses, the state of the art in adversarial robustness is far from satisfactory. Recent work points towards sample complexity as a possible reason for the small gains in robustness: Schmidt et al. [34] show that in a simple model, learning a classifier with non-trivial adversarially robust accuracy requires substantially more samples than achieving good "standard" accuracy. Furthermore, recent empirical work obtains promising gains in robustness via transfer learning of a robust classifier from a larger labeled dataset [15]. While both theory and experiments suggest that more training data leads to greater robustness, following this suggestion can be difficult due to the cost of gathering additional data and, especially, of obtaining high-quality labels.

To alleviate the need for carefully labeled data, in this paper we study adversarial robustness through the lens of semi-supervised learning. Our approach is motivated by two basic observations. First, adversarial robustness essentially asks that predictors be stable around naturally occurring inputs; learning to meet such a stability constraint does not inherently require labels. Second, the added requirement of robustness fundamentally alters the regime where semi-supervision is useful. Prior work on semisupervised learning mostly focuses on the regime where labeled data provides only poor accuracy. In our adversarial setting, however, the labeled data alone already produces accurate (but not robust) classifiers. We can apply such classifiers to the unlabeled data and obtain useful pseudo-labels, which directly suggests the use of self-training, one of the oldest frameworks for semi-supervised learning [32], which consists of applying a supervised training method to the pseudo-labeled data. We provide theoretical and experimental evidence that self-training is effective for adversarial robustness.

On the theoretical side, we consider the simple d-dimensional Gaussian model of [34] with ℓ_∞-perturbations of magnitude ε. We scale the model so that a small number n₀ of labeled examples allows learning a classifier with nontrivial standard accuracy, while attaining any nontrivial robust accuracy requires far more examples. This implies a sample complexity gap in the high-dimensional regime. In this regime, we prove that self-training with sufficiently many unlabeled examples and just n₀ labels achieves high robust accuracy. Our analysis provides a refined perspective on the sample complexity barrier in this model: the increased sample requirement is exclusively on unlabeled data.

On the empirical side, we propose and experiment with robust self-training (RST), a natural extension of self-training for robustness. RST uses standard supervised training to obtain pseudo-labels and then feeds the pseudo-labeled data into a supervised training algorithm that targets adversarial robustness. We use TRADES [45] for heuristic ℓ_∞-robustness, and stability training [46] combined with randomized smoothing [6] for certified ℓ_2- and ℓ_∞-robustness.

For CIFAR-10 [17], we obtain 500K unlabeled images by mining the 80 Million Tiny Images dataset [38] with an image classifier. Using RST on the CIFAR-10 training set augmented with this additional unlabeled data, we outperform the state of the art in heuristic ℓ_∞-robustness against strong iterative attacks by 7%. In terms of certified ℓ_2-robustness, RST outperforms our fully supervised baseline by 3–5% and beats the previous state-of-the-art numbers by an even larger margin. Finally, we also match the state-of-the-art certified ℓ_∞-robustness while improving on the corresponding standard accuracy by over 16%. We show that some natural alternatives, such as virtual adversarial training [24] and aggressive data augmentation, do not perform as well as RST. We also study the sensitivity of RST to the amount and relevance of the unlabeled data.

Experiments with SVHN show similar gains in robustness from RST with unlabeled data. Here, we apply RST by removing the labels from the 531K extra training images and observe increases in robust accuracy compared to the baseline that uses only the labeled 73K core training set. Swapping the pseudo-labels for the true SVHN extra labels increases these accuracies by at most one additional percentage point. This confirms that the majority of the benefit from extra data comes from the inputs rather than the labels.

Before proceeding to the details of our theoretical results in Section 3, we briefly introduce relevant background in Section 2. Sections 4 and 5 then describe our adversarial self-training approach and provide comprehensive experiments on CIFAR-10 and SVHN. We conclude the paper in Section 6.

2 Setup

Semi-supervised classification task.

We consider the task of mapping an input x ∈ X to a label y ∈ Y. Let P denote the underlying distribution of (x, y) pairs, and let P_x denote its marginal on x. Given training data consisting of (i) n labeled examples (x_1, y_1), …, (x_n, y_n) drawn from P and (ii) ñ unlabeled examples x̃_1, …, x̃_ñ drawn from P_x, the goal is to learn a classifier f_θ: X → Y in a model family parameterized by θ.

Error metrics.

The standard quality metric for a classifier f_θ is its error probability,

err_standard(θ) := P( f_θ(x) ≠ y ).   (1)

We also evaluate classifiers on their performance on adversarially perturbed inputs. In this work, we allow perturbations in a norm ball B_ε(x) of radius ε around the input x, and define the corresponding robust error probability,

err_robust^ε(θ) := P( ∃ x′ ∈ B_ε(x) such that f_θ(x′) ≠ y ).   (2)

In this paper we study ℓ_∞ and ℓ_2 perturbations.

Self-training.

Consider a supervised learning algorithm A that maps a dataset (X, Y) to a parameter θ̂. Self-training is the straightforward extension of A to a semi-supervised setting, and consists of the following two steps. First, obtain an intermediate model θ̂_intermediate = A(X, Y), and use it to generate pseudo-labels ỹ_i = f_{θ̂_intermediate}(x̃_i) for the unlabeled data x̃_1, …, x̃_ñ. Second, combine the data and pseudo-labels to obtain a final model θ̂_final = A([X, X̃], [Y, Ỹ]).
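For concreteness, the following minimal Python sketch implements these two steps for any supervised learner with a scikit-learn-style fit/predict interface (an illustrative example, not code from the paper):

import numpy as np

def self_train(make_learner, x_labeled, y_labeled, x_unlabeled):
    # Step 1: fit an intermediate model on the labeled data only.
    intermediate = make_learner()
    intermediate.fit(x_labeled, y_labeled)
    # Use the intermediate model to generate pseudo-labels for the unlabeled inputs.
    pseudo_labels = intermediate.predict(x_unlabeled)
    # Step 2: rerun the same supervised learner on labeled + pseudo-labeled data.
    x_all = np.concatenate([x_labeled, x_unlabeled])
    y_all = np.concatenate([y_labeled, pseudo_labels])
    final = make_learner()
    final.fit(x_all, y_all)
    return final

For example, self_train(lambda: LogisticRegression(), ...) recovers standard self-training with logistic regression; robust self-training (Section 4) will instead use a robust training method in the second step.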

3 Theoretical results

In this section, we consider a simple high-dimensional model studied in [34], which is the only known formal example of an information-theoretic sample complexity gap between standard and robust classification. For this model, we demonstrate the value of unlabeled data—a simple self-training procedure achieves high robust accuracy, when achieving non-trivial robust accuracy using the labeled data alone is impossible.

Gaussian model.

We consider a binary classification task where the label y is uniform on {−1, +1} and x | y ~ N(y·θ*, σ² I_d) for a vector θ* ∈ R^d and coordinate noise variance σ². We are interested in the standard error (1) and the robust error (2) for ℓ_∞ perturbations of size ε.

Parameter setting.

We choose the model parameters to meet the following desiderata: (i) some (difficult to learn) classifier achieves very high robust and standard accuracies, (ii) using n₀ labeled examples we can learn a classifier with non-trivial standard accuracy, and (iii) we require many more than n₀ examples to learn a classifier with nontrivial robust accuracy. As shown in [34], the following parameter setting meets the desiderata,

(3)

When interpreting this setting it is useful to think of the perturbation magnitude ε as fixed and of the dimension d as a large number, i.e., a highly overparameterized regime.

3.1 Supervised learning in the Gaussian model

We briefly recapitulate the sample complexity gap described in [34] for the fully supervised setting.

Learning a simple linear classifier.

We consider linear classifiers of the form f_θ(x) = sign(⟨θ, x⟩). Given labeled data (x_1, y_1), …, (x_n, y_n), we form the following simple classifier

θ̂_n := (1/n) Σ_{i=1}^{n} y_i x_i.   (4)

We achieve nontrivial standard accuracy using n₀ examples; see Section A.2 for a proof of the following proposition (as well as detailed rates of convergence).

Proposition 1.

There exists a numerical constant such that for all ,

Moreover, as the following theorem states, no learning algorithm can produce a classifier with nontrivial robust accuracy without observing far more examples. Thus, a sample complexity gap forms as d grows.

Theorem 1 ([34]).

Let A_n be any learning rule mapping a dataset S of n examples to a classifier. Then,

(5)

where the expectation is with respect to the random draw of S as well as possible randomization in A_n.

3.2 Semi-supervised learning in the Gaussian model

We now consider the semi-supervised setting with n₀ labeled examples and ñ additional unlabeled examples. We apply the self-training methodology described in Section 2 to the simple learning rule (4): our intermediate classifier θ̂_intermediate is obtained by applying (4) to the labeled data, and we use it to generate pseudo-labels ỹ_i = sign(⟨θ̂_intermediate, x̃_i⟩) for the unlabeled examples x̃_1, …, x̃_ñ. We then apply learning rule (4) to the pseudo-labeled data to obtain our final semi-supervised classifier θ̂_final. The following theorem guarantees that θ̂_final achieves high robust accuracy.

Theorem 2.

There exists a numerical constant such that for , labeled data and additional unlabeled data,

Compared to the fully supervised case, the self-trained classifier requires only a constant factor more input examples, and far fewer labels. Intuitively, the self-trained classifier succeeds because the intermediate classifier produces pseudo-labels that are (by Proposition 1) correct strictly more often than not. As ñ grows, the noise averages out while a nonzero signal component remains, and so the angle between θ̂_final and θ* goes to zero. By virtue of our parameter scaling, this guarantees very high robust and standard accuracies. We provide a rigorous proof and rates of convergence in Section A.4. We remark that other learning techniques, such as EM and PCA, can also leverage unlabeled data in this model; the self-training procedure we describe is similar to two steps of EM [8].
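To make this intuition concrete, the following illustrative numpy simulation (not from the paper; the dimension, noise level, and sample sizes are hypothetical and chosen only so the effect is visible at small scale) applies the learning rule (4) to a few labeled examples, pseudo-labels a larger unlabeled set, and reports how well each estimator aligns with θ*:

import numpy as np

rng = np.random.default_rng(0)
d, sigma = 2_000, 10.0                        # hypothetical dimension and noise level
theta_star = np.ones(d)                       # the (unknown) signal direction

def sample(n):
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * theta_star + sigma * rng.standard_normal((n, d))
    return x, y

def fit(x, y):                                # learning rule (4): average of y_i * x_i
    return (y[:, None] * x).mean(axis=0)

def alignment(theta):                         # cosine of the angle between theta and theta_star
    return theta @ theta_star / (np.linalg.norm(theta) * np.linalg.norm(theta_star))

x_lab, y_lab = sample(20)                     # a few labeled examples
theta_int = fit(x_lab, y_lab)                 # intermediate (labels-only) classifier

x_unl, _ = sample(10_000)                     # many unlabeled examples
pseudo = np.sign(x_unl @ theta_int)           # pseudo-labels from the intermediate classifier
theta_final = fit(x_unl, pseudo)              # self-trained classifier

print(f"alignment with theta*: labels only {alignment(theta_int):.2f}, "
      f"self-trained {alignment(theta_final):.2f}")

With these particular values the alignment typically improves from roughly 0.4 to above 0.95, mirroring the statement of Theorem 2 at toy scale.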

In Section A.5 we study a setting where only a fraction of the unlabeled data is relevant to the task, which we model as having a signal component. We show that for any fixed fraction of relevant data, high robust accuracy is still possible, but the required number of relevant examples grows as the fraction shrinks. This demonstrates that irrelevant data can significantly impede self-training, but does not stop it completely.

4 Semi-supervised learning of robust neural networks

Existing adversarially robust training methods are designed for the supervised setting. In this section, we use these methods to leverage additional unlabeled data by adapting the self-training framework described in Section 2.

4.1 Robust self-training

Input: Labeled data (x_1, y_1), …, (x_n, y_n) and unlabeled data x̃_1, …, x̃_ñ

Parameters: Standard loss L_standard, robust loss L_robust and unlabeled weight w

1: Learn θ̂_intermediate by minimizing Σ_{i=1}^{n} L_standard(θ, x_i, y_i)
2: Generate pseudo-labels ỹ_i = f_{θ̂_intermediate}(x̃_i) for i = 1, …, ñ
3: Learn θ̂_final by minimizing Σ_{i=1}^{n} L_robust(θ, x_i, y_i) + w Σ_{i=1}^{ñ} L_robust(θ, x̃_i, ỹ_i)
Meta-Algorithm 1 Robust self-training

Meta-Algorithm 1 summarizes robust self-training. In contrast to standard self-training, we use a different supervised learning method in each stage, since the intermediate and the final classifiers have different goals. In particular, the only goal of θ̂_intermediate is to generate high-quality pseudo-labels for the (non-adversarial) unlabeled data. Therefore, we perform standard training in the first stage and robust training in the second. The hyperparameter w allows us to upweight the labeled data, which in some cases may be more relevant to the task and will usually have more accurate labels.
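A minimal sketch of the final training objective (step 3 of Meta-Algorithm 1) in Python; the function names and interface here are ours and purely illustrative, not the authors' code:

def rst_final_stage_loss(model, robust_loss, labeled_batch, pseudo_labeled_batch, w):
    # Apply the same robust per-example loss to the labeled batch and to the
    # pseudo-labeled batch, weighting the pseudo-labeled term by w.
    x_l, y_l = labeled_batch            # ground-truth labels
    x_u, y_u = pseudo_labeled_batch     # pseudo-labels from the intermediate model
    loss_labeled = robust_loss(model, x_l, y_l).mean()
    loss_pseudo = robust_loss(model, x_u, y_u).mean()
    return loss_labeled + w * loss_pseudo

Setting w below 1 downweights the pseudo-labeled data relative to the labeled data, which is the sense in which w "upweights" the labeled data above.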

4.2 Instantiating robust self-training

Each stage of robust self-training performs supervised learning, allowing us to borrow ideas from the literature on standard and robust supervised training. We consider neural networks of the form f_θ(x) = argmax_y p_θ(y | x), where p_θ(· | x) is a probability distribution over the class labels.

Standard loss.

As is common, we use the multi-class logarithmic loss for standard supervised learning, L_standard(θ, x, y) = −log p_θ(y | x).

Robust loss.

For the supervised robust loss, we use a robustness-promoting regularization term proposed in [45] and closely related to earlier proposals in [46, 24, 16]. The robust loss is

L_robust(θ, x, y) = L_standard(θ, x, y) + β L_reg(θ, x), where L_reg(θ, x) := max_{x′ ∈ B_ε(x)} D_KL( p_θ(· | x) ‖ p_θ(· | x′) ).   (6)

The regularization term¹ forces predictions to remain stable within the perturbation ball B_ε(x), and the hyperparameter β balances the robustness and accuracy objectives. We consider two approximations for the maximization in L_reg; a short code sketch of both appears after the list below.

¹ Zhang et al. [45] write the regularization term slightly differently, but their open source implementation follows (6).

  1. Adversarial training: a heuristic defense via approximate maximization.

    We focus on ℓ_∞ perturbations and use the projected gradient method to approximate the regularization term of (6),

    L_reg(θ, x) ≈ D_KL( p_θ(· | x) ‖ p_θ(· | x̂_adv) ),   (7)

    where x̂_adv is obtained via projected gradient ascent on the KL divergence. Empirically, performing approximate maximization during training is effective in finding classifiers that are robust to a wide range of attacks [23].

  2. Stability training: a certified defense via randomized smoothing.

    Alternatively, we consider stability training [46, 21], where we replace the maximization over small perturbations with much larger additive random noise drawn from N(0, σ² I),

    L_reg_stab(θ, x) := E_{x′ ~ N(x, σ² I)} D_KL( p_θ(· | x) ‖ p_θ(· | x′) ).   (8)

    Let f_θ be the classifier obtained by minimizing the robust loss with this regularizer. At test time, we use the following smoothed classifier,

    g_θ(x) := argmax_y  P_{x′ ~ N(x, σ² I)}( f_θ(x′) = y ).   (9)

    Improving on previous work [19, 21], Cohen et al. [6] prove that robustness of f_θ to large random perturbations (the goal of stability training) implies certified ℓ_2 adversarial robustness of the smoothed classifier g_θ.
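The sketch below (PyTorch-style, our own illustrative code rather than the authors' implementation; epsilon, step_size, num_steps and noise_sd are placeholder values) shows one way to implement the two approximations (7) and (8), assuming a model that returns logits:

import torch
import torch.nn.functional as F

def kl_to_clean(model, x_clean, x_perturbed):
    # D_KL( p_theta(.|x_clean) || p_theta(.|x_perturbed) ), averaged over the batch.
    p_clean = F.softmax(model(x_clean), dim=1)
    log_p_pert = F.log_softmax(model(x_perturbed), dim=1)
    return F.kl_div(log_p_pert, p_clean, reduction="batchmean")

def adversarial_regularizer(model, x, epsilon=8/255, step_size=2/255, num_steps=10):
    # Approximate the maximization in (6) by projected gradient ascent on the KL term (eq. 7).
    x_adv = x + epsilon * torch.empty_like(x).uniform_(-1, 1)
    for _ in range(num_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(kl_to_clean(model, x, x_adv), x_adv)
        x_adv = x_adv.detach() + step_size * grad.sign()      # ascent step in l_inf geometry
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)      # project back onto the l_inf ball
        x_adv = x_adv.clamp(0, 1)                             # keep pixels in a valid range
    return kl_to_clean(model, x, x_adv)

def stability_regularizer(model, x, noise_sd=0.25):
    # Replace the maximization with large additive Gaussian noise (eq. 8, one-sample estimate).
    return kl_to_clean(model, x, x + noise_sd * torch.randn_like(x))

In training, either regularizer would be added to the standard loss with weight β as in (6).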

5 Experiments

In this section, we empirically evaluate robust self-training (RST) and show that it leads to consistent and significant improvements in robust accuracy, on both CIFAR-10 [17] and SVHN [43], and with both adversarial training and stability training. For CIFAR-10, we mine unlabeled data from 80 Million Tiny Images and study in depth the strengths and limitations of RST. For SVHN, we simulate unlabeled data by removing labels and show that with RST the harm of removing the labels is small; this indicates that most of the gain comes from additional inputs rather than additional labels. Our experiments build on open source code from [45, 6].

Evaluating heuristic defenses.

We evaluate our adversarially trained models and other heuristic defenses on their performance against the strongest known attacks, namely the projected gradient method [23], denoted PG, and the Carlini-Wagner attack [4], denoted CW.

Evaluating certified defenses.

For models trained against random noise, we evaluate the certified ℓ_2 robust accuracy of the corresponding smoothed classifier (9). We perform the certification using the randomized smoothing protocol described in [6].
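For reference, the following simplified sketch follows the spirit of the prediction-and-certification procedure of Cohen et al. [6], but is not their implementation: it uses a Hoeffding bound in place of their exact binomial confidence interval, and the default parameters are placeholders rather than the values used in the paper.

import math
import torch
from scipy.stats import norm

def noisy_class_counts(base_classifier, x, num_samples, noise_sd, num_classes, batch_size=100):
    # Count how often the base classifier predicts each class on Gaussian-perturbed copies of x.
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        remaining = num_samples
        while remaining > 0:
            b = min(batch_size, remaining)
            noisy = x.unsqueeze(0) + noise_sd * torch.randn((b, *x.shape), device=x.device)
            preds = base_classifier(noisy).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes).cpu()
            remaining -= b
    return counts

def certify(base_classifier, x, noise_sd=0.25, n0=100, n=10_000, alpha=1e-3, num_classes=10):
    # Guess the top class of the smoothed classifier from a small sample, then certify it.
    top = noisy_class_counts(base_classifier, x, n0, noise_sd, num_classes).argmax().item()
    counts = noisy_class_counts(base_classifier, x, n, noise_sd, num_classes)
    # Lower confidence bound on P(f(x + noise) = top), here via Hoeffding's inequality.
    p_lower = counts[top].item() / n - math.sqrt(math.log(1 / alpha) / (2 * n))
    if p_lower <= 0.5:
        return top, 0.0                              # abstain: nothing is certified
    return top, noise_sd * norm.ppf(p_lower)         # certified l_2 radius around x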

Evaluating variability.

We repeat training 3 times and report accuracy as X ± Y, with X the median across runs and Y half the difference between the minimum and maximum.
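In code, this aggregation is simply (illustrative):

def report(run_accuracies):
    # run_accuracies: the accuracy of each of the 3 training runs.
    lo, mid, hi = sorted(run_accuracies)
    return mid, (hi - lo) / 2        # reported as "mid ± half-range"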

5.1 Cifar-10

5.1.1 Sourcing unlabeled data

To obtain unlabeled data distributed similarly to the CIFAR-10 images, we use the 80 Million Tiny Images (80M-TI) dataset [38], of which CIFAR-10 is a manually labeled subset. However, most images in 80M-TI do not correspond to CIFAR-10 image categories. To select relevant images, we train an 11-way classifier to distinguish the ten CIFAR-10 classes and an 11th 'non-CIFAR-10' class, using a Wide ResNet 28-10 model [44] (the same architecture as in our experiments below). For each class, we select an additional 50K images from 80M-TI using the trained model's predicted scores, excluding any image close to the CIFAR-10 test set (see Section B.6 for details). This yields the 500K unlabeled images that we add to the 50K CIFAR-10 training set when performing RST. We provide a detailed description of the data sourcing process in Section B.6.
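A simplified sketch of the selection step (our own illustration; the released code may differ, and the ranking details here are assumptions):

import numpy as np

def select_per_class(scores, per_class=50_000, num_cifar_classes=10):
    # scores: array of shape (num_candidates, 11) of predicted class scores from the
    # 11-way selection model; the 11th column is the 'non-CIFAR-10' class.
    selected = {}
    for c in range(num_cifar_classes):
        ranked = np.argsort(-scores[:, c])          # candidates with highest score for class c first
        selected[c] = ranked[:per_class]            # keep the top candidates for this class
    return selected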

Model | PG (attack of [45]) | PG (attack of [23]) | PG (tuned) | CW [4] | Best attack | No attack
RST_adv(50K+500K) | 63.1 | 63.1 | 62.5 | 64.9 | 62.5 ± 0.1 | 89.7 ± 0.1
TRADES [45] | 55.8 | 56.6 | 55.4 | 65.0 | 55.4 | 84.9
Adv. pre-training [15] | 57.4 | 58.2 | 57.7 | - | 57.4 | 87.1
Madry et al. [23] | 45.8 | - | - | 47.8 | 45.8 | 87.3
Standard self-training | - | 0.3 | 0 | - | 0 | 96.4
Table 1: CIFAR-10 test accuracy under different optimization-based ℓ_∞ attacks of magnitude ε = 8/255. Robust self-training (RST) with 500K unlabeled Tiny Images outperforms the state-of-the-art robust models in terms of robustness as well as standard accuracy. Standard self-training with the same data does not provide robustness. A projected gradient attack with 1K restarts reduces the accuracy of the adversarially pre-trained model to 52.9%, evaluated on 10% of the test set [15].
Figure 1: Certified CIFAR-10 test accuracy under all ℓ_2 and ℓ_∞ attacks. (a) Certified accuracy vs. ℓ_2 radius, certified via randomized smoothing [6]; shaded regions indicate variation across 3 runs; accuracy at ℓ_2 radius 0.435 implies ℓ_∞ accuracy at radius 2/255. (b) Comparison to the state of the art in certified ℓ_∞ defense:

Model | Certified acc. at ε_∞ = 2/255 | Standard acc.
RST_stab(50K+500K) | 63.8 ± 0.5 | 80.7 ± 0.3
Baseline_stab(50K) | 58.6 ± 0.4 | 77.9 ± 0.1
Wong et al. (single) [41] | 53.9 | 68.3
Wong et al. (ensemble) [41] | 63.6 | 64.1
IBP [14] | 50.0 | 70.2
5.1.2 Benefit of unlabeled data

We perform robust self-training using the unlabeled data described above. We use a Wide ResNet 28-10 architecture for both the intermediate pseudo-label generator and the final robust model. For adversarial training, we compute the regularization term exactly as in [45] with ε = 8/255, and denote the resulting model RST_adv(50K+500K). For stability training, we use additive Gaussian noise as in (8) and denote the resulting model RST_stab(50K+500K). We provide training details in Section B.1.

Robustness of RST_adv against strong attacks.

In Table 1, we report the accuracy of RST_adv(50K+500K) and of the best models in the literature against various strong attacks at ε = 8/255. The first two PG columns correspond to the attack configurations used in [45] and [23], respectively, and we apply the Carlini-Wagner attack CW [4] on a subset of random test examples, using the implementation of [27] that performs a search over attack hyperparameters. We also tune a PG attack against RST_adv to maximally reduce its accuracy (the third PG column; see Section B.3 for details).

RST_adv(50K+500K) gains 7% over TRADES [45], which we can directly attribute to the unlabeled data (see Section B.4). In Section C.6 we also show that this gain holds over different attack radii. The model of Hendrycks et al. [15] is based on ImageNet adversarial pretraining and is less directly comparable to ours due to the differences in external data and training method. Finally, we perform standard self-training using the unlabeled data, which offers a moderate 0.4% improvement in standard accuracy over the intermediate model but is not adversarially robust; see Section C.5.

Certified robustness of RST_stab.

Figure 1a shows the certified ℓ_2 robust accuracy as a function of the perturbation radius for different models. We compare against the model of [6], which has the highest previously reported certified accuracy, and against Baseline_stab(50K), a model that we trained using only the CIFAR-10 training set and the same training configuration as RST_stab. RST_stab(50K+500K) improves on Baseline_stab(50K) by 3–5%. The gains of Baseline_stab over the previous state of the art are due to a combination of better architecture, hyperparameters and training objective (see Section B.5). The certified ℓ_2 robustness of RST_stab is strong enough to imply state-of-the-art certified ℓ_∞ robustness via elementary norm bounds. In Figure 1b we compare to the state of the art in certified ℓ_∞ defense, showing a 10% improvement over previously published single models and performance on par with the cascade approach of [41]. We also outperform the cascade model's standard accuracy by over 16%.

5.1.3 Comparison to alternatives and ablation studies
Consistency-based semisupervised learning (Section C.1).

Virtual adversarial training (VAT), a state-of-the-art method for (standard) semisupervised training of neural networks [24, 26], is easily adapted to the adversarially robust setting. We train models using adversarial- and stability-flavored adaptations of VAT and compare them to their robust self-training counterparts. We find that the VAT approach offers only limited benefit over fully supervised robust training, and that robust self-training offers 3–6% higher accuracy.

Data augmentation (Section C.2).

In the low-data/standard-accuracy regime, strong data augmentation is competitive with and complementary to semisupervised learning [7, 42], as it effectively increases the sample size by generating diverse plausible inputs. It is therefore natural to compare state-of-the-art data augmentation (on the labeled data only) to robust self-training. We consider two popular schemes: Cutout [10] and AutoAugment [7]. While they provide significant benefit to standard accuracy, neither augmentation scheme improves performance when combined with robust training.

Relevance of unlabeled data (Section C.3).

The theoretical analysis in Section 3 suggests that self-training performance may degrade significantly in the presence of irrelevant unlabeled data; other semi-supervised learning methods share this sensitivity [26]. To measure the effect on robust self-training, we mix our unlabeled data with different amounts of random images from 80M-TI and compare the performance of the resulting models. We find that stability training is more sensitive than adversarial training, and that both methods still yield noticeable robustness gains with roughly 50% relevant data.

Amount of unlabeled data (Section C.4).

Finally, we perform robust self-training with varying amounts of unlabeled data and make two main observations. First, 100K unlabeled images provide roughly half the gain of 500K unlabeled images, indicating diminishing returns as the amount of data grows. However, as we report in Appendix C.4, hyperparameter tuning issues make it difficult to assess how performance trends with the amount of data.

5.2 Street View House Numbers (SVHN)

The SVHN dataset [43] is naturally split into a core training set of about 73K images and an 'extra' training set of about 531K easier images. In our experiments, we compare three settings: (i) robust training on the core training set only, (ii) robust self-training with the core training set and the extra training images stripped of their labels, and (iii) robust training on all of the labeled SVHN training data. As in CIFAR-10, we experiment with both adversarial training and stability training.

Beyond validating the benefit of additional data, our SVHN experiments measure the loss inherent in using pseudo-labels in lieu of true labels. Figure 2 summarizes the results: the unlabeled extra data provides significant gains in robust accuracy, and the cost of using pseudo-labels instead of true labels is below 1%. This reaffirms our intuition that in the regimes of interest, accurate labels are not crucial for improving robustness. We give a detailed account of our SVHN experiments in Appendix D, where we also compare our results to the literature.

Model (adversarial training) | Robust acc. under PG attack | No attack
Robust training, 73K labeled only | 75.3 ± 0.4 | 94.7 ± 0.2
Robust self-training, 73K labeled + 531K unlabeled | 86.0 ± 0.1 | 97.1 ± 0.1
Robust training, all 604K labeled | 86.4 ± 0.2 | 97.5 ± 0.1
Figure 2: SVHN test accuracy for robust training without the extra data, with the unlabeled extra data (robust self-training), and with the labeled extra data. Left: adversarial training; accuracies under projected gradient attack. Right: stability training; certified accuracies as a function of perturbation radius. Most of the gain from the extra data comes from the unlabeled inputs.

6 Discussion

6.1 Related work

Semisupervised learning.

In the rich semisupervised learning literature, a recent and successful family of approaches enforces consistency in the model's predictions under various perturbations of the unlabeled data [24, 42], or over the course of training [37, 33, 18]. While some works show modest gains from variants of self-training [20], the more sophisticated consistency-based approaches are considered more successful [26]. However, most previous work on semisupervised learning considers a regime where labeled data is scarce and standard supervised learning cannot achieve good accuracy. In this work, we consider the very different regime of adversarial robustness, and observe that robust self-training outperforms consistency-based regularization, even though the latter is naturally applicable to the robust setting. We note that there are several other approaches to semisupervised learning, such as transductive SVMs, graph-based methods, and generative modeling, surveyed in [5, 47].

Training robust classifiers.

Adversarial examples first appeared in [36], and prompted a host of “defenses” and “attacks”. While several defenses were broken by subsequent attacks [4, 1, 3], the general approach of adversarial training [23, 35, 45] empirically seems to offer gains in robustness. Other lines of work attain certified robustness, though often at a cost to empirical robustness compared to heuristics [29, 40, 30, 41, 14]. Recent work by Hendrycks et al. [15] shows that even though pre-training has limited value for standard accuracy on benchmarks, adversarial pre-training is effective. We complement this work by showing that a similar conclusion holds for semisupervised learning (both practically and theoretically in a stylized model), and extends to certified robustness as well.

Barriers to robustness.

Schmidt et al. [34] show a sample complexity barrier to robustness in a stylized setting. We observed that in this model, unlabeled data is as useful for robustness as labeled data; this observation led us to experiment with robust semisupervised learning. Recent work also suggests other barriers to robustness: Montasser et al. [25] show settings where improper learning and surrogate losses are crucial, in addition to more samples; Bubeck et al. [2] and Degwekar and Vaikuntanathan [9] show possible computational barriers; Gilmer et al. [13] show a high-dimensional model where adversarial vulnerability is a consequence of any non-zero standard error, while Tsipras et al. [39] and Fawzi et al. [12] show settings where robust and standard errors are at odds. Studying ways to overcome these additional theoretical barriers may translate to more progress in practice.

6.2 Conclusion

We show that unlabeled data closes a sample complexity gap in a stylized model and that robust self-training (RST) is consistently beneficial in practice. Our findings open up a number of avenues for further research. Theoretically, are many labels ever necessary for adversarial robustness? Practically, what is the best way to leverage unlabeled data for robustness, and can semisupervised learning similarly benefit alternative notions of robustness? As data scales grow, computational capacities increase and machine learning moves beyond minimizing average error, we expect unlabeled data to provide continued benefit.

Acknowledgments

YC was supported by the Stanford Graduate Fellowship. AR was supported by a Google Fellowship and an Open Philanthropy AI Fellowship. PL was supported by an Open Philanthropy Project Award. JCD was supported by NSF CAREER award 1553086, the Sloan Foundation, and ONR YIP N00014-19-1-2288.

References

Appendix A Theoretical results

A.1 Error probabilities in closed form

We recall our model: y is uniform on {−1, +1} and x | y ~ N(y·θ*, σ² I_d). Consider a linear classifier f_θ(x) = sign(⟨θ, x⟩). Then the standard error probability is

err_standard(θ) = Q( ⟨θ, θ*⟩ / (σ ‖θ‖_2) ),   (10)

where Q(t) = P(N(0, 1) > t) is the Gaussian tail function. For a linear classifier θ, input x and label y, the strongest adversarial ℓ_∞ perturbation of x with norm ε moves each coordinate of x by ε in the direction that decreases the margin y⟨θ, x⟩, reducing it by ε‖θ‖_1. The robust error probability is therefore

err_robust^{∞,ε}(θ) = Q( (⟨θ, θ*⟩ − ε‖θ‖_1) / (σ ‖θ‖_2) ).   (11)

In this model, standard and robust accuracies align in the sense that any highly accurate standard classifier will necessarily also be robust. Moreover, for dense θ*, good linear estimators will typically be dense as well, in which case the quantity ⟨θ, θ*⟩ / (σ‖θ‖_2) determines both standard and robust accuracies. Our analysis will consequently focus on understanding this quantity.
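The reconstructed expressions (10) and (11) are easy to evaluate numerically; the snippet below does so for an arbitrary linear classifier (illustrative code, with Q implemented as the standard Gaussian survival function):

import numpy as np
from scipy.stats import norm

def standard_error(theta, theta_star, sigma):
    # Equation (10): Q( <theta, theta*> / (sigma * ||theta||_2) ).
    return norm.sf(theta @ theta_star / (sigma * np.linalg.norm(theta)))

def robust_error(theta, theta_star, sigma, eps):
    # Equation (11): the adversary shrinks the margin by eps * ||theta||_1.
    margin = theta @ theta_star - eps * np.linalg.norm(theta, ord=1)
    return norm.sf(margin / (sigma * np.linalg.norm(theta)))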

A.1.1 Optimal standard accuracy and parameter setting

We note that for a given problem instance, the classifier that minimizes the standard error is simply θ = θ*. Its standard error is err_standard(θ*) = Q( ‖θ*‖_2 / σ ).

Recall our parameter setting,

(12)

Under this setting, we have

Therefore, in the highly overparameterized regime of large d, the classifier θ* achieves essentially perfect accuracies, both standard and robust. We will show that estimating θ* from a small amount of labeled data and a large number of unlabeled examples allows us to approach the performance of θ*, without prior knowledge of θ*.

A.2 Performance of supervised estimator

Given a labeled dataset (x_1, y_1), …, (x_n, y_n), we consider the linear classifier given by

θ̂_n = (1/n) Σ_{i=1}^{n} y_i x_i.

In the following lemma we give a tight concentration bound for the quantity ⟨θ̂_n, θ*⟩ / (σ‖θ̂_n‖_2), which determines the standard and robust error probabilities of θ̂_n via equations (10) and (11), respectively.

Lemma 1.

There exist numerical constants such that under parameter setting (12) and ,

Proof.

We have

To lower bound the random variable

we consider its squared inverse, and decompose it as follows

To obtain concentration bounds, we note that

Therefore, standard concentration results give

(13)

Assuming that the two events and hold, we have

Substituting the parameter setting (12), we have that for sufficiently large,

for some numerical constant. For this to imply the bound stated in the lemma we also need to hold, but this is already implied by

Substituting the parameter setting into the concentration bounds (13), we have by the union bound that the desired upper bound fails to hold with probability at most

for another numerical constant and . ∎

As an immediate corollary to Lemma 1, we obtain the sample complexity upper bounds cited in the main text (Proposition 1).

Proof.

For the case we take sufficiently large such that by Lemma 1 we have

for an appropriate . Therefore by the expression (10) for the standard error probability (and the fact that it is never more than 1), we have

for appropriate . Similarly, for the case we apply Lemma 1 combined with to write

with probability . Therefore, using the expression (11) and , we have (using )

for sufficiently large . ∎

A.3 Lower bound

We now briefly explain how to translate the sample complexity lower bound of Schmidt et al. [34] into our parameter setting (Theorem 1).

Proof.

The setting of our theorem is identical to that of Theorem 11 in Schmidt et al. [34], which shows that

Using , implies and therefore

Moreover

A.4 Performance of semisupervised estimator

We now consider the semisupervised setting—our primary object of study in this paper. We consider the self-training estimator that in the first stage uses the n₀ labeled examples to construct the intermediate estimator θ̂_intermediate via the learning rule (4), and then uses it to produce pseudo-labels ỹ_i = sign(⟨θ̂_intermediate, x̃_i⟩) for the unlabeled data points x̃_1, …, x̃_ñ. In the second and final stage of self-training, we employ the same simple learning rule on the pseudo-labeled data and construct

θ̂_final = (1/ñ) Σ_{i=1}^{ñ} ỹ_i x̃_i.

The following result shows a high-probability bound on ⟨θ̂_final, θ*⟩ / (σ‖θ̂_final‖_2), analogous to the one obtained for the fully supervised estimator in Lemma 1 (with different constant factors).

Lemma 2.

There exist numerical constants such that under parameter setting (12) and ,

with probability .

Proof.

The proof follows a similar argument to the one used to prove Lemma 1, except now we have to take care of the fact that the noise component in θ̂_final is not entirely Gaussian. Let z_i be the indicator that the i-th pseudo-label is incorrect, and let

We may write the final estimator as

where independent of each other. Defining

we have the decomposition and bound