
Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach

by Kaiwen Yang, et al.

Data augmentation is a critical contributing factor to the success of deep learning, but it heavily relies on prior domain knowledge that is not always available. Recent works on automatic data augmentation learn a policy to form a sequence of augmentation operations, which are still pre-defined and restricted to limited options. In this paper, we show that the objective of prior-free autonomous data augmentation can be derived from a representation learning principle that aims to preserve the minimum sufficient information of the labels. Given an example, the objective aims at creating a distant "hard positive example" as the augmentation while still preserving the original label. We then propose a practical surrogate to the objective that can be optimized efficiently and integrated seamlessly into existing methods for a broad class of machine learning tasks, e.g., supervised, semi-supervised, and noisy-label learning. Unlike previous works, our method does not require training an extra generative model but instead leverages the intermediate-layer representations of the end-task model to generate data augmentations. In experiments, we show that our method consistently brings non-trivial improvements to the three aforementioned learning tasks in both efficiency and final performance, whether or not it is combined with strong pre-defined augmentations, e.g., on medical images where domain knowledge is unavailable and existing augmentation techniques perform poorly. Code is available at:



1 Introduction

Data augmentation has emerged as an effective data pre-processing or data transformation step to mitigate overfitting Perez2017TheEO , to encourage local smoothness mixup , and to improve generalization Balestriero2022DAclass in machine learning pipelines such as deep neural networks. Notably, effective data augmentation, which incorporates class-related data invariance and enriches in-class samples, is one of the key contributing factors for representation learning with weak or self supervision chen2020simple ; FixMatch .

Given a task, we aim to generate "good" augmentations efficiently. As part of the machine learning model pipeline, an autonomous, domain-agnostic but task-informed data augmentation mechanism is desirable. However, a number of challenges exist. (1) Existing augmentation operators are usually hand-crafted based on domain expert knowledge, which is not always available Yang2021MedMNIST . For example, widely used augmentations for natural images are not effective on medical images. Moreover, the performance of those machine learning pipelines varies drastically with different choices of data augmentations. (2) The few autonomous augmentation approaches developed recently AutoAugment ; RandAugment are neither fully autonomous nor universally applicable: they train policies to produce a sequence of pre-defined augmentation operations and thus are not fully automated and are limited to a few domains. (3) Existing augmentations usually do not fully utilize the task feedback (i.e., they are task-agnostic) and may be sub-optimal for the targeted task. A class of automated data augmentation methods trains an extra generative model to generate new augmentations from scratch given a real-world example Antoniou2017DAGAN . However, training a generative model is a non-trivial task in practice that may either rely on strong prior knowledge or require a substantially increased number of training examples.

In this paper, we first investigate the conditions required to generate domain-agnostic but task-informed data augmentations. Considering a representation learning pipeline, we start from a probabilistic graphical model that describes the relations among the label y, the nuisance z, the example x, its augmentation x′, and the latent representation h. We argue that a minimum-sufficient representation for the task preserves the label information but excludes other distractive information from the nuisance. We then investigate the conditions for an augmentation that results in learning such preferred representations. These conditions motivate an optimization objective that can be used to produce automated, domain-agnostic but task-informed data augmentations for each example, without relying on pre-defined augmentation operators or specific domain knowledge. Consequently, our proposed optimization objective addresses all the aforementioned challenges.

For practicality, we further propose a surrogate of the derived objective that can be efficiently computed from the intermediate-layer representations of the model-in-training. The surrogate is built upon data likelihood estimation through a perceptual distance laidlaw2020perceptual defined on the intermediate layers' representations. Specifically, our proposed surrogate objective maximizes the perceptual distance between x and x′, under a label-preserving constraint on the model prediction for x′. This problem can be efficiently solved by optimizing its Lagrangian relaxation. Thereby, given x and its label y, the solution to our surrogate objective generates a "hard positive example" x′ for x without losing its label information. Once generated, x′ is used to train the model towards producing the minimum-sufficient representation for the targeted task. Our proposed method, named Label-Preserving Adversarial Auto-Augment (LP-A3), does not require any extra generative model such as a Generative Adversarial Network, unlike previous automated augmentation methods tian2020makes . We further propose a sharpness-aware criterion that selects only the most informative examples to apply our auto-augmentation to, so that it does not incur expensive extra computation.

Our proposed LP-A3 is a general and autonomous data augmentation technique applicable to a variety of machine learning tasks, such as supervised, semi-supervised, and noisy-label learning. Moreover, we demonstrate that it can be seamlessly integrated with existing algorithms for these tasks and consistently improve their performance. In experiments on the three learning tasks, we combine LP-A3 with existing methods and obtain significant improvement in both learning efficiency and final performance. The generated augmentations are optimized for the model-in-training in a target-task-aware manner and thus notably accelerate the slow convergence in computationally intensive tasks such as semi-supervised learning. It is worth noting that our augmentation consistently brings improvement to tasks without domain knowledge or strong pre-defined augmentations, such as medical image classification, on which previous image augmentations lead to performance degeneration.

2 Related work

Hand-crafted vs. Autonomous Data Augmentations. Most of the existing widely used data augmentations are hand-crafted based on domain expert knowledge oord2018representation ; tian2020contrastive ; misra2020self ; chen2020simple ; cubuk2020randaugment . For example, MoCo he2020momentum and InstDis wu2018unsupervised create augmentations by applying a stochastic but pre-defined data augmentation function to the input. CMC tian2020contrastive splits images across color channels. PIRL misra2020self generates data augmentations through random JigSaw shuffling. CPC oord2018representation renders strong data augmentations by utilizing RandAugment RandAugment , which learns a policy producing a sequence of pre-defined augmentation operations selected from a pool AutoAugment . AdvAA zhang2019adversarial designs an adversarial objective to learn the augmentation policy. RandAugment ; AutoAugment ; zhang2019adversarial are all based on pre-defined operations, which are not available in certain domains, and their objectives cannot guarantee that the generated data preserve labels, which may lead to suboptimal performance. The "InfoMin" principle of data augmentation tian2020makes proposes to minimize the mutual information between different views. However, their theory depends on access to a minimal sufficient encoder, which may be difficult to obtain. In contrast, we not only consider how to generate optimal views or augmentations, but also consider generating the minimal sufficient representation. The algorithm of tian2020makes deploys a generator to render augmentations (which may be costly to train, especially on non-natural-image domains), while we directly learn the augmentation through gradient descent w.r.t. the input.

Information Theory for Representation Learning. Information theory has been introduced into deep learning to measure the quality of representations tishby2015deep ; achille2018emergence . The key idea is to use information bottleneck methods tishby2000information ; tishby2015deep to encourage the learned representation to be minimal sufficient. Mutual information objectives are commonly used in self-supervised learning. For example, the InfoMax principle linsker1988self used by many works aims to maximize the mutual information between the representation and the input tian2020contrastive ; bachman2019learning ; wu2020importance . But simply maximizing the mutual information does not always lead to a better representation in practice tschannen2019mutual . In contrast, the InfoMin principle tian2020makes minimizes the mutual information between different views. Both the InfoMax and the InfoMin principles can be associated with our proposed representation learning criteria in Section 4, as they lead to sufficiency and minimality of the learned representation, respectively.

Augmentation in Self-supervised Contrastive Learning. Self-supervised contrastive representation learning oord2018representation ; hjelm2018learning ; wu2018unsupervised ; tian2020contrastive ; sohn2016improved ; chen2020simple learns representations by optimizing a contrastive loss that pulls similar pairs of examples closer while pushing dissimilar example pairs apart. Creating multiple views of each example is crucial for the success of self-supervised contrastive learning. However, most of the data augmentation methods used in generating views, although sophisticated, are hand-crafted and not learning-based. Some use luminance and chrominance decomposition tian2020contrastive , while others use random augmentation from a pool of augmentation operators wu2018unsupervised ; chen2020simple ; bachman2019learning ; he2020momentum ; ye2019unsupervised ; srinivas2020curl ; zhao2021distilling ; zhuang2019local . Recently, adversarial-perturbation-based augmentation has been proposed to generate more challenging positives/negatives for contrastive learning yang2022identity ; ho2020contrastive .

Augmentation in Semi-supervised Learning. Data augmentation plays an important role in semi-supervised learning, e.g., (1) consistency regularization Consistency ; PiModel enforces the model to produce similar outputs for a sample and its augmentations; (2) pseudo labeling PseudoLabel trains a model using confident predictions produced by itself MeanTeacher for unlabeled data. Data augmentations are critical FixMatch ; ReMixMatch because they determine both the output targets and input signals: (1) accurate pseudo labels are achieved by averaging the predictions over multiple augmentations; (2) weak augmentations (e.g., flip-and-shift) are important to produce confident pseudo labels, while strong augmentations AutoAugment ; RandAugment are used to train the model and expand the confidence regions (so more confident pseudo labels can be collected later). Data selection FlexMatch ; zhou2020time for high-quality pseudo labels is also critical, and its criterion is estimated on augmentations, e.g., the confidence MixMatch or time-consistency zhou2020time of each sample.

Augmentation in Noisy-label Learning. Two primary challenges in noisy-label learning are clean-label detection Liu2016ClassificationWN ; han2018co ; jiang2018mentornet and noisy-label correction by pseudo labels reed2014training ; arazo2019unsupervised ; Li2020DivideMix . Both significantly depend on the choice of data augmentations: the former usually relies on confidence thresholding, and augmentations help rule out overconfident samples, while the latter relies on the quality of semi-supervised learning. Moreover, as shown in previous works Li2020DivideMix ; zhou2021robust , removing strong augmentations such as RandAugment can considerably degrade noisy-label learning performance.

3 Preliminaries

Basics of Information Theory. Our analyses make frequent use of information-theoretic quantities cover1991information . Given a joint distribution p(x, y) and its marginal distributions p(x), p(y), we define the entropies as H(x, y) = −E[log p(x, y)], H(x) = −E[log p(x)], and H(y) = −E[log p(y)]. Furthermore, we define the conditional entropy of y given x as H(y | x) = H(x, y) − H(x). Finally, we define the mutual information between x and y as I(x; y) = H(y) − H(y | x).
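These quantities can be checked numerically on a small discrete joint distribution. The sketch below (plain NumPy, with an arbitrary toy distribution) computes the entropies and verifies the symmetric identity I(x; y) = H(x) + H(y) − H(x, y):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability cells."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Toy joint distribution p(x, y) over a 3x2 sample space (rows: x, cols: y).
pxy = np.array([[0.2, 0.1],
                [0.1, 0.3],
                [0.2, 0.1]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

H_xy, H_x, H_y = entropy(pxy.ravel()), entropy(px), entropy(py)
H_y_given_x = H_xy - H_x          # conditional entropy H(y | x)
I_xy = H_y - H_y_given_x          # mutual information I(x; y)

# Same quantity via the symmetric identity.
assert abs(I_xy - (H_x + H_y - H_xy)) < 1e-12
```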

Notations and Problem Setup. In this paper, we use capital letters (e.g., X) to denote random variables, lowercase letters (e.g., x) to denote their realizations, and curly capital letters (e.g., 𝒳) to denote the corresponding sample spaces. Since we mainly consider supervised and semi-supervised problems, we let p(x, y) be the joint distribution of the data observation x and label y, where x is a random vector taking values in a finite observation space 𝒳 (e.g., images) and y is a discrete random variable taking values in the label space 𝒴 (e.g., classes). Our goal is to learn a classifier to predict y from an observation x.

Task-nuisance Decomposition. To advance the analysis, we decouple the randomness in x into two parts, one pertaining to the label and one independent of the label. Concretely, we define a random variable, the nuisance z, such that 1) the nuisance z is independent of the label y, i.e., I(z; y) = 0; and 2) the observation x is a deterministic function of the nuisance z and the label y, i.e., x = f(y, z) for some deterministic function f. Lemma 3.1 demonstrates that such a random variable always exists.

Lemma 3.1 (Task-nuisance Decomposition achille2018emergence ).

Given a joint distribution p(x, y), where y is a discrete random variable, we can always find a random variable z independent of y such that x = f(y, z) for some deterministic function f.

Remarks. We can rewrite the conditions of the task-nuisance decomposition in terms of information theory: 1) since the nuisance z is independent of the label y, we have I(z; y) = 0; and 2) since the nuisance z and the label y determine the observation x, we have H(x | y, z) = 0.
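Both remarks can be verified numerically for a toy decomposition. The sketch below assumes a hypothetical pairing function f(y, z) = 3y + z and checks that I(z; y) = 0 under independence and H(x | y, z) = 0 under determinism:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Independent label y (2 classes) and nuisance z (3 values).
py = np.array([0.4, 0.6])
pz = np.array([0.2, 0.5, 0.3])
pyz = np.outer(py, pz)                      # independence: p(y, z) = p(y) p(z)

# Deterministic observation x = f(y, z); here a toy pairing function.
f = lambda y, z: 3 * y + z                  # 6 distinct observations

# Joint p(x, y, z): all mass sits on x = f(y, z).
px_yz = np.zeros((6, 2, 3))
for y in range(2):
    for z in range(3):
        px_yz[f(y, z), y, z] = pyz[y, z]

# Remark 1: I(z; y) = H(y) + H(z) - H(y, z) = 0 under independence.
I_zy = entropy(py) + entropy(pz) - entropy(pyz.ravel())

# Remark 2: H(x | y, z) = H(x, y, z) - H(y, z) = 0 since x is deterministic.
H_x_given_yz = entropy(px_yz.ravel()) - entropy(pyz.ravel())
```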

4 Principles of Representation Learning: Theoretical Interpretation

4.1 What Is A Good Representation?

In real-world applications, the observation x is usually complex and lies in a high-dimensional space 𝒳, making it hard to directly learn a good classifier for y. To remedy this curse of dimensionality, it is important to learn a good representation of x, i.e., to learn an encoder e that maps the high-dimensional observation x into a low-dimensional representation h = e(x). We illustrate the process of data generation and representation learning by a probabilistic graphical model, as shown in Figure 1(a).

An ideal encoder e should keep the important information from x (e.g., label-relevant information) and maximally discard the noise or nuisance in x, such that it is much easier to learn a classifier from h than from x. Based on the above intuition, we define an ε-optimal representation of x, which has sufficient information for classifying y while retaining little information about the nuisance z.

Definition 4.0.1 (ε-Minimal Sufficient Representation (ε-Optimal Representation)).

For a Markov chain (y, z) → x → h, we say that a representation h of x is sufficient for y if I(h; y) = I(x; y), and h is ε-minimal sufficient if h is sufficient and I(h; x) ≤ I(h′; x) + ε for all h′ satisfying I(h′; y) = I(x; y).

Remark. Due to the properties of mutual information, for any sufficient representation h we have I(h; x) ≥ I(x; y). The lower ε is, the more "minimal" the representation h is. When ε = 0, the representation is minimal sufficient, which is a desirable property as characterized by many prior works tishby2015deep ; achille2018emergence .

Definition 4.0.1 characterizes how good a sufficient representation is, based on how much redundant information remains. Recall that x comes from a deterministic function of the label y and the nuisance z. The redundancy of h can thus also be measured by the mutual information between h and z. Achille et al. achille2018emergence prove that if a representation h is sufficient and is invariant to the nuisance z, i.e., I(h; z) = 0, then h is also minimal. However, since z is not known, it is hard to directly encourage the representation to be invariant to z.

Can we learn an ε-minimal sufficient representation in a principled way? Inspired by the recent success of data augmentation techniques in self-supervised and semi-supervised learning, we find that data augmentation can implicitly encourage the representation to be invariant to the nuisance z. However, most augmentation methods are driven by pre-defined transformations, which do not necessarily render a minimal sufficient representation. In the next section, we analyze the effects of data augmentation in representation learning in detail.

(a) Without Augmentation
(b) With Augmentation
Figure 1: Probabilistic graphical models of representation learning.

4.2 Proper Data Augmentation Leads to (Near-)Optimal Representation

In this section, we investigate the role of data augmentation in learning good representations. We first make the following mild assumption on the underlying relationship between x and y.

Assumption 4.1.

There exists a deterministic function g such that y = g(x).

Assumption 4.1 requires that there exists a "perfect classifier" that identifies the label of an observation with no error, which is common in practice. Note that for data with ambiguity, a tie-breaker can be used to map each observation to a unique label. Therefore, Assumption 4.1 is realistic.

Let t be a deterministic augmentation function such that x′ = t(x, a) is the augmented data, where a is a random variable denoting the augmentation selection. For example, if x is an image sample and a selects the augmentation "rotate by 90 degrees", then x′ is the corresponding rotated image. We learn an encoder e that maps the augmented data x′ to a representation h = e(x′). With this augmentation process, the graphical model in Figure 1(a) is updated to Figure 1(b).

We show in the theorem below that if the augmentation process preserves the information of y, then h can be sufficient for y. Furthermore, if the augmented data x′ contains no information about the original nuisance z, then h will be invariant to z and thus become a minimal sufficient representation.

Theorem 4.2.

Consider the label variable y, observation variable x, and nuisance variable z satisfying Assumption 4.1. Let a be the augmentation variable, x′ = t(x, a) be the augmented data, and e* be the solution to

max_e I(e(x′); x′)

subject to the Markov chain (y, z) → x → x′ → h in Figure 1(b).

Then, h = e*(x′) is an ε-minimal sufficient representation of x′ for label y if the following conditions hold:
Condition (a): I(y; x′) = I(y; x) (x′ is an in-class augmentation) and
Condition (b): I(x′; z) ≤ ε (x′ does not retain much information about z).

Remarks. (1) The objective of learning e can be either task-independent (maximizing I(h; x′)) or task-dependent (maximizing I(h; y)). The former matches the "InfoMax" principle commonly used in self-supervised learning works linsker1988self ; hjelm2018learning , while the latter can be achieved by supervised training (e.g., learning a classifier for y from h with the cross-entropy loss).
(2) When Condition (b) holds for ε = 0, the representation h is optimal (minimal sufficient).

Theorem 4.2, proved in Section B.1, shows that if we have a good augmentation that maximally perturbs the label-irrelevant information while keeping the label-relevant information, then the representation learned on the augmented data can be minimal sufficient. Theorem 4.2 thus serves as a principle for constructing augmentations. Based on this principle, we propose an auto-augment algorithm in Section 5 and verify it on a wide range of tasks in Section 6.

5 Proposed Methods

Figure 2: Network architecture.

In this section, we introduce our data augmentation x′ and how to obtain it using the representation learning network f_θ. Then we show how to plug our augmentation into the representation learning procedure of f_θ.

5.1 Label-Preserving Adversarial Auto-Augment (LP-A3)

As illustrated in the previous section, an ideal data augmentation x′ for representation learning should contain as little information about the nuisance z as possible while still keeping all the information about the class y. Since z is not observed, we transfer the objective min I(x′; z) into min I(x′; x): since x is a deterministic function of (y, z), minimizing I(x′; x) amounts to minimizing I(x′; y) + I(x′; z | y), and I(x′; y) is constant under the constraint I(x′; y) = I(x; y). Thus the optimization problem is:

min_{x′} I(x′; x)   subject to   I(x′; y) = I(x; y).   (2)
Implementation of Mutual Information. To solve Equation 2, we need to compute the mutual information terms I(x′; x), I(x′; y), and I(x; y). Next, we show how to compute these terms using a neural net classifier f_θ, parameterized by θ, that consists of two components: a representation encoder e and a predictor c. Specifically, f_θ(x) = c(e(x)), where the representation encoder e maps the input x into the representation h, and the predictor c predicts the class y of x. This is illustrated in Figure 2.

Constraint implementation. Since I(x′; y) = H(y) − H(y | x′) and I(x; y) = H(y) − H(y | x), we can remove the term H(y) on both sides and turn the constraint into H(y | x′) = H(y | x). Thus we only need to compute the conditional entropy of y given x or x′, which can be approximated through the neural net classifier: H(y | x) ≈ −log f_θ(x)_y, where we use the softmax class probability f_θ(x)_y to approximate the likelihood p(y | x). H(y | x′) can be computed similarly.
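As a concrete sketch of this constraint implementation, the snippet below approximates the conditional-entropy surrogate by the negative log softmax probability of the label and checks how much the surrogate changes between x and a candidate x′ (a small gap means the label is preserved); the logits and the tolerance value are hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cond_entropy_surrogate(logits, label):
    """Approximate H(y | x) by -log of the softmax probability of the
    (true or pseudo) label, as in the constraint implementation."""
    return float(-np.log(softmax(logits)[label]))

# Toy logits for an original image x and a candidate augmentation x'.
logits_x  = np.array([2.0, 0.5, -1.0])   # hypothetical classifier outputs
logits_x2 = np.array([1.6, 0.7, -0.8])
y = 0                                     # ground-truth class

tol = 0.3                                 # hypothetical tolerance on the gap
gap = cond_entropy_surrogate(logits_x2, y) - cond_entropy_surrogate(logits_x, y)
label_preserved = gap <= tol              # H(y | x') close enough to H(y | x)
```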

Objective implementation. We then show how to compute the objective I(x′; x). Since I(x′; x) = H(x′) − H(x′ | x), where the marginal entropy term can be neglected in the per-sample optimization, we only need to compute the conditional likelihood p(x′ | x). We use the Learned Perceptual Image Patch Similarity (LPIPS) zhang2018unreasonable between x and x′ to estimate this likelihood, since the LPIPS distance is a widely used metric for data similarity in the data generative model field johnson2016perceptual ; zhang2020cross , and many previous works have shown that the LPIPS distance is the best surrogate for human judgments of similarity zhang2018unreasonable ; laidlaw2020perceptual , compared with other distances such as the ℓ2 distance. Although such a surrogate may have error, it is worth noting that Theorem 4.2 allows the surrogate to have error. The LPIPS distance is defined by the ℓ2 distance of stacked feature maps from a neural network; here we use the encoder e to compute it. Let e have L layers and ê^l(x) denote the channel-normalized activations at the l-th layer of the network. The activations are then normalized again by the layer size and flattened into a single vector ĥ^l(x), where w_l and h_l are the width and height of the activations in layer l, respectively. The LPIPS distance between the input x and the augmentation x′ is then defined as:

d(x, x′) = Σ_{l=1}^{L} ||ĥ^l(x) − ĥ^l(x′)||_2.   (3)
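This distance can be sketched in a few lines. The snippet below is a simplified stand-in for LPIPS (random feature maps instead of a trained encoder, and no learned per-layer weights): channel-normalize each layer's activations, rescale by layer size, flatten, and sum the ℓ2 distances over layers.

```python
import numpy as np

def lpips_like_distance(acts_x, acts_x2):
    """Perceptual distance in the spirit of LPIPS. `acts_*` are lists of
    (C, H, W) feature maps, one per layer of the encoder."""
    total = 0.0
    for a, b in zip(acts_x, acts_x2):
        # Channel-wise unit normalization at every spatial position.
        a = a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-10)
        b = b / (np.linalg.norm(b, axis=0, keepdims=True) + 1e-10)
        _, h, w = a.shape
        scale = 1.0 / np.sqrt(h * w)      # normalize by layer size
        total += np.linalg.norm((scale * (a - b)).ravel())
    return float(total)

rng = np.random.default_rng(0)
acts_x  = [rng.normal(size=(8, 4, 4)), rng.normal(size=(16, 2, 2))]
acts_x2 = [a + 0.1 * rng.normal(size=a.shape) for a in acts_x]

d_self = lpips_like_distance(acts_x, acts_x)    # identical inputs
d_aug  = lpips_like_distance(acts_x, acts_x2)   # perturbed inputs
```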
Constraint Relaxation for Efficiency.  Now, given an input x, its data augmentation x′ can be computed by solving the following optimization problem using the neural network f_θ in practice:

max_{x′} d(x, x′)   subject to   H(y | x′) = H(y | x).   (4)

The equality constraint in Equation 4 is too strict to solve, since it is inefficient to search for an x′ that exactly satisfies H(y | x′) = H(y | x). Thus we relax the constraint with a small margin ρ > 0 and change it into H(y | x′) ≤ H(y | x) + ρ. It is worth noting that if ρ is sufficiently small, the label is still well preserved. There is a trade-off in the value of ρ: we search for a sweet spot where the problem is practical to solve while the label remains well preserved.

There are many off-the-shelf methods that solve Equation 4; here we apply the Fast Lagrangian Attack Method laidlaw2020perceptual as a demonstration. We initialize x′ as x plus a uniform noise, and we find the optimal x′ by optimizing the following Lagrangian relaxation while gradually increasing the value of the multiplier λ:

max_{x′} d(x, x′) − λ · max(0, H(y | x′) − H(y | x) − ρ).   (5)
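The relaxation can be illustrated end-to-end on a toy problem. The sketch below is not the actual Fast Lagrangian Attack Method: it substitutes a linear softmax classifier for f_θ, the plain squared ℓ2 distance for the LPIPS distance, and analytic gradients for autograd, but it follows the same penalty scheme with a gradually increased multiplier.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))            # toy linear classifier: 3 classes, 4-dim input
x = rng.uniform(0.3, 0.7, size=4)      # original input in [0, 1]^4
y = int(np.argmax(W @ x))              # use the model's prediction as the label

def nll(x_):                           # surrogate for H(y | x'): -log p(y | x')
    return float(-np.log(softmax(W @ x_)[y]))

rho, lr = 0.3, 0.05                    # label-preserving margin, step size
x_aug = x + rng.uniform(-0.01, 0.01, size=4)

for step in range(200):
    lam = min(1.0 + 0.2 * step, 20.0)  # gradually increase the multiplier
    grad_dist = 2.0 * (x_aug - x)      # gradient of the squared l2 distance
    p = softmax(W @ x_aug)
    grad_nll = W.T @ (p - np.eye(3)[y])  # gradient of -log p(y | x')
    # Penalty is active only when the label constraint is violated.
    g = grad_dist - (lam * grad_nll if nll(x_aug) - nll(x) > rho else 0.0)
    x_aug = np.clip(x_aug + lr * g, 0.0, 1.0)

# A few pure label-restoring steps in case the penalty left a small violation.
for _ in range(100):
    if nll(x_aug) - nll(x) <= rho:
        break
    p = softmax(W @ x_aug)
    x_aug = np.clip(x_aug - lr * (W.T @ (p - np.eye(3)[y])), 0.0, 1.0)
```

The result is a "hard positive": x_aug sits far from x in the (stand-in) distance, while the label surrogate stays within the margin ρ.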
The detailed procedure of the algorithm (Algorithm 2) can be found in the Appendix. The algorithm has a similar form to an adversarial attack zhang2020principal ; yang2021class in that both find an optimal perturbation of the original image x through gradient-based optimization. However, the difference is that we aim to generate a hard augmentation that preserves the label, while an adversarial attack aims to change the class label.

5.2 Plugging LP-A3 into a Representation Learning Task

One primary advantage of LP-A3 is that it only requires a neural net f_θ to produce the augmentation, and f_θ can be the current representation learning model itself. We can therefore plug LP-A3 into any representation learning procedure with no additional parameters, i.e., it is plug-and-play and parameter-free. At each step, we first fix f_θ and generate the augmentation x′ by solving Equation 5 using Algorithm 2. We then train f_θ by running the original representation learning algorithm with our augmentation x′.

Data selection.  It is not necessary to find hard positives for every sample. To save computation, we apply a sharpness-aware criterion, i.e., time-consistency (TCS) zhou2020time , to select the most informative data (the data with the lowest TCS in Algorithm 1), which have sharp loss landscapes indicating the existence of nearby hard positives, and we only apply LP-A3 to them. This reduces the computational cost without degrading performance because (1) the improvement brought by augmentations is limited for examples whose loss already reaches a flat minimum, while the model does not generalize well near examples with a sharp loss landscape; and (2) the hard positives of examples with a flat loss landscape are distant from the original ones and might introduce extra bias into the training.
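The selection step itself is simple once per-sample TCS scores are available (the actual TCS computation follows zhou2020time); the sketch below, with hypothetical scores, picks the lowest-TCS fraction of a batch:

```python
import numpy as np

def select_low_tcs(tcs_scores, ratio):
    """Return indices of the `ratio` fraction of samples with the lowest
    time-consistency score, i.e., those with the sharpest loss landscape,
    which are the ones LP-A3 then augments."""
    k = max(1, int(round(ratio * len(tcs_scores))))
    return np.argsort(tcs_scores)[:k]

# Hypothetical per-sample TCS values for a batch of 8 examples.
tcs = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.4, 0.95, 0.3])
chosen = select_low_tcs(tcs, ratio=0.25)   # the 2 least consistent samples
```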

LP-A3 is compatible with any representation learning task minimizing a loss L(B; θ), which takes in a data batch B and a model f_θ and outputs a loss value. The label here denotes the ground-truth label for labeled data and the pseudo label for unlabeled data. The pseudo-code for plugging LP-A3 into the representation learning procedure with TCS-based data selection is provided in Algorithm 1.

0:  Loss L for the targeted task; training data D; neural network f_θ; label-preserving margin ρ; data selection ratio γ; learning rate η;
0:  Model parameter θ trained with LP-A3
1:  while not converged do
2:     Sample a batch B from D;
3:     Data selection: B_s ← the γ|B| data with the lowest TCS in B;
4:     LP-A3: Freeze θ and solve Equation 5 using Algorithm 2 for every sample in B_s, resulting in an augmented set B′ of size γ|B|;
5:     Learning with LP-A3 augmented data and original data: θ ← θ − η ∇_θ L(B ∪ B′; θ);
6:  end while
Algorithm 1 Plug LP-A3 into any representation learning procedure
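Algorithm 1's control flow can also be sketched as a runnable toy loop. Everything below is a hypothetical stand-in: a one-dimensional "model", a quadratic task loss, placeholder TCS scores, and a noisy perturbation in place of solving Equation 5.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(batch, theta):
    """Gradient of a toy loss L(batch; theta) = mean of 0.5 * (theta - x)^2."""
    return float(np.mean([theta - x for x, _ in batch]))

def tcs_scores(batch, theta):
    """Placeholder time-consistency scores (the real TCS follows zhou2020time)."""
    return np.array([abs(theta - x) for x, _ in batch])

def lp_a3_augment(x, theta):
    """Placeholder for solving Equation 5; here just a small perturbation."""
    return x + 0.1 * rng.standard_normal()

theta, lr, gamma = 0.0, 0.5, 0.5                  # model, learning rate, ratio
data = [(float(v), 0) for v in rng.normal(1.0, 0.2, size=32)]

for _ in range(60):                               # Algorithm 1, line by line
    idx = rng.choice(len(data), size=8, replace=False)
    batch = [data[i] for i in idx]                # 2: sample a batch
    order = np.argsort(tcs_scores(batch, theta))  # 3: lowest-TCS selection
    selected = [batch[i] for i in order[: int(gamma * len(batch))]]
    aug = [(lp_a3_augment(x, theta), y) for x, y in selected]   # 4: LP-A3
    theta -= lr * loss_grad(batch + aug, theta)   # 5: SGD step on the union
```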

6 Experiments

In this section, we apply LP-A3 as a data augmentation method to several popular methods for three different learning tasks: (1) semi-supervised classification; (2) noisy-label learning; and (3) medical image classification. In all the experiments, LP-A3 (1) consistently improves the convergence and test accuracy of existing methods and (2) autonomously produces augmentations that bring non-trivial improvement even without any domain knowledge available. A wall-clock time comparison is given in the Appendix, showing that LP-A3 effectively reduces the computational cost. In addition, we conduct a thorough sensitivity study of LP-A3 by changing (1) the label-preserving margin ρ and (2) the data selection ratio γ on the three tasks. More experimental details can be found in the Appendix.

Figure 3: Visualization of medical image augmentations on the test set of DermaMnist. A blue (red) bounding box marks a correct (wrong) prediction of a ResNet-18 classifier, and its confidence on the ground-truth class is reported beneath the box. ScoreCAM wang2020score produces a heatmap highlighting important areas (in yellow) of an image that a neural net mainly relies on to make the prediction.

6.1 Medical Image Augmentations Produced by LP-A3 vs. RandAugment

We visualize data augmentations generated by LP-A3 and RandAugment RandAugment on the test set of DermaMnist medmnistv2 with a ResNet-18 classifier and its confidence on the ground-truth class in Fig. 3. We also use ScoreCAM wang2020score as an interpretation method to highlight the area in each image that the classifier relies on to make its prediction. We find that LP-A3 preserves the relevant derma areas highlighted by ScoreCAM, and they are consistent with those in the original image. On the contrary, RandAugment changes the color of or occludes those derma areas, resulting in highly different ScoreCAM heatmaps and hence wrong predictions (red bounding boxes in Fig. 3). LP-A3, instead, preserves the class information and mainly perturbs the class-unrelated areas of the original image.

(a) FixMatch
(b) DivideMix
(c) PES
(d) MedMnist
Figure 4: Convergence Curve when applying LP-A3 to different tasks and baselines.

6.2 Applying LP-A3 to Three Different Representation Learning Tasks

Here we apply LP-A3 to three different tasks by plugging it into existing baselines for each task. Fig. 4 shows that LP-A3 greatly speeds up the convergence of each baseline.

Semi-supervised learning  To evaluate how LP-A3 improves learning without sufficient labeled data, we conduct experiments on semi-supervised classification on standard benchmarks, including CIFAR and STL-10 coates2011analysis , where only a very small number of labels are revealed. We apply LP-A3 to FixMatch sohn2020fixmatch and compare it with the original FixMatch and with InfoMin tian2020makes , a learnable augmentation method for semi-supervised learning. The results are reported in Table 1, where LP-A3 consistently improves FixMatch, and the improvement becomes more significant as the labeled data is reduced. It is worth noting that the original FixMatch already employs a carefully designed set of pre-defined augmentations cubuk2020randaugment that have been tuned to achieve the best performance, indicating that LP-A3 is complementary to existing data augmentations. Moreover, LP-A3 also outperforms InfoMin by a large margin, which indicates that LP-A3 is also superior to existing learnable augmentations.

Dataset CIFAR10 CIFAR100 STL-10
# Label 40 250 4000 400 2500 10000 1000
InfoMin (RGB) tian2020makes - - - - - - 86.0
InfoMin (YDbDr) tian2020makes - - - - - - 87.0
FixMatch sohn2020fixmatch 89.51±3.14 93.81±0.29 94.66±0.13 49.30±2.45 67.21±0.94 74.31±0.35 91.59±0.16
FixMatch sohn2020fixmatch + LP-A3 92.39±1.21 94.03±0.31 95.11±0.17 56.16±1.82 72.23±0.57 77.11±0.16 92.63±0.14
Table 1: Semi-supervised learning performance on CIFAR with different amounts of labeled data. Reproduced results were obtained using the official code. FixMatch and LP-A3 are trained for the same number of SGD steps. InfoMin's results on CIFAR are missing since their paper only reports results on STL-10. Error bars (mean and std) are computed over three random trials.

Noisy-label Learning  Data augmentation is critical to noisy-label learning because it provides different views of the data to prevent neural nets from overfitting to noisy labels. We apply LP-A3 to two state-of-the-art methods, DivideMix Li2020DivideMix and PES bai2021understanding , on CIFAR with different ratios of noisy labels. LP-A3 consistently improves the performance of these two SoTA methods, and the improvement is more significant in more challenging cases with higher noise ratios; e.g., on CIFAR100 with 90% of labels being noisy, LP-A3 improves PES by 19.46% (Table 2).

Dataset CIFAR10 CIFAR100
Noise Ratio 50% 80% 90% 50% 80% 90%
Mixup zhang2017mixup 87.1 71.6 52.2 57.3 30.8 14.6
P-correction yi2019probabilistic 88.7 76.5 58.2 56.4 20.7 8.8
M-correlation arazo2019unsupervised 88.8 76.1 58.3 58.0 40.1 14.3
DivideMix Li2020DivideMix 94.4 92.9 75.4 74.2 59.6 31.0
DivideMix+LP-A3 94.89±0.05 93.70±0.19 79.35±1.33 74.12±0.23 61.00±0.34 32.55±0.25
PES bai2021understanding 94.89±0.12 92.15±0.23 84.98±0.36 74.19±0.23 61.47±0.38 21.15±3.15
PES+LP-A3 95.10±0.14 93.26±0.21 87.71±0.36 74.57±0.25 62.98±0.49 40.61±1.10
Table 2: Noisy-label learning performance on CIFAR with different ratios of symmetric label noise. Some baseline results are reproduced using the official code. Error bars (mean and std) are computed over three random trials.
Method PathMNIST DermaMNIST TissueMNIST BloodMNIST
ResNet-18 94.34±0.18 76.14±0.09 68.28±0.17 96.81±0.19
ResNet-18+RandAugment 93.52±0.09 73.71±0.33 62.03±0.14 95.00±0.21
ResNet-18+LP-A3 94.42±0.24 76.22±0.27 68.63±0.14 96.97±0.06
ResNet-50 94.47±0.38 75.24±0.27 69.69±0.23 96.91±0.06
ResNet-50+RandAugment 94.02±0.37 71.65±0.30 65.13±0.33 95.14±0.06
ResNet-50+LP-A3 94.57±0.07 75.71±0.22 69.89±0.08 97.01±0.32
ResNet-18 78.67±0.26 94.21±0.09 91.81±0.12 81.57±0.07
ResNet-18+RandAugment 76.00±0.24 94.18±0.20 91.38±0.14 80.52±0.32
ResNet-18+LP-A3 80.27±0.54 94.73±0.21 92.41±0.22 82.28±0.38
ResNet-50 78.37±0.52 94.31±0.14 91.80±0.14 81.11±0.21
ResNet-50+RandAugment 76.63±0.58 94.59±0.17 91.10±0.12 80.47±0.37
ResNet-50+LP-A3 79.40±0.36 94.95±0.19 92.16±0.23 82.15±0.08
Table 3: Medical image classification on MedMNIST medmnistv2. All models are trained for 100 epochs. Error bars (mean and std) are computed over three random trials.

Medical Image Classification  To evaluate performance in specific areas without domain knowledge, we compare LP-A3 with existing data augmentations on medical image classification tasks from MedMNIST medmnistv2, which is composed of several sub-datasets with various styles of medical images. We compare LP-A3 with RandAugment cubuk2020randaugment when training ResNet-18 and ResNet-50 he2016deep. We report the results in Table 3, where RandAugment, designed for natural images, fails to improve performance in this scenario. In contrast, LP-A3, which does not rely on any domain knowledge, brings improvement to all the datasets, especially OctMNIST, which shows the largest improvement. The results indicate that hand-crafted strong data augmentations do not generalize to all domains, while LP-A3 can autonomously produce augmentations guided by our representation learning principle without relying on any domain knowledge.

Figure 5: Sensitivity analysis of the label preserving margin and the data selection ratio.

6.3 Sensitivity Analysis of Hyperparameters

Label preserving margin: We evaluate how LP-A3 performs with different values of the label preserving margin on the three tasks. The results are presented in Fig. 5, where a reverse-U shape is observed. LP-A3 outperforms the baselines for all evaluated margins, which indicates that LP-A3 is robust to this hyperparameter.

Data selection ratio: We evaluate the performance of LP-A3 with different amounts of selected data on the three tasks. As shown in Fig. 5, selecting all the data does not perform the best, since applying augmentation to some samples is useless or even harmful. Moreover, applying LP-A3 to only a fraction of the data can already outperform all baselines by a large margin, especially on MedMNIST, which verifies the effectiveness of LP-A3 and of our data selection method.

7 Conclusion

In this paper, we study how to automatically generate domain-agnostic but task-informed data augmentations. We first investigate the conditions required for augmentations to lead to representations that preserve the task (label) information, and then derive an optimization objective for the augmentations. For practicality, we further propose a surrogate of the derived objective that can be efficiently computed from the intermediate-layer representations of the model-in-training. The surrogate is built upon data likelihood estimation through perceptual distance. This leads to LP-A3, a general and autonomous data augmentation technique applicable to a variety of machine learning tasks, such as supervised, semi-supervised, and noisy-label learning. In experiments, we demonstrate that LP-A3 consistently brings improvement to SoTA methods for different tasks even without domain knowledge. In future work, we will extend LP-A3 to more learning tasks and further improve its efficiency.


This work was supported by the Major Science and Technology Innovation 2030 "New Generation Artificial Intelligence" key project under Grant 2021ZD0111700, NSFC No. 61872329, No. 62222117, and the Fundamental Research Funds for the Central Universities under contract WK3490000005. Huang, Sun and Su are supported by the NSF-IIS-FAI program, DOD-ONR-Office of Naval Research, and DOD-DARPA-Defense Advanced Research Projects Agency Guaranteeing AI Robustness against Deception (GARD). Huang is also supported by Adobe, Capital One and JP Morgan faculty fellowships.


  • [1] Alessandro Achille and Stefano Soatto. Emergence of invariance and disentanglement in deep representations. The Journal of Machine Learning Research, 19(1):1947–1980, 2018.
  • [2] Antreas Antoniou, Amos Storkey, and Harrison Edwards. Data augmentation generative adversarial networks, 2017.
  • [3] Eric Arazo, Diego Ortego, Paul Albert, Noel O’Connor, and Kevin McGuinness. Unsupervised label noise modeling and loss correction. In International conference on machine learning, pages 312–321. PMLR, 2019.
  • [4] Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. Advances in neural information processing systems, 32, 2019.
  • [5] Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, and Tongliang Liu. Understanding and improving early stopping for learning with noisy labels. Advances in Neural Information Processing Systems, 34, 2021.
  • [6] Randall Balestriero, Leon Bottou, and Yann LeCun. The effects of regularization and data augmentation are class dependent, 2022.
  • [7] David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations (ICLR), 2020.
  • [8] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems 32 (NeurIPS), pages 5050–5060. 2019.
  • [9] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020.
  • [10] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223. JMLR Workshop and Conference Proceedings, 2011.
  • [11] Thomas M Cover and Joy A Thomas. Information theory and statistics. Elements of information theory, 1(1):279–335, 1991.
  • [12] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space, 2019.
  • [13] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702–703, 2020.
  • [14] Ekin Dogus Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [15] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in neural information processing systems, pages 8527–8537, 2018.
  • [16] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729–9738, 2020.
  • [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [18] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2018.
  • [19] Chih-Hui Ho and Nuno Vasconcelos. Contrastive learning with adversarial examples. Advances in Neural Information Processing Systems, 33:17081–17093, 2020.
  • [20] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning, pages 2304–2313, 2018.
  • [21] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711. Springer, 2016.
  • [22] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
  • [23] Alex Kurakin, Chun-Liang Li, Colin Raffel, David Berthelot, Ekin Dogus Cubuk, Han Zhang, Kihyuk Sohn, Nicholas Carlini, and Zizhao Zhang. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In NeurIPS, 2020.
  • [24] Cassidy Laidlaw, Sahil Singla, and Soheil Feizi. Perceptual adversarial robustness: Defense against unseen threat models. arXiv preprint arXiv:2006.12655, 2020.
  • [25] Dong-Hyun Lee. Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, 2013.
  • [26] Junnan Li, Richard Socher, and Steven CH Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394, 2020.
  • [27] Ralph Linsker. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.
  • [28] Tongliang Liu and Dacheng Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38:447–461, 2016.
  • [29] Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6707–6717, 2020.
  • [30] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
  • [31] Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning. ArXiv, abs/1712.04621, 2017.
  • [32] Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
  • [33] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems 29 (NeurIPS), pages 1163–1171. 2016.
  • [34] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems 29 (NeurIPS), pages 1163–1171. 2016.
  • [35] Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. Advances in neural information processing systems, 29, 2016.
  • [36] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596–608, 2020.
  • [37] Aravind Srinivas, Michael Laskin, and Pieter Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning. arXiv preprint arXiv:2004.04136, 2020.
  • [38] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems 30 (NeurIPS), pages 1195–1204. 2017.
  • [39] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In European conference on computer vision, pages 776–794. Springer, 2020.
  • [40] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning? Advances in Neural Information Processing Systems, 33:6827–6839, 2020.
  • [41] Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
  • [42] Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), pages 1–5. IEEE, 2015.
  • [43] Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. In International Conference on Learning Representations, 2019.
  • [44] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Score-cam: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 24–25, 2020.
  • [45] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929–9939. PMLR, 2020.
  • [46] Mike Wu, Chengxu Zhuang, Daniel Yamins, and Noah Goodman. On the importance of views in unsupervised representation learning. preprint, 3, 2020.
  • [47] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733–3742, 2018.
  • [48] Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan L Yuille, and Quoc V Le. Adversarial examples improve image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 819–828, 2020.
  • [49] Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, and Bingbing Ni. Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification, 2021.
  • [50] Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, and Bingbing Ni. Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification. arXiv preprint arXiv:2110.14795, 2021.
  • [51] Kaiwen Yang, Tianyi Zhou, Xinmei Tian, and Dacheng Tao. Identity-disentangled adversarial augmentation for self-supervised learning. In International Conference on Machine Learning, pages 25364–25381. PMLR, 2022.
  • [52] Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, and Dacheng Tao. Class-disentanglement and applications in adversarial detection and defense. Advances in Neural Information Processing Systems, 34:16051–16063, 2021.
  • [53] Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6210–6219, 2019.
  • [54] Kun Yi and Jianxin Wu. Probabilistic end-to-end noise correction for learning with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7017–7025, 2019.
  • [55] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In NeurIPS, 2021.
  • [56] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
  • [57] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
  • [58] Pan Zhang, Bo Zhang, Dong Chen, Lu Yuan, and Fang Wen. Cross-domain correspondence learning for exemplar-based image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5143–5153, 2020.
  • [59] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018.
  • [60] Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial autoaugment. arXiv preprint arXiv:1912.11188, 2019.
  • [61] Yonggang Zhang, Xinmei Tian, Ya Li, Xinchao Wang, and Dacheng Tao. Principal component adversarial example. IEEE Transactions on Image Processing, 29:4804–4815, 2020.
  • [62] Nanxuan Zhao, Zhirong Wu, Rynson WH Lau, and Stephen Lin. Distilling localization for self-supervised representation learning. In 35th AAAI Conference on Artificial Intelligence (AAAI-21), pages 10990–10998. AAAI Press, 2021.
  • [63] Tianyi Zhou, Shengjie Wang, and Jeff Bilmes. Time-consistent self-supervision for semi-supervised learning. In International Conference on Machine Learning, pages 11523–11533. PMLR, 2020.
  • [64] Tianyi Zhou, Shengjie Wang, and Jeff Bilmes. Robust curriculum learning: from clean label detection to noisy label self-correction. In International Conference on Learning Representations, 2021.
  • [65] Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6002–6012, 2019.


  1. For all authors…

    1. Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?

    2. Did you describe the limitations of your work?

    3. Did you discuss any potential negative societal impacts of your work?

    4. Have you read the ethics review guidelines and ensured that your paper conforms to them?

  2. If you are including theoretical results…

    1. Did you state the full set of assumptions of all theoretical results? See Sec.4

    2. Did you include complete proofs of all theoretical results? See Appendix B

  3. If you ran experiments…

    1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? Will open source later

    2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? See Appendix


    3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?

    4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?

  4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets…

    1. If your work uses existing assets, did you cite the creators?

    2. Did you mention the license of the assets?

    3. Did you include any new assets either in the supplemental material or as a URL?

    4. Did you discuss whether and how consent was obtained from people whose data you’re using/curating?

    5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?

  5. If you used crowdsourcing or conducted research with human subjects…

    1. Did you include the full text of instructions given to participants and screenshots, if applicable?

    2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?

    3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?

Supplementary Material

Appendix A Algorithmic Details

A.1 Data Selection via Time-Consistency

We use time-consistency (TCS) [63] to select informative samples to apply our augmentation. TCS computes the consistency of the output distribution for each sample along the training procedure: the TCS metric for an individual sample is a negative exponential moving average, over the training history, of the Kullback–Leibler (KL) divergence between the sample's output distributions at consecutive training steps, where the label is the pseudo label (for unlabeled data) or the real label (for labeled data), and the history is weighted by a discount factor. Intuitively, the KL divergence between output distributions measures how consistent the output is between two consecutive steps, and a moving average of it naturally quantifies the inconsistency of the output over time; a larger TCS means better time-consistency. We select the samples with the lowest TCS to apply our data augmentation, because samples with small TCS tend to have sharp loss landscapes. These samples provide more informative gradients than others, and applying our model-adaptive data augmentation brings more improvement to their representation invariance and loss smoothness. In Fig. 5, we conduct a thorough sensitivity analysis of the selection ratio over three tasks and find that sample selection with TCS effectively improves performance. Moreover, in this way we do not need to apply our augmentation to every training sample, which saves training cost.
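As a minimal sketch of how such a metric could be maintained, assuming TCS is an exponential moving average of the negative KL divergence between consecutive softmax outputs (the helper names below are ours, not from the official code):

```python
import torch

def kl_consecutive(p_t: torch.Tensor, p_prev: torch.Tensor,
                   eps: float = 1e-8) -> torch.Tensor:
    """Per-sample KL(p_t || p_prev) for softmax outputs of shape (N, C)."""
    return (p_t * ((p_t + eps).log() - (p_prev + eps).log())).sum(dim=1)

def update_tcs(tcs: torch.Tensor, p_t: torch.Tensor, p_prev: torch.Tensor,
               gamma: float = 0.9) -> torch.Tensor:
    """EMA of the negative consecutive-step KL: larger TCS = more consistent."""
    return gamma * tcs + (1.0 - gamma) * (-kl_consecutive(p_t, p_prev))

def select_low_tcs(tcs: torch.Tensor, ratio: float) -> torch.Tensor:
    """Indices of the `ratio` fraction of samples with the LOWEST TCS,
    i.e., the least time-consistent samples, which receive the augmentation."""
    k = max(1, int(ratio * tcs.numel()))
    return torch.topk(-tcs, k).indices
```

In a training loop, `update_tcs` would be called once per epoch with the current and previous softmax outputs for each sample, and `select_low_tcs` then picks the subset to augment.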

A.2 Fast Lagrangian Attack Method

We use the fast Lagrangian perceptual attack method (Algorithm 3 in [24]) to solve the Lagrangian multiplier function in Equation (5), which finds the optimal augmentation through gradient descent, starting at the original input with a small amount of noise added. During the gradient descent steps, the Lagrange multiplier is increased exponentially from 1 to 10 and the step size is decreased. The number of steps is set to 5 for all the experiments.

Algorithm 2: Fast Lagrangian Attack Method
Input: training data (x, y); the class-preserving margin; the neural network
for each gradient descent step: update the augmentation as described above
end for
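The loop can be sketched as follows. This is an illustrative sketch, not the authors' exact implementation: `loss_fn` and `margin_fn` are hypothetical stand-ins for the LP-A3 objective of Equation (5) and the label-preservation constraint, respectively.

```python
import torch

def fast_lagrangian_attack(x, y, loss_fn, margin_fn, margin,
                           steps=5, step_size=0.01, noise=1e-3):
    """Sketch of a Lagrangian perceptual attack: gradient ascent on the
    augmentation objective with a penalty that keeps the constraint
    margin_fn(x_adv, x, y) <= margin. The multiplier grows exponentially
    from 1 to 10 and the step size decays, as described in the text."""
    x_adv = (x + noise * torch.randn_like(x)).detach().requires_grad_(True)
    for t in range(steps):
        lam = 10.0 ** (t / max(steps - 1, 1))   # 1 -> 10 exponentially
        lr = step_size * (1.0 - t / steps)      # decreasing step size
        penalty = torch.clamp(margin_fn(x_adv, x, y) - margin, min=0.0)
        obj = loss_fn(x_adv, y) - lam * penalty
        grad, = torch.autograd.grad(obj, x_adv)
        with torch.no_grad():
            x_adv += lr * grad / (grad.norm() + 1e-12)  # normalized ascent step
    return x_adv.detach()
```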

Appendix B Additional Theoretical Results and Proofs

B.1 Proof of Theorem 4.2

Proof of Theorem 4.2.

Problem (1) contains two versions of objectives for :




Both Problem (8) and Problem (9) lead to the -minimal sufficient representation . We first prove the result for the more challenging objective, Problem (8).

I. For Problem (8): .

We first prove the sufficiency of , then prove the -minimality of .

1) Proof of sufficiency

Since , and does not depend on , we have that the solution to Problem (8), , minimizes under constraint .

Then, we show that also minimizes .

We know that because of the Markovian property. Since , we have


Then we can derive


Equality (13) holds because comes from a deterministic function of and . Since does not depend on , minimizes as it minimizes .

Also, we know that , so we can further obtain


where Equation (19) holds because .

Therefore, minimizes .

Following a similar procedure as above (Equation (10) to Equation (19)), we are able to show that


So also minimizes , which can be further decomposed into . Next we show by contradiction that equals and thus .

Assume that the optimizer minimizes , but does not satisfy sufficiency, i.e., . We will then show that one can construct another representation such that , conflicting with the assumption that minimizes . The construction of works as follows. Since the augmented data satisfies (Condition (a) of Theorem 4.2), we have . Hence, there exists a function such that . Define , then we have


Therefore, the constructed conflicts with the assumption. We can conclude that any optimizer has to satisfy , which is equivalent to . The sufficiency of is thus proven.

As a result, the maximizer to Problem (8), , satisfies .

2) Proof of -Minimality

Since is a deterministic function of and , we have


where the equality in Equation (27) holds because and both hold.

And we can derive


Moreover, we know that


And we have , so


Combining (30) and (32), we have


Note that (33) holds for all sufficient statistics of w.r.t. .

Then we first show that is -minimal of w.r.t. by contradiction.

Assume there exists a random variable satisfying , such that .

Then we have


where (34) holds by replacing with in (28).

Hence, we get , which is impossible. Therefore no such exists, and is a -minimal representation of w.r.t. .

Then, since we have (thanks to Data Processing Inequality), is also a -minimal sufficient statistic of w.r.t. .

II. For Problem (9): .

Since the objective is to maximize , we only need to show that achieves the maximum mutual information with . According to the above proof for Problem (8), we know that there exist such that and . Hence, the optimizer to Problem (9) must satisfy sufficiency.

The proof of -minimality is identical to the one under Problem (8).

B.2 Additional Theoretical Results on Augmentation Properties

The two conditions in Theorem 4.2, Condition (a) and Condition (b), require that the augmentation is (a) sufficient and (b) ()-minimal. These two conditions are closely related to augmentation rationales in prior papers. For example, Wang et al. [45] propose a symmetric augmentation, which results in Condition (a), as formalized in Lemma B.1 below. Furthermore, Tian et al. [40] propose an "InfoMin" principle of data augmentation that minimizes the mutual information between different views (equivalent to ). We show by Lemma B.2 that this InfoMin principle leads to Condition (b). In contrast, our Theorem 4.2 characterizes two key conditions of augmentation and directly relates them to the optimality of the learned representation.

Lemma B.1 (Sufficiency of Augmentation).

Suppose the original and augmented observations and satisfy the following properties:


Then the augmented observation is sufficient for the label , i.e., .

Proof of Lemma b.1.

where the third equation utilizes the property of symmetric augmentation. ∎

Lemma B.2 (Maximal Insensitivity to Nuisance).

If Assumption 4.1 holds, i.e., , the mutual information can be decomposed as


Since is sufficient, i.e., is a constant, minimizing is equivalent to minimizing .

Lemma B.2 can be obtained by a simple adaptation from Proposition 3.1 by Achille and Soatto [1].

Appendix C Experiments

C.1 Implementation Details

All code is implemented in PyTorch. To train the neural net with LP-A3 augmentation, we apply separate batch normalization (BN) layers, i.e., augmented data and normal data use different BNs, which is a common strategy in previous adversarial augmentation methods [19, 48, 51]. The only hyperparameters for LP-A3 are the label preserving margin and the data selection ratio, which are tuned for each task according to the results in Sec. 6.3.
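The separate-BN strategy can be sketched as a drop-in module that keeps two sets of BN statistics and switches between them with a flag (an illustrative sketch, not the paper's exact implementation):

```python
import torch
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    """Separate BN statistics for clean vs. adversarially augmented batches,
    switched by a flag toggled from the training loop."""
    def __init__(self, num_features: int):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_aug = nn.BatchNorm2d(num_features)
        self.use_aug = False  # set True before forwarding augmented batches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn_aug(x) if self.use_aug else self.bn_clean(x)
```

Before forwarding an augmented batch, the training loop sets `use_aug = True` on all such modules, so the running statistics of clean and augmented data never mix.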

Semi-supervised learning  We reproduce FixMatch [36] based on public code and apply LP-A3 to it. Following [36], we use a Wide-ResNet-28-2 with 1.5M parameters for CIFAR10, a WRN-28-8 for CIFAR100, and a WRN-37-2 for STL-10. All models are trained for the same number of iterations as the original FixMatch. The label preserving margin is set to 0.002 for CIFAR10 and STL-10 and to 0.02 for CIFAR100, and the data selection ratio is set to 90%. Since FixMatch only applies data augmentation to unlabeled data, LP-A3 is likewise applied only to unlabeled data. For unlabeled data, the label used in LP-A3 is the pseudo label generated by the FixMatch algorithm.
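One way this pseudo-label flow could look inside a FixMatch-style unlabeled step is sketched below; `lp_a3_augment` is a hypothetical hook standing in for the LP-A3 attack, not the actual API:

```python
import torch
import torch.nn.functional as F

def fixmatch_lp_a3_step(model, x_unlabeled, weak_aug, strong_aug,
                        lp_a3_augment, threshold=0.95):
    """One unlabeled-batch step: pseudo-labels come from weakly augmented
    views; the consistency loss is computed on strongly augmented views,
    with the learnable augmentation applied on top."""
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()   # keep confident pseudo-labels only
    x_strong = strong_aug(x_unlabeled)
    x_strong = lp_a3_augment(model, x_strong, pseudo)  # uses the pseudo labels
    logits = model(x_strong)
    return (F.cross_entropy(logits, pseudo, reduction='none') * mask).mean()
```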

Noisy-label learning  We reproduce DivideMix [26] and PES [5] based on their official code and apply LP-A3 to them as data augmentation. Following [26, 5], we use a ResNet-18 for CIFAR10 and CIFAR100. All models are trained for 300 epochs. The label preserving margin is set to 0.002 for CIFAR10 and 0.02 for CIFAR100, and the data selection ratio is set to 90%. All label noise is symmetric. For noisily labeled data, the label used in LP-A3 is the pseudo label generated by DivideMix or PES, respectively.

Medical Image Classification  We follow the original training and evaluation protocol of MedMNIST and apply LP-A3 to the training procedure as data augmentation. ResNet-18 and ResNet-50 are trained for 100 epochs with cross-entropy loss on all the multi-class classification subsets of MedMNIST. The label preserving margin is set to 0.02 and the data selection ratio is tuned for each dataset. The hyperparameters of RandAugment [13] are set following their original paper.

Sensitivity Analysis of Hyperparameters  In Fig. 5, the experiments for semi-supervised learning are conducted on CIFAR100 with 2500 labeled data, the experiments for noisy-label learning are conducted on CIFAR100 with 80% noisy labels, and the experiments for medical image classification are conducted on DermaMNIST with ResNet-50.

Figure 6: Wall-clock time comparison on CIFAR100 with 2500 labeled data.
(a) measured by LPIPS distance
(b) measured by classification accuracy