Regularizing Neural Network Training via Identity-wise Discriminative Feature Suppression

It is well-known that a deep neural network has a strong fitting capability and can easily achieve a low training error even with randomly assigned class labels. When the number of training samples is small, or the class labels are noisy, networks tend to memorize patterns specific to individual instances to minimize the training error. This leads to the issue of overfitting and poor generalisation performance. This paper explores a remedy by suppressing the network's tendency to rely on instance-specific patterns for empirical error minimisation. The proposed method is based on an adversarial training framework. It suppresses features that can be utilized to identify individual instances among samples within each class. This leads to classifiers only using features that are both discriminative across classes and common within each class. We call our method Adversarial Suppression of Identity Features (ASIF), and demonstrate the usefulness of this technique in boosting generalisation accuracy when faced with small datasets or noisy labels. Our source code is available.


I Introduction

In recent years, deep neural networks (DNNs) have grown in size from 3,246 trainable parameters in 1989 (LeNet [LetNet]) to tens of millions of parameters (AlexNet [AlexNet], ResNet [he2015resnet]). This has led to a massive increase in networks' ability to capture complex patterns from input data. However, it has also increased the risk of overfitting the training data. As an extreme case, Zhang et al. [Overfitting] showed that any sufficiently large network can memorize the labels for instances in a dataset, even if they are randomly assigned. This effect is more pronounced when there is insufficient training data for a network or when the labels associated with the data are noisy. The latter case is common when training data and their annotations are automatically crawled from the Internet.

Conceptually, an instance of training data could have two types of features:

  • Class-wise Discriminative Features: These are features that are useful for determining the class that each sample belongs to. For example, a picture of a cat may be identified as such by the presence of whiskers.

  • Identity-wise Discriminative Features: These are features that are useful for determining precisely which sample is which. For example, the length and number of whiskers would help to identify a specific cat.

A deep neural network can use both types of feature to perform empirical risk minimization. However, the second type of feature is less likely to generalise well to unseen data, and relying on it could cause overfitting.

Motivated by the above insight, this paper proposes a method to encourage the learning of Class-wise Features whilst discouraging the learning of Identity-wise Features. Intuitively, this leads to classifiers only using those features that are both discriminative across classes and common within each class. We do this by assigning each sample a unique ID and training a network that can classify samples whilst failing to determine the individual identity of each sample. This proposed process is performed adversarially in a manner akin to the DANN method [ganin2015domainadversarial, ganin2015unsupervised] in domain adaptation. DANN maintains a domain classifier that can identify the domain of the input data while adversarially learning features that can reduce the discriminative power of the domain classifier. In our case, we essentially treat each individual sample as its own domain.

In order to perform adversarial training, it was necessary to implement a gradient reversal layer (GRL) as described by Ganin et al. [ganin2015domainadversarial]. However, proper adversarial training requires a balance between the regular backpropagation of the classification task and the reversed backpropagation of the sample-identification task, which requires careful fine-tuning. While it is possible to discover a training schedule that maintains this balance, doing so is tricky and error-prone. We therefore also propose a method called Dynamic Gradient Reversal (DGR), which requires no tuning. Our experiments show that DGR can be used anywhere a GRL is used.
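For concreteness, the following is a minimal sketch of a DANN-style gradient reversal layer in PyTorch. The fixed `lambda_` strength shown here is the schedule-tuned hyperparameter that DGR later replaces; the helper names are ours, not part of any library or of our released code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """DANN-style gradient reversal: identity on the forward pass,
    gradient multiplied by -lambda_ on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor.
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)
```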

We further explore two use cases for ASIF. The first one is to improve the generalisation performance of a deep neural network when training with a small amount of data. The second one is to increase the resilience against inaccurately labelled training data. In the second case, we show that the proposed method can be directly applied as a regularization approach or can be used to identify the incorrect class labels.

In short, the main contributions of this paper are:

  • A technique to reduce the tendency for large networks to overfit to specific samples in the training data.

  • A training method that maintains or improves accuracy while reducing the number of class-variant features to the absolute minimum.

  • A gradient reversal algorithm that can be used anywhere a DANN-style Gradient Reversal Layer is used.

The source code behind these experiments is available at https://github.com/avichapman/identity-feature-suppression.

The rest of the paper is organised as follows. In Sect. II, we provide further background in the problem area. We then describe the methodology behind our approach in Sect. III. In Sect. IV, we describe the experiments performed to explore the characteristics of ASIF. Finally, in Sect. V we summarise our results and suggest fruitful directions for future work.

II Related Works

II-A Domain Adaptation

Domain Adaptation is a family of techniques for learning a task on data from one domain and applying that task to another domain [Wilson2020Survey]. Pan and Yang [PanSurvey2010] define a 'domain' as consisting of a feature space and a marginal probability distribution of the features across the population.

Domain Adaptation works on the assumption that the feature space stays the same. That is, the things about a data sample that can be measured never change. For example, in any given country it is possible to measure the probability of a pregnancy resulting in fraternal twins. However, the marginal probability distribution between domains can vary widely. Applied to the aforementioned example, this would mean that the probability of fraternal twins in one country may be very different from that same measurement in another country.

Several recent works [Dredze2009MultidomainLB, joshi-etal-2012-multi, hassan2018unsupervised] have attempted to take advantage of data across all domains to learn to predict within a single domain. Sebag et al. [schoenauersebag2019multidomain] attempted to use adversarial learning to learn the distributions of the various domains in the training set.

Wilson, Doppa and Cook [Wilson2020] proposed a method called Convolutional deep Domain Adaptation model for Time Series data (CoDATS), which took advantage of adversarial learning to encourage a feature extractor network to learn features that were domain invariant. The CoDATS network consisted of three parts: a feature extractor, a classifier and a domain predictor. The classifier was trained against labelled data for a source domain. At the same time, the domain predictor was trained to predict whether a given sample belonged to the source domain or a target domain. They used a gradient reversal layer (GRL) to create an adversarial relationship between the two tasks. The GRL worked by passing a feature vector through untouched when feeding forward and reversing the gradient when backpropagating.

Using this technique, Wilson et al. were able to demonstrate superior classification on the target domain, despite the network having never seen labels for the target domain. However, this technique is limited in that it requires each domain to have a label, which doesn’t work in blurry edge cases or when the target domain is unknown at training time.

II-B Learning from Noisy Labels

In real life datasets, perfect label accuracy is unrealistic. Human annotators make errors due to fatigue and other considerations. Moreover, labels are often obtained through means such as Amazon’s Mechanical Turk or as pseudo-labels generated via semi-supervised means.

Referred to as noisy labels, these inaccurate labels can have a deleterious effect on training accuracy. Zhang et al. [DBLP:journals/corr/ZhangBHRV16] demonstrated that a sufficiently complex DNN could learn a dataset with an arbitrarily high level of label noise. This overfitting behaviour negatively affects performance when evaluated on test data.

Frénay and Verleysen [FrenayLabelNoiseSurvey] divided label noise into two types: Instance-independent and Instance-dependent.

Instance-independent label noise is characterised by the lack of a probabilistic relationship between a label being wrong and the underlying features of a given sample. In the literature, this is often further sub-divided into Symmetric and Asymmetric noise.

  • Symmetric noisy labels are modelled by randomly changing label values from their true class to some other class with a certain probability.

  • Asymmetric label noise is modelled by randomly changing the labels for samples of certain classes to a similar class with a given probability. An example of this would be changing ’cars’ to ’trucks’ in the CIFAR10 dataset. This is slightly more realistic.

Instance-dependent label noise, on the other hand, varies with the particular characteristics of each sample. This mirrors reality, in that more ambiguous samples are more likely to be misclassified.

Chen et al. [Chen2021InstanceDependentNoise] described a method for producing realistic instance-dependent noise. Their method involved training a DNN on a dataset for a certain number of epochs and recording the associated loss of each sample. The samples with the highest averaged loss were deemed to be the most counter-intuitive samples and therefore the most likely to be mislabelled in real life. While this does not work as an online means of determining misclassification likelihood, it is more than sufficient when the training data is known beforehand.

DNNs tend to learn general classification rules first before proceeding to memorise individual samples in a dataset. This means that early in training, a sample’s classification loss can be used as an indication of whether the sample’s label is incorrect. A large loss may indicate a label is wrong. This is often referred to as the small-loss trick. Han et al. [Han2018CoTeaching], Jiang et al. [Jiang2018Mentornet] and Yu et al. [Yu2019Disagreement] all utilise the small-loss trick to detect noisy labels.

A popular method for combating overfitting to noisy labels is to change the rate at which DNNs learn instance-specific cases by modifying the loss function, so that samples with an abnormally high loss (and thus likely to be wrongly labelled) are ignored or have a reduced effect on the training outcome. Zhang et al. [GCE] proposed the Generalised Cross Entropy loss, which bridged the gap between Cross Entropy loss and the MAE/unhinged loss. Menon et al. [PHUber] went further and proposed a partially Huberised Cross Entropy loss, which utilized gradient clipping to arrive at a more robust training solution. Both of these methods are included for comparison in this paper.

III Method

This section describes the proposed ASIF training method in further detail.

III-A Notations and Definitions

The following notation will be used in this paper:

  • $N$: The total number of samples being trained on.

  • $C$: The total number of classes in the dataset.

  • $N_c$: The total number of samples of a given class $c$.

  • $B$: The total number of samples in each mini-batch.

  • $\eta$: The amount of label noise, 0-1.

  • $\mathcal{L}_{id}^{c}$: The identification task loss associated with class $c$.

  • $\mathcal{L}_{cls}$: The classification task loss.

  • $F$: The Feature Extractor.

  • $d$: The dimension of the feature vector output from $F$.

  • $H$: The Identifier module.

  • $G$: The Classifier module.

III-B Method Description

As discussed in Section I, ASIF attempts to overcome the problem of networks overfitting to specific instances in the training data. To that end, it attempts to select Class-wise Features while suppressing Identity-wise Features. Recall that the former are features useful in classifying a sample into one of several classes, while the latter features are useful in determining the identity of a specific sample.

ASIF attempts to perform both tasks: Classification and Identification. Classification divides the dataset into classes and attempts to determine the class that each sample belongs to. Similarly, Identification divides the dataset into 'identities' (one for each sample) and attempts to ascertain the identity of each and every sample. The Identifier module $H$ contains global parameters as well as parameters that are tuned for each class to perform identification amongst the individual samples within that class. A gradient reversal layer exists to encourage the network to fail in its Identification task.

III-C Network Structure

Fig. 1: Architecture of our proposed ASIF network. (a) A shared Feature Extractor extracts features for use by all downstream tasks. A Classifier module performs the Classification task, while an Identifier module performs the Identification task. (b) The Identifier module contains shared parameters that are trained on all samples, as well as dedicated parameters for each class of samples, producing $C$ outputs, one for each class. There is a Dynamic Gradient Reversal (DGR) layer between the shared and per-class parameters.

As Figure 1 indicates, the ASIF network consists of several components:

III-C1 Feature Extractor

The Feature Extractor ($F$) can be any off-the-shelf network. For the experiments in this paper, we used ResNet18 [he2015resnet]. For ease of base-lining, we used the ResNet18 implementation provided as part of Nishi et al.'s [Nishi2021Augmentation] work.

The output of the feature extractor is a feature vector $f \in \mathbb{R}^d$. This is passed to a linear layer $G$, the 'Classifier' in Figure 1(a), to extract logits for use with the Classification task. The logits are combined with the classification label using a Cross Entropy loss ($\mathcal{L}_{cls}$). Note that the loss applied here can be varied. Investigation of other losses for the Classification task is left to future work.

III-C2 Identification Module

The Identification Module $H$ attempts to identify individual samples. It has $C$ outputs, one for each class.

As shown in Figure 1(b), the module is divided into three parts: a 'public' part whose parameters are trained on all samples regardless of label, a DGR, and $C$ sets of private parameters that are only trained on samples with the matching label. Each set of private parameters produces one of the $C$ outputs.

By sharing as many parameters as possible between the classes, we allow for a network that more easily scales to a larger number of classes.

The ’public’ section of the Identifier consists of two linear layers with a Batch Normalization, ReLU and dropout in between. The ’private’ section has a Batch Normalization, ReLU, dropout and final linear layer for each class. All hyperparameters, such as dropout levels, were manually optimised using cross-validation.
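As a rough illustration of this structure, the sketch below arranges the Identifier's 'public' and 'private' parameters as described above. The hidden width, dropout rate, and the reuse of the `grad_reverse` helper sketched in the Introduction are assumptions rather than our exact configuration.

```python
import torch
import torch.nn as nn

class Identifier(nn.Module):
    """Sketch of the Identifier: shared ('public') layers trained on all samples,
    a gradient reversal, then one 'private' head per class.
    Layer widths and dropout rate are illustrative assumptions."""

    def __init__(self, feat_dim, samples_per_class, hidden=512, p_drop=0.5):
        super().__init__()
        # 'Public' part: two linear layers with BatchNorm, ReLU and dropout in between.
        self.public = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, hidden),
        )
        # 'Private' part: BatchNorm, ReLU, dropout and a final linear layer per class.
        self.private = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm1d(hidden),
                nn.ReLU(),
                nn.Dropout(p_drop),
                nn.Linear(hidden, n_c),  # one logit per sample identity in class c
            )
            for n_c in samples_per_class
        ])

    def forward(self, features, lambda_=1.0):
        shared = self.public(features)
        # Gradient reversal sits between the shared and per-class parameters
        # (grad_reverse is the DANN-style helper sketched in Section I).
        shared = grad_reverse(shared, lambda_)
        return [head(shared) for head in self.private]
```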

Each mini-batch output from the Identifier is filtered down to just the batch members from its class, and a Cross-Entropy loss ($\mathcal{L}_{id}^{c}$) is applied. The losses are then averaged in proportion to each class' share of the mini-batch and combined with the classification loss. This structure allows the ASIF network to optimise:

$\mathcal{L} = \mathcal{L}_{cls} + \alpha \sum_{c=1}^{C} \frac{B_c}{B} \mathcal{L}_{id}^{c}$   (1)

where $\mathcal{L}_{id}^{c}$ denotes the loss on the Identifier output for class $c$, $B_c$ is the number of mini-batch samples with label $c$, and $\alpha$ is a coefficient for the Identifier losses. The $\alpha$ values used in the experiments can be found in the appendix.
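A minimal sketch of how Equation (1) can be evaluated on a mini-batch follows; the function and argument names (including the coefficient `alpha`) are illustrative, not taken from our released code.

```python
import torch
import torch.nn.functional as F

def asif_loss(class_logits, id_logits_per_class, labels, id_targets, alpha=1.0):
    """Sketch of Eq. (1): classification loss plus per-class identity losses,
    each computed only on the mini-batch members of its class and weighted
    by that class's share of the mini-batch."""
    loss_cls = F.cross_entropy(class_logits, labels)

    loss_id = class_logits.new_zeros(())
    for c, id_logits in enumerate(id_logits_per_class):
        mask = labels == c
        if mask.any():
            # Identity targets index the individual samples within class c.
            loss_c = F.cross_entropy(id_logits[mask], id_targets[mask])
            loss_id = loss_id + mask.float().mean() * loss_c  # weight = B_c / B

    return loss_cls + alpha * loss_id
```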

III-C3 Dynamic Gradient Reversal Layers

Similar to DANN [ganin2015domainadversarial], we reverse the gradient during backpropagation from the Identification task. To review, DANN contains a Gradient Reversal Layer (GRL) which, on feeding forward, acts as the identity: $R(x) = x$. However, when backpropagating, its Jacobian is $-\lambda I$, where $I$ is the identity matrix and $\lambda$ is a hyperparameter. The value of $\lambda$ is set according to a schedule tuned to ensure a proper balance is maintained between the competing tasks. Unfortunately, choosing an incorrect value for $\lambda$ leads to the Identification task becoming confidently wrong.

To overcome this limitation, this paper proposes a Dynamic Gradient Reversal (DGR) scheme. Unlike DANN, our method does not require a tune-able hyper-parameter. Moreover, our experiment shows that DGR alleviates overfitting better when compared with DANN-like gradient reversal layers, as shown in Figure 2.

Fig. 2: Classification loss, training and test accuracy versus training epoch when training with Symmetrical 80% noise and CIFAR10. The use of Dynamic Gradient Reversal (DGR) leads to reduced overfitting, as indicated by the classification loss not dropping as fast. Also note that the DGR training and test accuracies remain in lockstep, while the DANN accuracies diverge during later training.
while Still Training do
     Perform the Identification task
     Backpropagate using the current $\lambda$
     Record $\mathcal{L}_{id}$
     Update $\lambda$ by comparing the recorded $\mathcal{L}_{id}$ with $\mathcal{L}_{ideal}$
end while
Algorithm 1 Dynamic Gradient Reversal

Formally, Dynamic Gradient Reversal (DGR) is designed to maintain the Identification task in a state of maximum uncertainty. This is done by establishing a baseline ideal loss ($\mathcal{L}_{ideal}$) corresponding to maximum entropy in the identity logits, as shown in Equation 2:

$\mathcal{L}_{ideal} = \ln(N_c)$   (2)

i.e. the Cross-Entropy loss obtained when the Identifier assigns a uniform probability to each of the $N_c$ identities within a class. We then automatically choose $\lambda$ by comparing the current $\mathcal{L}_{id}$ and $\mathcal{L}_{ideal}$. The detailed process of calculating $\lambda$ is described in Algorithm 1. The algorithm has the same computational cost as DANN, since the tuning of the $\lambda$ value is done using the loss, which is computed anyway.
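To make the intent of Algorithm 1 concrete, the small sketch below tracks the ideal loss of Equation (2) and adjusts the reversal strength from the observed identification loss. The ratio-based update is an illustrative assumption only; the exact rule is the one given by Algorithm 1, and the class name and method signatures are ours.

```python
import math

class DynamicGradientReversal:
    """Sketch of the idea behind DGR: keep the Identification task near maximum
    uncertainty by adjusting the reversal strength from the observed loss."""

    def __init__(self, samples_per_class):
        # Maximum-entropy (uniform) cross-entropy over the N_c identities of each class.
        self.ideal = {c: math.log(n_c) for c, n_c in enumerate(samples_per_class)}
        self.lambda_ = 1.0

    def update(self, c, observed_id_loss):
        # Illustrative heuristic (assumption): strengthen the reversal when the
        # Identifier beats chance (loss below the ideal), weaken it when the
        # reversed gradient has overshot (loss above the ideal).
        self.lambda_ = self.ideal[c] / max(observed_id_loss, 1e-8)
        return self.lambda_
```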

III-D Noisy Label Detection

One of the challenges of obtaining real-life data is the difficulty of creating accurate labels. It is typical to use a large human workforce and/or some degree of automation (for example, scraping the Internet) to acquire the labels. Inaccurate (or ’noisy’) labels often become associated with the training data. This can lead to reduced accuracy in trained networks.

To detect noisy labels, the small-loss trick [Han2018CoTeaching, Jiang2018Mentornet, Yu2019Disagreement] described in Section II-B can be applied: DNNs learn general cases first, so high losses indicate tricky cases or bad labels. Given ASIF's regularising effect during training, the feature extractor will take much longer to overfit to the edge cases. We thus theorise that the small-loss trick, applied to the Classification task's loss $\mathcal{L}_{cls}$, would be far more robust when trained with ASIF than without.

IV Experiments

In this section, we evaluate the performance of ASIF using reduced training sets and noisy labels. We also delve into the characteristics of ASIF. Experiments were run using both the CIFAR10 and Fashion-MNIST [FashionMNIST] datasets. All training techniques are judged based on their macro F1 scores when classifying on the test set.

The CIFAR10 dataset contains 50,000 small 32x32 sample images, split evenly across ten classes. It contains a further 10,000 images for evaluation - 1000 per class. The Fashion-MNIST dataset contains 60,000 small 28x28 grayscale images of fashion products, split evenly across ten classes. It contains a further 10,000 images for evaluation.

To serve as a point of comparison, all experimental configurations were also run with several other methods. In these non-ASIF cases, the Identifier module was removed from the network. Three different losses were applied to the output logits from the Classifier:

  • CE: Cross-Entropy. This is the same as used in the ASIF experiments.

  • GCE: Generalised Cross-Entropy [GCE]

  • PHuber: partially Huberised Cross-Entropy [PHUber]

IV-A Reduced Training Sets

To investigate ASIF’s ability to resist overfitting with small training sets, we designed an experiment which trained the network on the CIFAR10 dataset with various numbers of samples per class.

The inputs to all techniques were the same, with standard regularisation applied, but no data augmentation. Runs were conducted with $N \in$ {10k, 20k, 30k, 40k, all (50k)}. Each configuration was run three times and their macro F1 scores across the test set averaged.
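For reproducibility, here is a sketch of how a balanced reduced training set can be drawn from CIFAR10; the helper name and the even per-class split are assumptions consistent with the protocol described above.

```python
import numpy as np
from torch.utils.data import Subset
from torchvision.datasets import CIFAR10

def balanced_subset(dataset, n_total, num_classes=10, seed=0):
    """Keep n_total samples, split evenly across classes (assumed protocol)."""
    rng = np.random.default_rng(seed)
    targets = np.array(dataset.targets)
    per_class = n_total // num_classes
    keep = []
    for c in range(num_classes):
        idx = np.flatnonzero(targets == c)
        keep.extend(rng.choice(idx, size=per_class, replace=False))
    return Subset(dataset, keep)

train_set = CIFAR10(root="./data", train=True, download=True)
small_set = balanced_subset(train_set, n_total=10_000)  # 10k, 20k, 30k, 40k or all 50k
```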

$N$ CE GCE PHuber ASIF
10k 69.1 ±  1.2 64.8 ±  2.4 73.2 ±  0.6 74.2 ±  0.2
20k 74.0 ±  0.5 72.6 ±  1.1 79.4 ±  0.0 80.2 ±  0.1
30k 78.7 ±  0.4 75.6 ±  0.7 82.1 ±  0.1 83.6 ±  0.3
40k 81.5 ±  0.7 80.6 ±  0.1 83.4 ±  0.1 85.5 ±  0.3
50k 84.3 ±  0.6 83.0 ±  0.4 83.9 ±  0.4 86.8 ±  0.4
TABLE I: Macro F1 Scores when training on reduced training sets on CIFAR10.

We discovered that ASIF has a marked advantage over its competitors in this domain. Table I summarises the results for CIFAR10. Fashion-MNIST results can be found in Table IX in the appendix.

IV-B Training with Label Noise

To test ASIF in the presence of label noise, we intentionally modified the labels in the datasets. Two types of label noise were investigated.

The first was Symmetrical instance-invariant noise, as described by Zhang et al. [DBLP:journals/corr/ZhangBHRV16] and by Zhang and Sabuncu [DBLP:journals/corr/abs-1805-07836]. To apply this sort of noise, a fraction of the dataset corresponding to the desired noise level $\eta$ was selected and had its labels modified.
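A minimal sketch of symmetric instance-invariant noise injection is shown below; the relabelling rule (a uniformly chosen different class) follows the standard protocol described earlier, while the function name and seed handling are illustrative.

```python
import numpy as np

def add_symmetric_noise(labels, eta, num_classes=10, seed=0):
    """Flip an eta fraction of labels to a uniformly chosen *different* class."""
    rng = np.random.default_rng(seed)
    labels = np.array(labels).copy()
    n_noisy = int(round(eta * len(labels)))
    noisy_idx = rng.choice(len(labels), size=n_noisy, replace=False)
    for i in noisy_idx:
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels, noisy_idx
```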

Symmetrical instance-invariant noise is not realistic, since labelling errors are more likely to happen with ambiguous samples than with obviously distinct ones. We used the technique described by Chen et al. [Chen2021InstanceDependentNoise] to produce realistic instance-dependent noise. Their process produced a list of samples and associated average losses. We ranked the samples by descending loss and selected the top $\eta \cdot N$ samples. Those samples then had their labels randomly swapped.
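The instance-dependent case can be sketched similarly, assuming the per-sample averaged losses produced by the Chen et al. procedure are already available; the function name and the random relabelling of the selected samples are illustrative assumptions.

```python
import numpy as np

def instance_dependent_noise(labels, avg_losses, eta, num_classes=10, seed=0):
    """Relabel the eta*N samples with the highest averaged training loss
    (the most 'counter-intuitive' samples), following Chen et al.'s recipe."""
    rng = np.random.default_rng(seed)
    labels = np.array(labels).copy()
    n_noisy = int(round(eta * len(labels)))
    # Rank samples by descending averaged loss and take the top eta*N.
    ranked = np.argsort(-np.asarray(avg_losses))
    noisy_idx = ranked[:n_noisy]
    for i in noisy_idx:
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels, noisy_idx
```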

Having trained ASIF against six values of $\eta$ and two different noise techniques, we have been able to show superior performance in the very high noise domain of around 70%-90% noise. Figure 3 clearly illustrates ASIF's accuracy advantage in that area with CIFAR10.

Fig. 3: CIFAR10 results. ASIF confers a clear accuracy improvement over the baseline training method (Cross Entropy) with noisy labels, especially in the high-noise end of the range. GCE and PHuber training results are included for comparison.

In the easier scenario of Symmetric Instance-Invariant noise, PHuber is still competitive with ASIF. However, even in those cases, ASIF beats CE and GCE by a wide margin. Please see Tables II and III for the full CIFAR10 results. ASIF's corresponding performance with Fashion-MNIST can be found in Tables X and XI in the appendix.

$\eta$ CE GCE PHuber ASIF
0 84.3 ±  0.6 83.0 ±  0.4 83.9 ±  0.4 86.8 ±  0.4
0.2 79.5 ±  0.5 79.1 ±  0.1 80.5 ±  0.3 81.5 ±  0.0
0.4 70.4 ±  0.3 70.5 ±  0.1 74.2 ±  0.4 73.1 ±  0.6
0.6 59.2 ±  0.7 58.4 ±  0.3 65.7 ±  0.1 65.6 ±  1.9
0.7 52.8 ±  1.0 50.9 ±  0.6 44.1 ±  0.6 61.6 ±  0.9
0.8 45.3 ±  1.4 42.4 ±  0.5 34.0 ±  0.2 53.5 ±  1.0
0.9 38.2 ±  0.5 31.5 ±  0.9 27.4 ±  2.3 41.8 ±  1.3
TABLE II: Macro F1 Scores with Instance-Dependent Noisy Labels on CIFAR10.
$\eta$ CE GCE PHuber ASIF
0 84.3 ±  0.6 83.0 ±  0.4 83.9 ±  0.4 86.8 ±  0.4
0.2 65.5 ±  0.5 75.1 ±  0.4 81.8 ±  0.1 77.6 ±  0.7
0.4 49.0 ±  2.3 60.3 ±  1.1 78.6 ±  0.8 72.0 ±  0.4
0.6 41.0 ±  1.1 48.3 ±  1.3 73.5 ±  0.3 63.3 ±  1.9
0.7 29.7 ±  1.1 40.8 ±  0.5 67.3 ±  1.3 55.6 ±  0.5
0.8 21.4 ±  0.7 29.3 ±  0.9 53.6 ±  4.4 43.1 ±  2.4
0.9 5.0 ±  2.9 16.6 ±  0.5 17.7 ±  3.1 27.3 ±  2.6
TABLE III: Macro F1 Scores with Symmetric Instance-Invariant Noisy Labels on CIFAR10.

IV-C Detecting Incorrect Labels

In addition to showing that ASIF confers an advantage when training in the face of noisy labels, we can also show that ASIF can help to detect which samples have bad labels.

To do so, we periodically recorded the $\mathcal{L}_{cls}$ value associated with each sample during training and ranked the samples by descending value. The top $\eta \cdot N$ samples were selected as 'probably false'. These samples' labels were then compared with the true labels to determine the balanced accuracy of dirty-label detection.
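A sketch of this detection procedure follows, assuming per-sample classification losses have already been recorded; the flagging threshold (the known noise fraction) and the balanced-accuracy computation shown here are simplifying assumptions, and the function names are ours.

```python
import numpy as np

def flag_suspect_labels(per_sample_cls_loss, eta):
    """Rank samples by descending classification loss and flag the top
    eta*N as 'probably false' (small-loss trick)."""
    n_flag = int(round(eta * len(per_sample_cls_loss)))
    ranked = np.argsort(-np.asarray(per_sample_cls_loss))
    return set(ranked[:n_flag].tolist())

def detection_balanced_accuracy(flagged, noisy_idx, n_total):
    """Balanced accuracy of the flagged set against the ground-truth noisy set."""
    noisy = set(noisy_idx)
    clean = set(range(n_total)) - noisy
    tpr = len(flagged & noisy) / max(len(noisy), 1)   # noisy labels caught
    tnr = len(clean - flagged) / max(len(clean), 1)   # clean labels kept
    return 0.5 * (tpr + tnr)
```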

Our results indicate that utilising ASIF to pick out bad labels can confer as much as a 10% advantage over the baseline CE loss. This is a significant improvement over both GCE and PHuber. In the easier scenario of Symmetric Instance-Invariant noise, ASIF shows a detection advantage of as much as 13% over CE.

Please see Tables IV and V for full results.

$\eta$ CE GCE PHuber ASIF
0.2 64.2 ±  0.8 65.0 ±  1.0 67.6 ±  1.1 70.0 ±  1.6
0.4 73.4 ±  0.5 70.5 ±  1.4 72.6 ±  5.0 78.0 ±  1.6
0.6 73.2 ±  1.6 67.7 ±  0.5 74.5 ±  0.6 79.7 ±  3.3
0.7 69.7 ±  1.6 66.2 ±  1.1 64.5 ±  3.1 79.6 ±  2.0
0.8 65.9 ±  0.7 60.7 ±  0.7 62.9 ±  3.6 71.2 ±  1.2
0.9 58.1 ±  2.0 58.3 ±  0.9 61.1 ±  0.6 61.9 ±  3.0
TABLE IV: F1 Score of Instance-Dependent Noisy Label Detection on CIFAR10
$\eta$ CE GCE PHuber ASIF
0.2 79.1 ±  3.1 86.7 ±  0.6 82.0 ±  2.0 85.6 ±  0.7
0.4 75.9 ±  0.2 83.7 ±  0.3 84.6 ±  0.5 86.2 ±  1.3
0.6 73.3 ±  0.9 77.6 ±  1.1 83.1 ±  1.4 84.1 ±  0.4
0.7 67.1 ±  1.6 73.2 ±  1.1 79.7 ±  3.2 79.5 ±  1.3
0.8 60.7 ±  0.4 65.7 ±  0.7 71.7 ±  0.6 73.2 ±  2.0
0.9 52.2 ±  2.2 54.8 ±  0.9 59.4 ±  2.0 62.9 ±  0.9
TABLE V: F1 Score of Symmetric Instance-Invariant Noisy Label Detection on CIFAR10

IV-D Analysis of the Features Learned With ASIF

We conducted an analysis to gain a better understanding of the effect ASIF has on the features learned. First, we address the degree to which ASIF inhibits memorisation of features. The theory behind ASIF is that we penalise the learning of features that can be used to identify specific instances of data, while allowing the learning of features that are necessary to differentiate between classes of data. We tested this by training a single layer classifier on the feature vectors obtained from the frozen pre-trained feature extractor until performance stopped improving. The goal was correctly identifying each individual sample. If ASIF performed as theorised, the best loss obtained would be worse for ASIF than it would be for the baseline. The results, shown in Table VI, confirm this supposition.

Dataset CE ASIF
CIFAR10 0.005 ±  0.008 0.782 ±  0.159
Fashion-MNIST 1.059 ±  0.332 8.307 ±  0.202
TABLE VI: Loss obtained when identifying individual samples from feature vectors.

Second, we turn our attention to feature disentanglement. There is a tension between the two goals of class discrimination and instance discrimination, since features can be used toward both ends. We hypothesize that the result of this struggle would be that only the fewest, most useful features would be selected for classification. This would act to disentangle class-wise and identity-wise features, with class-variant features confined to a small subspace and the rest of the features showing no significant statistical differences between classes.

We set out to investigate the distribution of the features selected by ASIF. To do so, we trained a fresh linear classifier on the feature vectors obtained from the frozen pre-trained feature extractor $F$. We then used the absolute values of the linear weights to ascertain which feature dimensions were least important to the classification and removed those from the vectors, before retraining another fresh classifier on the resulting vectors. We performed this process repeatedly, reducing the vector size with each iteration until the resulting vectors had only five dimensions. With each iteration, we recorded the best classification accuracy obtained.
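The pruning loop can be sketched as follows; the probe's optimiser, epoch count, per-step drop size, and the use of the same features for the accuracy read-out are assumptions rather than our exact protocol.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def probe_accuracy(feats, labels, epochs=50, lr=1e-2):
    """Train a fresh single-layer classifier on frozen features; return its
    accuracy on those features and the learned weight matrix."""
    probe = nn.Linear(feats.shape[1], int(labels.max()) + 1)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(probe(feats), labels).backward()
        opt.step()
    with torch.no_grad():
        acc = (probe(feats).argmax(1) == labels).float().mean().item()
    return acc, probe.weight.detach()

def prune_features(feats, labels, min_dims=5, drop_per_step=5):
    """Repeatedly remove the least important feature dimensions, retraining
    a fresh probe each time and recording the accuracy obtained."""
    keep = np.arange(feats.shape[1])
    history = []
    while len(keep) >= min_dims:
        idx = torch.as_tensor(keep, dtype=torch.long)
        acc, weight = probe_accuracy(feats[:, idx], labels)
        history.append((len(keep), acc))
        # Importance of each remaining dimension: summed |weight| across classes.
        importance = weight.abs().sum(dim=0).numpy()
        order = np.argsort(importance)          # least important first
        keep = keep[order[drop_per_step:]]      # drop the least important dims
    return history
```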

Figure 4 shows the results obtained from this experiment. In all cases, the majority of the features played no important role in classification. However, as the least important features continued to be removed, the Cross Entropy baseline became the first to show a reduced accuracy when there were around 55 feature dimensions remaining.

Fig. 4: The accuracy obtained when training a single-layer classifier on the output of feature extractors trained with Cross Entropy Loss (Red) and ASIF (Blue) using between 1 and 100 of the most important features.

ASIF, on the other hand, does not seriously lose accuracy until we are down to the ten most important feature dimensions. This feature compression is desirable as it has been linked to generalization [tishby2015deep, shwartzziv2017opening].

V Conclusions and Future Work

We have introduced ASIF, which differentiates between Class-wise and Identity-wise Discriminative Features, promoting the former whilst suppressing the latter. Through experimentation, we have shown that ASIF has a regularising effect that can reduce overfitting to individual samples and increase robustness against label noise. We have shown that ASIF can result in accuracies that are better than standard training results, especially in the high-noise domain. Moreover, we have also proposed the DGR method which allows for a gradient reversal layer without tune-able hyper-parameters.

We have shown that the use of ASIF results in the selection of far fewer class-variant features. This feature of ASIF-trained feature extractors has great potential for use in Data Privacy and Anonymization applications, or anywhere else that autoencoders are useful.

Moreover, when training on the full dataset, the selected features are likely to be domain invariant and lead to much better generalisation. Extending the Identification task to the unsupervised realm may encourage more domain invariant feature selection and seems like a fruitful direction for future research.

ASIF has limitations when it comes to scaling up. Because its Identifier module has a linear layer that specifies individual instances in a fixed dataset, the size of that dataset has a practical upper limit. Because each class has its own set of weights, this memory constraint also places a limit on the number of classes trained. Moreover, while suppressing identity features has been shown to be helpful in the settings laid out in this paper, such features are crucial in certain other tasks, such as face recognition. This limits ASIF's applicability in some areas. Despite these limitations, differentiating between Class-wise and Identity-wise Features is nonetheless a novel approach.

Acknowledgment

This work was supported with supercomputing resources provided by the Phoenix HPC service at the University of Adelaide.

References

Appendix A CIFAR10 ASIF Configurations

Table VII shows the configurations used to produce the CIFAR10 ASIF results.

Noise $\eta$ $N$ LR $\alpha$
Instance 0.2 50k 0.0001 0.01
Instance 0.4 50k 0.0001 0.1
Instance 0.6 50k 0.0001 0.1
Instance 0.7 50k 0.0001 0.1
Instance 0.8 50k 0.0001 0.1
Instance 0.9 50k 0.0001 0.1
Symmetric 0.2 50k 0.0001 10.0
Symmetric 0.4 50k 0.0001 100.0
Symmetric 0.6 50k 0.0001 1.0
Symmetric 0.7 50k 0.0001 1.0
Symmetric 0.8 50k 0.001 1.0
Symmetric 0.9 50k 0.001 10.0
None 0 10k 0.001 1.0
None 0 20k 0.0001 1.0
None 0 30k 0.0001 1.0
None 0 40k 0.0001 1.0
None 0 50K 0.001 1.0
TABLE VII: CIFAR10 ASIF Experimental Configurations.

Appendix B Fashion-MNIST ASIF Configurations

Table VIII shows the configurations used to produce the Fashion-MNIST ASIF results.

Noise $\eta$ $N$ LR $\alpha$
Instance 0.2 60k 0.001 0.001
Instance 0.4 60k 0.0001 0.1
Instance 0.6 60k 0.001 0.001
Instance 0.7 60k 0.001 0.001
Instance 0.8 60k 0.001 1.0
Instance 0.9 60k 0.001 0.001
Symmetric 0.2 60k 0.0001 0.001
Symmetric 0.4 60k 0.001 0.001
Symmetric 0.6 60k 0.001 0.001
Symmetric 0.7 60k 0.001 1.0
Symmetric 0.8 60k 0.0001 10.0
Symmetric 0.9 60k 0.0001 0.1
None 0 10k 0.001 0.001
None 0 20k 0.001 0.001
None 0 30k 0.001 0.001
None 0 40k 0.001 0.001
None 0 60K 0.001 0.01
TABLE VIII: Fashion-MNIST ASIF Experimental Configurations.

Appendix C Fashion-MNIST Detailed Results

Experiments run on CIFAR10 were also performed on the Fashion-MNIST dataset to test repeatability of the results. Unlike in the case of CIFAR10, tests were only run on Cross Entropy and ASIF, not on GCE or PHuber.

Results are shown here:

  • Reduced Datasets: Table IX

  • Symmetrical Instance-Invariant Noise: Table X

  • Instance-Dependent Noise: Table XI

$N$ CE ASIF
10k 89.1 ±  0.2 89.6 ±  0.1
20k 90.5 ±  0.1 91.2 ±  0.3
30k 91.4 ±  0.1 91.9 ±  0.1
40k 92.0 ±  0.1 92.5 ±  0.3
60k 93.0 ±  0.0 93.1 ±  0.2
TABLE IX: Macro F1 scores when training on reduced training sets on Fashion-MNIST.
$\eta$ CE ASIF
0 93.0 ±  0.0 93.1 ±  0.2
0.2 88.1 ±  0.0 90.3 ±  0.4
0.4 83.7 ±  0.1 88.8 ±  1.1
0.6 74.9 ±  1.1 86.9 ±  0.8
0.7 66.7 ±  2.2 85.3 ±  0.8
0.8 57.3 ±  4.8 81.4 ±  2.3
0.9 19.4 ±  3.5 73.2 ±  0.4
TABLE X: Macro F1 Scores when training on Fashion-MNIST with Symmetric Instance-Invariant Noisy Labels.
$\eta$ CE ASIF
0 93.0 ±  0.0 93.1 ±  0.2
0.2 85.6 ±  0.2 82.5 ±  4.3
0.4 74.7 ±  0.6 72.4 ±  1.9
0.6 53.4 ±  0.6 58.3 ±  1.5
0.7 43.1 ±  1.0 44.8 ±  1.6
0.8 30.5 ±  0.6 34.3 ±  0.8
0.9 21.4 ±  1.6 24.7 ±  2.4
TABLE XI: Macro F1 Scores when training on Fashion-MNIST with Instance-Dependent Noisy Labels.