Simple Domain Adaptation with Class Prediction Uncertainty Alignment

Jeroen Manders et al. (Radboud Universiteit), April 12, 2018

Unsupervised domain adaptation tries to adapt a classifier trained on a labeled source domain to a related but unlabeled target domain. Methods based on adversarial learning try to learn a representation that is at the same time discriminative for the labels yet incapable of discriminating the domains. We propose a very simple and efficient method based on this approach which only aligns predicted class probabilities across domains. Experiments show that this strikingly simple adversarial domain adaptation method is robust to overfitting and achieves state-of-the-art results on datasets for image classification.


1 Introduction

In unsupervised domain adaptation, labelled examples from a source domain and unlabelled examples from a related target domain are given. The goal is to infer the labels of the target examples. A straightforward approach to this problem is to label target examples by simply applying a deep neural network pre-trained on data from a related domain. This approach has been shown to work rather well in practice. The reason is that deep networks learn feature representations which reduce domain discrepancy, although they do not fully eliminate it [Yosinski et al., 2014]. If we assume that features from pre-trained deep neural networks indeed provide a good representation for both source and target data, then in order to perform domain adaptation one only needs to align source and target label predictions in this representation, that is, at class label level. This paper investigates this novel setting.

We introduce a label alignment method that forces the uncertainty of the predicted labels on the target domain to be indistinguishable from that on the source domain. The method is based on the adversarial learning approach: it performs domain alignment at class label level by learning a representation that is at the same time discriminative for the labelled source data yet incapable of discriminating the source and target at the level of prediction uncertainty. Specifically, the proposed method feeds the class probabilities of a label classifier as input to a domain discriminator. The label classifier is trained to minimize the standard supervised loss on the source domain, while the domain discriminator is trained to distinguish the class probabilities that the label classifier outputs on the source domain from those on the target domain (see Fig. 1).

A limitation of this method is that it works only under the assumption that the source and target domains have the same class distribution. This is because the representation chosen to minimize the discrepancy between domains depends on the domains' class distributions: if they differ, the discrepancy between the domains in our representation will be large.

To overcome this limitation, we introduce a tailored loss function to enforce the domains to have equal class distributions during our training procedure: we incorporate class weights in our loss function, one for each instance. Class weights of source examples are fixed and those of the target examples are updated during optimization of our loss function.

Interestingly, training with this loss function leads to an overestimation of the certainty of the target domain predictions, which results in an increased loss while the accuracy stabilizes. This approach favors robustness, because the domain discriminator punishes overconfidence on the source domain, the latter being a sign of overfitting.

We call the resulting domain adaptation method LAD (Label Alignment with Deep features).

LAD uses deep features extracted from a pre-trained deep neural network, so no fine tuning of the feature extractor is needed. As such, LAD is more efficient than end-to-end deep learning adaptation methods.

Joint efforts from the machine learning research community have resulted in the public availability of various pre-trained deep neural network architectures for visual classification. These pre-trained models provide a rich variety of feature extractors. Besides the label alignment method itself, the other contribution of this paper is an extensive analysis of the method when its ‘feature extractor’ part is changed, exploring a wide set of existing pre-trained deep architectures. This allows us to understand the advantage of the approach when dealing with different features.

An extensive experimental analysis shows that LAD achieves consistent improvements in accuracy across different pre-trained networks. Overall, the method achieves state-of-the-art results on the standard Office-31 and ImageCLEF-DA datasets, with a clear improvement over competing baselines on the harder transfer tasks.

The main contributions of this paper can be summarized as follows: 1) a specific setting for the domain adaptation problem; 2) a tailored method for performing domain adaptation in this setting; 3) extensive analysis of the proposed method when changing its ‘feature extraction’ part.

2 Related Work

There is a vast literature on domain adaptation (see for instance the recent surveys [Weiss et al., 2016, Csurka, 2017]).

Besides the optimization towards better source domain class predictions, domain adaptation methods try to achieve domain invariance [Ben-David et al., 2010]. To achieve domain invariance there are roughly two popular approaches: minimizing some measure of domain discrepancy and using an adversarial domain discriminator. Below we summarize several recent methods based on deep neural networks.

Deep-CORAL [Sun and Saenko, 2016] aligns correlations of layer activations in deep neural networks. Deep Transfer Network (DTN) [Zhang et al., 2015] employs a deep neural network to model and match both the domains' marginal and conditional distributions. A popular measure used to minimize domain discrepancy is Maximum Mean Discrepancy (MMD) [Gretton et al., 2009], which is used in several recent domain adaptation methods. For instance, DAN [Long et al., 2015] and RTN [Long et al., 2016b] are end-to-end deep adaptation methods which minimize the sum of multiple MMD terms, matching the feature distributions of multiple layers across domains.

In adversarial learning, a generator tries to fool a discriminator so that it cannot distinguish between generated and real examples [Goodfellow et al., 2014]. Current work on adversarial domain adaptation tries to trick a domain discriminator so that it can no longer distinguish between features originating from the source or target domain, which results in domain-invariant features being learned.

For instance, ReverseGrad [Ganin and Lempitsky, 2015, Ganin et al., 2016] enforces the domains to be indistinguishable by reversing the gradients of the loss of the domain classifier.

Recently, [Tzeng et al., 2017] introduced a unifying framework for adversarial transfer learning, and proposed a new instance of this framework, called Adversarial Discriminative Domain Adaptation (ADDA), which combines discriminative modeling, untied weight sharing, and a generative adversarial network loss.

LAD’s alignment at label level is based on adversarial learning. As such, it shares the theoretical motivation of adversarial learning methods for domain adaptation, as explained e.g. in [Ganin et al., 2016].

Note that the idea of matching the classifier layer has been used in end-to-end domain adaptation methods based on deep learning, for instance DAN and RTN, where both the feature layer and the classifier layer are aligned simultaneously using either MMD or adversarial domain discriminator. The main difference between LAD and these works is the underlying scenario: LAD works under the assumption that domain representations are already reasonably matched through the use of a pre-trained deep neural network for feature extraction. Therefore LAD performs alignment only at class label level, by matching label predictions. While in LAD the domain discriminator takes as input (source and target) predictions of the label classifier, in all previous adversarial methods for domain adaptation the domain discriminator takes as input (source and target) features.

3 Class Prediction Uncertainty Alignment

We are given a set of $n_s$ labelled source images $X_s = \{(x_i, y_i)\}_{i=1}^{n_s}$ drawn from a source domain distribution, and a set of $n_t$ target images $X_t = \{x_j\}_{j=1}^{n_t}$ without labels, drawn from a target distribution. Our goal is to learn a classifier that correctly predicts the labels of $X_t$.

3.1 Label Alignment with Deep Features

The proposed label alignment method shares the theoretical motivation of (adversarial learning) methods for domain adaptation: find a common representation that reduces the distance between the source and target domain distributions [Ben-David et al., 2007]. In LAD, as in other adversarial adaptation methods, the domain distance is reduced using a neural network (the domain discriminator). In LAD the inputs of this network are class probabilities (computed by softmax), while in other adversarial methods, like Domain-Adversarial Neural Networks [Ganin et al., 2016], the inputs of the network are (deep) features.

The proposed method tries to align source and target domain at class label level using two models: a label classifier $f$ and a domain discriminator $d$. Both of these functions are parameterized by neural networks, with parameters $\theta_f$ and $\theta_d$. For brevity we will omit the parameters $\theta_f$ and $\theta_d$. The label classifier is trained to minimize the following standard supervised loss on the source domain $X_s$:

$$\mathcal{L}_y(f) \;=\; \frac{1}{n_s}\sum_{(x_i,\,y_i)\in X_s} \ell\big(f(\phi(x_i)),\,y_i\big) \qquad (1)$$

where $\ell$ is the cross-entropy loss, and $\phi(x)$ denotes the vector of features generated using a pre-trained deep neural network.

The domain discriminator $d$ is trained to distinguish the uncertainty of the predictions that $f$ makes on the source domain $X_s$ from the uncertainty of its predictions on the target domain $X_t$. This is again a standard supervised problem: predicting domain label 0 for the source and 1 for the target, given the output of the label classifier. The loss is

$$\mathcal{L}_d(f, d) \;=\; \frac{1}{n_s}\sum_{(x_i,\,y_i)\in X_s} \ell\big(d(f(\phi(x_i))),\,0\big) \;+\; \frac{1}{n_t}\sum_{x_j\in X_t} \ell\big(d(f(\phi(x_j))),\,1\big) \qquad (2)$$

We want the label classifier to ‘fool’ the domain discriminator. This can be achieved by training it to make the predictions on the two domains indistinguishable. That means that we maximize $\mathcal{L}_d$ with respect to $f$. The resulting optimization problem is therefore

$$\min_{f}\;\max_{d}\;\Big[\,\mathcal{L}_y(f) \;-\; \mathcal{L}_d(f, d)\,\Big] \qquad (3)$$

3.1.1 Class Weighted Loss Function

Since here we adversarially train a domain-invariant label classifier at the level of predictions, it is necessary that the label distributions of both domains are the same; otherwise, predictions towards certain labels could be based on differences between label occurrences in the two domains. This would have a negative effect and prevent domain alignment. Formally, let $p_S(y)$ be the fraction of source instances that have label $y$, and similarly let $p_T(y)$ be the (unknown) fraction of target instances with label $y$ under the true labeling. Then if $p_S(y) \neq p_T(y)$ for some label $y$ and the label classifier makes perfect predictions, the domain discriminator is able to distinguish the two domains based purely on the different conditional probabilities $p(y \mid \text{domain})$.
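As a small illustration with made-up numbers, suppose the two domains are of equal size and

$$p_S(y{=}1) = 0.5, \qquad p_T(y{=}1) = 0.1 .$$

If the label classifier predicts perfectly, an instance predicted as class 1 is five times as likely to come from the source domain, so a discriminator that outputs ‘source’ whenever class 1 is predicted already separates the domains to a large extent, no matter how well the feature representation itself is aligned.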

To overcome this problem we will switch to weighted loss functions,

$$\mathcal{L}_y^w(f) \;=\; \frac{1}{n_s}\sum_{(x_i,\,y_i)\in X_s} w_i\,\ell\big(f(\phi(x_i)),\,y_i\big) \qquad (4)$$
$$\mathcal{L}_d^w(f, d) \;=\; \frac{1}{n_s}\sum_{(x_i,\,y_i)\in X_s} w_i\,\ell\big(d(f(\phi(x_i))),\,0\big) \;+\; \frac{1}{n_t}\sum_{x_j\in X_t} w_j\,\ell\big(d(f(\phi(x_j))),\,1\big) \qquad (5)$$

where $K$ denotes the number of classes and the weights for the source domain are

$$w_i \;=\; \frac{1}{K\,p_S(y_i)} \qquad (6)$$

For the target domain we do not know the true labels, so instead we use the predicted (pseudo-)labels $\hat{y}_j$, i.e. the class with the highest predicted probability $f(\phi(x_j))$. So the weights are

$$w_j \;=\; \frac{1}{K\,\hat{p}_T(\hat{y}_j)} \qquad (7)$$

where $\hat{p}_T(y)$ denotes the fraction of target instances with pseudo-label $y$.

With the weighted loss, the domain discriminator cannot use the difference in conditional probability of the class given the domain, since all classes occur with the same total weight in both domains.
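For concreteness, below is a small sketch of how such per-instance weights can be computed from (pseudo-)labels. The $1/(K \cdot \text{frequency})$ normalization follows the reconstruction of Eqs. (6)-(7) above; the exact constant is an assumption and is not essential, as long as every class receives the same total weight.

```python
import numpy as np

def class_balanced_weights(labels, num_classes):
    """Per-instance weights such that every class carries the same total weight.
    `labels` holds true labels for source instances (Eq. 6) or the current
    pseudo-labels for target instances (Eq. 7)."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)              # guard against empty classes
    freq = counts / len(labels)                   # empirical class distribution
    return 1.0 / (num_classes * freq[labels])     # equals 1.0 for a balanced class
```

For a perfectly balanced domain all weights equal 1, which matches the uniform target weights used in the first training epoch.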

3.2 Architecture

The overall LAD architecture used in our experiments is shown in Fig. 1. It consists of three parts: feature extractor, label classifier and domain discriminator.

3.2.1 Feature Extractor

We use a deep neural network pre-trained on the ImageNet dataset [Russakovsky et al., 2015]. The last label prediction layer of a pre-trained network is omitted and features are extracted from the second to last layer, as this is presumably the layer with the lowest maximum mean discrepancy [Tzeng et al., 2014].

To generate robust features, we use a form of data augmentation, where different crops and flips of each image are passed through the network, and the features are averaged.

In particular, for each image, its features are calculated as follows. First, we resize the input image to a size slightly larger than the expected input size of the network (for example, ResNet50 expects a 224×224 input). From this resized image we take several crops at regularly spaced offsets. This is repeated for the horizontally flipped input image, doubling the number of crops. For each crop, features are extracted from the pre-trained network, and the final features of the input image are the averaged features of its crops.
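The sketch below illustrates this multi-crop feature extraction with a Keras pre-trained ResNet50. The 32-pixel margin and the 3×3 grid of crops are illustrative choices, since the exact values used by the authors are not reproduced here.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# Pre-trained feature extractor: the classification head is dropped and
# global average pooling yields one feature vector per image.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def crop_averaged_features(image, size=224, margin=32, crops_per_side=3):
    """Resize the image to (size+margin)^2, take a grid of crops from the
    original and the horizontally flipped image, and average their features."""
    big = tf.image.resize(image, (size + margin, size + margin))
    offsets = np.linspace(0, margin, crops_per_side).astype(int)
    crops = []
    for img in (big, tf.image.flip_left_right(big)):
        for dy in offsets:
            for dx in offsets:
                crops.append(img[dy:dy + size, dx:dx + size, :])
    batch = preprocess_input(tf.stack(crops))
    return tf.reduce_mean(extractor(batch, training=False), axis=0)
```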

3.2.2 Label Classifier

We consider a label classifier consisting of two dense (fully connected) layers of size 1024 with ReLU activation and dropout [Srivastava et al., 2014], followed by a dense layer with softmax activation for label predictions.

3.2.3 Domain Discriminator

The considered domain discriminator has the same structure as the label classifier, but without dropout layers. The domain discriminator is placed after the softmax layer of the label classifier, behind a gradient reversal layer [Ganin and Lempitsky, 2015, Ganin et al., 2016], which acts as an identity function on forward passes through the network and reverses the gradient on backward passes. This ensures that we can use the gradient of the domain loss $\mathcal{L}_d$ to simultaneously maximize it with respect to $f$ and minimize it with respect to $d$ in our optimization problem (3).
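A gradient reversal layer is not a built-in Keras layer; below is a minimal sketch of one in tf.keras using tf.custom_gradient, assuming a TensorFlow 2 style API. The scaling factor hp_lambda is an illustrative addition in the spirit of the DANN formulation, not a parameter taken from the paper.

```python
import tensorflow as tf

class GradientReversal(tf.keras.layers.Layer):
    """Identity on the forward pass; multiplies the incoming gradient by
    -hp_lambda on the backward pass, so layers below it are updated to
    *maximize* the loss computed above it."""

    def __init__(self, hp_lambda=1.0, **kwargs):
        super().__init__(**kwargs)
        self.hp_lambda = hp_lambda

    def call(self, inputs):
        @tf.custom_gradient
        def _reverse(x):
            def grad(dy):
                return -self.hp_lambda * dy
            return tf.identity(x), grad
        return _reverse(inputs)

# Usage: the discriminator sees the label classifier's softmax output
# through the reversal layer, e.g.
#   domain_probs = domain_discriminator(GradientReversal()(class_probs))
```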

3.3 Training

All training is done with minibatch Stochastic Gradient Descent (SGD) with Nesterov momentum. Both the label loss and the domain loss are calculated with categorical cross-entropy. For training, we assume that the features have already been extracted from a pre-trained deep neural network. The training of LAD differs from that of a normal feedforward neural network because there are two loss functions instead of one. In each training step we draw a minibatch from both domains without replacement and append the domain identifier and, for the source domain, the class labels. With these inputs, training proceeds as follows: first, the source domain batch is used to train the label classifier; then the source and target batches are concatenated and used together to train the domain discriminator. We call one pass through the source domain an epoch.

The weights for the target domain are recomputed once per epoch. In the first epoch, before any pseudo-labels are available, we use uniform target weights.

The complete training approach is displayed in Algorithm 1.

A technical concern with this training procedure is that, since the labels of the target domain are unknown, the weights of the target instances are estimated from predicted labels and then updated iteratively, epoch by epoch. However, there is no guarantee that this iterative procedure finds an optimal solution for the target weights: the estimate of $\hat{p}_T$ may become worse and worse. Nevertheless, under the assumption that the features extracted from the pre-trained deep neural network transfer well and that the source and target domains are related, this phenomenon should not occur. This is indeed the case in practice, as substantiated by the results of our extensive empirical analysis.

Figure 1: LAD architecture. Input images are passed through a feature extractor (a pre-trained DNN). The label classifier consists of Dense 1024 + ReLU + Dropout, Dense 1024 + ReLU + Dropout, and Dense + Softmax layers producing the class label. The domain discriminator is attached to the softmax output through a Gradient Reversal Layer and consists of Dense 1024 + ReLU, Dense 1024 + ReLU, and Dense + Softmax layers producing the domain label.
  Data: $X_s$ = labeled source data, $X_t$ = unlabeled target data
  Result: $\hat{y}_j$ = predicted labels for the target domain
  $w_i \leftarrow 1/(K\,p_S(y_i))$ for each $(x_i, y_i) \in X_s$
  $w_j \leftarrow 1$ for each $x_j \in X_t$
  for each epoch do
     while available batches in $X_s$ do
        draw a minibatch $B_s$ from $X_s$
        draw a minibatch $B_t$ from $X_t$
        Perform a step of SGD on the weighted label loss (4) using $B_s$
        Perform a step of SGD on the weighted domain loss (5) using $B_s \cup B_t$
     end while
     $\hat{y}_j \leftarrow$ prediction of the label classifier for each $x_j \in X_t$
     $w_j \leftarrow 1/(K\,\hat{p}_T(\hat{y}_j))$ for each $x_j \in X_t$
  end for
Algorithm 1: LAD.
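Below is a compact Python sketch of Algorithm 1 operating on pre-extracted features. It is an illustration, not the authors' code: the hidden layer size follows Fig. 1, while the dropout rate, number of epochs, batch size and learning rate are placeholder values, and the gradient reversal is implemented by explicitly negating the discriminator's gradient with respect to the label classifier, which has the same effect as the gradient reversal layer of Section 3.2.3.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_lad(feature_dim, num_classes, hidden=1024, dropout=0.5):
    """Label classifier f and domain discriminator d on top of pre-extracted
    features (sizes follow Fig. 1; the dropout rate is an assumed value)."""
    f = models.Sequential([
        layers.Dense(hidden, activation="relu", input_shape=(feature_dim,)),
        layers.Dropout(dropout),
        layers.Dense(hidden, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(num_classes, activation="softmax"),
    ], name="label_classifier")
    d = models.Sequential([
        layers.Dense(hidden, activation="relu", input_shape=(num_classes,)),
        layers.Dense(hidden, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ], name="domain_discriminator")
    return f, d

def class_balanced_weights(labels, num_classes):
    """Eqs. (6)/(7): weight instances inversely to their class frequency."""
    counts = np.maximum(np.bincount(labels, minlength=num_classes), 1).astype(float)
    freq = counts / len(labels)
    return 1.0 / (num_classes * freq[labels])

def train_lad(Xs, ys, Xt, num_classes, epochs=100, batch=64, lr=1e-3):
    """Sketch of Algorithm 1 on pre-extracted source features Xs (labels ys)
    and target features Xt. Epochs, batch size and learning rate are
    illustrative placeholders, not the paper's settings."""
    f, d = build_lad(Xs.shape[1], num_classes)
    opt = optimizers.SGD(learning_rate=lr, momentum=0.9, nesterov=True)
    cce = tf.keras.losses.SparseCategoricalCrossentropy()
    ws = class_balanced_weights(ys, num_classes)   # fixed source weights, Eq. (6)
    wt = np.ones(len(Xt))                          # uniform target weights at first
    for _ in range(epochs):                        # one epoch = one pass over the source
        order = np.random.permutation(len(Xs))
        for b in range(0, len(Xs), batch):
            idx = order[b:b + batch]
            tgt = np.random.choice(len(Xt), size=len(idx))  # target minibatch (with replacement, for simplicity)
            # (1) supervised step on the source batch: weighted label loss, Eq. (4)
            with tf.GradientTape() as tape:
                loss_y = cce(ys[idx], f(Xs[idx], training=True), sample_weight=ws[idx])
            opt.apply_gradients(zip(tape.gradient(loss_y, f.trainable_variables),
                                    f.trainable_variables))
            # (2) adversarial step: d minimizes the weighted domain loss, Eq. (5),
            #     while f is pushed in the opposite direction (gradient reversal)
            x = np.vstack([Xs[idx], Xt[tgt]])
            dom = np.concatenate([np.zeros(len(idx), np.int64), np.ones(len(tgt), np.int64)])
            w = np.concatenate([ws[idx], wt[tgt]])
            with tf.GradientTape(persistent=True) as tape:
                loss_d = cce(dom, d(f(x, training=True), training=True), sample_weight=w)
            opt.apply_gradients(zip(tape.gradient(loss_d, d.trainable_variables),
                                    d.trainable_variables))
            grads_f = tape.gradient(loss_d, f.trainable_variables)
            opt.apply_gradients(zip([-g for g in grads_f], f.trainable_variables))
            del tape
        # End of epoch: refresh pseudo-labels and target weights, Eq. (7).
        pseudo = np.argmax(f.predict(Xt, verbose=0), axis=1)
        wt = class_balanced_weights(pseudo, num_classes)
    return f
```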

4 Experiments

We conduct extensive experiments on adaptation tasks from two real-life benchmark datasets. Datasets, experimental setup and methods used in our comparative analysis are described in detail below.

4.1 Datasets

We consider two benchmark datasets: Office-31 [Saenko et al., 2010] and ImageCLEF-DA (http://imageclef.org/2014/adaptation).

The Office-31 dataset for visual domain adaptation consists of three domains with images in 31 categories. The Amazon (A) domain consists of images taken from Amazon.com product pages. The DSLR (D) and Webcam (W) domains consist of images of the same products taken with a digital SLR camera or a web camera, respectively, in different environments. The images in each domain are unbalanced across the categories, therefore we use our data balancing method. We report results on all possible domain combinations A→D, A→W, D→A, D→W, W→A, and W→D, which form a good mix of harder and easier domain adaptation tasks.

The ImageCLEF-DA dataset is a benchmark for the ImageCLEF 2014 domain adaptation challenge and consists of the common categories shared by three public datasets, which are treated as different domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P). This dataset is balanced, with 50 images for each of the 12 categories, for a total of 600 images per domain, which makes it a good addition to the Office-31 dataset. Since for each transfer task the source domain is balanced, we omit our own balancing method on this dataset. We report results on all domain combinations: C→I, C→P, I→C, I→P, P→C, P→I.

4.2 Experimental Setup

LAD is implemented on the TensorFlow [Abadi et al., 2015] framework via the Keras [Chollet et al., 2015] interface. The network and training parameters are kept the same across all pre-trained architectures and domain adaptation tasks of both datasets. Specifically, we use stochastic gradient descent with Nesterov momentum and a fixed learning rate and batch size; all of these parameter settings are considered default settings. In all our experiments we train each model for a fixed number of epochs. For each transfer task we run LAD multiple times and report the average label classification accuracy and standard deviation.

All algorithms are assessed in a fully transductive setup where all unlabeled target instances are used during training for predicting their labels. Labeled instances of the first domain are used as the source and unlabeled instances of the second domain as the target. We evaluate the accuracy on the target domain as the percentage of correctly labeled target instances.

In order to assess LAD’s transfer capability, we consider a baseline variant, obtained by omitting the domain discriminator from LAD and training on the source data only (no adaptation). For instance, Baseline(DenseNet201) denotes the baseline variant with the pre-trained DenseNet201 network as the feature extractor. Network and training parameters are kept the same as those of LAD across all tasks, except that the baseline is trained for fewer epochs, roughly chosen as the optimum before overfitting becomes a problem.

In all experiments, we did not perform hyperparameter optimization, but just used default settings of Keras.

5 Results

In order to assess comparatively the performance of LAD across different pre-trained architectures, we conduct extensive experiments on the following pre-trained architectures publicly available at Keras: MobileNet [Howard et al., 2017], VGG16 [Simonyan and Zisserman, 2014], VGG19 [Simonyan and Zisserman, 2014], DenseNet [Huang et al., 2017], InceptionV3 [Szegedy et al., 2016], Xception [Chollet, 2016], and InceptionResNetV2 [Szegedy et al., 2017].

As shown in Table 1, on the Office-31 dataset, LAD(InceptionResNetV2) outperforms the other variants with an average accuracy of 90.7%. Differences between architectures are very clear when looking at their baseline results, where the gap between the worst and the best architecture is around 11%. The InceptionResNetV2 pre-trained features are so good and robust that without LAD they already outperform current state-of-the-art methods for domain adaptation based on the ResNet50 architecture.

On the ResNet50 architecture LAD improves on our baseline (no adaptation) on all tasks. The improvement is more evident on the harder tasks A→D, D→A, A→W, and W→A. In particular, on A→W an improvement of more than 13% is achieved (from 76.5% with no adaptation to 90.0% with adaptation).

The increase in target accuracy is larger when using less powerful architectures. For example, with MobileNet, on the harder adaptation tasks D→A and W→A, an increase in target accuracy of about 15% is achieved (from 57.2% with no adaptation to 72.1% with adaptation for D→A, and from 56.5% with no adaptation to 71.3% with adaptation for W→A).

As shown in Table 2, on the ImageCLEF-DA adaptation tasks, the best average accuracy is obtained by LAD with the Xception architecture, with an average accuracy of 89.7%. Notably, on the C→I adaptation task, LAD with InceptionResNetV2 gains about 11% target accuracy over the Baseline (from 80.3% with no adaptation to 91.5% with adaptation).

ImageCLEF-DA results of LAD based on ResNet50 show that the best improvement over the Baseline (no adaptation) is obtained on the harder tasks, for instance on the C→I task (from 80.9% with no adaptation to 88.5% with adaptation).

LAD consistently performs well on features from pre-trained deep neural networks with different architectures.

Overall, results indicate that more recent pre-trained models achieve very good performance and that LAD consistently improves on the baselines. These results provide further experimental evidence that deep networks learn feature representations which reduce domain discrepancy, but do not fully eliminate it, even for architectures achieving excellent performance, like InceptionResNetV2.

Method A→D A→W D→A D→W W→A W→D avg
Baseline(MobileNet) 74.5±1.5 73.5±0.6 57.2±0.6 97.8±0.2 56.5±0.6 99.4±0.2 76.5%
LAD(MobileNet) 82.2±1.8 89.3±2.5 72.1±0.5 98.9±0.1 71.3±3.2 99.8±0.1 85.6%
Baseline(VGG16) 76.5±1.1 73.7±1.2 61.9±0.6 96.6±0.3 60.4±0.6 99.7±0.1 78.1%
LAD(VGG16) 85.3±2.0 87.9±1.5 69.9±0.8 97.3±0.2 70.1±0.6 99.7±0.1 85.0%
Baseline(VGG19) 76.1±0.8 72.9±1.1 63.4±0.6 97.4±0.4 62.9±1.0 99.8±0.1 78.8%
LAD(VGG19) 83.9±1.8 87.7±0.7 71.0±0.8 98.2±0.3 71.5±0.8 99.9±0.1 85.4%
Baseline(ResNet50) 81.0±0.6 76.5±0.9 64.8±0.8 97.5±0.2 63.6±1.0 99.7±0.2 80.5%
LAD(ResNet50) 90.6±1.2 90.0±0.7 74.0±0.6 98.0±0.1 75.3±1.4 99.8±0.2 87.9%
Baseline(DenseNet201) 85.3±0.8 82.3±1.2 68.5±0.6 98.0±0.2 67.7±0.5 99.9±0.1 83.6%
LAD(DenseNet201) 93.1±0.8 94.7±0.9 77.2±0.8 98.6±0.1 77.7±0.7 99.9±0.1 90.2%
Baseline(InceptionV3) 85.9±0.8 82.4±0.7 72.8±0.4 97.5±0.4 72.8±0.3 99.0±0.3 85.1%
LAD(InceptionV3) 91.2±0.7 88.6±0.5 76.9±0.5 98.3±0.2 76.9±0.8 99.3±0.2 88.5%
Baseline(Xception) 85.2±0.7 83.9±0.7 72.1±0.4 97.0±0.2 71.9±0.5 99.7±0.1 85.0%
LAD(Xception) 91.0±1.5 92.9±0.5 78.6±0.3 98.1±0.1 78.1±0.8 100.0±0.1 89.8%
Baseline(InceptionResNetV2) 90.2±0.7 89.3±0.6 74.9±0.5 97.3±0.2 75.5±0.3 99.6±0.2 87.8%
LAD(InceptionResNetV2) 93.7±0.8 95.3±0.3 78.8±0.5 98.3±0.1 78.5±0.5 99.6±0.1 90.7%
Table 1: Baseline and LAD average accuracy (with standard deviations) over multiple runs on the Office-31 dataset for different network architectures.
Method C→I C→P I→C I→P P→C P→I avg
Baseline(MobileNet) 77.9±0.3 65.2±0.8 89.8±0.7 74.6±0.4 91.2±0.8 84.9±0.8 80.6%
LAD(MobileNet) 87.9±0.7 73.9±0.7 94.6±0.4 75.2±0.5 94.0±0.3 88.3±0.7 85.6%
Baseline(VGG16) 83.2±0.7 70.7±0.5 91.9±0.5 76.5±0.5 91.5±0.6 86.0±0.8 83.3%
LAD(VGG16) 89.6±0.5 76.7±0.8 94.3±0.3 76.2±0.8 94.4±0.4 88.8±0.9 86.7%
Baseline(VGG19) 84.7±0.7 70.9±0.4 92.0±0.3 76.6±0.4 91.6±0.5 85.8±0.7 83.6%
LAD(VGG19) 89.0±0.7 74.5±0.5 94.8±0.3 77.3±0.6 94.3±0.3 90.2±1.0 86.7%
Baseline(ResNet50) 80.9±1.3 68.0±1.0 92.2±0.5 76.1±0.4 91.8±0.5 88.4±0.8 82.9%
LAD(ResNet50) 88.5±1.0 74.0±1.0 95.2±0.4 76.8±0.7 94.1±0.2 90.6±0.6 86.5%
Baseline(DenseNet201) 87.7±0.7 71.6±0.6 93.6±0.4 78.3±0.4 94.3±0.5 90.8±0.8 86.1%
LAD(DenseNet201) 93.0±0.4 78.3±1.0 97.5±0.3 79.1±0.3 95.7±0.4 93.2±0.4 89.5%
Baseline(InceptionV3) 83.1±1.2 66.1±0.8 94.3±0.5 77.8±0.5 93.9±0.4 90.8±0.9 84.3%
LAD(InceptionV3) 92.8±0.3 75.9±0.9 95.9±0.3 78.3±0.5 95.8±0.3 94.2±0.5 88.8%
Baseline(Xception) 85.2±0.8 69.9±0.5 94.7±0.5 79.3±0.5 92.8±1.1 90.8±0.6 85.5%
LAD(Xception) 94.2±0.4 77.7±1.1 96.8±0.4 80.1±0.5 96.6±0.3 92.6±0.6 89.7%
Baseline(InceptionResNetV2) 80.3±0.9 67.8±0.9 90.3±1.9 79.3±0.5 88.4±0.9 89.7±0.8 82.6%
LAD(InceptionResNetV2) 91.5±0.7 75.9±0.9 97.2±0.3 80.6±0.5 95.0±0.3 92.3±1.2 88.7%
Table 2: Baseline and LAD average accuracy (with standard deviations) over multiple runs on the ImageCLEF-DA dataset for different network architectures.

6 Comparison With End-to-End Deep Learning Methods

To assess how results of LAD compare with the state-of-the-art, we report published results of the following end-to-end deep learning methods for domain adaptation that fine-tune a ResNet50 model pre-trained on ImageNet: Deep Domain Confusion (DDC) [Tzeng et al., 2014], Deep Adaptation Network (DAN) [Long et al., 2015], Residual Transfer Network (RTN) [Long et al., 2016b], Adversarial Discriminative Domain Adaptation (ADDA) [Tzeng et al., 2017], Reverse Gradient (RevGrad) [Ganin and Lempitsky, 2015].

Although all experiments were conducted under the same transductive setup, the results should be interpreted with care. There are various differences between the considered algorithms, for instance end-to-end training of a pre-trained deep architecture versus using the pre-trained architecture only to extract features, or hyperparameter tuning versus using default settings.

Overall, results indicate state of the art performance of LAD, comparable or better than that of end-to-end deep adaptation methods.

Method A→D A→W D→A D→W W→A W→D avg
DDC [Tzeng et al., 2014] 77.5±0.3 75.8±0.2 67.4±0.4 95.0±0.2 64.0±0.5 98.2±0.1 79.7%
DAN [Long et al., 2015] 78.4±0.2 83.8±0.4 66.7±0. 96.8±0.2 62.7±0.2 99.5±0.1 81.3%
RTN [Long et al., 2016b] 71.0±0.2 73.3±0.2 50.5±0.3 96.8±0.2 51.0±0.1 99.6±0.1 73.7%
RevGrad [Ganin and Lempitsky, 2015] 72.3±0.3 73.0±0.5 52.4±0.4 96.4±0.3 50.4±0.5 99.2±0.3 74.1%
ADDA [Tzeng et al., 2017] 77.8±0.3 86.2±0.5 69.5±0.4 96.2±0.3 68.9±0.5 98.4±0.3 82.9%
LAD 90.6±1.2 89.9±0.7 74.0±0.6 98.0±0.1 75.3±1.4 99.8±0.2 87.9%
Table 3: Average accuracy (with standard deviations) on adaptation tasks from the Office-31 dataset. All methods considered use a ResNet50 model.
Method I→P P→I I→C C→I C→P P→C avg
DAN [Long et al., 2015] 75.0±0.4 86.2±0.2 93.3±0.2 84.1±0.4 69.8±0.4 91.3±0.4 83.3%
RTN [Long et al., 2016a] 75.6±0.3 86.8±0.1 95.3±0.1 86.9±0.3 72.7±0.3 92.2±0.4 84.9%
RevGrad [Ganin and Lempitsky, 2015] 75.0±0.6 86.0±0.3 96.2±0.4 87.0±0.5 74.3±0.5 91.5±0.6 85.0%
LAD 76.8±0.7 90.6±0.6 95.2±0.3 88.5±1.0 74.0±1.0 94.1±0.2 86.5%
Table 4: Average accuracy (with standard deviations) for various methods on the ImageCLEF-DA dataset, obtained with the ResNet50 architecture.

7 Discussion

7.1 Effectiveness with Shallower Pre-Trained Deep Models

LAD depends on the quality of the pseudo-labels for computing the weights of target instances and for the model construction. A natural concern is: what if the target classification accuracy is too low? Will the alignment of classifier predictions still be effective? To investigate this issue, we consider the shallower AlexNet network as the feature extractor for the Office-31 dataset. Since this model is not available in Keras, we used deep features from the 7th layer provided by [Tommasi and Tuytelaars, 2014]. Table 5 shows the results. When using the less deep AlexNet architecture, LAD still improves on our baseline (no adaptation) on all tasks. Also in this case, adaptation proves to be effective on the harder tasks. For instance, on W→A our baseline obtains 46.15% accuracy, while with adaptation 54.82% accuracy is achieved.

Method A→D A→W D→A D→W W→A W→D avg
Baseline(DeCAF-fc7) 63.63±1.07 57.26±1.17 47.53±0.75 94.30±0.66 46.15±0.61 98.07±0.42 67.82%
LAD(DeCAF-fc7) 70.78±1.25 65.77±0.56 53.47±0.96 96.78±0.39 54.82±1.18 98.94±0.32 73.43%
Table 5: Average accuracy (with standard deviations) on adaptation tasks from the Office-31 dataset. LAD uses features extracted from the 7th layer of the pre-trained AlexNet model.

7.2 Robustness to the Choice of the Number of Epochs

Looking at the learning curves in Fig. 2, we see that the target domain classification loss reaches a minimum after 50 to 150 epochs, after which it starts to increase. However, the accuracy continues to increase, and there is no sign of overfitting. Ganin & Lempitsky [Ganin and Lempitsky, 2015] also report this finding for their method, but the phenomenon seems even more pronounced when aligning domains at the level of predictions instead of features. Indeed, aligning domains on predictions requires the same level of prediction certainty for both source and target domains, which leads to an overestimation of the certainty of target domain predictions, to match the certainty on the source domain. Over time this overestimation results in an increased loss while the accuracy stabilizes: a higher certainty of target predictions makes it harder to switch a prediction to another class label.

Furthermore, while the certainty on the source domain leads to overconfidence of the label classifier on the target domain, the uncertainty about the target domain labels has a regularizing effect on the source domain. The label classifier cannot become overconfident on the source domain, because then the source domain predictions would not look like the initially uncertain target domain predictions.

The stability of the target domain, together with the regularizing effect of the label uncertainty on the source domain, makes LAD robust to the choice of the number of epochs. The algorithm therefore does not require early stopping.

Figure 2: Target domain classification accuracy (panels a and c) and classification loss (panels b and d) as a function of training epochs, for Office-31 (a, b) and ImageCLEF-DA (c, d), obtained with the ResNet50 architecture.
Figure 3: t-SNE visualization of Baseline and LAD features on the A→W task from the Office-31 dataset, shown separately for the source and target domains (panels: (a) Baseline source, (b) Baseline target, (c) LAD source, (d) LAD target). ResNet50 is the pre-trained architecture used; features are visualized from the second dense layer of our architecture shown in Fig. 1.

7.3 Class Weights Importance with Unbalanced Class Distributions

We have also investigated the importance of the class weights introduced in our loss function (see Section 3.1.1), by training the model without using weights.

On the Office-31 dataset, LAD with ResNet50 features achieves an average accuracy of 80.3% without class weights, compared to 87.9% when the loss with class weights is used. The Office-31 dataset has unbalanced class distributions; the weighted loss prevents the domain discriminator from exploiting this imbalance, and LAD obtains better performance.

On the other hand, on ImageCLEF-DA, not using class weights gives an average accuracy of 87.8%, compared to 86.5% with weights. This happens because this dataset is fully class balanced, so performance does not drop when no weights are used in the loss.

In general, we can make no assumptions about the target domain being class balanced; in that case we should assume that the data is not balanced and use the weighted loss functions, as is done in LAD.

7.4 Running Time

LAD does not perform fine-tuning of the weights of a large pre-trained architecture and is therefore relatively fast to train. On average over all transfer tasks, a single epoch as described in Algorithm 1 takes on the order of seconds for both Office-31 and ImageCLEF-DA when trained on a single Nvidia GeForce GTX 1070.

7.5 Visualization of Deep Features

To get more insight into the feature representation learned with LAD, we compare t-SNE [Maaten and Hinton, 2008] visualizations of LAD features with those of the Baseline, using the ResNet50 architecture. For better comparability, we visualize features on the difficult A→W adaptation task. The visualized features are taken from the second dense layer (see Fig. 1). Fig. 3 indicates that LAD features are more domain invariant than those of the baseline: the classes of the Office-31 dataset are more clearly separated, and the features from the two domains are better aligned with each other.
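For reference, a minimal sketch of how such a t-SNE plot can be produced with scikit-learn; which layer's activations are fed in and the plotting details are illustrative choices rather than the authors' exact procedure.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(source_feats, target_feats, path="tsne.png"):
    """Embed source and target features (e.g. activations of the second dense
    layer) in 2-D with t-SNE and colour the points by domain."""
    feats = np.vstack([source_feats, target_feats])
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    n_s = len(source_feats)
    plt.scatter(emb[:n_s, 0], emb[:n_s, 1], s=5, label="source")
    plt.scatter(emb[n_s:, 0], emb[n_s:, 1], s=5, label="target")
    plt.legend()
    plt.savefig(path, dpi=200)
```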

8 Conclusion

In this paper we introduced domain alignment at prediction uncertainty level, to be used with features extracted from pre-trained deep neural networks. We demonstrated effectiveness, efficiency, and robustness through extensive experiments with diverse pre-trained architectures and unsupervised domain adaptation tasks for image classification.

In our experimental analysis, we did not perform hyperparameter optimization, but just used default settings of Keras. It is interesting to investigate whether LAD performance could be further improved by applying procedures for tuning hyperparameters in a transfer learning setting, like [Zhong et al., 2010].

We have shown that training with our tailored loss function favors robustness, because the domain discriminator punishes overconfidence on the source domain, the latter being a sign of overfitting. It will be interesting to investigate whether a similar technique can also be used to prevent overfitting in other settings, such as supervised learning.

A limitation and intrinsic characteristic of LAD is that it does not directly align source and target features; it performs alignment only through the uncertainty of predictions. This is a direct consequence of the domain adaptation scenario investigated here. As a consequence, LAD is sensitive to the choice of the features. Although the results of our experiments showed that in practice LAD works well across features from various pre-trained deep neural networks, its underlying assumption is the existence (and availability) of transferable (deep) features. On the other hand, domain alignment at the feature level as performed by previous domain adaptation methods, notably RevGrad, does not rely on this assumption and is therefore of more general applicability.

Nevertheless, our method for prediction uncertainty alignment can be applied to any feature representation that is good for source and target, so it is not limited to pre-trained deep neural networks as feature extractors. It will be interesting in future work to explore the utility of the method when used on the top of domain adaptation methods based on feature transformation, like [Fernando et al., 2013, Sun et al., 2016].

References

  • [Abadi et al., 2015] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
  • [Ben-David et al., 2010] Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., and Vaughan, J. W. (2010). A theory of learning from different domains. Machine learning, 79(1):151–175.
  • [Ben-David et al., 2007] Ben-David, S., Blitzer, J., Crammer, K., and Pereira, F. (2007). Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems, pages 137–144.
  • [Chollet, 2016] Chollet, F. (2016). Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357.
  • [Chollet et al., 2015] Chollet, F. et al. (2015). Keras. https://github.com/fchollet/keras.
  • [Csurka, 2017] Csurka, G. (2017). Domain adaptation for visual applications: A comprehensive survey. arXiv preprint arXiv:1702.05374.
  • [Fernando et al., 2013] Fernando, B., Habrard, A., Sebban, M., and Tuytelaars, T. (2013). Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the 2013 IEEE International Conference on Computer Vision, ICCV ’13, pages 2960–2967, Washington, DC, USA. IEEE Computer Society.
  • [Ganin and Lempitsky, 2015] Ganin, Y. and Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning, pages 1180–1189.
  • [Ganin et al., 2016] Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. (2016). Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35.
  • [Goodfellow et al., 2014] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
  • [Gretton et al., 2009] Gretton, A., Smola, A. J., Huang, J., Schmittfull, M., Borgwardt, K. M., and Schölkopf, B. (2009). Covariate shift by kernel mean matching. In Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N. D., editors, Dataset Shift in Machine Learning, pages 131–160. MIT Press.
  • [Howard et al., 2017] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • [Huang et al., 2017] Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K. Q. (2017). Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269.
  • [Long et al., 2015] Long, M., Cao, Y., Wang, J., and Jordan, M. (2015). Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pages 97–105.
  • [Long et al., 2016a] Long, M., Wang, J., and Jordan, M. I. (2016a). Deep transfer learning with joint adaptation networks. arXiv preprint arXiv:1605.06636.
  • [Long et al., 2016b] Long, M., Zhu, H., Wang, J., and Jordan, M. I. (2016b). Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems, pages 136–144.
  • [Maaten and Hinton, 2008] Maaten, L. v. d. and Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.
  • [Russakovsky et al., 2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. (2015). Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252.
  • [Saenko et al., 2010] Saenko, K., Kulis, B., Fritz, M., and Darrell, T. (2010). Adapting visual category models to new domains. Computer Vision–ECCV 2010, pages 213–226.
  • [Simonyan and Zisserman, 2014] Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [Srivastava et al., 2014] Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958.
  • [Sun et al., 2016] Sun, B., Feng, J., and Saenko, K. (2016). Return of frustratingly easy domain adaptation. In Thirtieth AAAI Conference on Artificial Intelligence.
  • [Sun and Saenko, 2016] Sun, B. and Saenko, K. (2016). Deep CORAL: Correlation alignment for deep domain adaptation. In Computer Vision–ECCV 2016 Workshops, pages 443–450. Springer.
  • [Szegedy et al., 2017] Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A. (2017). Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, pages 4278–4284.
  • [Szegedy et al., 2016] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826.
  • [Tommasi and Tuytelaars, 2014] Tommasi, T. and Tuytelaars, T. (2014). A testbed for cross-dataset analysis. In European Conference on Computer Vision, pages 18–31. Springer.
  • [Tzeng et al., 2017] Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017). Adversarial discriminative domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2962–2971.
  • [Tzeng et al., 2014] Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., and Darrell, T. (2014). Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
  • [Weiss et al., 2016] Weiss, K., Khoshgoftaar, T. M., and Wang, D. (2016). A survey of transfer learning. Journal of Big Data, 3(1):9.
  • [Yosinski et al., 2014] Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320–3328.
  • [Zhang et al., 2015] Zhang, X., Yu, F. X., Chang, S.-F., and Wang, S. (2015). Deep transfer network: Unsupervised domain adaptation. arXiv preprint arXiv:1503.00591.
  • [Zhong et al., 2010] Zhong, E., Fan, W., Yang, Q., Verscheure, O., and Ren, J. (2010). Cross validation framework to choose amongst models and datasets for transfer learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 547–562. Springer.