
Learning a Domain-Invariant Embedding for Unsupervised Domain Adaptation Using Class-Conditioned Distribution Alignment

We address the problem of unsupervised domain adaptation (UDA) by learning a cross-domain agnostic embedding space in which the distance between the probability distributions of the source and target visual domains is minimized. We use the output space of a shared cross-domain deep encoder to model the embedding space and use the Sliced-Wasserstein Distance (SWD) to measure and minimize the distance between the embedded distributions of the source and target domains, enforcing the embedding to be domain-agnostic. Additionally, we use the source domain labeled data to train a deep classifier from the embedding space to the label space, enforcing the embedding space to be discriminative. As a result of this training scheme, we provide an effective solution to train the deep classification network on the source domain such that it will generalize well on the target domain, where only unlabeled training data is accessible. To mitigate the challenge of class matching, we also align corresponding classes in the embedding space by using high-confidence pseudo-labels for the target domain, i.e. assigning the class for which the source classifier has a high prediction probability. We provide theoretical justification as well as experimental results on UDA benchmark tasks to demonstrate that our method is effective and leads to state-of-the-art performance.


I Introduction

Deep learning classification algorithms have surpassed human performance in a wide range of computer vision applications. However, this achievement is conditioned on the availability of high-quality labeled datasets to supervise the training of deep neural networks. Unfortunately, preparing huge labeled datasets is not feasible in many situations, as data labeling and annotation can be expensive [30]. Domain adaptation [12] is a paradigm to address the problem of labeled data scarcity in computer vision, where the goal is to improve learning speed and model generalization as well as to avoid expensive redundant model retraining. The major idea is to overcome labeled data scarcity in a target domain by transferring knowledge from a related auxiliary source domain, where labeled data is easy and cheap to obtain.

A common technique in the domain adaptation literature is to embed data from the source and target visual domains in an intermediate embedding space such that common cross-domain discriminative relations are captured in the embedding space. For example, if the data from the source and target domains have similar class-conditioned probability distributions in the embedding space, then a classifier trained solely using labeled data from the source domain will generalize well on data points drawn from the target domain distribution [29, 31].

In this paper, we propose a novel unsupervised domain adaptation (UDA) algorithm following the above procedure. Our approach is a simpler, yet effective, alternative to the adversarial learning techniques that have been the dominant way of matching probabilities indirectly for UDA [42, 44, 24]. Our contribution is twofold. First, we train the shared encoder by minimizing the Sliced-Wasserstein Distance (SWD) [27] between the source and the target distributions in the embedding space, while simultaneously training a classifier network using the source domain labeled data. A major benefit of SWD over alternative probability metrics is that it can be computed efficiently. Additionally, SWD is known to be suitable for gradient-based optimization, which is essential for deep learning [29]. Our second contribution is to circumvent the class-matching challenge [35] by minimizing the SWD between class-conditional distributions over sequential iterations, which yields better performance than prior UDA methods that match probabilities explicitly. At each iteration, we assign pseudo-labels only to those target domain data points for which the classifier predicts a class label with high probability, and we use this portion of the target data to minimize the SWD between the conditional distributions. As more learning iterations are performed, the number of target data points with correct pseudo-labels grows and progressively enforces the distributions to align class-conditionally. We provide theoretical analysis and experimental results on benchmark problems, including ablation and sensitivity studies, to demonstrate that our method is effective.

II Background and Related Work

There are two major approaches in the literature to address domain adaptation. One group of methods is based on preprocessing the target domain data points: the target data is mapped from the target domain to the source domain such that the target data structure is preserved in the source [34]. The other common approach is to map data from both domains to a latent domain-invariant space [7]. Early methods within the second approach learn a linear subspace as the invariant space [15, 13], in which the target domain data points are distributed similarly to the source domain data points. A linear subspace, however, is not suitable for capturing complex distributions. For this reason, deep neural networks have recently been used to model the intermediate space as the output of the network. The network is trained such that the source and target domain distributions in its output possess minimal discrepancy. Training can be done either by adversarial learning [14] or by directly minimizing the distance between the two distributions [4].

Several important UDA methods use adversarial learning. Ganin et al. [10] pioneered this direction and developed an effective method to match two distributions indirectly using adversarial learning. Liu et al. [22] and Tzeng et al. [42] use the Generative Adversarial Network (GAN) structure [14] to tackle domain adaptation. The idea is to train two competing (i.e., adversarial) deep neural networks to match the source and the target distributions. A generator network maps data points from both domains to the domain-invariant space, and a binary discriminator network is trained to classify the data points, with each domain considered as a class, based on the representations of the target and the source data points. The generator network is trained such that eventually the discriminator cannot distinguish between the two domains, i.e. its classification accuracy drops to chance.

A second group of domain adaptation algorithms match the distributions directly in the embedding space by using a shared cross-domain mapping such that the distance between the two distributions is minimized with respect to a distance metric [4]. Early methods use simple metrics such as the Maximum Mean Discrepancy (MMD) for this purpose [16]. MMD measures the discrepancy between two distributions simply as the distance between the means of the embedded features. In contrast, more recent techniques that use a shared deep encoder employ the Wasserstein metric [43] to address UDA [4, 6]. The Wasserstein metric has been shown to be a more accurate probability metric and can be minimized effectively by the first-order optimization techniques used in deep learning. A major benefit of matching distributions directly is the existence of theoretical guarantees. In particular, Redko et al. [29] provided theoretical guarantees for using the Wasserstein metric to address domain adaptation. Additionally, adversarial learning often requires deliberate architecture engineering, careful optimization initialization, and selection of hyper-parameters to be stable [33]. In some cases, adversarial learning also suffers from a phenomenon known as mode collapse [23]: if the data distribution is multi-modal, which is the case for most classification problems, the generator network might not generate samples from some modes of the distribution. These challenges are easier to address when the distributions are matched directly.

As the Wasserstein distance finds more applications in deep learning, its efficient computation has become an active area of research. The reason is that the Wasserstein distance is defined as a linear programming problem, and solving this optimization is computationally expensive for high-dimensional data. Although computationally efficient variations and approximations of the Wasserstein distance have recently been proposed [5, 41, 26], these variations still require an additional optimization in each iteration of the stochastic gradient descent (SGD) steps used to match distributions. Courty et al. [4] used a regularized version of optimal transport for domain adaptation. Seguy et al. [37] used a dual stochastic gradient algorithm for solving the regularized optimal transport problem. Alternatively, we propose to address the above challenges using the Sliced Wasserstein Distance (SWD). The definition of SWD is motivated by the fact that, in contrast to higher dimensions, the Wasserstein distance between one-dimensional distributions has a closed-form solution which can be computed efficiently. This fact is used to approximate the Wasserstein distance by SWD, a computationally efficient approximation that has recently drawn interest from the machine learning and computer vision communities [27, 1, 3, 8, 40].

III Problem Formulation

Consider a source domain with $N$ labeled samples (i.e. labeled images), where $X_S = [x^s_1, \ldots, x^s_N]$ denotes the samples and $Y_S = [y^s_1, \ldots, y^s_N]$ contains the corresponding labels. Each label $y^s_i$ identifies the membership of $x^s_i$ to one or multiple of $k$ classes (e.g. the digits 0-9 for hand-written digit recognition). We assume that the source samples are drawn i.i.d. from the source joint probability distribution, i.e. $(x^s_i, y^s_i) \sim q_S(x, y)$, and we denote the source marginal distribution over $x$ by $p_S(x)$. Additionally, we have a related target domain (e.g. machine-typed digit recognition) with $M$ unlabeled data points $X_T = [x^t_1, \ldots, x^t_M]$. Following existing UDA methods, we assume that the same set of labels used in the source domain holds for the target domain. The target samples are drawn from the target marginal distribution $p_T(x)$. We also know that, despite the similarity between these domains, a distribution discrepancy exists between them, i.e. $p_S \neq p_T$. Our goal is to classify the unlabeled target data points through knowledge transfer from the source domain. Learning a good classifier for the source data points is a straightforward problem: given a large enough number of source samples, a parametric function $f_\theta: \mathbb{R}^d \rightarrow \mathcal{Y}$, e.g. a deep neural network with concatenated learnable parameters $\theta$, can be trained to map samples to their corresponding labels using standard supervised learning solely in the source domain. Training is conducted by minimizing the empirical risk, $\hat{e}_\theta = \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}\big(f_\theta(x^s_i), y^s_i\big)$, with respect to a proper loss function $\mathcal{L}(\cdot,\cdot)$ (e.g. the cross-entropy loss). The learned classifier generalizes well on testing data points only if they are drawn from the training data distribution; only then is the empirical risk a suitable surrogate for the real risk function, $e_\theta = \mathbb{E}_{(x,y)\sim q_S}[\mathcal{L}(f_\theta(x), y)]$. Hence, the naive approach of using $f_\theta$ on the target domain might not be effective: given the discrepancy between the source and target distributions, $f_\theta$ might not generalize well on the target domain. Therefore, there is a need to adapt the training procedure of $f_\theta$ by incorporating the unlabeled target data points such that the knowledge learned from the source domain is transferred and used for classification in the target domain using only the unlabeled samples.

The main challenge is to circumvent the problem of discrepancy between the source and the target domain distributions. To that end, the mapping $f_\theta$ can be decomposed into a feature extractor $\phi_v$ and a classifier $h_w$, such that $f_\theta = h_w \circ \phi_v$, where $v$ and $w$ are the corresponding learnable parameters, i.e. $\theta = (w, v)$. The core idea is to learn the feature extractor $\phi_v$ for both domains such that the domain-specific distributions of the extracted features are similar to one another. The feature extractor $\phi_v: \mathcal{X} \rightarrow \mathcal{Z}$ maps the data points from both domains to an intermediate embedding space $\mathcal{Z} \subset \mathbb{R}^f$ (i.e., the feature space), and the classifier $h_w: \mathcal{Z} \rightarrow \mathcal{Y}$ maps the data point representations in the embedding space to the label set. Note that, as a deterministic function, the feature extractor can change the distribution of the data in the embedding. Therefore, if $\phi_v$ is learned such that the discrepancy between the source and target distributions is minimized in the embedding space, i.e. the discrepancy between $\phi_v(p_S)$ and $\phi_v(p_T)$ is small (the embedding is domain-agnostic), then the classifier $h_w$ would generalize well on the target domain and could be used to label the target domain data points. This is the core idea behind various prior domain adaptation approaches in the literature [25, 11].
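To make this decomposition concrete, the following is a minimal PyTorch-style sketch of the two components; the layer sizes, the 32x32 single-channel input, and the module names Encoder and Classifier are illustrative assumptions rather than the exact architectures used in the paper.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Feature extractor phi_v: maps input images to an embedding space Z."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 8 * 8, feat_dim)  # assumes 32x32 single-channel inputs

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Classifier(nn.Module):
    """Shallow classifier h_w: maps embedding features to class logits."""
    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, z):
        return self.fc(z)

# f_theta = h_w o phi_v
encoder, classifier = Encoder(), Classifier()
logits = classifier(encoder(torch.randn(8, 1, 32, 32)))  # shape (8, 10)
```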

IV Proposed Method

Fig. 1: Architecture of the proposed unsupervised domain adaptation framework.

We consider the case where the feature extractor $\phi_v$ is a deep convolutional encoder with weights $v$ and the classifier $h_w$ is a shallow fully connected neural network with weights $w$. The last layer of the classifier network is a softmax layer that assigns a membership probability distribution to any given data point, and the label of a data point is typically assigned according to the class with the maximum predicted probability. In short, the encoder network is learned to mix both domains such that the extracted features in the embedding are: 1) domain-agnostic in terms of data distributions, and 2) discriminative for the source domain to make learning $h_w$ feasible. Figure 1 shows a system-level presentation of our framework. Following this framework, UDA reduces to solving the following optimization problem for $v$ and $w$:

$$\min_{v,\,w}\; \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}\big(h_w(\phi_v(x^s_i)),\, y^s_i\big) \;+\; \lambda\, D\big(\phi_v(p_S(x)),\, \phi_v(p_T(x))\big) \tag{1}$$

where $D(\cdot,\cdot)$ is a discrepancy measure between the probability distributions and $\lambda$ is a trade-off parameter. The first term in Eq. (1) is the empirical risk for classifying the source labeled data points from the embedding space, and the second term is the cross-domain probability matching loss. The encoder parameters $v$ are learned using data points from both domains, while the classifier parameters $w$ are simultaneously learned using the source domain labeled data.
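As an illustration of how the objective in Eq. (1) might be evaluated on a pair of mini-batches, the sketch below assumes the Encoder and Classifier modules sketched earlier, a differentiable sliced_wasserstein function, and a trade-off weight lam corresponding to the parameter in Eq. (1); it is a schematic, not the paper's exact implementation.

```python
import torch.nn.functional as F

def joint_loss(encoder, classifier, xs, ys, xt, lam, sliced_wasserstein):
    """Eq. (1) on a mini-batch: source classification risk plus
    lam * SWD between the embedded source and target batches."""
    zs, zt = encoder(xs), encoder(xt)
    cls_loss = F.cross_entropy(classifier(zs), ys)   # empirical risk on labeled source data
    match_loss = sliced_wasserstein(zs, zt)          # cross-domain probability matching loss
    return cls_loss + lam * match_loss
```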

A major remaining question is the selection of a proper metric. First, note that the actual distributions $\phi_v(p_S)$ and $\phi_v(p_T)$ are unknown, and we can rely only on observed samples from these distributions. Therefore, a sensible discrepancy measure $D(\cdot,\cdot)$ should be able to measure the dissimilarity between these distributions based only on the drawn samples. In this work, we use the SWD [28], as it can be computed efficiently from samples drawn from the corresponding distributions. More importantly, the SWD is a good approximation of the optimal transport metric [2], which has gained interest in the deep learning community as an effective distribution metric whose gradient is non-vanishing.

The idea behind the SWD is to project two $f$-dimensional probability distributions onto their marginal one-dimensional distributions, i.e. to slice the high-dimensional distributions, and to approximate the Wasserstein distance by integrating the Wasserstein distances between the resulting marginal probability distributions over all possible one-dimensional subspaces. For a distribution $p_S$, a one-dimensional slice of the distribution along the projection direction $\gamma$ is defined as:

$$\mathcal{R}p_S(t;\gamma) = \int_{\mathcal{Z}} p_S(z)\, \delta\big(t - \langle \gamma, z\rangle\big)\, dz, \qquad \forall \gamma \in \mathbb{S}^{f-1},\; \forall t \in \mathbb{R} \tag{2}$$

where $\delta(\cdot)$ denotes the Kronecker delta function, $\langle\cdot,\cdot\rangle$ denotes the vector dot product, $\mathbb{S}^{f-1}$ is the $f$-dimensional unit sphere, and $\gamma$ is the projection direction. In other words, $\mathcal{R}p_S(\cdot;\gamma)$ is a marginal distribution of $p_S$ obtained by integrating $p_S$ over the hyperplanes orthogonal to $\gamma$. The SWD can then be computed by integrating the Wasserstein distance between the sliced distributions over all $\gamma$:

$$SW\big(p_S, p_T\big) = \int_{\mathbb{S}^{f-1}} W\big(\mathcal{R}p_S(\cdot;\gamma),\, \mathcal{R}p_T(\cdot;\gamma)\big)\, d\gamma \tag{3}$$

where $W(\cdot,\cdot)$ denotes the Wasserstein distance. The main advantage of using the SWD is that, unlike the Wasserstein distance, calculating the SWD does not require solving a numerically expensive optimization. This is due to the fact that the Wasserstein distance between two one-dimensional probability distributions has a closed-form solution and is equal to the $\ell_p$-distance between the inverses of their cumulative distribution functions. Since only samples from the distributions are available, the one-dimensional Wasserstein distance can be approximated as the $\ell_p$-distance between the sorted samples [32]. The integral in Eq. (3) is approximated using a Monte Carlo style numerical integration. Doing so, the SWD between the $f$-dimensional embedded samples $\{\phi_v(x^s_i)\}$ and $\{\phi_v(x^t_i)\}$ can be approximated by the following sum:

$$SW^2\big(\phi_v(p_S),\, \phi_v(p_T)\big) \approx \frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{n} \Big| \big\langle \gamma_l,\, \phi_v\big(x^s_{s_l[i]}\big)\big\rangle - \big\langle \gamma_l,\, \phi_v\big(x^t_{t_l[i]}\big)\big\rangle \Big|^2 \tag{4}$$

where $\gamma_l \in \mathbb{S}^{f-1}$ is a uniformly drawn random sample from the $f$-dimensional unit sphere, $L$ is the number of random projections, $n$ is the number of samples drawn from each domain, and $s_l[i]$ and $t_l[i]$ are the indices obtained by sorting the projections $\langle \gamma_l, \phi_v(x^s_i)\rangle$ and $\langle \gamma_l, \phi_v(x^t_i)\rangle$ for the source and target domains, respectively. Note that for a fixed dimension, the Monte Carlo approximation error is proportional to $1/\sqrt{L}$. We utilize the SWD as the discrepancy measure $D(\cdot,\cdot)$ to match the two distributions in the embedding space. Next, we discuss a major deficiency of Eq. (1) and our remedy to tackle it.
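The following is a minimal NumPy sketch of this Monte Carlo approximation for two equally sized sample sets; the function name, the number of projections, and the exponent p are illustrative choices.

```python
import numpy as np

def sliced_wasserstein_distance(xs, xt, n_projections=50, p=2, seed=None):
    """Monte Carlo approximation of the SWD between two empirical distributions.

    xs: (n, d) source samples, xt: (n, d) target samples (equal sample counts
    are assumed so that sorted projections can be matched one-to-one)."""
    rng = np.random.default_rng(seed)
    d = xs.shape[1]
    # Draw random directions uniformly on the unit (d-1)-sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project (slice) both sample sets onto each direction.
    proj_s = xs @ theta.T   # shape (n, n_projections)
    proj_t = xt @ theta.T
    # The 1-D Wasserstein distance is the lp distance between sorted projections.
    proj_s.sort(axis=0)
    proj_t.sort(axis=0)
    return np.mean(np.abs(proj_s - proj_t) ** p)
```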

IV-A Class-conditional Alignment of Distributions

A main shortcoming of Eq. (1) is that minimizing the discrepancy between $\phi_v(p_S)$ and $\phi_v(p_T)$ does not guarantee semantic consistency between the two domains. To clarify this point, consider the source and target domains to be images of printed digits and handwritten digits, respectively. While the feature distributions in the embedding space could have low discrepancy, the classes might not be correctly aligned in this space: digits from a class in the target domain could be matched to a wrong class of the source domain, or digits from multiple classes in the target domain could even be matched to the cluster of a single digit of the source domain. In such cases, the source classifier will not generalize well on the target domain. In other words, the shared embedding space $\mathcal{Z}$ might not be a semantically meaningful space for the target domain if we solely minimize the SWD between $\phi_v(p_S)$ and $\phi_v(p_T)$. To solve this challenge, the encoder function should be learned such that the class-conditional probabilities of both domains in the embedding space are similar, i.e. $\phi_v(p_S(x\,|\,y=c)) \approx \phi_v(p_T(x\,|\,y=c))$, where $c$ denotes a particular class. Given this, we can mitigate the class-matching problem by using an adapted version of Eq. (1):

$$\min_{v,\,w}\; \frac{1}{N}\sum_{i=1}^{N} \mathcal{L}\big(h_w(\phi_v(x^s_i)),\, y^s_i\big) \;+\; \lambda \sum_{c=1}^{k} D\big(\phi_v(p_S(x\,|\,y=c)),\, \phi_v(p_T(x\,|\,y=c))\big) \tag{5}$$

where the discrepancy between the distributions is minimized conditioned on the classes, to enforce semantic alignment in the embedding space. Solving Eq. (5), however, is not tractable, as the labels for the target domain are not available and the conditional distribution $p_T(x\,|\,y=c)$ is not known.

1: Input: labeled source data $(X_S, Y_S)$, unlabeled target data $X_T$, confidence threshold $\tau$, trade-off parameter $\lambda$
2: Pre-training:
3:     initialize $(v, w)$ by minimizing the source empirical risk (first term of Eq. (1))
4: for each adaptation epoch do
5:     assign pseudo-labels to the target data points whose maximum predicted class probability exceeds $\tau$
6:     for each mini-batch drawn from both domains do
7:          Update encoder parameters using pseudo-labels:
8:              take a gradient step on the class-conditional SWD term of Eq. (5) with respect to $v$
9:          Update entire model:
10:             take a gradient step on the source classification loss with respect to $(v, w)$
11:     end for
12: end for
Algorithm 1: DACAD

To tackle the above issue, we compute a surrogate of the objective in Eq. (5). Our idea is to approximate $p_T(x\,|\,y=c)$ by generating pseudo-labels for the target data points. The pseudo-labels are obtained from the source classifier's predictions, but only for the portion of target data points for which the source classifier provides a confident prediction. More specifically, we solve Eq. (5) in incremental gradient descent iterations. We first initialize the classifier network by training it on the source data. We then alternate between optimizing the classification loss for the source data and the SWD loss term at each iteration. At each iteration, we pass the target domain data points through the classifier learned on the source data and analyze the label probability distribution on the softmax layer of the classifier. We choose a threshold $\tau$ and assign pseudo-labels only to those target data points for which the classifier predicts a label with confidence above $\tau$. Since the source and the target domains are related, it is sensible that the source classifier can classify a subset of the target data points correctly and with high confidence. We use these data points to approximate $p_T(x\,|\,y=c)$ in Eq. (5) and update the encoder parameters $v$ accordingly. In our empirical experiments, we have observed that, because the domains are related, as more optimization iterations are performed, the number of data points with confident pseudo-labels increases; our approximation of Eq. (5) therefore improves and becomes more stable, enforcing the source and the target distributions to align class-conditionally in the embedding space. As a side benefit, since we match the distributions class-conditionally, a problem similar to mode collapse is unlikely to occur. Figure 2 visualizes this process using real data. Our proposed framework, named Domain Adaptation with Conditional Alignment of Distributions (DACAD), is summarized in Algorithm 1.
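The sketch below illustrates one adaptation iteration in the spirit of Algorithm 1, assuming the PyTorch modules sketched earlier and a differentiable SWD implementation passed in as swd_fn; the confidence threshold tau, the trade-off lam, the optimizers, and the truncation of matching per-class subsets to a common size are illustrative choices rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def adaptation_step(encoder, classifier, xs, ys, xt, tau, lam, swd_fn,
                    opt_encoder, opt_all, n_classes=10):
    """One DACAD-style iteration: class-conditional SWD on confident pseudo-labels,
    then a supervised update on the labeled source batch."""
    # 1) Pseudo-label target points on which the current classifier is confident.
    with torch.no_grad():
        probs = F.softmax(classifier(encoder(xt)), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > tau

    # 2) Encoder update: sum of SWDs between embedded source points and confidently
    #    pseudo-labeled target points of the same class (surrogate for Eq. (5)).
    opt_encoder.zero_grad()
    zs, zt = encoder(xs), encoder(xt)
    align_loss = 0.0
    for c in range(n_classes):
        zs_c = zs[ys == c]
        zt_c = zt[keep & (pseudo == c)]
        m = min(len(zs_c), len(zt_c))   # truncate to a common size for sorted matching
        if m > 1:
            align_loss = align_loss + swd_fn(zs_c[:m], zt_c[:m])
    if torch.is_tensor(align_loss):    # skip if no class had enough confident points
        (lam * align_loss).backward()
        opt_encoder.step()

    # 3) Full-model update on the labeled source batch (first term of Eq. (1)).
    opt_all.zero_grad()
    cls_loss = F.cross_entropy(classifier(encoder(xs)), ys)
    cls_loss.backward()
    opt_all.step()
    return cls_loss.item()
```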

V Theoretical Analysis

In this section, we employ existing theoretical results on the suitability of optimal transport for domain adaptation [29] within our framework and explain why our algorithm trains models that generalize well on the target domain.

First note that the hypothesis class within our framework is the set of all models $f_\theta = h_w \circ \phi_v$ parameterized by $\theta = (w, v)$. For any given model in this hypothesis class, we denote the observed risk on the source domain by $\hat{e}_S(\theta)$. Analogously, $\hat{e}_T(\theta)$ denotes the observed risk on the target domain in the UDA setting. Also, let $\hat{p}_S$ denote the empirical source distribution obtained from the observed training samples; the empirical target distribution $\hat{p}_T$ is defined similarly. Moreover, let $\theta^*$ denote the ideal model that minimizes the combined source and target risks $e_{\mathcal{C}}(\theta) = e_S(\theta) + e_T(\theta)$, i.e. $\theta^* = \arg\min_{\theta} e_{\mathcal{C}}(\theta)$. In the presence of enough labeled target domain data, this is the best joint model that can be learned. We rely on the following theorem [29].

Theorem 1 [29]: Under the above assumptions for UDA, for any $d' > d$ and $\zeta < \sqrt{2}$, there exists a constant number $N_0$ depending on $d'$ such that for any $\xi > 0$ and $\min(N, M) \ge N_0 \max(\xi^{-(d'+2)}, 1)$, with probability at least $1 - \xi$, the following holds for all $\theta$:

$$e_T(\theta) \;\le\; e_S(\theta) + W\big(\hat{p}_S, \hat{p}_T\big) + e_{\mathcal{C}}(\theta^*) + \sqrt{\frac{2\log(1/\xi)}{\zeta}}\left(\sqrt{\frac{1}{N}} + \sqrt{\frac{1}{M}}\right) \tag{6}$$

For simplicity, Theorem 1 is originally proven in the binary classification setting with the 0-1 loss function (thresholded binary softmax). We also limit our analysis to this setting, but note that these restrictions can be loosened. A first reading of the above theorem might suggest that minimizing the Wasserstein distance between the source and the target distributions alone can improve the generalization error on the target domain, because it makes the upper bound in Eq. (6) tighter. It is crucial to note, however, that the Wasserstein distance cannot be minimized independently of the source risk. Moreover, there is no guarantee that, in doing so, the learned model would be a good approximation of the joint optimal model $\theta^*$, which is important, as the third term on the right-hand side of Eq. (6) shows. We cannot even approximate $e_{\mathcal{C}}(\theta^*)$ in the UDA framework, as there is no labeled data in the target domain. In fact, this theorem justifies why minimizing the Wasserstein distance alone is not sufficient: we should simultaneously minimize the source empirical risk and learn jointly on both domains to account for all terms in Theorem 1. Using Theorem 1, we demonstrate why our algorithm can learn models that generalize well on the target domain. We also want to highlight once more that, although we minimize the SWD in our framework and our theoretical results are derived for the Wasserstein distance, it has been theoretically demonstrated that the SWD is a good approximation of the Wasserstein distance [2].

Theorem 2: Consider that we use the pseudo-labeled target dataset, obtained with confidence threshold $\tau$, at a given optimization iteration of Algorithm 1. Then the following inequality holds:

$$\begin{aligned} e_T(\theta) \;\le\;\; & e_S(\theta) + W\big(\hat{p}_S, \hat{p}_{PL}\big) + e_{\mathcal{C}'}(\theta^*) + (1-\tau) \\ & + \sqrt{\frac{2\log(1/\xi)}{\zeta}}\left(\sqrt{\frac{1}{N}} + \sqrt{\frac{1}{M_{PL}}}\right) \end{aligned} \tag{7}$$

where $e_{\mathcal{C}'}(\theta^*)$ denotes the expected risk of the optimal joint model on both the source domain and the confidently pseudo-labeled target data points, $\hat{p}_{PL}$ denotes the empirical distribution of those pseudo-labeled points, and $M_{PL}$ is their number.

Proof: Since the pseudo-labeled data points are selected according to the threshold $\tau$, if we select a pseudo-labeled data point at random, the probability of its pseudo-label being false is at most $1-\tau$. Let $e_{T'}(\theta)$ denote the risk on the confidently pseudo-labeled target points measured with their true labels, and $e_{PL}(\theta)$ the corresponding risk measured with the pseudo-labels. For a particular data point, the difference between the error based on the true label and the error based on the pseudo-label is:

$$\big| \mathcal{L}\big(f_\theta(x^t_i), y^t_i\big) - \mathcal{L}\big(f_\theta(x^t_i), \hat{y}^t_i\big) \big| \;\le\; \mathbf{1}\big(\hat{y}^t_i \neq y^t_i\big) \tag{8}$$

We can bound the expectation of the above error as:

$$\mathbb{E}\Big[\big| \mathcal{L}\big(f_\theta(x^t_i), y^t_i\big) - \mathcal{L}\big(f_\theta(x^t_i), \hat{y}^t_i\big) \big|\Big] \;\le\; P\big(\hat{y}^t_i \neq y^t_i\big) \;\le\; 1-\tau \tag{9}$$

Using Eq. (9) we can deduce:

$$\big| e_{T'}(\theta) - e_{PL}(\theta) \big| \;\le\; 1-\tau \quad\Longrightarrow\quad e_{T'}(\theta) \;\le\; e_{PL}(\theta) + (1-\tau) \tag{10}$$

Note that since Eq. (10) is valid for all $\theta$, if we consider the joint optimal parameter $\theta^*$ in Eq. (10), we deduce:

$$e_{T'}(\theta^*) \;\le\; e_{PL}(\theta^*) + (1-\tau) \tag{11}$$

By considering Theorem 1, where the confidently pseudo-labeled data points play the role of the given target dataset, and then applying Eq. (11) to Eq. (6), Theorem 2 follows.

Theorem 2 indicates why our algorithm can potentially learn models that generalize well on the target domain. At any given iteration, we minimize the upper bound on the target error given in Eq. (7). We minimize the source risk through the supervised loss, and we minimize the Wasserstein distance by minimizing the SWD loss. The term $e_{\mathcal{C}'}(\theta^*)$ is small because the pseudo-labeled data points are, by definition, selected such that their true labels can be predicted with high probability; hence, the optimal model with parameters $\theta^*$ can perform well both on the source domain and on the pseudo-labeled data points. The term $(1-\tau)$ is also small because we only select confident data points. Crucially, if at a given iteration minimizing the upper bound in Eq. (7) reduces the true target risk, then the class-conditional overlap between the latent distributions of the source and target domains increases, because the trained model's performance has improved on both domains (the source risk is always minimized directly). As a result, in the next iteration the number of samples with confident pseudo-labels increases, which in turn makes the upper bound in Eq. (7) tighter: the constant term on the right-hand side of Eq. (7) (in the second line) decreases, tightening the generalization bound. Hence, our algorithm minimizes all the terms in Eq. (7), which reduces the true risk on the target domain as more optimization iterations are performed. However, this result is conditioned on the existence of confident pseudo-labels, which means the domains must be related.

VI Experimental Validation

Fig. 2: The high-level system architecture, shown on the left, illustrates the data paths used during UDA training. On the right, t-SNE visualizations demonstrate how the embedding space evolves during training for a representative task. In the target domain, colored points are examples with assigned pseudo-labels, whose number grows as the classifier becomes more confident.

We evaluate our algorithm using standard benchmark UDA tasks and compare against several UDA methods.

Datasets: We investigate the empirical performance of our proposed method on five commonly used benchmark datasets in UDA: MNIST [20], USPS [21], Street View House Numbers (SVHN), CIFAR, and STL. The first three are 10-class (0-9) digit classification datasets; MNIST and USPS are collections of handwritten digits, whereas SVHN is a collection of real-world RGB images of house numbers. STL and CIFAR contain RGB images that share 9 object classes: airplane, car, bird, cat, deer, dog, horse, ship, and truck. For the digit datasets, while six domain adaptation problems can be defined among them, prior works often consider only four of these six cases, as knowledge transfer from the simpler MNIST and USPS datasets to the more challenging SVHN domain does not seem to be tractable. Following the literature, we use 2000 randomly selected images from MNIST and 1800 images from USPS for the MNIST-USPS adaptation tasks [24]. In the remaining cases, we use the full datasets. All images are rescaled to a common size, and the SVHN images are converted to grayscale, as the encoder network is shared between the domains; CIFAR and STL retain their RGB components. We report the target classification accuracy across the tasks.
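As an illustration of this preprocessing, the torchvision-based sketch below rescales the digit datasets to a common size (the 32x32 resolution is an assumption, since the exact size is not stated here) and converts SVHN to grayscale; the 2000/1800 subsets follow the protocol described above.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Common preprocessing so that the shared encoder sees identically shaped inputs.
digit_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # SVHN RGB -> grayscale; no-op for MNIST/USPS
    transforms.Resize((32, 32)),                  # illustrative common size
    transforms.ToTensor(),
])

mnist = datasets.MNIST("data", train=True, download=True, transform=digit_tf)
usps = datasets.USPS("data", train=True, download=True, transform=digit_tf)
svhn = datasets.SVHN("data", split="train", download=True, transform=digit_tf)

# Random subsets used for the MNIST-USPS adaptation experiments.
mnist_2000 = Subset(mnist, torch.randperm(len(mnist))[:2000].tolist())
usps_1800 = Subset(usps, torch.randperm(len(usps))[:1800].tolist())
```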

Pre-training: Our experiments involve a pre-training stage in which the encoder and the classifier networks are initialized solely using the source data. This is an essential step, because the combined deep network can generate confident pseudo-labels on the target domain only if it is initially trained on the related source domain. In other words, this initially learned network serves as a naive model for the target domain; we then boost its performance on the target domain using our proposed algorithm, demonstrating that our algorithm is indeed effective for transferring knowledge. In doing so, we investigate a less-explored issue in the UDA literature. Different UDA approaches use considerably different networks, both in terms of complexity, e.g. the number of layers and convolution filters, and structure, e.g. the use of an auto-encoder. Consequently, it is ambiguous whether the performance of a particular UDA algorithm is due to successful knowledge transfer from the source domain or merely a good baseline network that performs well on the target domain even without considerable knowledge transfer. To highlight that our algorithm can indeed transfer knowledge, we use three different network architectures: DRCN [11], VGG [39], and a small ResNet [17]. We then show that our algorithm can effectively boost the baseline performance (statistically significantly) regardless of the underlying network. In most of the domain adaptation tasks, we demonstrate that this boost indeed stems from transferring knowledge from the source domain. In our experiments, we used the Adam optimizer [19] and a fixed confidence threshold $\tau$ for pseudo-labeling.
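A minimal sketch of such a source-only pre-training stage is shown below; the learning rate and number of epochs are illustrative, since only the use of the Adam optimizer is stated above.

```python
import torch
import torch.nn.functional as F

def pretrain_on_source(encoder, classifier, source_loader, epochs=10, lr=1e-3):
    """Supervised pre-training of encoder + classifier on labeled source data only."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=lr)
    for _ in range(epochs):
        for xs, ys in source_loader:
            opt.zero_grad()
            loss = F.cross_entropy(classifier(encoder(xs)), ys)
            loss.backward()
            opt.step()
    return encoder, classifier
```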

Method          MNIST→USPS  USPS→MNIST  SVHN→MNIST  SVHN→USPS  STL→CIFAR  CIFAR→STL
GtA             92.8±0.9    90.8±1.3    92.4±0.9    -          -          -
CoGAN           91.2±0.8    89.1±0.8    -           -          -          -
ADDA            89.4±0.2    90.1±0.8    76.0±1.8    -          -          -
CyCADA          95.6±0.2    96.5±0.1    90.4±0.4    -          -          -
I2I-Adapt       92.1        87.2        80.3        -          -          -
FADA            89.1        81.1        72.8        78.3       -          -
RevGrad         77.1±1.8    73.0±2.0    73.9        -          -          -
DRCN            91.8±0.1    73.7±0.04   82.0±0.2    -          58.9±0.1   66.4±0.1
AUDA            -           -           86.0        -          -          -
OPDA            70.0        60.2        -           -          -          -
MML             77.9        60.5        62.9        -          -          -
Target (FS)     96.8±0.2    98.5±0.2    98.5±0.2    96.8±0.2   81.5±1.0   64.8±1.7
ResNet (FS)     95.8±0.4    98.0±0.5    98.0±0.5    95.8±0.4   81.3±1.6   55.2±1.5
VGG             90.1±2.6    80.2±5.7    67.3±2.6    66.7±2.7   53.8±1.4   63.4±1.2
DACAD (VGG)     92.4±1.2    91.1±3.0    80.0±2.7    79.6±3.3   54.4±1.9   66.5±1.0
ResNet          86.6±6.8    82.9±9.3    62.3±3.6    64.3±3.2   44.3±0.8   63.3±1.2
DACAD (ResNet)  93.6±1.0    95.8±1.3    82.0±3.4    78.0±2.0   44.4±0.4   65.7±1.0
DRCN (Ours)     88.6±1.3    89.6±1.3    74.3±2.8    54.9±1.8   50.0±1.5   64.2±1.7
DACAD (DRCN)    94.5±0.7    97.7±0.3    88.2±2.8    82.6±2.9   55.0±1.4   65.9±1.4
TABLE I: Classification accuracy for UDA between the MNIST, USPS, SVHN, CIFAR, and STL datasets. Some baseline results use the full MNIST and USPS datasets, as opposed to the subsets described above, and some are taken from the reimplementation in [42].

Data Augmentation: Following the literature, we use data augmentation to create additional training data by applying reasonable transformations to the input data in an effort to improve generalization [38]. Confirming the results reported in [11], we also found that geometric transformations and noise, applied to appropriate inputs, greatly improve the performance and transferability of the source model to the target data. Data augmentation can help reduce the domain shift between the two domains. The augmentations in this work are limited to translation, rotation, skew, zoom, Gaussian noise, Binomial noise, and inverted pixels.
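The torchvision-based sketch below shows one way to assemble such an augmentation pipeline; all magnitudes and probabilities are illustrative assumptions, not the settings used in the experiments.

```python
import torch
from torchvision import transforms

def gaussian_noise(x, std=0.05):
    """Additive Gaussian pixel noise on a tensor image (std is illustrative)."""
    return (x + std * torch.randn_like(x)).clamp(0.0, 1.0)

def binomial_noise(x, p=0.02):
    """Binomial (salt) noise: flip a small random fraction of pixels to 1."""
    return torch.where(torch.rand_like(x) < p, torch.ones_like(x), x)

augment = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1),
                            scale=(0.9, 1.1), shear=5),            # rotation, translation, zoom, skew
    transforms.ToTensor(),
    transforms.RandomApply([transforms.Lambda(gaussian_noise)], p=0.5),
    transforms.RandomApply([transforms.Lambda(binomial_noise)], p=0.25),
    transforms.RandomApply([transforms.Lambda(lambda x: 1.0 - x)], p=0.1),  # inverted pixels
])
```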

VI-A Results

Figure 2 demonstrates how our algorithm successfully learns an embedding with class-conditional alignment of the distributions of both domains. This figure presents the two-dimensional t-SNE visualization of the source and target domain data points in the shared embedding space for a representative digit recognition task. The horizontal axis shows the optimization iterations, where each cell presents the data visualization after a particular optimization iteration. The top sub-figures visualize the source data points, where each color represents a particular class. The bottom sub-figures visualize the target data points, where the colored data points represent the pseudo-labeled data points at each iteration and the black points represent the rest of the target domain data points. We can see that, due to pre-training initialization, the embedding space is discriminative for the source domain at the beginning, but the target distribution differs from the source distribution. However, the classifier is confident about a portion of the target data points. As more optimization iterations are performed, the network becomes a better classifier for the target domain, so the number of target pseudo-labeled data points increases, improving our approximation of Eq. (5). As a result, the discrepancy between the two distributions progressively decreases. Over time, our algorithm learns a shared embedding which is discriminative for both domains, making the pseudo-labels good predictions of the original labels (bottom, right-most sub-figure). This result empirically validates our theoretical justification of the applicability of our algorithm to UDA.
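A visualization of this kind can be produced with a short scikit-learn/matplotlib sketch such as the one below, which takes precomputed embedding features and pseudo-labels as NumPy arrays; the plotting details are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embedding(zs, ys, zt, pseudo_labels, confident_mask):
    """2-D t-SNE of source/target embedding features at one training iteration."""
    z2d = TSNE(n_components=2).fit_transform(np.vstack([zs, zt]))
    src2d, tgt2d = z2d[:len(zs)], z2d[len(zs):]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
    ax1.scatter(src2d[:, 0], src2d[:, 1], c=ys, s=4, cmap="tab10")
    ax1.set_title("source (true labels)")
    # Unlabeled target points in black; confidently pseudo-labeled ones in class colors.
    ax2.scatter(tgt2d[~confident_mask, 0], tgt2d[~confident_mask, 1], c="k", s=4)
    ax2.scatter(tgt2d[confident_mask, 0], tgt2d[confident_mask, 1],
                c=pseudo_labels[confident_mask], s=4, cmap="tab10")
    ax2.set_title("target (pseudo-labels)")
    plt.show()
```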

We also compare our results against several recent UDA algorithms in Table I. In particular, we compare against the recent adversarial learning algorithms: Generate to Adapt (GtA) [36], CoGAN [22], ADDA [42], CyCADA [18], and I2I-Adapt [25]. We also include FADA [24], which is originally a few-shot learning technique. For FADA, we list the reported one-shot accuracy, which is very close to the UDA setting (but arguably a simpler problem). Additionally, we include results for RevGrad [9], DRCN [11], AUDA [35], OPDA [4], and MML [37]. The latter methods are similar to ours in that they learn an embedding space to couple the domains; OPDA and MML are the most similar, as they match distributions explicitly in the learned embedding. Finally, we include the performance of fully-supervised (FS) learning on the target domain as an upper bound for UDA. In our own results, we include the baseline target performance obtained by naively employing a DRCN network as well as the target performance of VGG and ResNet networks learned solely on the source domain. We notice in Table I that our baseline performance is better than some of the UDA algorithms on some tasks. This is a crucial observation, as it demonstrates that, in some cases, a deep network trained with good data augmentation can extract domain-invariant features that make domain adaptation feasible even without any further transfer learning procedure. The DACAD rows demonstrate that our method is effective in transferring knowledge to boost the baseline performance. In other words, Table I also serves as an ablation study demonstrating that the effectiveness of our algorithm stems from successful cross-domain knowledge transfer. We can see that our algorithm leads to near state-of-the-art or state-of-the-art performance across the tasks. Additionally, an important observation is that our method significantly outperforms the methods that match distributions directly and is competitive with the methods that use adversarial learning. This can be explained as the result of matching distributions class-conditionally and suggests that our second contribution can potentially boost the performance of these methods. Finally, we note that our proposed method provides a statistically significant boost in all but two of the cases in Table I.

VII Conclusions and Discussion

We developed a new UDA algorithm based on learning a domain-invariant embedding space. We map data points from two related domains to the embedding space such that the discrepancy between the transformed distributions is minimized. We used the sliced Wasserstein distance metric to match the distributions in the embedding space; as a result, our method is computationally efficient. Additionally, we matched distributions class-conditionally by assigning pseudo-labels to the target domain data. As a result, our method is more robust and outperforms prior UDA methods that match distributions directly. We provided theoretical justification for the effectiveness of our approach and experimental validation to demonstrate that our method is competitive with recent state-of-the-art UDA methods.

References

  • [1] N. Bonneel, J. Rabin, G. Peyré, and H. Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51(1):22–45, 2015.
  • [2] N. Bonnotte. Unidimensional and evolution methods for optimal transportation. PhD thesis, Paris 11, 2013.
  • [3] M. Carriere, M. Cuturi, and S. Oudot. Sliced Wasserstein kernel for persistence diagrams. arXiv preprint arXiv:1706.03358, 2017.
  • [4] N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation. IEEE TPAMI, 39(9):1853–1865, 2017.
  • [5] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in neural information processing systems, pages 2292–2300, 2013.
  • [6] B. Damodaran, B. Kellenberger, R. Flamary, D. Tuia, and N. Courty. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. arXiv preprint arXiv:1803.10081, 2018.
  • [7] H. Daumé III. Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815, 2009.
  • [8] I. Deshpande, Z. Zhang, and A. Schwing. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3483–3491, 2018.
  • [9] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2014.
  • [10] Y. Ganin, E. Ustinova, H. Ajakan, P.l Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016.
  • [11] M. Ghifary, B. Kleijn, M. Zhang, D. Balduzzi, and W. Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision, pages 597–613. Springer, 2016.
  • [12] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pages 513–520, 2011.
  • [13] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2066–2073. IEEE, 2012.
  • [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [15] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 999–1006. IEEE, 2011.
  • [16] A. Gretton, A. Smola, J. Huang, M. Schmittfull, K. Borgwardt, and B. Schölkopf. Covariate shift by kernel mean matching, 2009.
  • [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
  • [18] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In ICML, 2018.
  • [19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [20] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396–404, 1990.
  • [21] Y. LeCun, L. D. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A. Muller, E. Sackinger, et al. Comparison of learning algorithms for handwritten digit recognition. In International conference on artificial neural networks, volume 60, pages 53–60. Perth, Australia, 1995.
  • [22] M. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in neural information processing systems, pages 469–477, 2016.
  • [23] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016.
  • [24] S. Motiian, Q. Jones, S. Iranmanesh, and G. Doretto. Few-shot adversarial domain adaptation. In Advances in Neural Information Processing Systems, pages 6670–6680, 2017.
  • [25] Z. Murez, S. Kolouri, D. Kriegman, R. Ramamoorthi, and K. Kim. Image to image translation for domain adaptation. arXiv preprint arXiv:1712.00479, 2017.
  • [26] A. Oberman and Y. Ruan. An efficient linear programming method for optimal transportation. arXiv preprint arXiv:1509.03668, 2015.
  • [27] J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 435–446. Springer, 2011.
  • [28] J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 435–446. Springer, 2011.
  • [29] I. Redko, A. Habrard, and M. Sebban. Theoretical analysis of domain adaptation with optimal transport. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 737–753. Springer, 2017.
  • [30] M. Rostami, D. Huber, and T. Lu. A crowdsourcing triage algorithm for geopolitical event forecasting. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 377–381. ACM, 2018.
  • [31] M. Rostami, S. Kolouri, E. Eaton, and K. Kim. Deep transfer learning for few-shot sar image classification. Remote Sensing, 11(11):1374, 2019.
  • [32] M. Rostami, S. Kolouri, E. Eaton, and K. Kim. Sar image classification using few-shot cross-domain transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
  • [33] K. Roth, A. Lucchi, S. Nowozin, and T. Hofmann. Stabilizing training of generative adversarial networks through regularization. In Advances in Neural Information Processing Systems, pages 2018–2028, 2017.
  • [34] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In European conference on computer vision, pages 213–226. Springer, 2010.
  • [35] K. Saito, Y. Ushiku, and T. Harada. Asymmetric tri-training for unsupervised domain adaptation. In ICML, 2018.
  • [36] S. Sankaranarayanan, Y. Balaji, C. D Castillo, and R. Chellappa. Generate to adapt: Aligning domains using generative adversarial networks. In CVPR, 2018.
  • [37] V. Seguy, B. B. Damodaran, R. Flamary, N. Courty, A. Rolet, and M. Blondel. Large-scale optimal transport and mapping estimation. In ICLR, 2018.
  • [38] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, pages 958–963, 2003.
  • [39] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [40] U. Şimşekli, A. Liutkus, S. Majewski, and A. Durmus. Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions. arXiv preprint arXiv:1806.08141, 2018.
  • [41] J. Solomon, F. De Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (TOG), 34(4):66, 2015.
  • [42] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In Computer Vision and Pattern Recognition (CVPR), volume 1, page 4, 2017.
  • [43] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
  • [44] Z. Yi, H. Zhang, P. Tan, and M. Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In ICCV, pages 2868–2876, 2017.