CUDA: Contradistinguisher for Unsupervised Domain Adaptation

09/08/2019 · Sourabh Balgi et al., Indian Institute of Science

In this paper, we propose a simple model referred to as Contradistinguisher (CTDR) for unsupervised domain adaptation, whose objective is to jointly learn to contradistinguish on the unlabeled target domain in a fully unsupervised manner along with prior knowledge acquired through supervised learning on an entirely different domain. Most recent works in domain adaptation rely on an indirect approach: first aligning the source and target domain distributions, and then learning a classifier on the labeled source domain to classify the target domain. This indirect way of addressing the real task of unlabeled target domain classification has three main drawbacks. (i) The sub-task of obtaining a perfect alignment of the domains may in itself be impossible due to a large domain shift (e.g., language domains). (ii) The use of multiple classifiers to align the distributions unnecessarily increases the complexity of the neural networks, leading to over-fitting in many cases. (iii) Due to distribution alignment, domain-specific information is lost as the domains get morphed. In this work, we propose a simple and direct approach that does not require domain alignment. We jointly learn CTDR on both source and target distributions for the unsupervised domain adaptation task, using a contradistinguish loss for the unlabeled target domain in conjunction with a supervised loss for the labeled source domain. Our experiments show that avoiding domain alignment by directly addressing the task of unlabeled target domain classification using CTDR achieves state-of-the-art results on eight visual and four language benchmark domain adaptation datasets.




I Introduction

The recent success of deep neural networks in supervised learning tasks over several areas like computer vision, speech, and natural language processing can be attributed to models that are trained on large amounts of labeled data. However, acquiring large amounts of labeled data in some domains can be very expensive or outright impossible. Additionally, the time required to label the data before existing deep learning techniques can be applied to a new domain can be very high initially; this is referred to as cold-start. On the contrary, cost-effective unlabeled data can be easily obtained in large amounts for most new domains. So, one can aim to transfer knowledge from a labeled source domain to perform tasks on an unlabeled target domain.

To study this, under the purview of transductive transfer learning, several approaches like domain adaptation, sample selection bias, and covariate shift have been explored in recent times. In this work, we study unsupervised domain adaptation by learning contrastive features in the unlabeled target domain in a fully unsupervised manner, utilizing pre-existing informative knowledge from the labeled source domain. Existing domain adaptation approaches mostly rely on domain alignment, i.e., aligning both domains so that they are superimposed and indistinguishable. This domain alignment can be achieved in three main ways:

(a) discrepancy-based methods [DBLP:conf/iccv/HausserFMC17, 8578490, french2018selfensembling, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z], (b) reconstruction-based methods [10.1007/978-3-319-46493-0_36, Bousmalis:2016:DSN:3157096.3157135], and (c) adversarial adaptation methods [pmlr-v37-ganin15, NIPS2016_6544, 8099799, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning, NIPS2018_7436, DBLP:conf/aaai/ChenCJJ19, shu2018a, hosseini-asl2018augmented].

Unlike the above methods, our main motivation comes from the human ability to 'contradistinguish' and from the fundamental idea of statistical learning described by V. Vapnik [vapnik1999overview], which advocates solving the desired problem in the most direct way possible rather than solving a more general intermediate task. In the context of domain adaptation, the desired problem is classification on the unlabeled target domain, and the domain alignment performed by most standard methods is the more general intermediate task. This motivates us to propose an approach that does not require domain alignment.

Our main contributions in this paper are as follows:

  1. We propose a simple method that directly addresses the problem of domain adaptation by learning a single classifier, which we refer to as Contradistinguisher (CTDR), jointly in an unsupervised manner over the unlabeled target space and in a supervised manner over the labeled source space, thereby overcoming the drawbacks of distribution-alignment-based techniques.

  2. We formulate a ‘contradistinguish loss’ to directly utilize unlabeled target domain and address the classification task using unsupervised feature learning. A similar approach called DisCoder [Pandey2017UnsupervisedFL] was used for a much simpler task of semi-supervised feature learning on a single domain with no domain distribution shift.

  3. From our experiments, we show that by jointly training CTDR on the source and target domain distributions, we achieve results above or on par with several existing methods. Surprisingly, this simple method improves over the state of the art on eight challenging benchmark datasets in visual domains (USPS [lecun1989backpropagation], MNIST [lecun1998gradient], SVHN [37648], SYNNUMBERS [pmlr-v37-ganin15], CIFAR-10 [krizhevsky2009learning], STL-10 [coates2011analysis], SYNSIGNS [pmlr-v37-ganin15] and GTSRB [Stallkamp-IJCNN-2011]) and on four benchmark language domains (Books, DVDs, Electronics, and Kitchen Appliances) of the Amazon customer reviews sentiment analysis dataset [blitzer2006domain].

The rest of the paper is structured as follows. Section II discusses related work in domain adaptation. In Section III, we discuss the problem formulation, architecture, loss function definitions, algorithms, and complexity analysis of our proposed method CUDA. Section IV deals with the experimental setup, results, and analysis on vision and language domains. Finally, in Section V, we conclude by highlighting the key contributions of CUDA.

II Related Work

As mentioned earlier, almost all domain adaptation approaches rely on domain alignment techniques. Here we briefly discuss three main techniques of domain alignment.

(a) Discrepancy-based methods: Associative Domain Adaptation (ADA) [DBLP:conf/iccv/HausserFMC17] learns statistically domain invariant embeddings using an association loss as an alternative to Maximum Mean Discrepancy (MMD) [Gretton:2009:FCK:2984093.2984169]. Maximum Classifier Discrepancy (MCD) [8578490] aligns source and target distributions by maximizing the discrepancy between two separate classifiers. Self-Ensembling (SE) [french2018selfensembling] uses the mean teacher variant [DBLP:conf/nips/TarvainenV17] of temporal ensembling [DBLP:conf/iclr/LaineA17], with heavy reliance on data augmentation, to minimize the discrepancy between student and teacher network predictions. Variational Fair Autoencoder (VFAE) [DBLP:journals/corr/LouizosSLWZ15] uses a Variational Autoencoder (VAE) [DBLP:journals/corr/KingmaW13] with MMD to obtain domain invariant features. Central Moment Discrepancy (CMD) [2017arXiv170208811Z] proposes to match higher order moments of the source and target domain distributions.
(b) Reconstruction-based methods: Deep Reconstruction-Classification Networks (DRCN) [10.1007/978-3-319-46493-0_36] and Domain Separation Networks (DSN) [Bousmalis:2016:DSN:3157096.3157135] learn shared encodings of the source and target domains using reconstruction networks. (c) Adversarial adaptation methods: Reverse Gradient (RevGrad/DANN) [pmlr-v37-ganin15, ganin2016domain] uses a domain discriminator to learn domain invariant representations of both domains. Coupled Generative Adversarial Network (CoGAN) [NIPS2016_6544] uses a Generative Adversarial Network (GAN) [Goodfellow:2014:GAN:2969033.2969125] to obtain domain invariant features used for classification. Adversarial Discriminative Domain Adaptation (ADDA) [8099799] uses GANs along with weight sharing to learn domain invariant features. Generate to Adapt (G2A) [DBLP:conf/cvpr/Sankaranarayanan18a] learns to generate the equivalent image in the other domain for a given image, thereby learning common domain invariant embeddings. Cross-Domain Representation Disentangler (CDRD) [DBLP:conf/cvpr/LiuYFWCW18] learns cross-domain disentangled features for domain adaptation. Symmetric Bi-Directional Adaptive GAN (SBADA-GAN) [Russo_2018_CVPR] aims to learn symmetric bidirectional mappings between the domains by trying to mimic a target image given a source image. Cycle-Consistent Adversarial Domain Adaptation (CyCADA) [pmlr-v80-hoffman18a] adapts representations at both the pixel level and the feature level over the domains. Moving Semantic Transfer Network (MSTN) [xie2018learning] learns semantic representations for the unlabeled target samples by aligning labeled source centroids and pseudo-labeled target centroids. Conditional Domain Adversarial Network (CDAN) [NIPS2018_7436] conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions.
Joint Discriminative Domain Adaptation (JDDA) [DBLP:conf/aaai/ChenCJJ19] proposes joint domain alignment along with discriminative feature learning. Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) [shu2018a] and Augmented Cyclic Adversarial Learning (ACAL) [hosseini-asl2018augmented] learn by using a domain discriminator along with data augmentation for domain adaptation.

Apart from these standard approaches, a slightly deviant method that has been explored is tri-training. Tri-training algorithms use three classifiers trained on the labeled source domain and refine them for the unlabeled target domain. To be precise, in each round of tri-training, a target sample is pseudo-labeled if the other two classifiers agree on the labeling, under certain conditions such as confidence thresholding. Asymmetric Tri-Training (ATT) [pmlr-v70-saito17a] uses three classifiers to bootstrap high-confidence target domain samples by confidence thresholding. This way of bootstrapping works only if the source classifier has very high accuracy. In case of low source classifier accuracy, target samples are never obtained for bootstrapping, resulting in a bad model. Multi-Task Tri-training (MT-Tri) [DBLP:conf/acl/PlankR18] explores the tri-training technique on language domain adaptation tasks.

All the domain adaptation approaches mentioned earlier have a common unifying theme: they attempt to morph the target and source distributions so as to make them indistinguishable. Once the two distributions are perfectly aligned, a classifier trained on the labeled source domain is used to classify the unlabeled target domain. Hence, the performance of the classifier on the target domain depends crucially on the domain alignment. As a result, the actual task of target domain classification is solved indirectly via domain alignment, rather than by using the unlabeled target data in an unsupervised manner, which is the more logical and direct way.

In this paper, we propose a completely different approach: instead of focusing on aligning the source and target distributions, we learn a single classifier, referred to as Contradistinguisher (CTDR), jointly on both domain distributions, using a contradistinguish loss for the unlabeled target data and a supervised loss for the labeled source data.

III Proposed Method: CUDA

A domain is specified by its input feature space X, its label space Y, and the joint probability distribution p(x, y), where x ∈ X and y ∈ Y. Let K = |Y| be the number of class labels, so that y ∈ {0, …, K−1} for any instance x. Domain adaptation consists of two domains, D_s and D_t, referred to as the source and target domains respectively. A common assumption in domain adaptation is that the input feature space as well as the label space remain unchanged across the source and the target domain, i.e., X_s = X_t = X and Y_s = Y_t = Y. Hence, the only difference between the source and target domains is in the input-label distributions, i.e., p_s(x, y) ≠ p_t(x, y). This is referred to as domain shift in the standard domain adaptation literature.

In particular, in unsupervised domain adaptation, the training data consists of labeled source domain instances {(x_s, y_s)} and unlabeled target domain instances {x_t}. Given labeled data in the source domain, it is straightforward to learn a classifier by maximizing the conditional probability p(y | x) over the labeled samples. However, the task at hand is to learn a classifier on the unlabeled target domain by transferring knowledge from the labeled source domain.

III-A Overview

Fig. 1: Architecture of the proposed method CUDA with Contradistinguisher (encoder and classifier). Three optimization objectives, with their respective inputs, are involved in training CUDA: (i) source supervised (2), (ii) target unsupervised (5), and (iii) adversarial regularization (9).

Figure 1 indicates the model architecture of our proposed method CUDA, i.e., Contradistinguisher (CTDR) and the respective losses involved in CUDA training.

The objective of CTDR is to find a clustering scheme using the most contrastive features on the unlabeled target domain in such a way that it also satisfies the target domain prior, i.e., prior enforcing. We achieve this by jointly training on labeled source samples in a supervised manner and on unlabeled target samples in an unsupervised manner, end-to-end, using a contradistinguish loss as in [Pandey2017UnsupervisedFL]. This fine-tunes the classifier learnt on the source domain to the target domain. The key feature of our approach is the contradistinguish loss (5), which is discussed in detail in Section III-C.

Note that the objective of the CTDR is not the same as that of a classifier, i.e., distinguishing is not the same as classifying. Suppose there are two contrastive entities e1 belonging to class C1 and e2 belonging to class C2. The aim of a classifier is to classify e1 as C1 and e2 as C2, and to train a classifier one requires labeled data. On the contrary, the job of a CTDR is to just identify that e1 and e2 are different, i.e., a CTDR can label e1 as C1 (or C2) and e2 as C2 (respectively C1) indifferently. To train a CTDR, we do not need any class information but only the unlabeled entities e1 and e2. Using unlabeled target data, CTDR is able to distinguish the samples in an unsupervised way. However, since the final task is classification, one requires a selective incorporation of the pre-existing informative knowledge needed for the task of classification. This knowledge is obtained from the labeled source domain by joint training, thus classifying e1 as C1 and e2 as C2.

In the subsequent Sections III-B to III-E, we formulate the optimization objectives and also perform a complexity analysis.

III-B Supervised Source Classification

For the labeled source domain instances (x_s, y_s), we define the conditional likelihood of observing y_s given x_s as p_θ(y_s | x_s), where θ denotes the parameters of CTDR. We estimate θ by maximizing the conditional log-likelihood of observing the labels given the labeled source domain samples. The source domain supervised objective to maximize is

    max_θ Σ_{(x_s, y_s) ∈ D_s} log p_θ(y_s | x_s).    (1)

Alternatively, one can minimize the cross-entropy loss

    L_s(θ) = − Σ_{(x_s, y_s) ∈ D_s} log p_θ(y_s | x_s),    (2)

where p_θ(y | x_s) is the softmax output of CTDR that represents the probability of class y for the given sample x_s.
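For concreteness, the source supervised objective can be sketched in plain Python. This is a minimal sketch of a mean cross-entropy loss over raw classifier logits; `softmax` and `source_supervised_loss` are illustrative names, and the actual implementation uses PyTorch tensors rather than lists.

```python
import math

def softmax(logits):
    # numerically stable softmax over one sample's logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def source_supervised_loss(logits_batch, labels):
    # mean cross-entropy over labeled source samples:
    # -(1/n) * sum_i log p_theta(y_i | x_i)
    loss = 0.0
    for logits, y in zip(logits_batch, labels):
        loss += -math.log(softmax(logits)[y])
    return loss / len(labels)
```

Minimizing this quantity is equivalent to maximizing the conditional log-likelihood objective (1).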

III-C Unsupervised Target Classification

For the unlabeled target domain instances x_t ∈ D_t, as the corresponding labels y_t are unknown, a naive way of predicting the target labels is to directly use the classifier p_θ(y | x) trained only with the supervised loss (2). Though this gives some good results, it fails to achieve high accuracies due to two reasons: (i) p_θ(y | x) is learnt over D_s and not D_t. (ii) p_θ(y | x) alone is not a valid joint probability distribution over the target domain because it does not sum to one over all pairs (x_t, y_t).

Enforcing these two conditions, we model a non-trivial joint distribution p̂_θ(x_t, y_t) parameterized by θ over the target domain as

    p̂_θ(x_t, y_t) = p_θ(y_t | x_t) / Σ_{x ∈ D_t} p_θ(y_t | x).    (3)

However, (3) is not exactly a joint distribution yet because Σ_{x ∈ D_t} p̂_θ(x, y_t) ≠ p(y_t), i.e., marginalizing over all x_t should yield the target prior distribution p(y_t). We modify (3) so as to include the marginalization condition. We refer to this as target domain prior enforcing:

    p̂_θ(x_t, y_t) = p(y_t) · p_θ(y_t | x_t) / Σ_{x ∈ D_t} p_θ(y_t | x).    (4)
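The target domain prior enforcing described above can be sketched numerically. This is a minimal plain-Python sketch; `cond_probs` and `prior` are illustrative names, and in practice the sum in the denominator runs over the available (mini-batch of) target samples.

```python
def prior_enforced_joint(cond_probs, prior):
    # cond_probs[i][y] = p(y | x_i) from the classifier;
    # prior[y] = target class prior p(y).
    # Build the prior-enforced approximate joint:
    #   joint[i][y] = prior[y] * p(y | x_i) / sum_j p(y | x_j)
    n, num_classes = len(cond_probs), len(prior)
    col_sums = [sum(cond_probs[i][y] for i in range(n))
                for y in range(num_classes)]
    return [[prior[y] * cond_probs[i][y] / col_sums[y]
             for y in range(num_classes)]
            for i in range(n)]
```

By construction, summing the joint over the samples recovers the prior for each class, which is exactly the marginalization condition being enforced.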


Note that (4) defines a non-trivial approximation of the joint distribution over the target domain as a function of the classifier learnt over the source domain. The resultant unsupervised maximization objective for the target domain is given by maximizing the log-probability of the joint distribution, which is

    max_{θ, ŷ_t} Σ_{x_t ∈ D_t} log p̂_θ(x_t, ŷ_t).    (5)
Next, we discuss how the objective (5) is solved and the reason why (5) is referred to as the contradistinguish loss. Since the target labels are unknown, one needs to maximize (5) over the parameters θ as well as the unknown target labels ŷ_t. As there are two sets of unknowns for the maximization, we follow a two-step approach to maximize (5). The two optimization steps are as follows.

(i) Pseudo-label selection: We maximize (5) only with respect to the label ŷ_t for every x_t by fixing θ, i.e.,

    ŷ_t = argmax_y p̂_θ(x_t, y).    (6)

The pseudo-labeling approach under the semi-supervised representation learning setting has been well studied in [pseudo-label] and shown to be equivalent to entropy regularization [grandvalet2005semi]. We derive the motivation from [Pandey2017UnsupervisedFL], which also uses pseudo-labeling in the context of semi-supervised representation learning. However, our method addresses the more complex problem of domain adaptation in the presence of domain shift.
(ii) Maximization: By fixing the pseudo-labels ŷ_t from (6), we train CTDR to maximize (5) with respect to the parameter θ. Expanding the log of (4), this objective is

    max_θ Σ_{x_t ∈ D_t} [ log p_θ(ŷ_t | x_t) + log p(ŷ_t) − log Σ_{x ∈ D_t} p_θ(ŷ_t | x) ].    (7)

The first term, i.e., the log-probability of ŷ_t for a given x_t, forces CTDR to choose features to classify x_t as ŷ_t. The second term is a constant in θ, hence it has no effect on the optimization with respect to θ. The third term is the negative of the log-probability of ŷ_t summed over all the samples in the entire domain. Maximization of this term forces CTDR to choose features that do not classify the other samples x ≠ x_t to the selected pseudo-label ŷ_t. This forces CTDR to extract the most unique features of a given sample against all the other samples, i.e., the most unique contrastive features of the selected sample over all the other samples, so as to distinguish a given sample from all others. The first and third terms together in (7) enforce that CTDR learns the most contradistinguishing features among the samples, thus performing unlabeled target domain classification in a fully unsupervised way. Because of this contradistinguishing feature learning, we refer to the unsupervised target domain objective (5) as the contradistinguish loss.

Ideally, one would like to compute the third term in (7) using the complete target training data for each input sample. Since it is expensive to compute this term over the entire D_t for each individual sample during training, one evaluates the third term in (7) over a mini-batch. In our experiments, we have observed that the mini-batch strategy does not cause any problem during training as long as the mini-batch includes at least one sample from each class, which is guaranteed for a reasonably large mini-batch size. For numerical stability, we use the log-sum-exp trick to optimize the third term in (7).
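The two-step mini-batch evaluation of the contradistinguish objective can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's implementation: `log_probs` holds the classifier's log-probabilities for a mini-batch of target samples, `prior` is the (known or assumed) target class prior, and the loss is negated so that a gradient-descent optimizer can minimize it.

```python
import math

def logsumexp(vals):
    # numerically stable log(sum(exp(v))) over a list
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def contradistinguish_loss(log_probs, prior):
    # log_probs[i][y] = log p(y | x_i) for a mini-batch (n >= 2) of target
    # samples; prior[y] = target class prior.
    n, num_classes = len(log_probs), len(prior)
    # Step (i), pseudo-label selection: fix the parameters and pick, for
    # each sample, the label maximizing the prior-enforced joint.
    denom = [logsumexp([log_probs[j][y] for j in range(n)])
             for y in range(num_classes)]
    pseudo = [max(range(num_classes),
                  key=lambda y: lp[y] + math.log(prior[y]) - denom[y])
              for lp in log_probs]
    # Step (ii), maximization: reward log p(yhat | x_i) while penalizing the
    # same label's probability over the other samples in the batch (the
    # contradistinguishing term), stabilized with log-sum-exp.
    total = 0.0
    for i, yhat in enumerate(pseudo):
        others = [log_probs[j][yhat] for j in range(n) if j != i]
        total += log_probs[i][yhat] - logsumexp(others)
    return -total / n  # negated so it can be minimized
```

A batch whose samples are confidently assigned to different classes yields a lower loss than a batch collapsed onto one class, which is exactly the contrastive behavior the objective encourages.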

III-D Adversarial Regularization

In order to prevent CTDR from over-fitting to the chosen pseudo-labels during training, we use adversarial regularization. In particular, we train CTDR to be confused about a set of fake samples by maximizing the conditional log-probability of a given fake sample belonging to all classes simultaneously. The objective of the adversarial regularization is to multi-label the fake sample (e.g., a noisy image that looks like both a cat and a dog) equally across all classes, since labeling it with any single class would introduce more noise into the pseudo-labels. This strategy is similar to entropy regularization [grandvalet2005semi] in the sense that instead of minimizing the entropy for the real target samples, we maximize the conditional log-probability over the fake samples. Therefore, we add the following maximization objective to the total CTDR objective as a regularizer:


    max_θ Σ_{x_f} Σ_{y=0}^{K−1} log p_θ(y | x_f)    (8)

for all fake samples x_f. As maximizing (8) is analogous to minimizing the binary cross-entropy loss (9) of a multi-class multi-label classification task, in our practical implementation we minimize

    L_adv(θ) = − Σ_{x_f} Σ_{y=0}^{K−1} log p_θ(y | x_f),    (9)

where p_θ(y | x_f) is the softmax output of CTDR, which represents the probability of class y for the given fake sample x_f; minimizing (9) assigns the labels of all the classes to every fake sample.
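In plain Python, the practical form of (9) amounts to pushing every per-class probability of a fake sample towards 1. This is a minimal sketch with illustrative names; the clamp constant is a hypothetical choice for numerical safety, not taken from the paper.

```python
import math

def adversarial_regularizer(probs_fake):
    # probs_fake[i][y]: CTDR's softmax probability of class y for fake
    # sample i. Binary cross-entropy with target label 1 for EVERY class,
    # i.e., each fake sample is multi-labeled with all classes equally.
    loss, count = 0.0, 0
    for probs in probs_fake:
        for p_y in probs:
            loss += -math.log(max(p_y, 1e-12))  # clamp avoids log(0)
            count += 1
    return loss / count
```

Since the softmax outputs of a sample sum to one, this loss is smallest when the mass is spread uniformly over the classes, keeping the classifier maximally confused about fake inputs.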

The fake samples x_f can be directly sampled from, say, a Gaussian distribution in the input feature space with the mean and standard deviation of the target samples. For the language domains, fake samples are generated randomly in this manner. In the case of image datasets, as the feature space is high dimensional, the fake images are generated using a generator network with parameters φ that takes a Gaussian noise vector z as input to produce a fake sample x_f = G_φ(z). The generator is trained by minimizing the kernel MMD loss [DBLP:conf/nips/LiCCYP17], i.e., a modified version of the MMD loss between the encoder outputs of the fake images and the real target domain images respectively:


    L_g(φ) = (1/n_f²) Σ_{i,j} k(e_i, e_j) + (1/n_t²) Σ_{i,j} k(ê_i, ê_j) − (2/(n_f n_t)) Σ_{i,j} k(e_i, ê_j),    (10)

where k(·, ·) is the Gaussian kernel, and e_i and ê_j denote the encoder outputs of the n_f fake images and n_t real target domain images respectively.
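A minimal plain-Python sketch of a squared-MMD objective between fake and real encoder outputs follows. For illustration it uses a single Gaussian kernel with a hypothetical bandwidth `sigma`; the loss actually referenced [DBLP:conf/nips/LiCCYP17] is a modified kernel MMD.

```python
import math

def gaussian_kernel(a, b, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma^2)); sigma is an
    # illustrative bandwidth choice, not taken from the paper
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-sq / (2.0 * sigma ** 2))

def mmd_loss(fake_embs, real_embs, sigma=1.0):
    # biased estimate of squared MMD between fake and real embeddings:
    # E[k(f, f')] + E[k(r, r')] - 2 E[k(f, r)]
    def mean_k(xs, ys):
        return (sum(gaussian_kernel(x, y, sigma) for x in xs for y in ys)
                / (len(xs) * len(ys)))
    return (mean_k(fake_embs, fake_embs) + mean_k(real_embs, real_embs)
            - 2.0 * mean_k(fake_embs, real_embs))
```

The loss vanishes when the two embedding sets coincide and grows as they separate, which is what drives the generator's outputs towards the target embedding distribution.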

Note that the objective of the generator is not to generate realistic images but to generate fake noisy images with mixed image attributes from the target domain. This reduces the effort of training powerful generators, which is the focus of adversarial domain adaptation approaches [DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning] used for domain alignment.

III-E Algorithms and Complexity Analysis

Algorithms 1 and 2 list the steps involved in CUDA training and inference respectively.

Input: labeled source data D_s, unlabeled target data D_t, number of epochs E, iterations per epoch B
Output: θ   // parameters of CTDR
1  if the target domain prior is known then
2        use it for the contradistinguish loss (5)
3  else
4        estimate the prior (e.g., assume it equals the source domain prior)
5  for epoch = 1 to E do
6        for iter = 1 to B do
7              sample source and target mini-batches
8              compute the source supervised loss (1) on the source mini-batch
9              compute the pseudo-labels (6) for the target mini-batch
10             compute the contradistinguish loss (7) fixing the pseudo-labels
11             if adversarial regularization is enabled then
12                   if a generator is used then
13                         get fake samples from Gaussian noise vectors using the generator and compute the generator loss (10)
14                   else
15                         get fake samples by random sampling in the input feature space
16                   compute the adversarial regularization loss (9) on the fake samples
17             combine the losses from steps 8, 10, and 16, compute gradients with a backward pass, and update θ using gradient descent
Algorithm 1 CUDA Training
Input: test samples {x}
Output: predicted labels {ŷ}
1  for each test sample x do
2        predict the label as ŷ = argmax_y p_θ(y | x)
Algorithm 2 CUDA Inference

Here we briefly discuss the time complexity of Algorithms 1 and 2. We also compare the model complexity of CUDA against domain alignment approaches.

(a) Time complexity: We consider a batch of b instances for forward and backward propagation during training. For the source supervised loss (2), the time complexity is O(b·T), where T is the cost of obtaining the classifier output for one sample; T mainly depends on the model complexity, which is discussed next. For the target unsupervised loss (5), the time complexity is O(b·T) for pseudo-label selection and O(b·T + b²) for the first and third terms of the maximization step, i.e., effectively O(b·T + b²) for the target unsupervised loss (5). The adversarial regularization loss (9) corresponds to O(b·T). The time complexity for generator training is O(b·T_e + b²·d), where d is the dimension of the encoder output and T_e is the cost of the encoder forward pass, which also depends on the model complexity. As the terms involving T dominate, the total training time complexity simplifies to O(b·T). During the inference phase, the time complexity is O(m·T), where m is the number of inference samples.

(b) Model complexity: As discussed above, T mainly depends on the model complexity, which involves many factors such as the input feature dimension, the number of neural network layers, the type of normalization, the type of activation functions, etc. CTDR is a simple network with a single encoder and classifier, unlike MCD-DA, which uses a single encoder with two classifiers; this roughly doubles the MCD-DA time complexity relative to O(b·T). Similarly, SE uses two copies of the encoder-classifier network, one for the student and the other for the teacher, again roughly doubling its time complexity. In general, as domain alignment approaches use additional circuitry either in terms of multiple classifiers or GANs, the model complexity increases at least by a factor of 2. This increased model complexity requires more data augmentation to prevent under-fitting, leading to further increases in time complexity at the expense of only a slight improvement, if any, compared to CUDA, as indicated by our state-of-the-art results without any data augmentation in both visual and language domain adaptation tasks. We observed empirically that most of the computational cost lies in the forward and backward propagation needed to obtain the classifier softmax output and the gradients, i.e., in T; hence the use of GPUs to accelerate these computations. We believe the trade-off achieved by the simplicity of CUDA, as evident from our results, is very desirable compared to most domain alignment approaches that use data augmentation and complex neural networks for a slight improvement, if any.

IV Experiments

Fig. 2: Illustrations of random samples from the eight visual datasets: (a) USPS, (b) MNIST, (c) SVHN, (d) SYNNUMBERS, (e) CIFAR-9, (f) STL-9, (g) SYNSIGNS, and (h) GTSRB (from top to bottom row).
TABLE I: Details of image datasets.
Dataset       # Train   # Test   # Classes   Target          Resolution   Channels
USPS          7,291     2,007    10          Digits          16×16        Mono
MNIST         60,000    10,000   10          Digits          28×28        Mono
SVHN          73,257    26,032   10          Digits          32×32        RGB
SYNNUMBERS    479,400   9,553    10          Digits          32×32        RGB
CIFAR-9       45,000    9,000    9           Object ID       32×32        RGB
STL-9         4,500     7,200    9           Object ID       96×96        RGB
SYNSIGNS      100,000   -        43          Traffic Signs   40×40        RGB
GTSRB         39,209    12,630   43          Traffic Signs   varies       RGB

TABLE II: Details of language datasets (Amazon customer reviews for sentiment analysis).
Domain               # Train   # Test
Books                2,000     4,465
DVDs                 2,000     3,586
Electronics          2,000     5,681
Kitchen Appliances   2,000     5,945
Fig. 7: t-SNE [vandermaaten2008visualizing] plots of embeddings from the output of CTDR before applying softmax, for the test samples of the visual task trained with CUDA. We consider this task as it is the most difficult among all the visual experiments due to contrasting domains with a high domain shift. (a) Before training: all test samples, indicating domain shift as there are two separate clusters for each domain. (b)-(c) A subset of the test samples at intermediate stages of CUDA training. (d) A subset of the test samples after full CUDA training.
Fig. 16: t-SNE [vandermaaten2008visualizing] plots of embeddings from the output of CTDR before applying softmax, for the test samples in the visual experiments (e) CIFAR-9→STL-9 and (f) STL-9→CIFAR-9.
ADA [DBLP:conf/iccv/HausserFMC17] - - 97.16 - - - 91.86 97.66
MCD-DA [8578490] 94.10 94.20 96.20 - - - - 94.40
DRCN [10.1007/978-3-319-46493-0_36] 73.67 91.80 81.97 40.05 66.37 58.65 - -
DSN [Bousmalis:2016:DSN:3157096.3157135] - - 82.70 - - - 91.20 93.10
RevGrad [pmlr-v37-ganin15] 74.01 91.11 73.91 35.67 66.12 56.91 91.09 88.65
CoGAN [NIPS2016_6544] 89.10 91.20 - - - - - -
ADDA [8099799] 90.10 89.40 76.00 - - - - -
G2A [DBLP:conf/cvpr/Sankaranarayanan18a] 90.80 92.50 84.70 36.40 - - - -
CDRD [DBLP:conf/cvpr/LiuYFWCW18] 94.35 95.05 - - - - - -
SBADA-GAN [Russo_2018_CVPR] 95.00 97.60 76.10 61.10 - - - 96.70
CyCADA [pmlr-v80-hoffman18a] 96.50 95.60 90.40 - - - - -
MSTN [xie2018learning] - 92.90 91.70 - - - - -
CDAN [NIPS2018_7436] 97.10 96.50 90.50 - - - - -
JDDA [DBLP:conf/aaai/ChenCJJ19] 96.70 - 94.20 - - - - -
ATT [pmlr-v70-saito17a] - - 86.20 52.80 - - 93.10 96.20
CUDA (Ours) 99.20 97.86 99.07 71.30 77.22 65.93 94.30 99.40
(Ours) 99.64 97.98 99.64 96.02 73.78 91.46 96.85 98.23
(Ours) 81.18 82.00 77.54 24.86 77.64 62.10 91.45 95.13
(Ours) 98.83 97.71 98.81 50.83 77.22 62.50 93.65 98.15
(Ours) 98.77 97.86 98.62 54.38 76.93 61.09 93.52 97.86
(Ours) 99.20 97.31 98.85 54.32 76.18 59.37 93.59 99.40
(Ours) 89.97 93.87 97.15 41.71 75.00 56.99 90.79 99.35
(Ours) 98.75 96.26 95.73 55.25 70.93 61.37 92.97 99.11
SE [french2018selfensembling] 99.54 98.26 99.26 97.00 80.09 74.24 97.11 99.37
DIRT-T [shu2018a] - - 99.40 54.50 - 73.30 96.20 99.60
ACAL [hosseini-asl2018augmented] 97.16 98.31 96.51 60.85 - - 97.98 -
TABLE III: Target domain test accuracy (%) on image datasets. CUDA corresponds to our best results obtained with the best hyper-parameter settings. The other '(Ours)' rows represent different training configurations, i.e., combinations of the target supervised, source supervised, target unsupervised, source unsupervised, adversarial regularization, and source adversarial regularization losses. We exclude [french2018selfensembling, shu2018a, hosseini-asl2018augmented] from the comparison as they use heavy data augmentation.
Fig. 21: t-SNE [vandermaaten2008visualizing] plots of embeddings from the output of CTDR for the test samples in the language experiments. (Note: for the sake of brevity, we do not add the plots for all the language tasks, as the language task plots are all very similar and not as informative as the visual task plots.)
VFAE [DBLP:journals/corr/LouizosSLWZ15] 79.90 79.20 81.60 75.50 78.60 82.20 72.70 76.50 85.00 72.00 73.30 83.80 78.35
CMD [2017arXiv170208811Z] 80.50 78.70 81.30 79.50 79.70 83.00 74.40 76.30 86.00 75.60 77.50 85.40 79.82
DANN [ganin2016domain] 78.40 73.30 77.90 72.30 75.40 78.30 71.30 73.80 85.40 70.90 74.00 84.30 76.27
ATT [pmlr-v70-saito17a] 80.70 79.80 82.50 73.20 77.00 82.50 73.20 72.90 86.90 72.50 74.90 84.60 78.39
MT-Tri [DBLP:conf/acl/PlankR18] 78.14 81.45 82.14 74.86 81.45 82.14 74.86 78.14 82.14 74.86 78.14 81.45 79.14
CUDA (Ours) 82.77 83.07 85.58 80.02 82.06 85.70 75.88 76.05 87.30 73.08 73.06 86.66 80.93
(Ours) 83.83 87.19 89.05 84.08 87.19 89.05 84.08 83.83 89.05 84.08 83.83 87.19 86.03
(Ours) 81.07 75.11 77.53 77.67 75.99 79.78 73.12 74.48 86.19 72.59 76.24 85.92 77.97
(Ours) 81.99 81.45 84.36 77.18 81.48 84.37 67.26 67.71 87.30 70.68 71.97 84.79 78.37
(Ours) 82.63 81.73 83.75 75.88 77.45 80.96 69.70 70.69 87.37 72.99 67.76 84.51 77.91
(Ours) 82.77 83.07 85.58 80.02 82.06 85.70 75.88 76.05 87.30 73.08 73.06 86.66 80.93
(Ours) 80.37 80.20 84.58 78.45 81.36 85.03 75.05 75.01 87.47 72.63 71.97 86.31 79.86
TABLE IV: Target domain test accuracy (%) on the Amazon customer reviews dataset for sentiment analysis. CUDA corresponds to our best results obtained with the best hyper-parameter settings. The other '(Ours)' rows represent different training configurations, i.e., combinations of the target supervised, source supervised, target unsupervised, source unsupervised, and adversarial regularization losses.

IV-A Experimental Setup

IV-A1 Visual Domain Adaptation

We consider eight benchmark visual datasets with three different types of images for our visual domain experiments. (a) Digits: USPS [lecun1989backpropagation] and MNIST [lecun1998gradient] are a pair of gray-scale digit datasets. SVHN [37648] and SYNNUMBERS [pmlr-v37-ganin15] are another pair of RGB digit datasets. (b) Objects: CIFAR [krizhevsky2009learning] and STL [coates2011analysis] are a dataset pair of RGB object/animal images, considering only the 9 overlapping classes from the original datasets. (c) Traffic signs: SYNSIGNS [pmlr-v37-ganin15] and GTSRB [Stallkamp-IJCNN-2011] are a dataset pair of traffic signs. Table I provides the visual dataset details and Figure 2 shows random samples from all eight datasets.

On these datasets, we consider the eight main domain adaptation tasks studied in [pmlr-v37-ganin15, french2018selfensembling]. These eight visual tasks and the data processing considered are as follows. (i) USPS↔MNIST: USPS images are up-scaled using bi-linear interpolation from 16×16×1 to 28×28×1 to match the size of MNIST images. (ii) MNIST↔SVHN: MNIST images are up-scaled using bi-linear interpolation to 32×32×1, and the RGB channels of SVHN are converted to mono, resulting in 32×32×1 images. Several other combinations were tried, and this one was chosen since it gave the best results. (iii) SYNNUMBERS→SVHN: no pre-processing is required as these domains have the same image size. (iv) CIFAR-9↔STL-9: only the 9 overlapping classes of the two datasets are used, as the label space should be the same for both domains; STL-9 images are down-scaled from 96×96×3 to 32×32×3 to match the size of CIFAR-9 images. (v) SYNSIGNS→GTSRB: images are cropped to 40×40×3 based on the region of interest in both datasets.

Note that we do not perform any image data augmentation in our experiments, unlike [french2018selfensembling]. Our aim in this paper is to demonstrate that the proposed method performs on par with or above such methods without data augmentation, since augmentation is expensive and not always possible, as in language tasks.

IV-A2 Language Domain Adaptation

We consider four benchmark language domains, (i) Books, (ii) DVDs, (iii) Electronics, and (iv) Kitchen Appliances, from the Amazon customer reviews dataset [blitzer2006domain]. The dataset includes product reviews in these four domains for sentiment analysis, as indicated in Table II.

On these domains, we consider all twelve tasks studied in [ganin2016domain, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, pmlr-v70-saito17a, DBLP:conf/acl/PlankR18]. We use the same neural networks and text pre-processing used in [Chen:2012:MDA:3042573.3042781, ganin2016domain, DBLP:conf/acl/PlankR18] to obtain 5000-dimensional feature vectors. We assign the binary label ‘0’ to products with low star ratings and ‘1’ to products with high star ratings.
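A minimal stand-in for this feature-extraction step is sketched below with scikit-learn's bag-of-words vectorizer, capped at 5000 features as in the cited prior work. The exact pipeline (tokenization, n-gram choice, weighting) in those papers may differ; the review strings and labels here are invented examples.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical example reviews with their binary sentiment labels:
# '1' for high star ratings, '0' for low ones.
reviews = [
    "great product, works perfectly",
    "terrible quality, broke in a day",
    "excellent read, highly recommended",
    "waste of money, very disappointed",
]
labels = [1, 0, 1, 0]

# Uni-gram/bi-gram counts, capped at a 5000-dimensional vocabulary.
vectorizer = CountVectorizer(max_features=5000, ngram_range=(1, 2))
features = vectorizer.fit_transform(reviews)
```

Each review becomes one sparse row of `features`, paired with its label for the source-supervised loss.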

We select the best existing neural networks without major modifications to the hyper-parameters so as to demonstrate the effectiveness of CUDA. All the experiments are done using PyTorch [paszke2017automatic] with a mini-batch size of 64 per GPU distributed over four GPUs, and the Adam optimizer with an initial learning rate that is decayed every 30 epochs.

IV-B Experimental Results

We use the same evaluation metric as in [pmlr-v37-ganin15, NIPS2016_6544, 10.1007/978-3-319-46493-0_36, pmlr-v70-saito17a, DBLP:conf/iccv/HausserFMC17, 8099799, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, 8578490, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning, NIPS2018_7436, DBLP:conf/aaai/ChenCJJ19, french2018selfensembling, shu2018a, hosseini-asl2018augmented, ganin2016domain, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, DBLP:conf/acl/PlankR18, Bousmalis:2016:DSN:3157096.3157135], i.e., the accuracy on the target domain test set. Table III reports the target domain test accuracy across all eight main visual domain adaptation tasks compared with several state-of-the-art domain alignment methods [pmlr-v37-ganin15, NIPS2016_6544, 10.1007/978-3-319-46493-0_36, pmlr-v70-saito17a, DBLP:conf/iccv/HausserFMC17, 8099799, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, 8578490, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning, NIPS2018_7436, DBLP:conf/aaai/ChenCJJ19, french2018selfensembling, shu2018a, hosseini-asl2018augmented, Bousmalis:2016:DSN:3157096.3157135]. Table IV reports the target domain test accuracy across all twelve language domain adaptation tasks compared with different state-of-the-art methods [ganin2016domain, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, pmlr-v70-saito17a, DBLP:conf/acl/PlankR18].

Apart from the standard domain alignment methods used for comparison, we report two baselines of our own in Tables III and IV, obtained by fixing the CTDR neural network architecture and varying only the training losses, to demonstrate the effectiveness of CUDA. The first baseline trains CTDR using only the target domain in a fully supervised way; the second trains CTDR using only the source domain in a fully supervised way. These respectively indicate the maximum and minimum target domain test accuracy that can be attained with the chosen CTDR neural network.

Comparing CUDA with the source-only supervised baseline in Tables III and IV, we see large improvements in the target domain test accuracies due to the use of the contradistinguish loss (5), demonstrating the effectiveness of CTDR.

As our method depends mainly on the contradistinguish loss (5), we also experimented with better neural networks in combination with it, and observed improved results in both visual and language domain adaptation: over the networks used in [8099799, 8578490] in the visual experiments, and over MAN [DBLP:conf/naacl/ChenC18] in the language experiments.
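The overall training objective, a supervised loss on the labeled source domain plus an unsupervised term on the unlabeled target domain, can be sketched generically as below. This is not the paper's contradistinguish loss (5), which is defined earlier in the paper; the target term here is a simple most-likely-label (confidence) stand-in used only to illustrate the joint structure, and `lam` is a hypothetical weighting.

```python
import numpy as np

def log_softmax(z):
    """Row-wise numerically stable log-softmax over class logits."""
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def joint_objective(src_logits, src_labels, tgt_logits, lam=1.0):
    """Source cross-entropy plus an unsupervised target term.

    The target term rewards confident predictions on unlabeled target
    samples; it is an illustrative stand-in, not loss (5) itself.
    """
    src_lp = log_softmax(src_logits)
    supervised = -src_lp[np.arange(len(src_labels)), src_labels].mean()
    tgt_lp = log_softmax(tgt_logits)
    unsupervised = -tgt_lp.max(axis=1).mean()  # log-prob of most likely label
    return supervised + lam * unsupervised
```

Both terms are minimized jointly over a single shared classifier, which is what lets CTDR use source supervision and target structure in the same input feature space.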

IV-C Analysis of Experimental Results

IV-C1 Visual Domain Adaptation

In the tasks with small target training sets, CUDA clearly outperforms the target-supervised baseline in Table III. That baseline is poor because training with only the target domain supervised loss causes under-fitting. The improved results of CUDA indicate that CTDR is able to contradistinguish on the target domain while also transferring the knowledge required for classification from the larger source domain. This shows that CTDR is indeed successful in contradistinguishing on a relatively small set of unlabeled target samples using the larger source domain's information. Another interesting observation is the task where the target-supervised baseline is slightly better than CUDA: this is due to slight over-fitting on target domain training examples that are non-informative for classification, leading to a small decrease in target domain test accuracy. This also indicates that the source domain carries more information than the target domain, owing to the large source and small target training sets. Figure 7(a-d) shows t-SNE plots as training with CUDA progresses on the most difficult of the visual experiments, chosen because of its highly contrasting domains. Figure 16(a-h) shows t-SNE plots of the CTDR test-sample outputs for all eight visual experiments; they show clear class-wise clustering on both source and target domains, indicating the efficacy of CUDA.
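The t-SNE projections referred to above can be produced with a standard implementation such as scikit-learn's. The sketch below is an illustrative helper, not the paper's plotting code; the perplexity and seed are assumed values.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(features, perplexity=5.0, seed=0):
    """Project high-dimensional classifier outputs to 2-D so that
    class-wise clustering on source and target can be inspected."""
    return TSNE(n_components=2, perplexity=perplexity,
                init="random", random_state=seed).fit_transform(features)

# Example: 30 hypothetical 16-dimensional feature vectors -> 30 2-D points.
points = embed_2d(np.random.RandomState(0).randn(30, 16))
```

Running this on source and target test features and coloring points by class label reproduces the kind of cluster visualization shown in Figures 7 and 16.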

IV-C2 Language Domain Adaptation

In one of the tasks, CUDA falls slightly short because of mild over-fitting on the source domain. Figure 21(a-d) shows the t-SNE plots of the top four language tasks, indicating that the classes cluster along either half of a line-like structure.

V Conclusion

In this paper, we have proposed a simple and direct approach to unsupervised domain adaptation that differs from the standard distribution alignment approaches. In our approach, we jointly learn a Contradistinguisher (CTDR) on the source and target domain distributions in the same input feature space, using a contradistinguish loss on the unsupervised target domain to identify contrastive features. We have shown that this contrastive learning overcomes the need for, and the drawbacks of, domain alignment, especially in tasks where the domain shift is very high (e.g., language domains) and data augmentation techniques cannot be applied. Because the contradistinguish loss enforces a prior, the proposed unsupervised domain adaptation method CUDA can incorporate any known target domain prior, overcoming the drawbacks of skewness in the target domain and thereby yielding a skew-robust model. We demonstrated the effectiveness of our model by achieving state-of-the-art results on all the visual domain adaptation tasks over eight different benchmark visual datasets, and on nine of the twelve language domain adaptation tasks, along with the best mean test accuracy across all twelve tasks on the benchmark Amazon customer reviews sentiment analysis dataset. In particular, the results on the language domains reinforce the efficacy of CUDA in being robust to high sparsity and high domain shift, which pose challenges to standard domain alignment approaches.


Acknowledgment

The authors would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for its generous funding of this work through UAY Projects IISc 001 and IISc 010.