The recent success of deep neural networks on supervised learning tasks in areas such as computer vision, speech, and natural language processing can be attributed to models trained on large amounts of labeled data. However, acquiring massive amounts of labeled data in some domains can be very expensive or outright impossible. Additionally, the time required to label data before existing deep learning techniques can be applied to a new domain can be very high. This is known as the cold-start problem. On the contrary, cost-effective unlabeled data can be obtained easily and in large amounts for most new domains. One can therefore aim to transfer knowledge from a labeled source domain to perform tasks on an unlabeled target domain. To study this, several approaches such as domain adaptation, sample selection bias, and co-variance shift have been explored in recent times under the purview of transductive transfer learning.
Existing domain adaptation approaches mostly rely on domain alignment, i.e., align both domains so that they are superimposed and indistinguishable in the latent space. This domain alignment can be achieved in three main ways: (a) discrepancy-based methods [DBLP:conf/icml/LongC0J15, DBLP:conf/nips/LongZ0J16, DBLP:conf/icml/LongZ0J17, DBLP:conf/iccv/HausserFMC17, 8578490, french2018selfensembling, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, rozantsev2018beyond, 8792192, mancini2018boosting, cariucci2017autodial, carlucci2017just], (b) reconstruction-based methods [10.1007/978-3-319-46493-0_36, Bousmalis:2016:DSN:3157096.3157135], and (c) adversarial adaptation methods [pmlr-v37-ganin15, NIPS2016_6544, 8099799, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning, NIPS2018_7436, DBLP:conf/aaai/ChenCJJ19, shu2018a, hosseini-asl2018augmented, liang2018aggregating, xu2018deep].
These domain alignment strategies, which address the task of unlabeled target domain classification only indirectly, have three main drawbacks. (i) The sub-task of obtaining a perfect alignment of the domains may in itself be very difficult or even impossible when the domain shift is large (e.g., language domains). (ii) The use of multiple classifiers and/or GANs to align the distributions unnecessarily increases the complexity of the neural networks, leading to over-fitting in many cases. (iii) During distribution alignment, domain-specific information is lost as the domains get morphed.
A particular case where domain alignment with a classifier trained on the source domain may fail is when the target domain is better suited to the classification task than the source domain. In this case, it is preferable to perform the classification directly on the unlabeled target domain in an unsupervised manner, as aligning onto a less suited source domain only leads to loss of information. It is reasonable to assume that, for the main objective of unlabeled target domain classification, one should use all the information in the target domain and optionally incorporate any useful information from the labeled source domain, and not the other way around. These drawbacks push us to attempt solving domain adaptation problems without solving the more general problem of domain alignment.
In this work, we study unsupervised domain adaptation by learning contrastive features in the unlabeled target domain in a fully unsupervised manner, with the help of a classifier simultaneously trained on the labeled source domain. We derive our motivation from the philosophy of Vapnik [DBLP:books/sp/95/V1995, vapnik1999overview, DBLP:books/daglib/0026015, gong2007machine], which states that any desired problem should be solved in the most direct way possible, rather than by solving a more general intermediate task. Considering the various drawbacks of the domain alignment approach and following Vapnik's philosophy, in this paper we propose a method for domain adaptation that does not require domain alignment and approaches the problem directly.
This work extends our earlier conference paper [DBLP:conf/icdm/BalgiD19, DBLP:journals/corr/abs-1909-03442] in the following ways. (i) We provide additional experimental results on the more complex domain adaptation dataset Office-31 [DBLP:conf/eccv/SaenkoKFD10], which includes images from three different sources, AMAZON, DSLR, and WEBCAM, categorized into three respective domains with only a few labeled high resolution images each. (ii) We provide several ablation studies and demonstrations that give insights into the working of our proposed method CUDA [DBLP:conf/icdm/BalgiD19, DBLP:journals/corr/abs-1909-03442]. (iii) We extend our algorithm to the case of multi-source domain adaptation and establish benchmark results.
A summary of our contributions in this paper is as follows.
We propose a simple method, Contradistinguisher for Unsupervised Domain Adaptation (CUDA), that directly addresses the problem of domain adaptation by learning a single classifier, which we refer to as the Contradistinguisher, jointly in an unsupervised manner over the unlabeled target domain and in a supervised manner over the labeled source domain, thereby overcoming the drawbacks of distribution-alignment-based techniques.
We formulate a ‘contradistinguish loss’ to directly utilize the unlabeled target domain and address the classification task using unsupervised feature learning. Note that a similar approach, called DisCoder [Pandey2017UnsupervisedFL], was used for the much simpler task of semi-supervised feature learning on a single domain with no domain distribution shift.
We extend our experiments to the more complex domain adaptation dataset Office-31 [DBLP:conf/eccv/SaenkoKFD10], which includes images from three different sources, AMAZON, DSLR, and WEBCAM, categorized into three respective domains. Unlike the simpler datasets (USPS [lecun1989backpropagation], MNIST [lecun1998gradient], SVHN, SYNNUMBERS [pmlr-v37-ganin15], CIFAR-10 [krizhevsky2009learning], STL-10 [coates2011analysis], SYNSIGNS [pmlr-v37-ganin15], and GTSRB [Stallkamp-IJCNN-2011]) explored in [DBLP:conf/icdm/BalgiD19, DBLP:journals/corr/abs-1909-03442], the Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset includes very few images, of the order of hundreds, with high resolution and varying backgrounds. From our experiments, we show that by jointly training the contradistinguisher on the source and target domain distributions, we can achieve above or on-par results compared to several domain adaptation methods.
We further demonstrate the simplicity and effectiveness of our proposed method by easily extending single-source domain adaptation to a more general multi-source domain adaptation. We demonstrate the effectiveness of the multi-source domain adaptation extension by performing experiments on Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset in a multi-source setting.
Apart from these real-world benchmark datasets, we also validate the proposed method on synthetically created toy-datasets. We use scikit-learn [pedregosa2011scikit] to generate blobs (point clouds) with different source and target domain distribution shapes and orientations, and simulate our proposed method on them.
In Fig. 5, we demonstrate the difference between domain alignment and the proposed method CUDA by swapping the domains. One can see that while domain alignment approaches learn a classifier only on the source domain, the Contradistinguisher jointly learns to classify both the domains. Due to this joint learning, we observe an added desirable behavior: similar classifiers are obtained irrespective of which domain is used as the source domain.
In Section 3, we elaborate on the problem formulation, the neural network architecture used, the loss functions, the model training and inference algorithms, and the complexity analysis of our proposed method. Section 4 discusses the experimental setup, results, and analysis on vision and language domains. Finally, in Section 5, we conclude by highlighting the key contributions of CUDA.
2 Related Work
As mentioned earlier, almost all domain adaptation approaches rely on domain alignment techniques. Here we briefly discuss three main techniques of domain alignment.
(a) Discrepancy-based methods:
Deep Adaptation Network (DAN) [DBLP:conf/icml/LongC0J15] proposes mean-embedding matching of multi-layer representations across domains by minimizing Maximum Mean Discrepancy (MMD) [Gretton:2009:FCK:2984093.2984169, gretton2012kernel, sejdinovic2013equivalence] in a reproducing kernel Hilbert space (RKHS).
Residual Transfer Network (RTN) [DBLP:conf/nips/LongZ0J16] introduces separate source and target domain classifiers differing by a small residual function along with fusing the features of multiple layers in a reproducing kernel Hilbert space (RKHS) to match the domain distributions.
Joint Adaptation Network (JAN) [DBLP:conf/icml/LongZ0J17] proposes to optimize Joint Maximum Mean Discrepancy (JMMD), which measures the Hilbert-Schmidt norm between kernel mean embedding of empirical joint distributions of source and target domain.
Associative Domain Adaptation (ADA) [DBLP:conf/iccv/HausserFMC17] learns statistically domain invariant embeddings by associating the embeddings of the final fully-connected layer before applying softmax, as an alternative to the Maximum Mean Discrepancy (MMD) [Gretton:2009:FCK:2984093.2984169, gretton2012kernel, sejdinovic2013equivalence] loss. Maximum Classifier Discrepancy (MCD) aligns source and target distributions by maximizing the discrepancy between two separate classifiers. Self Ensembling (SE) [french2018selfensembling] uses the mean teacher variant [DBLP:conf/nips/TarvainenV17] of temporal ensembling [DBLP:conf/iclr/LaineA17]
with heavy reliance on data augmentation to minimize the discrepancy between student and teacher network predictions. Variational Fair Autoencoder (VFAE) [DBLP:journals/corr/LouizosSLWZ15] uses a Variational Autoencoder (VAE) [DBLP:journals/corr/KingmaW13]
with MMD to obtain domain invariant features. Central Moment Discrepancy (CMD) [2017arXiv170208811Z] proposes to match higher order moments of the source and target domain distributions. Rozantsev et al. [rozantsev2018beyond] propose to explicitly model the domain shift using a two-stream architecture, one for each domain, along with MMD to align the source and target representations. A more recent approach, the multi-domain Domain Adaptation layer (mDA-layer) [8792192, mancini2018boosting], proposes the novel idea of replacing standard Batch-Norm layers [ioffe2015batch] with specialized Domain Alignment layers [cariucci2017autodial, carlucci2017just], thereby reducing the domain shift by discovering and handling multiple latent domains. Geodesic Flow Subspaces (GFS/SGF) [gopalan2011domain]
performs domain adaptation by first generating two subspaces of the source and the target domains using PCA, followed by learning a finite number of interpolated subspaces between the source and target subspaces based on the geometric properties of the Grassmann manifold. In the presence of multiple source domains, this method is very effective as it identifies the optimal subspace for domain adaptation. sFRAME (sparse Filters, Random fields, And Maximum Entropy) [xie2015learning] models are Markov random field models that fit a maximum entropy distribution to the observed data by identifying patterns in it.
(b) Reconstruction-based methods:
Deep Reconstruction-Classification Networks (DRCN) [10.1007/978-3-319-46493-0_36] and Domain Separation Networks (DSN) [Bousmalis:2016:DSN:3157096.3157135] learn shared encodings of the source and target domains using reconstruction networks.
(c) Adversarial adaptation methods:
Reverse Gradient (RevGrad) [pmlr-v37-ganin15], or Domain Adversarial Neural Network (DANN) [ganin2016domain], uses a domain discriminator to learn domain invariant representations of both the domains. Coupled Generative Adversarial Network (CoGAN) [NIPS2016_6544] uses a Generative Adversarial Network (GAN) [Goodfellow:2014:GAN:2969033.2969125] to obtain domain invariant features used for classification. Adversarial Discriminative Domain Adaptation (ADDA) uses GANs along with weight sharing to learn domain invariant features. Generate to Adapt (G2A) [DBLP:conf/cvpr/Sankaranarayanan18a] learns to generate the equivalent image in the other domain for a given image, thereby learning common domain invariant embeddings. Cross-Domain Representation Disentangler (CDRD) [DBLP:conf/cvpr/LiuYFWCW18] learns cross-domain disentangled features for domain adaptation.
Symmetric Bi-Directional Adaptive GAN (SBADA-GAN) [Russo_2018_CVPR] aims to learn symmetric bidirectional mappings between the domains by trying to mimic a target image given a source image. Cycle-Consistent Adversarial Domain Adaptation (CyCADA) [pmlr-v80-hoffman18a] adapts representations at both the pixel level and the feature level across the domains. Moving Semantic Transfer Network (MSTN) [xie2018learning] learns semantic representations for the unlabeled target samples by aligning labeled source centroids and pseudo-labeled target centroids. Conditional Domain Adversarial Network (CDAN) [NIPS2018_7436] conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Joint Discriminative Domain Adaptation (JDDA) [DBLP:conf/aaai/ChenCJJ19] proposes joint domain alignment along with discriminative feature learning. Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) [shu2018a] and Augmented Cyclic Adversarial Learning (ACAL) [hosseini-asl2018augmented] learn by using a domain discriminator along with data augmentation for domain adaptation. Deep Cocktail Network (DCTN) [xu2018deep] proposes a k-way domain discriminator and category classifier for digit classification and real-world object recognition in a multi-source domain adaptation setting.
Apart from these approaches, a slightly different method recently proposed is Tri-Training. Tri-training algorithms use three classifiers trained on the labeled source domain and refine them for the unlabeled target domain. To be precise, in each round of tri-training, a target sample is pseudo-labeled if two of the classifiers agree on the labeling, under certain conditions such as confidence thresholding. Asymmetric Tri-Training (ATT) [pmlr-v70-saito17a] uses three classifiers to bootstrap high confidence target domain samples by confidence thresholding. This way of bootstrapping works only if the source classifier has very high accuracy. In case of low source classifier accuracy, target samples are never obtained for bootstrapping, resulting in a bad model. Multi-Task Tri-training (MT-Tri) [DBLP:conf/acl/PlankR18] explores the tri-training technique on language domain adaptation tasks in a multi-task setting.
All the domain adaptation approaches mentioned earlier have a common unifying theme: they attempt to morph the target and source distributions so as to make them indistinguishable. In this paper, we propose a completely different approach: instead of focusing on aligning the source and target distributions, we learn a single classifier, referred to as the Contradistinguisher, jointly on both the domain distributions, using a contradistinguish loss for the unlabeled target domain data and a supervised loss for the labeled source domain data.
3 Proposed Method: CUDA
A domain is specified by its input feature space $\mathcal{X}$, the label space $\mathcal{Y}$, and the joint probability distribution $p(\mathbf{x}, y)$, where $\mathbf{x} \in \mathcal{X}$ and $y \in \mathcal{Y}$. Let $K = |\mathcal{Y}|$ be the number of class labels, so that $y \in \{0, \ldots, K-1\}$ for any instance $(\mathbf{x}, y)$. Domain adaptation, in particular, consists of two domains, $\mathcal{D}_s$ and $\mathcal{D}_t$, that are referred to as the source and target domains respectively. A common assumption in domain adaptation is that the input feature space as well as the label space remain unchanged across the source and the target domain, i.e., $\mathcal{X}_s = \mathcal{X}_t = \mathcal{X}$ and $\mathcal{Y}_s = \mathcal{Y}_t = \mathcal{Y}$. Hence, the only difference between the source and target domains lies in the input-label space distributions, i.e., $p_s(\mathbf{x}, y) \neq p_t(\mathbf{x}, y)$. This is referred to as domain shift in the domain adaptation literature.
In particular, in unsupervised domain adaptation, the training data consists of labeled source domain instances $\{(\mathbf{x}_s^i, y_s^i)\}_{i=1}^{n_s}$ and unlabeled target domain instances $\{\mathbf{x}_t^i\}_{i=1}^{n_t}$. Given labeled data in the source domain, it is straightforward to learn a classifier by maximizing the conditional probability $p(y \mid \mathbf{x})$ over the labeled samples. However, the task at hand is to learn a classifier on the unlabeled target domain by transferring knowledge from the labeled source domain.
The outline of the proposed method CUDA, with the contradistinguisher and the respective losses involved in training, is depicted in Fig. 6. The objective of the contradistinguisher is to find a clustering scheme using the most contrastive features on the unlabeled target domain in such a way that it also satisfies the target domain prior, i.e., prior enforcing. We achieve this by jointly training on labeled source samples in a supervised manner and on unlabeled target samples in an unsupervised manner, end-to-end, using a contradistinguish loss similar to [Pandey2017UnsupervisedFL].
This fine-tunes the classifier learnt on the source domain to the target domain as well, as demonstrated in Fig. 5 and Fig. 11. The crux of our approach is the contradistinguish loss (5), which is discussed in detail in Section 3.3; hence the apt name contradistinguisher for our neural network architecture.
Note that the objective of the contradistinguisher is not the same as that of a classifier, i.e., distinguishing is not the same as classifying. Suppose there are two contrastive entities $e_1 \in C_1$ and $e_2 \in C_2$, where $C_1$ and $C_2$ are two classes. The aim of a classifier is to classify $e_1$ into $C_1$ and $e_2$ into $C_2$, and training a classifier requires labeled data. On the contrary, the job of the contradistinguisher is just to identify that $e_1 \neq e_2$, i.e., the contradistinguisher can assign $e_1$ to $C_1$ (or $C_2$) and $e_2$ to $C_2$ (or $C_1$) indifferently. To train the contradistinguisher, we do not need any class information but only the unlabeled entities $e_1$ and $e_2$. Using unlabeled target data, the contradistinguisher is able to find a clustering scheme by distinguishing the unlabeled target domain samples in an unsupervised way. However, since the final task is classification, one requires a selective incorporation of the pre-existing knowledge relevant to the classification task. This knowledge of assigning labels to the clusters is obtained by joint training, thus classifying $e_1$ into $C_1$ and $e_2$ into $C_2$.
3.2 Supervised Source Classification
For the labeled source domain instances $(\mathbf{x}_s, y_s)$, we define the conditional likelihood of observing $y_s$ given $\mathbf{x}_s$ as $p_{\theta}(y_s \mid \mathbf{x}_s)$, where $\theta$ denotes the parameters of the contradistinguisher. We estimate $\theta$ by maximizing the conditional log-likelihood of observing the labels given the labeled source domain samples. Therefore, the source domain supervised objective to maximize is given as
$$\max_{\theta} \sum_{i=1}^{n_s} \log p_{\theta}(y_s^i \mid \mathbf{x}_s^i). \quad (1)$$
Alternatively, as in our practical implementation, one can minimize the cross-entropy loss instead of maximizing (1), i.e.,
$$-\sum_{i=1}^{n_s} \sum_{k=1}^{K} \mathbb{1}[k = y_s^i] \log \hat{y}_{k}(\mathbf{x}_s^i), \quad (2)$$
where $\hat{y}_{k}(\mathbf{x})$ is the softmax output of the contradistinguisher that represents the probability of class $k$ for the given sample $\mathbf{x}$.
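The supervised source objective of (1)-(2) can be sketched in a few lines. The following is an illustrative numpy version, not the authors' released code; `source_cross_entropy` is a hypothetical helper name, and the softmax here stands in for the output layer of the actual encoder-classifier network.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def source_cross_entropy(logits, labels):
    # Negative log-likelihood of the true source labels under the
    # softmax output, i.e., the cross-entropy loss of (2).
    probs = softmax(logits)
    n = logits.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
```

Minimizing this quantity over mini-batches of labeled source samples is equivalent to maximizing the conditional log-likelihood in (1).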
3.3 Unsupervised Target Classification
For the unlabeled target domain instances $\{\mathbf{x}_t^i\}_{i=1}^{n_t}$, as the corresponding labels are unknown, a naive way of predicting the target labels is to directly use the classifier trained only with the supervised loss given in (2). While this approach may perform reasonably well in certain cases, it fails to deliver state-of-the-art performance. This may be attributed to the following reason: the support of the learnt distribution is defined only over the source domain instances $\mathbf{x}_s$ and not the target domain instances $\mathbf{x}_t$. Hence, we model a non-trivial joint distribution, parameterized by the same $\theta$, over the target domain with only the target domain instances as support,
However, (3) is not a joint distribution yet, because marginalizing it over all $\mathbf{x}_t$ does not yield the target prior distribution $p(y_t)$. We modify (3) so as to include this marginalization condition. Hence, we refer to this as target domain prior enforcing.
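Prior enforcing can be illustrated with a greedy pseudo-labeling sketch: assign pseudo-labels in decreasing order of confidence while capping each class at its share of the target prior, so that the pseudo-label marginal matches the prior. This is an illustrative scheme under assumptions of ours, not the paper's exact modification of (3); `prior_enforced_pseudo_labels` is a hypothetical helper name.

```python
import numpy as np

def prior_enforced_pseudo_labels(probs, prior):
    """Greedy sketch: visit (sample, class) pairs from most to least
    confident and assign a pseudo-label only while the class is below
    its prior quota, so the label marginal matches the target prior."""
    n, k = probs.shape
    quota = np.floor(prior * n).astype(int)
    quota[: n - quota.sum()] += 1          # distribute rounding remainder
    labels = -np.ones(n, dtype=int)
    counts = np.zeros(k, dtype=int)
    # flat descending sort of confidences, mapped back to (row, col) pairs
    order = np.dstack(np.unravel_index(np.argsort(-probs, axis=None), (n, k)))[0]
    for i, c in order:
        if labels[i] == -1 and counts[c] < quota[c]:
            labels[i] = c
            counts[c] += 1
    return labels
```

With a uniform two-class prior, a batch whose samples all lean toward class 0 still gets half its pseudo-labels assigned to class 1, taken from the least class-0-confident samples.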
Note that this defines a non-trivial approximate joint distribution over the target domain as a function of the parameters $\theta$ learnt over the source domain. The resultant unsupervised maximization objective for the target domain is obtained by maximizing the log-probability of this joint distribution, given in (5).
Next, we discuss how the objective given in (5) is solved and why (5) is referred to as the contradistinguish loss. Since the target labels are unknown, one needs to maximize (5) over the parameters $\theta$ as well as the unknown target labels. As there are two unknown variables for maximization, we follow a two-step approach to maximize (5), analogous to the Expectation Maximization (EM) algorithm [dempster1977maximum]. The two optimization steps are as follows.
Ideally, one would like to compute the third term in (7) using the complete target training data for each input sample. Since it is expensive to compute this term over the entire target dataset for each individual sample during training, one evaluates the third term in (7) over a mini-batch. In our experiments, we have observed that the mini-batch strategy does not cause any problem during training as long as each mini-batch includes at least one sample from each class, which is a fair assumption for a reasonably large mini-batch size. For numerical stability, we use the log-sum-exp trick to optimize the third term in (7).
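The two-step optimization above can be sketched as follows. This is a minimal numpy reading of the contradistinguish loss under simplifying assumptions of ours: prior enforcing is omitted, the pseudo-label step is a plain argmax, and the batch term is computed with the log-sum-exp trick over the mini-batch; the exact form of (5)-(7) may differ.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def logsumexp_cols(log_p):
    # log sum_j exp(log_p[j, y]) per class column y (log-sum-exp trick).
    m = log_p.max(axis=0)
    return m + np.log(np.exp(log_p - m).sum(axis=0))

def contradistinguish_loss(logits):
    """Two-step sketch: (i) pick pseudo-labels by maximum conditional
    probability; (ii) reward each sample for being more indicative of
    its pseudo-label than the rest of the mini-batch."""
    log_p = log_softmax(logits)              # log p(y | x_i)
    pseudo = log_p.argmax(axis=1)            # step (i): pseudo-labels
    n = logits.shape[0]
    chosen = log_p[np.arange(n), pseudo]     # log p(yhat_i | x_i)
    batch = logsumexp_cols(log_p)[pseudo]    # log sum_j p(yhat_i | x_j)
    return -(chosen - batch).mean()          # step (ii): maximize the margin
```

Two target samples that the network distinguishes into different classes incur a near-zero loss, while two indistinguishable samples mapped to the same class are penalized, which is exactly the "distinguishing, not classifying" behavior described above.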
3.4 Adversarial Regularization
In order to prevent the contradistinguisher from over-fitting to the chosen pseudo-labels during training, we use adversarial regularization. In particular, we train the contradistinguisher to be confused about a set of fake negative samples by maximizing the conditional log-probability over each given fake sample such that the sample belongs to all classes simultaneously. The objective of the adversarial regularization is to multi-label each fake sample (e.g., a noisy image that looks like both a cat and a dog) equally across all classes, as labeling it with any unique class introduces more noise into the pseudo-labels. This strategy is similar to entropy regularization [grandvalet2005semi] in the sense that, instead of minimizing the entropy for the real target samples, we maximize the conditional log-probability over the fake negative samples. Therefore, we add the following maximization objective to the total contradistinguisher objective as a regularizer.
for all classes $k \in \{1, \ldots, K\}$. As maximizing (8) is analogous to minimizing the binary cross-entropy loss (9) of a multi-class multi-label classification task, in our practical implementation, we minimize (9) by assigning the labels of all the classes to every fake sample.
where $\hat{y}_{k}(\mathbf{x})$ is the softmax output of the contradistinguisher, which represents the probability of class $k$ for the given sample $\mathbf{x}$.
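A minimal sketch of the adversarial regularizer of (8)-(9), assuming per-class sigmoid outputs for the multi-label binary cross-entropy; `fake_sample_regularizer` is an illustrative name, not the authors' API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fake_sample_regularizer(logits):
    """Binary cross-entropy of (9) with *all* classes as positive
    targets: fake negative samples are pushed to belong to every class
    equally, so they cannot pull pseudo-labels toward any single class."""
    p = sigmoid(logits)
    # target = 1 for every class, so the loss is -log p for each class
    return -np.mean(np.log(p + 1e-12))
```

A fake sample on which the network is maximally undecided (zero logits, probability 0.5 everywhere) incurs a loss of log 2 per class, and the loss vanishes only when every class is confidently predicted as present.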
The fake negative samples can be directly sampled from, say, a Gaussian distribution in the input feature space with the mean and standard deviation of the real samples. For the language domain, fake samples are generated randomly in this manner, because the input features are embeddings extracted from a denoising auto-encoder with bag-of-words as the auto-encoder's input. In case of the visual datasets, as the feature space is high dimensional, the fake images are generated using a generator network that takes a Gaussian noise vector as input to produce a fake sample. The generator is trained by minimizing the kernel MMD loss [DBLP:conf/nips/LiCCYP17], i.e., a modified version of the MMD loss between the encoder outputs of the fake images and the real target domain images respectively.
where $k(\cdot, \cdot)$ is the Gaussian kernel.
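The MMD quantity used to train the generator can be sketched as follows. This is a biased sample estimate with a single Gaussian kernel bandwidth, rather than the kernel mixture of [DBLP:conf/nips/LiCCYP17]; function names are illustrative.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated pairwise.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Squared MMD between sample sets x and y under a Gaussian kernel:
    E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)] (biased estimate)."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())
```

The estimate is zero when the two sample sets coincide and grows as their distributions separate, which is what drives the generated fake samples toward the target domain's encoder statistics.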
Note that the objective of the generator is not to generate realistic images but to generate fake noisy images with mixed image attributes from the target domain. This reduces the effort of training powerful generators which is the focus in adversarial based domain adaptation approaches [DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning] used for domain alignment.
3.5 Algorithms and Complexity Analysis
The time complexity of training mainly depends on the model complexity, which involves many factors such as the input feature dimension, the number of neural network layers, the type of normalization, the type of activation functions, etc. The contradistinguisher is a simple network with a single encoder and a single classifier, unlike MCD, which uses a single encoder with two classifiers, roughly doubling its time complexity. Similarly, SE [french2018selfensembling] uses two copies of the encoder-classifier network, one for the student and one for the teacher, again roughly doubling its time complexity. In general, as domain alignment approaches use additional circuitry, either in terms of multiple classifiers or GANs, the model complexity increases at least by a factor of two. This increased model complexity requires more data augmentation to prevent over-fitting, further increasing the time complexity, at the expense of only a slight improvement, if any, compared to CUDA, as indicated by our state-of-the-art results without any data augmentation on both visual and language domain adaptation tasks. We believe the trade-off achieved by the simplicity of CUDA, as evident from our results, is very desirable compared to most domain alignment approaches that use data augmentation and complex neural networks for a slight improvement, if any.
3.6 Extending to Multi-Source Domain Adaptation
We can easily extend our proposed method to perform multi-source domain adaptation. Suppose we are given $n$ source domains $\{\mathcal{D}_{s_j}\}_{j=1}^{n}$, each consisting of labeled training data, and unlabeled target domain instances. We compute the source supervised loss for each source domain using (2), i.e., (1), with that domain's training data. We further compute the total multi-source supervised loss as the sum of the individual source supervised losses.
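Assuming the total multi-source loss is the plain sum of the per-domain supervised losses of (2) (the equation itself appears above only in words), the extension is a one-liner; the names below are illustrative.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Per-domain supervised loss of (2): mean negative log-likelihood.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def multi_source_supervised_loss(sources):
    """Total multi-source supervised loss: the supervised loss of (2)
    evaluated on each labeled source domain and summed.
    `sources` is a list of (logits, labels) pairs, one per domain."""
    return sum(cross_entropy(lg, lb) for lg, lb in sources)
```

The unsupervised contradistinguish loss on the target domain is unchanged; only the supervised part aggregates over the source domains.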
4 Experiments and Results
We consider both synthetic and real-world datasets in our domain adaptation experiments, as detailed in Section 4.2. We have published our Python code for all the experiments at https://github.com/sobalgi/cuda, originally derived from https://github.com/gauravpandeyamu/DisCoder for DisCoder [Pandey2017UnsupervisedFL].
4.1 Experiments on synthetic toy-dataset using blobs
We validate our proposed method by performing experiments on synthetically created simple datasets that model different source and target domain distributions in a 2-dimensional input feature space, using blobs with different source-target domain orientations and offsets (i.e., domain shifts). We create blobs of 4000 samples for the source and target domains using scikit-learn [pedregosa2011scikit], as indicated in Fig. 5 and Fig. 11. We further split these 4000 data-points evenly into train and test sets. Each split consists of the same number of samples from both the class labels.
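The blob construction can be sketched without scikit-learn as follows; `make_domain_blobs` is a hypothetical stand-in that creates two-class Gaussian blobs for the source domain and applies a rotation-plus-offset domain shift to obtain the target domain.

```python
import numpy as np

def make_domain_blobs(n=4000, shift=(3.0, 3.0), angle=0.6, seed=0):
    """Generate 2-D two-class blobs for a source domain, then create a
    target domain by rotating and offsetting them (a synthetic
    'domain shift'); stand-in for scikit-learn's blob generator."""
    rng = np.random.RandomState(seed)
    centers = np.array([[-2.0, 0.0], [2.0, 0.0]])
    y = rng.randint(0, 2, size=n)
    x_src = centers[y] + rng.randn(n, 2)        # source blobs
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    x_tgt = x_src @ rot.T + np.asarray(shift)   # rotated + offset copy
    return x_src, x_tgt, y
```

Varying `shift` and `angle` reproduces the kinds of orientation and offset differences between the source and target distributions shown in Fig. 5 and Fig. 11.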
The main motivation of the experiments on the toy-dataset is to understand and visualize the behavior of the proposed method under some typical domain distribution scenarios and to analyse the performance of CUDA. The toy-dataset plots in Fig. 11 clearly compare the classifier decision boundaries learnt using CUDA with those of domain alignment approaches. The top row in Fig. 11 corresponds to a domain alignment classifier trained only on the labeled source domain. The bottom row in Fig. 11 corresponds to the contradistinguisher trained using the proposed method CUDA with the labeled source and unlabeled target domains.
Fig. 32 demonstrates the classifier learnt using CUDA on the synthetic datasets with different complex shapes and orientations of the source and target domain distributions. Figs. 32(c) and 32(a)-(d) indicate the simplest form of the domain adaptation tasks, where the source and target domain distributions have similar orientations. It is important to note that the prior enforcing used in pseudo-label selection is the reason such fine classifier boundaries are observed, especially in Figs. 32(d) and 32(e)-(m). Figs. 32(n)-(p) represent more complex configurations of the source and target domain distributions, with hyperbolic decision boundaries jointly learnt on both the domains simultaneously using a single classifier without explicit domain alignment. Similarly, Fig. 32(q) represents a complex configuration of source and target domain distributions with an elliptical decision boundary.
4.2 Experimental Setup and Datasets
Details of the language dataset (Amazon customer reviews for sentiment analysis).

Domain | # Train | # Test
Kitchen Appliances | 2,000 | 5,945
SE [french2018selfensembling] (requires data augmentation) | 99.54 | 98.26 | 99.26 | 97.00 | 80.09 | 74.24 | 97.11 | 99.37
DIRT-T [shu2018a] (requires data augmentation) | - | - | 99.40 | 54.50 | - | 73.30 | 96.20 | 99.60
ACAL [hosseini-asl2018augmented] (requires data augmentation) | 97.16 | 98.31 | 96.51 | 60.85 | - | - | 97.98 | -
Rozantsev et al. [rozantsev2018beyond] | 75.5 | 75.8 | 55.7 | 96.7 | 57.6 | 99.6 | 76.8
(Ours) (fine-tune ResNet-50) | 41.0 | 38.7 | 23.2 | 80.6 | 25.6 | 94.2 | 50.6
(Ours) (fixed ResNet-50) | 82.0 | 77.9 | 68.4 | 97.2 | 67.1 | 100.0 | 82.1
(Ours) (fixed ResNet-50) | 95.0 | 93.8 | 71.5 | 98.9 | 73.3 | 99.4 | 88.7
(Ours) (fixed ResNet-50) | 96.0 | 95.6 | 69.5 | 99.1 | 70.7 | 100.0 | 88.5
(Ours) (fixed ResNet-50) | 92.8 | 91.6 | 72.5 | 98.4 | 72.8 | 99.8 | 88.0
(Ours) (fixed ResNet-50) | 91.8 | 95.6 | 73.2 | 98.0 | 74.7 | 100.0 | 88.9
(Ours) (fixed ResNet-152) | 84.9 | 82.8 | 70.3 | 98.2 | 71.1 | 100.0 | 84.6
(Ours) (fixed ResNet-152) | 97.0 | 94.3 | 73.9 | 99.0 | 75.5 | 100.0 | 90.0
(Ours) (fixed ResNet-152) | 95.6 | 95.6 | 73.8 | 98.7 | 74.3 | 100.0 | 89.7
(Ours) (fixed ResNet-152) | 97.0 | 97.4 | 76.0 | 98.6 | 75.1 | 99.8 | 90.7
(Ours) (fixed ResNet-152) | 95.4 | 98.5 | 75.0 | 98.9 | 76.0 | 100.0 | 90.6
Best single source | DAN [DBLP:conf/icml/LongC0J15] | 97.1 | 63.6 | 99.6 | 86.8
Rozantsev et al. [rozantsev2018beyond] | 96.7 | 57.6 | 99.6 | 84.6
For our domain adaptation experiments, we consider both synthetic and real-world datasets. Under synthetic datasets, we experiment using 2D blobs with different source and target domain probability distributions to demonstrate the effectiveness of the proposed method under different domain shifts. Under real-world datasets, we consider both visual and language datasets for domain adaptation to further demonstrate the input data format independence of the proposed method. Visual datasets can be further divided into two categories: low resolution visual datasets and high resolution visual datasets. Table I provides details on the visual datasets used in our experiments, and Table II provides details on the language datasets.
4.2.1 Low Resolution Visual Datasets
In the low resolution visual experiments, we consider eight benchmark visual datasets spanning three different kinds of images: digits, objects, and traffic signs. These experiments are grouped as one set because all these datasets contain low resolution images with a generally large number of training samples. For these two reasons, there is no need to use any pre-trained networks, and the entire setup can be trained from scratch using the large number of training samples from the source and target domains combined.
We use the same neural network architecture as SE [french2018selfensembling], without any data augmentation, for the low resolution visual datasets. The networks are trained from scratch, as the number of training samples is high relative to the high resolution visual datasets, where we instead use pre-trained networks to extract features. We use the same hyper-parameters as SE [french2018selfensembling], with minor modifications where necessary, in order to demonstrate the effectiveness of the proposed approach. Note that, unlike [french2018selfensembling], we do not perform any image data augmentation in our experiments. Our aim in this paper is to demonstrate that the proposed method performs on par with or better than standard domain alignment methods without data augmentation, since data augmentation is expensive and not always possible, e.g., in language tasks. We show that even without any domain-specific centering or data augmentation, we still achieve the best results, as the contradistinguish loss is able to classify directly on the target domain by learning the most contrastive features in that domain.
4.2.2 High Resolution Visual Datasets
For the high resolution visual datasets, we consider the Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset. Unlike the low resolution visual datasets, here we have only a few hundred training samples, which makes this an even more challenging task.
Office objects: The Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset consists of high resolution images of objects belonging to 31 classes obtained from three different domains: AMAZON, DSLR, and WEBCAM. Fig. 37 shows illustrations of images from all three domains. The AMAZON domain consists of synthetic images with a clear white background, while the DSLR and WEBCAM domains consist of real images with noisy backgrounds and surroundings. We consider all six possible domain adaptation tasks involving the three domains. Compared to the low resolution visual datasets, the Office-31 domain adaptation tasks have increased complexity due to the small number of training images.
To compensate for the small number of training samples, pre-trained networks such as ResNet-50 [he2016deep] and ResNet-152 [he2016deep] are used to extract 2048-dimensional features from the high resolution images, similar to CDAN [NIPS2018_7436]. Since the images are not well centered and have a high resolution, we use the standard ten-crop of each image to extract features during both training and testing, again following CDAN [NIPS2018_7436].
The use of pre-trained models leads to two choices of training. (i) Fine-tune the pre-trained model used as the feature extractor along with the final classifier layer: this requires careful selection of several hyper-parameters, such as the learning rate, learning rate decay, and batch size, to adapt the network to the current dataset while preserving the abilities of the pre-trained network. We observed that fine-tuning also depends on the loss function used for training [DBLP:conf/iclr/JacobsenBZB19]; in our case, the contradistinguish loss greatly altered the pre-trained model, since that model was originally trained only with a cross-entropy loss. Fine-tuning is also computationally expensive and time-consuming, as each iteration requires computing gradients for all the parameters of the pre-trained model. (ii) Fix the pre-trained model and train only the final classifier layer: the alternative to fine-tuning is to fix the pre-trained model and use it only as a feature extractor. This approach has multiple benefits: (a) the computational time and cost of fine-tuning the parameters of the pre-trained model are avoided, and (b) since the extractor is fixed, the features need to be extracted and stored locally only once instead of being recomputed in every iteration, which reduces the training time as only the classifier needs to be trained.
4.2.3 Language Datasets
We consider four benchmark language domains from the Amazon customer reviews [blitzer2006domain] dataset: (i) Books, (ii) DVDs, (iii) Electronics, and (iv) Kitchen Appliances. The dataset includes product reviews from these four domains labeled for the sentiment analysis task, as indicated in Table II.
On these domains, we consider all twelve combinations of domain adaptation tasks studied in [ganin2016domain, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, pmlr-v70-saito17a, DBLP:conf/acl/PlankR18]. We use the same neural networks and text pre-processing as [Chen:2012:MDA:3042573.3042781, ganin2016domain, DBLP:conf/acl/PlankR18] to obtain 5000-dimensional feature vectors using marginalizing Stacked Linear Denoising Autoencoders (mSLDA) [chen2015marginalizing], an improvement over the vanilla Stacked Denoising Autoencoder (SDA) [glorot2011domain]. We assign the binary label '0' to reviews with low star ratings and '1' to reviews with high star ratings.
We select the best existing neural networks without major modifications to the hyper-parameters so as to demonstrate the effectiveness of CUDA. All experiments were implemented in PyTorch [paszke2017automatic] with a mini-batch size of 64 per GPU distributed over four GPUs, using the Adam optimizer with the learning rate decayed every 30 epochs.
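The optimization setup just described can be sketched as follows; the initial learning rate of 1e-3 and decay factor of 0.1 are placeholder assumptions, since the exact values did not survive extraction.

```python
import torch
import torch.nn as nn

model = nn.Linear(2048, 31)  # stand-in for the classifier being trained

# Adam with a step decay every 30 epochs; lr and gamma are placeholders.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... one training pass over mini-batches of size 64 would go here ...
    optimizer.step()   # placeholder update so the scheduler follows a real step
    scheduler.step()   # multiplies the lr by `gamma` once every 30 epochs
```

With these placeholder values, the learning rate is reduced by a factor of 10 at epochs 30, 60, and 90.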
4.3 Experimental Results
We use the same evaluation metric as [pmlr-v37-ganin15, NIPS2016_6544, 10.1007/978-3-319-46493-0_36, pmlr-v70-saito17a, DBLP:conf/iccv/HausserFMC17, 8099799, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, 8578490, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning, NIPS2018_7436, DBLP:conf/aaai/ChenCJJ19, french2018selfensembling, shu2018a, hosseini-asl2018augmented, ganin2016domain, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, DBLP:conf/acl/PlankR18, Bousmalis:2016:DSN:3157096.3157135], i.e., the accuracy on the target domain test set, for the low resolution visual experiments. Table III reports the target domain test accuracy across all eight low resolution visual domain adaptation tasks described earlier, compared with several state-of-the-art domain alignment methods [pmlr-v37-ganin15, NIPS2016_6544, 10.1007/978-3-319-46493-0_36, pmlr-v70-saito17a, DBLP:conf/iccv/HausserFMC17, 8099799, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/cvpr/LiuYFWCW18, 8578490, Russo_2018_CVPR, pmlr-v80-hoffman18a, xie2018learning, NIPS2018_7436, DBLP:conf/aaai/ChenCJJ19, french2018selfensembling, shu2018a, hosseini-asl2018augmented, Bousmalis:2016:DSN:3157096.3157135]. In contrast to the low resolution visual datasets, the high resolution Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset does not have separate pre-defined train and test splits. Since we do not use any labels from the target domain during training, we report the ten-crop test accuracy on the target domain in Table IV: for each image, we sum the softmax values over all ten crops and assign the label with the maximum aggregate softmax value, as in CDAN [NIPS2018_7436]. In Table V, we report the target domain accuracy, analogous to Table IV, in a multi-source domain adaptation setting obtained by combining two domains into a single labeled source domain and using the remaining domain as the unlabeled target domain.
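The crop-aggregation rule just described (sum the per-crop softmax outputs, then take the argmax) can be sketched as follows; the function name is ours, not from the paper.

```python
import torch

def ten_crop_predict(logits_per_crop):
    """Aggregate predictions over the ten crops of each image.

    logits_per_crop: tensor of shape (batch, 10, num_classes) holding the
    classifier outputs for the ten crops of each image. Returns, per image,
    the label whose softmax mass summed over the crops is largest.
    """
    probs = torch.softmax(logits_per_crop, dim=-1)  # per-crop class probabilities
    agg = probs.sum(dim=1)                          # sum softmax over the 10 crops
    return agg.argmax(dim=-1)                       # label with max aggregate mass

# Example: 2 images, 10 crops each, 31 classes; the crop logits
# consistently favor class 5 for the first image and class 7 for the second.
logits = torch.zeros(2, 10, 31)
logits[0, :, 5] = 3.0
logits[1, :, 7] = 3.0
preds = ten_crop_predict(logits)  # tensor([5, 7])
```

Summing probabilities rather than logits gives each crop a bounded vote, so a single confidently wrong crop cannot dominate the aggregate.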
Table VI indicates the target domain test accuracy across all the twelve language domain adaptation tasks compared with different state-of-the-art methods [ganin2016domain, DBLP:journals/corr/LouizosSLWZ15, 2017arXiv170208811Z, pmlr-v70-saito17a, DBLP:conf/acl/PlankR18].
Apart from the standard domain alignment methods used for comparison, in Tables III-VI we report the performance of two baselines of our own, obtained by fixing the contradistinguisher neural network architecture and varying only the training losses: a source-only baseline that trains the contradistinguisher using only the source domain in a fully supervised way, and a target-only baseline that trains it using only the target domain in a fully supervised way. These respectively indicate the minimum and maximum target domain test accuracy attainable with the chosen contradistinguisher network. Comparing CUDA with the source-only baseline in Tables III-VI, we see large improvements in target domain test accuracy due to the use of the contradistinguish loss (5), demonstrating the effectiveness of the contradistinguisher. As our method depends mainly on the contradistinguish loss (5), experimenting with better neural networks together with this loss yielded improved results over the networks of [8099799, 8578490] on the low resolution visual experiments. We used a ResNet with 4 fully connected layers instead of AlexNet on the high resolution visual experiments, and the Multinomial Adversarial Network (MAN) [DBLP:conf/naacl/ChenC18] on the language experiments.
4.4 Analysis of Experimental Results
4.4.1 Low Resolution Visual Experimental Results
In certain tasks, the performance of the target-only baseline is lower than that of CUDA in Table III. The target-only baseline is poor because over-fitting occurs when only the target domain supervised loss is used. The improved results of CUDA indicate that the contradistinguisher is able to contradistinguish on the target domain while also transferring the informative knowledge required for classification from the larger source domain. This shows that the contradistinguisher is indeed successful in contradistinguishing on a relatively small unlabeled target domain using information from a larger source domain.
Another interesting observation is that, in one task, the target-only baseline is slightly better than CUDA. This is due to slight over-fitting to target domain training examples that are non-informative for classification, leading to a small decrease in the target domain test accuracy. The source-only baseline outperforms the target-only baseline in certain tasks, indicating that the source domain carries more information than the target domain owing to the large source and small target training sets.
Figs. (a)-(d) show t-SNE [vandermaaten2008visualizing] plots of the embeddings at the output of the contradistinguisher, before the softmax, for the target test samples as training with CUDA progresses. We highlight these plots because this is the most difficult of all the visual experiments, owing to the contrasting domains. Figs. (a)-(h) show the corresponding t-SNE plots for the test samples across the low resolution visual experiments; they exhibit clear class-wise clustering on both the source and target domains, indicating the efficacy of CUDA.
As an ablation study, keeping the neural network and all hyper-parameters the same, we investigate the effect of each loss function, i.e., the source supervised loss (1), the target unsupervised loss (5), and the target adversarial loss (9), and report the results in Table III. Since the contradistinguish loss (5) requires only unlabeled inputs, we observe that additionally applying it to the source domain, without labels, only complements the contradistinguisher's performance. An important observation is that the source adversarial loss (9), when used alone without the target adversarial loss (9), always decreases the target domain test accuracy. An explanation for this behavior is that an adversarial input in the source domain might be a real input in the target domain, so assigning such an input to all classes indifferently can introduce additional noise into the pseudo-labels. It should also be noted that combining the source domain supervised loss with the target domain contradistinguish loss always improves over the source domain supervised loss alone. This indicates the efficacy of the target domain unsupervised contradistinguish loss (5) in the proposed approach CUDA.
4.4.2 High Resolution Visual Experimental Results
We report the standard ten-crop accuracy on the target domain images, as done by several state-of-the-art domain adaptation methods [NIPS2018_7436, DBLP:conf/cvpr/Sankaranarayanan18a, DBLP:conf/icml/LongZ0J17]. Since no explicit test split is specified in the dataset and no labels are used from the target domain during training, it is common to report the ten-crop accuracy over the whole target domain.
In Table IV, we report accuracies obtained both by fine-tuning ResNet-50 using the learning rate scheduling followed in CDAN [NIPS2018_7436] and without fine-tuning ResNet-50. Figs. (a)-(f) show the t-SNE plots of the softmax output after aggregating the ten crops of each image, corresponding to the training configurations reported in Table IV. Apart from fixed ResNet-50, we also report accuracies with fixed ResNet-152 in Table IV for comparison; Figs. (g)-(l) show the corresponding t-SNE plots. Fig. 64 reports the t-SNE plots for the training settings using the ResNet-50 and ResNet-152 encoders with the highest mean accuracy over all six domain adaptation tasks. We clearly observe that CUDA outperforms several state-of-the-art methods that also use ResNet-50, and improves further with the ResNet-152 encoder.
Among the three domains in the Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset, AMAZON can be considered a well-curated synthetic domain with clear backgrounds, while DSLR and WEBCAM are uncurated real-world domains with noisy backgrounds and surroundings. We order the six domain adaptation tasks by complexity, from low to high: (i) Figs. (c), (f), (i), and (l) show the highest accuracies, corresponding to real-world to real-world domain adaptation tasks; (ii) Figs. (a), (b), (g), and (h) show moderately high accuracies, corresponding to synthetic to real-world tasks; and (iii) Figs. (d), (e), (j), and (k) show the lowest accuracies among all six tasks, corresponding to real-world to synthetic tasks. Fig. 71 reiterates these observations involving the synthetic and real-world domains. mDA-layer [mancini2018boosting, 8792192] reports the target domain accuracy after unifying the remaining domains into a single source domain. This is an easier task than ours, because having at least one real-world domain as the source heavily boosts performance, as indicated in Figs. (c), (f), (i), and (l). Even in this multi-source setting, CUDA outperforms [mancini2018boosting, 8792192].
We also extend the experiments to multi-source domain adaptation on the Office-31 [DBLP:conf/eccv/SaenkoKFD10] dataset. In Table V, we can clearly observe that, in one task, multi-source domain adaptation provides better results than the respective best single-source domain adaptation experiments. However, in the case of