
SALT: Subspace Alignment as an Auxiliary Learning Task for Domain Adaptation

Unsupervised domain adaptation aims to transfer and adapt knowledge learned from a labeled source domain to an unlabeled target domain. Key components of unsupervised domain adaptation include: (a) maximizing performance on the source, and (b) aligning the source and target domains. Traditionally, these tasks have either been considered separate, or assumed to be implicitly addressed together with high-capacity feature extractors. In this paper, we advance a third broad approach, which we term SALT. The core idea is to consider alignment as an auxiliary task to the primary task of maximizing performance on the source. The auxiliary task is made rather simple by assuming a tractable data geometry in the form of subspaces. We synergistically allow certain parameters derived from the closed-form auxiliary solution to be affected by gradients from the primary task. The proposed approach represents a unique fusion of geometric and model-based alignment with gradient flows from a data-driven primary task. SALT is simple, rooted in theory, and outperforms the state-of-the-art on multiple standard benchmarks.





1 Introduction

Despite significant advances in neural network architectures and optimization strategies for supervised learning, one of the long-standing challenges has been to effectively generalize classifier models to novel testing scenarios, typically characterized by unknown covariate shifts Hoffman et al. [2013], changes in label distributions, or oblivious corruptions. In this paper, we consider the problem of unsupervised domain adaptation, wherein the goal is to utilize labeled data from a source domain to design a classifier that can generalize to an unlabeled target domain. We are especially interested in the case when no knowledge about the covariate shift is available.

Earlier approaches for unsupervised domain adaptation, particularly in visual recognition, were based on countering the effects of distributional shifts by exploiting low-dimensional structures in data Fernando et al. [2013], Gong et al. [2012], Shrivastava et al. [2014], Thopalli et al. [2019]. In other words, achieving domain invariance was posed as learning a mapping between simplified data representations, e.g., linear subspaces. Key characteristics of these methods include not explicitly inferring the hypothesis that minimizes the generalization error, and reliance on simplifying assumptions on the data geometry (e.g., a single linear subspace for an entire dataset). For these reasons, such methods have fallen behind more recent approaches in terms of performance.

The foundational work of Ben-David et al. Ben-David et al. [2010] established an upper bound on the error achievable by a hypothesis h on the target domain as the sum of three terms:

ε_T(h) ≤ ε_S(h) + (1/2) d_{H∆H}(D_S, D_T) + λ,    (1)

where the first term denotes the error in the source domain D_S, the second term is the discrepancy between the source-target pair (the H∆H-divergence), and the third term measures the optimal error achievable jointly in both domains (often assumed to be negligible). Under this context, there are two broad categories of methods – ones that assume there exists a single hypothesis that can perform well in both domains (conservative), and those that do not make that assumption (non-conservative) Shu et al. [2018].

More recent solutions for domain adaptation attempt to infer domain-invariant data representations by minimizing the discrepancy between feature distributions from the two domains. In particular, domain adversarial learning, which seeks to find a common representation where the two domains are indistinguishable, is at the core of several state-of-the-art methods Tzeng et al. [2017], Hoffman et al. [2018], Long et al. [2018], Ganin and Lempitsky [2015], Ganin et al. [2016]. However, it has recently been shown that domain adversarial training can be ineffective when working with a high-capacity feature extractor Shu et al. [2018]. High-capacity networks allow for learning arbitrary transformations that can reduce domain mismatch (in terms of feature distributions), yet might have no bearing on the final classifier performance Shu et al. [2018].

The non-trivial interaction between the terms in (1) has motivated the inclusion of a variety of consistency-enforcing losses into the domain adversarial learning formulation. For example, Ganin et al. [2016], Tzeng et al. [2017] employ both feature and semantic losses for feature-level adaptation, while Liu and Tuzel [2016], Bousmalis et al. [2017] perform pixel-level adaptation via pixel and semantic consistency losses. More recently, Hoffman et al. Hoffman et al. [2018] proposed to enforce cyclical consistency based on all the aforementioned losses, while Shu et al. Shu et al. [2018] introduced a virtual adversarial loss to better regularize domain adversarial learning.

Key insights: The above discussion leads us to our core idea: one must blend the representational convenience of simplified data geometries while not being constrained by analytic solutions for alignment. Analytic solutions for alignment, while powerful, can cause errors due to geometry mismatch to propagate downstream. We strike a balance between the following factors: (a) assume tractable data geometries in the source and target domains, which can be analytically leveraged for data alignment; (b) synergistically adapt certain parameters derived from the analytic alignment solution, in a manner that maximizes performance on the primary task of classification. This approach can be seen as inspired by meta-learning Finn et al. [2017], specifically designed for handling interactions between domain alignment and hypothesis inference.

Contributions and findings: In this paper, we leverage the observation that explicit domain alignment behaves more as an auxiliary task, whose fidelity can be carefully adjusted to maximize the quality of the primary task, i.e., performance of the classifier on both source and target domains. This approach can be said to fall under the category of non-conservative adaptation, and hence we include explicit information-invariance losses for the unlabeled target domain, similar to Shu et al. [2018]. We make the following major findings:


  • With a disjoint primary-auxiliary formulation, we find that even a naïve global subspace based alignment  Fernando et al. [2013] with a fixed feature extractor, achieves higher or similar performance compared to state-of-the-art approaches on several benchmarks.

  • Moving from here, we define adaptable subspace alignment as the auxiliary task, which uses gradients from the primary task, to adjust the domain alignment. This is seen to improve performance much more significantly across benchmarks.

  • In summary, our findings show that by viewing domain alignment as an auxiliary task, we are able to entirely dispense with the need for adversarial learning, consistency-enforcing regularizers, and other extensive hyper-parameter choices.

Broader interpretation: Our results find additional corroboration from analogous findings in Liu et al. [2019], where meta-learning style optimization is used to automatically construct an auxiliary classification task so as to provide additional pseudo-supervisory guidance to the primary task of building a classifier. Viewed under the lens of meta-learning, our results indicate that, at least for visual recognition, a single global domain alignment is sufficient when coupled with an appropriately chosen primary task. While the proposed approach is highly effective in generalizing classifiers under covariate shifts, its effectiveness in other adaptation tasks such as image-to-image translation remains to be studied.

2 Related work

In this section, we briefly review the prior art in unsupervised domain adaptation. Furthermore, we also discuss meta-auxiliary learning, which is closely related to the proposed approach.

Unsupervised Domain Adaptation: Unsupervised domain adaptation has been an important problem of research in multiple application areas, and a wide variety of solutions have been developed. Earlier works such as Saenko et al. [2010], Gong et al. [2012], Pan et al. [2010], Sun et al. [2017], Fernando et al. [2014], Sun and Saenko [2015] focused on adapting the features of source and target domains by minimizing a notion of statistical divergence between them. These works can be analyzed through the work of Ben-David et al. [2010], which provides an upper bound on the target error in (1). Building upon this intuition, successful state-of-the-art methods use powerful feature extractors such as convolutional neural networks (CNNs), and aim to jointly minimize the source error along with the domain divergence. Adversarial learning Goodfellow et al. [2014] has been the workhorse of these solutions, implemented with different additional regularizers Ganin et al. [2016], Long et al. [2018], Hoffman et al. [2018], Liu et al. [2017], Isola et al. [2017].

Subspace-based Alignment: The key idea behind this class of methods is to compute lower dimensional subspaces of source and target, align them and subsequently project the ambient data onto the aligned subspace. A classifier is finally trained on the newly computed lower dimensional source data and evaluated on target data. The most relevant works for our approach are Gong et al. [2012], Gopalan et al. [2011], Fernando et al. [2013], Sun and Saenko [2015]. Geodesic-based methods Gopalan et al. [2011], Gong et al. [2012] compute a path along the manifold of subspaces (Grassmannian), and either project the source and target onto points along that path Gopalan et al. [2011] or compute a linear map that projects source samples directly onto the target subspace Gong et al. [2012]. Furthermore, works such as Fernando et al. [2013], Sun and Saenko [2015] align the source and target subspaces by finding an affine transformation that decreases the Frobenius norm between them Fernando et al. [2013], or by considering distributional statistics along with subspace basis Sun and Saenko [2015].

Meta Auxiliary Learning: Meta-learning has been a recently successful approach in generalizing knowledge across related tasks Finn et al. [2017]. Broadly, meta-learning techniques can be grouped into three categories Finn et al. [2017] – metric-based Koch et al. [2015], Vinyals et al. [2016], model-based Santoro et al. [2016], Munkhdalai and Yu [2017], and optimization-based Finn et al. [2017], Ravi and Larochelle [2017]. Auxiliary learning, on the other hand, focuses on increasing the performance of a primary task through the help of one or more related auxiliary tasks. This methodology has been applied to areas such as speech recognition Toshniwal et al. [2017], depth estimation and semantic segmentation Liebel and Körner [2018], and reinforcement learning Jaderberg et al. [2017]. The work most closely related to ours is meta-auxiliary learning Liu et al. [2019], which aims to improve image classification performance on a primary label space (primary task) by solving an auxiliary classification problem over a different, automatically constructed label space (auxiliary task). This is done by establishing a functional relationship between the two sets of classes. In contrast, we formulate subspace-based domain alignment as the auxiliary to the primary task of achieving a generalizable classifier that works well in both source and target domains.

3 Proposed Approach

In this section, we describe the proposed method for unsupervised domain adaptation. An overview of the approach can be found in Figure 1. Given data from the labeled source and unlabeled target domains, denoted D_S and D_T respectively, our algorithm progresses by iteratively updating the primary and auxiliary networks. In the rest of this paper, we use X_S and X_T to indicate the latent features for the source and target domains obtained from a pre-trained feature extractor F, such as ResNet-50 He et al. [2016]. The primary network updates the classifier, given the source and source-aligned target features, such that the inferred model is effective for both source and target domains. The auxiliary network solves for subspace-based domain alignment by leveraging the loss from the primary network. The resulting alignment is sub-optimal in terms of the pure alignment cost, but optimal when conditioned on the primary classification task.

Figure 1: An overview of the proposed approach for unsupervised domain adaptation. We leverage gradients from the primary task of designing a generalizable classifier to guide the domain alignment, which is posed as an auxiliary task. While the primary task utilizes deep neural networks, the auxiliary task is carried out using a simplified data geometry – subspaces – in lieu of adversarial training or sophisticated distribution matching. Note that even the feature extractor is frozen after an initial training phase.

3.1 Primary Task: Classifier Design

We construct the primary task with the goal of achieving effective class discrimination in both the source and target domains. With inputs as source/target images directly, or latent features X_S and X_T extracted from the pre-trained feature extractor F, we learn a classifier network h parameterized by Θ. The losses used for the optimization include: (i) the standard categorical cross-entropy loss for the labeled source data, (ii) a conditional entropy loss Shu et al. [2018] on the softmax predictions for the target data, and (iii) a class-balance loss French et al. [2018] for the unlabeled target domain. Note that the second and third loss terms are regularizers that account for the possibility that a single hypothesis may not be effective for both domains, i.e., the non-conservative setting. In its simplest form, this formulation should work if there is no covariate shift between the domains. However, in our setup, in order to account for unknown shifts (if they exist), we formulate an auxiliary task for domain alignment. Formally, let L_ce and L_cond represent the cross-entropy loss on the source and the conditional entropy loss on the target respectively, i.e.,

L_ce = -E_{(x_s, y_s) ∈ D_S} [ y_s^T log h(x_s; Θ) ],   L_cond = -E_{x_t ∈ D_T} [ h(x_t; Θ)^T log h(x_t; Θ) ].
Let L_cb French et al. [2018] denote the class-balance loss, implemented as a binary cross-entropy loss between the mean prediction of the network over a mini-batch and a uniform probability vector – this loss regularizes network behavior when the data exhibits large class imbalance. The overall loss function is thus defined as

L_P = L_ce + λ_1 L_cond + λ_2 L_cb,    (2)

where λ_1 and λ_2 are trade-off hyper-parameters.
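To make these terms concrete, here is a minimal NumPy sketch of the primary objective; the function name, the toy values of λ_1 and λ_2, and the assumption that softmax probabilities are precomputed are illustrative, not taken from our implementation.

```python
import numpy as np

def primary_loss(src_probs, src_labels, tgt_probs, lam1=0.1, lam2=0.01, eps=1e-8):
    """Sketch of the primary-task objective: source cross-entropy plus
    target conditional-entropy and class-balance regularizers."""
    # (i) categorical cross-entropy on labeled source predictions
    l_ce = -np.mean(np.log(src_probs[np.arange(len(src_labels)), src_labels] + eps))
    # (ii) conditional entropy of target predictions (favors confident outputs)
    l_cond = -np.mean(np.sum(tgt_probs * np.log(tgt_probs + eps), axis=1))
    # (iii) class balance: binary cross-entropy between the mini-batch mean
    # prediction and a uniform probability vector
    mean_pred = tgt_probs.mean(axis=0)
    uniform = np.full_like(mean_pred, 1.0 / mean_pred.size)
    l_cb = -np.sum(uniform * np.log(mean_pred + eps)
                   + (1.0 - uniform) * np.log(1.0 - mean_pred + eps))
    return l_ce + lam1 * l_cond + lam2 * l_cb
```

Note that the conditional entropy term decreases as target predictions become more confident, while the class-balance term keeps the batch-averaged prediction close to uniform.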
3.2 Auxiliary Task: Domain Alignment

We posit that a meta-learning style training between a generalizable classification task, and an auxiliary domain alignment task, relaxes the requirements of the alignment step such that even simple alignment strategies can provide sufficient information to improve the classifier. In order to test this idea, we assume a simplified data geometry, in the form of low-dimensional linear subspaces Fernando et al. [2013]. Note that, as a generative model for a dataset, a single linear subspace or even a union of linear subspaces is a poor choice on its own. However, when coupled with an appropriate primary task using a sufficiently high capacity classifier, we will show it can be highly effective in domain adaptation.

Formulation: Let us denote the basis vectors for the k-dimensional subspaces inferred from the source and target domains as Y_S ∈ R^{d×k} and Y_T ∈ R^{d×k} respectively. The subspaces are inferred using the singular value decomposition of the features X_S and X_T from the feature extractor F. The alignment between the two subspaces can be parameterized as an affine transformation A, i.e.,

A* = argmin_A ‖Y_T A − Y_S‖_F^2,    (3)

where ‖·‖_F denotes the Frobenius norm. The solution to (3) can be obtained in closed form Fernando et al. [2013] as A* = Y_T^T Y_S. This implies that the adjusted coordinate system, also referred to as the source-aligned target subspace, can be constructed as

Ŷ_T = Y_T A* = Y_T Y_T^T Y_S.    (4)
Since the primary task invokes the classifier optimization using features in the ambient space, we need to re-project the target features using Ŷ_T, i.e.,

X̃_T = X_T Ŷ_T Y_S^T,    (5)

where X̃_T denotes the modified target features. Equivalently, this solution can be written in closed form as X̃_T = X_T Y_T A* Y_S^T, where A* is computed from (3). When the alignment loss is linearly combined with the primary task objective, there exists no closed-form solution and the objective function becomes non-convex. We therefore construct an approach that takes in gradients from the primary task to adjust A.
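The closed-form auxiliary solution above can be sketched in a few lines of NumPy; the function and variable names are illustrative, and mean-centering is omitted for brevity:

```python
import numpy as np

def subspace_align(Xs, Xt, k):
    """Subspace alignment in closed form (Fernando et al., 2013):
    fit k-dim subspaces via SVD, align target to source, and re-project."""
    # top-k right singular vectors give the subspace bases (d x k)
    Ys = np.linalg.svd(Xs, full_matrices=False)[2][:k].T
    Yt = np.linalg.svd(Xt, full_matrices=False)[2][:k].T
    A = Yt.T @ Ys                     # closed-form minimizer of ||Yt A - Ys||_F
    Yt_aligned = Yt @ A               # source-aligned target subspace
    Xt_new = Xt @ Yt_aligned @ Ys.T   # re-projected ambient target features
    return Xt_new, A
```

Because Y_T has orthonormal columns, setting the gradient of the Frobenius objective to zero gives A = Y_T^T Y_S directly, which is why no iterative solver is needed for this step.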

3.3 Algorithm

Given the primary and auxiliary task formulations, we can adopt different training strategies to combine their estimates: (i) Independent: This is the classical subspace alignment strategy, where the aligned features are directly used to optimize the classifier parameters, (ii) Joint: Similar to domain adversarial training methods, we can jointly optimize for both steps together, (iii) Alternating: This meta-learning style optimization solves for the primary task with the current estimate of the alignment, and subsequently updates the auxiliary network with both primary and auxiliary losses. As we will show later, the meta optimization strategy works the best in comparison to the other two. Now, we describe the algorithm for the proposed approach in detail.

Initialization phase: Before applying the proposed meta-optimization strategy, we need to initialize the parameters for both the primary and auxiliary tasks. First, we pre-train the feature extractor F and the classifier using the losses described in Section 3.1, without any explicit domain alignment. In the experiments section, we refer to this initialization as no adaptation. We then fit k-dimensional subspaces, Y_S and Y_T, to the features obtained using F for both the source and target domains. Note that the feature extractor is not updated for the rest of the training process, and hence the subspaces are fixed. The alignment matrix A between the two subspaces is obtained using equation (3).

Training phase: In order to enable information flow between the two tasks, we propose to allow the auxiliary task to utilize gradients from the primary task. Similarly, the estimated alignment is applied to the target data while updating the classifier parameters in the primary task. To enable this flow, we define a subspace alignment network that parameterizes A as a linear layer with k neurons. This parameterization allows us to refine the solution to equation (3) when the losses from the primary task are taken into consideration. The primary and auxiliary tasks are solved alternately until convergence – during the auxiliary task optimization, we freeze the classifier parameters and use the source/target losses from the classifier along with the alignment cost in order to update A. Since the feature extractor is fixed, there is no need to recompute the subspaces. It is important to note that, similar to existing meta-learning strategies Finn et al. [2017], the auxiliary task is optimized using a held-out validation set, distinct from that used for the primary task. We find this critical to the effective convergence of our algorithm. Upon estimation of an updated A, the classifier network is refined using the source features and the source-aligned target features obtained with the new alignment. Upon convergence, optimal values for both the classifier parameters and A are returned. Following model-agnostic meta-learning (MAML), we could perform the meta-optimization using gradients-through-gradients. However, even without that, our approach produces highly effective generalization on all benchmark datasets.
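The alternating schedule can be summarized by the following skeleton, where the gradient callables (and the toy quadratic objectives in the usage example) are hypothetical stand-ins for backpropagation through the actual classifier and alignment networks:

```python
def alternating_salt(grad_primary, grad_aux, theta, A, n_iters=50, lr=0.1):
    """Skeleton of SALT's alternating optimization: the primary step updates
    the classifier parameters theta with the alignment A frozen; the auxiliary
    step then updates A using gradients that include the primary loss."""
    for _ in range(n_iters):
        theta = theta - lr * grad_primary(theta, A)  # primary task (A frozen)
        A = A - lr * grad_aux(theta, A)              # auxiliary task (theta frozen)
    return theta, A

# Toy usage: primary loss (theta - A)^2; auxiliary loss (A - theta)^2 + (A - 0.5)^2,
# mimicking an alignment cost pulled toward a fixed closed-form solution (0.5 here).
theta, A = alternating_salt(
    grad_primary=lambda t, a: 2.0 * (t - a),
    grad_aux=lambda t, a: 2.0 * (a - t) + 2.0 * (a - 0.5),
    theta=0.0, A=1.0)
```

In the toy example both variables contract toward the shared fixed point, illustrating how the auxiliary update balances the alignment cost against the primary loss.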

Using multiple subspaces: The fidelity of the auxiliary task relies directly on the quality of the subspace approximation. For complex datasets, a single low-dimensional subspace is often a poor approximation. Hence, we propose to allow the complexity of the auxiliary model to be adjusted by using multiple target subspaces. To this end, we obtain multiple independent bootstraps of the target data and fit a single low-dimensional subspace to each of them. While solving for the auxiliary task, we compute an individual alignment matrix to the source for each subspace, with respect to the same classifier. During the update of the classifier, we pose this as a multi-task learning problem, wherein a single classifier is used with the different source-aligned targets. This is valid since all (bootstrapped) subspaces are in the same ambient feature space. At test time, we treat the predictions obtained using features from the different alignment matrices as an ensemble and perform majority voting.
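The test-time ensemble can be sketched as below; `predict_fn` is a hypothetical classifier that maps re-projected ambient features to integer labels, and each element of `alignments` pairs a bootstrapped target basis with its alignment matrix:

```python
import numpy as np

def ensemble_predict(Xt, alignments, Ys, predict_fn):
    """Majority voting over predictions obtained from multiple bootstrapped
    target subspaces; `alignments` is a list of (Yt_i, A_i) pairs."""
    votes = []
    for Yt_i, A_i in alignments:
        Xt_aligned = Xt @ (Yt_i @ A_i) @ Ys.T  # re-project with this alignment
        votes.append(predict_fn(Xt_aligned))
    votes = np.stack(votes)  # shape: (n_subspaces, n_samples)
    # majority vote per test sample
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```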

4 Experiments

We evaluate the proposed method on four widely used visual domain adaptation tasks – digits, ImageCLEF, the VisDA-2017 challenge, and the Office-Home dataset – and present comparisons to several state-of-the-art domain adaptation techniques. Across all the experiments, an 80-20 random split of the source and target training data is performed to update the primary and auxiliary tasks. All experiments were run using the PyTorch framework Paszke et al. [2017] on an NVIDIA TitanX GPU.

4.1 ImageCLEF-DA

Dataset: ImageCLEF is organized by selecting common categories of images shared by three public image datasets (domains): ImageNet ILSVRC 2012 (I), Caltech-256 (C), and Pascal VOC 2012 (P). Each domain contains the same set of categories, with an equal number of images per category. We conduct experiments on all permutations of the domains: I→P, P→I, I→C, C→I, C→P, P→C.

Model: Our feature extractor is based on the pre-trained ResNet-50 architecture He et al. [2016], Russakovsky et al. [2015]. This model is fine-tuned using the strategy in Section 3.1, with fixed loss-weight hyper-parameters. We then use SALT on the latent features from the penultimate layer of the fine-tuned ResNet. Low-dimensional source and target subspaces are constructed from these features using SVD. The classifier network is chosen to be the last fully connected layer, subsequently refined with a learning rate of 1e-4 using the SGD optimizer with a momentum of 0.9. The subspace alignment network is trained with a learning rate of 1e-3 using the Adam optimizer Kingma and Ba [2014]. The proposed approach is compared against a number of baseline methods including Long et al. [2018], Long et al. [2015], Ganin et al. [2016], Long et al. [2017], and the results are reported in Table 1. The results clearly show that, with SALT's alternating optimization strategy, even a naïve alignment strategy can produce improved performance over sophisticated adversarial learning methods.

Method                     I→P   P→I   I→C   C→I   C→P   P→C   Average
No Adaptation              76.5  88.2  93.0  84.3  69.1  91.2  83.7
DAN Long et al. [2015]     74.5  82.2  92.8  86.3  69.2  89.8  82.5
DANN Ganin et al. [2016]   75.0  86.0  96.2  87.0  74.3  91.5  85.0
JAN Long et al. [2017]     76.8  88.0  94.7  89.5  74.2  91.7  85.8
CDAN+E Long et al. [2018]  78.0  90.9  98.1  91.6  74.4  94.6  87.9
SALT                       79.8  95.5  97.3  90.9  79.3  97.0  90.0

Table 1: Classification accuracy on the ImageCLEF dataset. Best performance is shown in bold, and the second best in bold italic.

Ablation Study: In order to understand the impact of the different components, we perform an ablation study on this dataset. We describe each setting in this experiment next:


  • No Adaptation: A baseline method where we use the classifier trained on the source directly on the target features without any adaptation.

  • Primary Only: We leave out the auxiliary task, but include all the losses used in the primary task described in equation (2).

  • Independent: Here, we use the closed-form subspace alignment solution from equation (4), and then solve the primary task independently.

  • Joint Optimization: We employ a joint optimization strategy, wherein we update the alignment and the classifier together.

  • Alternating Optimization: This is our proposed strategy, which updates the alignment and the classifier in an alternating fashion.

The results from the study are illustrated in figure 2(a). A key observation is that, since the alignment strategy is weak, when done independently it does not lead to any performance gains. However, the proposed optimization provides significant improvement over even a joint optimization strategy.

4.2 Digits classification

Datasets: We consider three data sources (domains) for the digits classification task: USPS Hull [1994], MNIST LeCun et al. [2010], and the Street View House Numbers (SVHN) Netzer et al. [2011] dataset. Each of these datasets has 10 categories (digits 0-9). The USPS and MNIST datasets contain grayscale images of handwritten digits, while the SVHN dataset contains house numbers extracted from Google Street View images. We perform the following three experiments in this task: (a) MNIST→USPS, (b) USPS→MNIST, and (c) SVHN→MNIST, and report accuracies on the standard target test sets.

(a) Ablation study
(b) Using multiple target subspaces
Figure 2: (a) Ablating different components in the proposed method against adaptation performance on the ImageCLEF dataset. See text in sec 4.1 for notation. (b) Effect of using multiple target subspaces in SALT on the SVHN-MNIST DA task.

Method                                     MNIST→USPS  USPS→MNIST  SVHN→MNIST
No Adaptation                              94.8        49.0        60.7
DeepCoRAL Sun and Saenko [2016]            89.3        91.5        59.6
MMD Long et al. [2015]                     88.5        73.5        64.8
DANN Ganin et al. [2016]                   95.7        90.0        70.8
ADDA Tzeng et al. [2017]                   92.4        93.8        76.0
DeepJdot Bhushan Damodaran et al. [2018]   95.6        96.0        96.7
CyCADA Hoffman et al. [2018]               95.6        96.5        90.9
UNIT Liu et al. [2017]                     95.9        93.5        90.5
GenToAdapt Sankaranarayanan et al. [2018]  95.3        90.8        92.4
SALT                                       96.2        96.7        95.6
(a) Digits datasets

Method                   Average Accuracy
No Adaptation            54.2
JAN Long et al. [2017]   61.6
CDAN Long et al. [2018]  70.2
SALT                     76.3
(b) VisDA-2017

Table 2: Performance of the proposed method on the VisDA and Digits datasets. We highlight the best performing technique in bold, and the second best in bold italic.

Model: The model used for all the tasks is based on the architecture from Bhushan Damodaran et al. [2018]. It consists of six convolutional layers with ReLU activations, followed by two fully-connected layers, the second of which has as many hidden units as the number of classes. The Adam optimizer was used to update the model using mini-batches drawn from the two domains. We compare our results with a number of state-of-the-art domain adaptation methods, and the results are shown in Table 2(a).

SALT achieves higher accuracy than the others in two out of the three experiments. In the third experiment, we are close to the best performing DeepJdot Bhushan Damodaran et al. [2018]. With one of the tasks in this dataset, we demonstrate the effect of using multiple subspaces on the classification performance. As discussed earlier, allowing multiple target subspaces increases the complexity of the auxiliary task. As shown in figure 2(b), on the SVHN→MNIST DA task, using 3 or more subspaces leads to significant performance gains. However, we found that increasing the number further did not lead to additional improvements.

4.3 VisDA-2017

Dataset: VisDA-2017 is a difficult simulation-to-real-world dataset with two highly distinct domains: Synthetic, consisting of renderings of 3D models from different angles and under different lighting conditions, and Real, consisting of natural images. This dataset contains over 280K images across 12 classes.

Model: Owing to this dataset's complexity, we choose ResNet-152 He et al. [2016] as our feature extractor and, as in the previous case, fine-tune it to obtain the latent features from which low-dimensional subspaces are constructed. The classifier and subspace alignment network are trained with the same hyper-parameters as in Section 4.1. From Table 2(b), it can be clearly seen that our model comprehensively outperforms the results reported so far in the literature.

(a) No adaptation
(b) After adaptation
Figure 3: VisDA-2017 - Visualizing the adaptation across source and target domains using t-SNE Maaten and Hinton [2008]. We observe improved alignment between the class boundaries of the source and target domains.
Method                                    Ar→Cl Ar→Pr Ar→Rw Cl→Ar Cl→Pr Cl→Rw Pr→Ar Pr→Cl Pr→Rw Rw→Ar Rw→Cl Rw→Pr Avg
No Adaptation                             44.6  62.7  72.0  52.1  62.7  65.1  52.9  43.0  73.9  63.7  45.8  77.3  59.7
DeepJdot Bhushan Damodaran et al. [2018]  39.7  50.4  62.5  39.5  54.4  53.2  36.7  39.2  63.5  52.3  45.4  70.5  50.6
DAN Long et al. [2015]                    43.6  57.0  67.9  45.8  56.5  60.4  44.0  43.6  67.7  63.1  51.5  74.3  56.3
DANN Ganin et al. [2016]                  45.6  59.3  70.1  47.0  58.5  60.9  46.1  43.7  68.5  63.2  51.8  76.8  57.6
JAN Long et al. [2017]                    45.9  61.2  68.9  50.4  59.7  61.0  45.8  43.4  70.3  63.9  52.4  76.8  58.3
CDAN Long et al. [2018]                   50.7  70.6  76.0  57.6  70.0  70.0  57.4  50.9  77.3  70.9  56.7  81.6  65.8
SALT                                      49.6  67.7  74.2  59.9  68.4  71.4  57.6  48.6  77.3  67.6  54.3  78.4  64.6
Table 3: Classification accuracy on Office-Home dataset. Best performance is shown in bold, and the second best in bold italic.

4.4 Office-Home

Datasets: This challenging dataset Venkateswara et al. [2017] comprises 15,500 images in 65 classes from office and home settings, forming four extremely dissimilar domains: Artistic images (Ar), Clip Art (Cl), Product images (Pr), and Real-World images (Rw).

Model: Similar to Section 4.1, we fine-tune a pre-trained ResNet-50 to obtain the latent features, from which low-dimensional subspaces are constructed. The classifier and subspace alignment network are trained with the same hyper-parameters as earlier. Comparisons to the state-of-the-art methods are reported in Table 3. We observe that while SALT consistently outperforms baseline methods including the recent DeepJdot Bhushan Damodaran et al. [2018], it is slightly inferior to Long et al. [2018] in most of the tasks.

5 Conclusions

In this work, we present a principled and effective approach to tackle the problem of unsupervised domain adaptation in the context of visual recognition. The proposed method – SALT – poses alignment as an auxiliary task to the primary task of maximizing performance on the source dataset. Building on insights from the meta-learning literature, SALT solves domain alignment by utilizing gradients from the primary task. The alternating optimization between the primary and auxiliary tasks, without refining the feature extractor, provides a venue for systematic control of domain alignment intended to achieve improved generalization to the target set. Through an extensive quantitative and qualitative evaluation, it is shown that SALT achieves performance that is comparable to or higher than the state-of-the-art on multiple standard benchmarks. SALT is generic, and can be used in conjunction with any feature extractor. Future work includes extending the SALT methodology to newer tasks such as semantic segmentation, open-set classification Saito et al. [2018], and image-to-image translation.


  • Ganin and Lempitsky [2015] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015.
  • Ben-David et al. [2010] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. Mach. Learn., 79(1-2):151–175, May 2010. ISSN 0885-6125. doi: 10.1007/s10994-009-5152-4.
  • Bhushan Damodaran et al. [2018] B. Bhushan Damodaran, B. Kellenberger, R. Flamary, D. Tuia, and N. Courty. DeepJDOT: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 447–463, 2018.
  • Bousmalis et al. [2017] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3722–3731, 2017.
  • Fernando et al. [2013] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, pages 2960–2967. doi: 10.1109/ICCV.2013.368.
  • Fernando et al. [2014] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Subspace alignment for domain adaptation. CoRR, abs/1409.5241, 2014.
  • Finn et al. [2017] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org, 2017.
  • French et al. [2018] G. French, M. Mackiewicz, and M. H. Fisher. Self-ensembling for visual domain adaptation. In The 6th International Conference on Learning Representations (ICLR), 2018.
  • Ganin et al. [2016] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research (JMLR), 17(1):2096–2030, 2016.
  • Gong et al. [2012] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2066–2073, 2012.
  • Goodfellow et al. [2014] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
  • Gopalan et al. [2011] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. 2011 International Conference on Computer Vision, pages 999–1006, 2011.
  • He et al. [2016] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • Hoffman et al. [2013] J. Hoffman, E. Rodner, J. Donahue, K. Saenko, and T. Darrell. Efficient learning of domain-invariant image representations. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Conference Track Proceedings, 2013.
  • Hoffman et al. [2018] J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. CyCADA: Cycle-consistent adversarial domain adaptation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pages 1994–2003.
  • Hull [1994] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550–554, 1994.
  • Isola et al. [2017] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • Jaderberg et al. [2017] M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
  • Kingma and Ba [2014] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Koch et al. [2015] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015.
  • LeCun et al. [2010] Y. LeCun, C. Cortes, and C. Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2:18, 2010.
  • Liebel and Körner [2018] L. Liebel and M. Körner. Auxiliary tasks in multi-task learning. CoRR, abs/1805.06334, 2018.
  • Liu and Tuzel [2016] M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in neural information processing systems, pages 469–477, 2016.
  • Liu et al. [2017] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, pages 700–708. Curran Associates, Inc., 2017.
  • Liu et al. [2019] S. Liu, A. J. Davison, and E. Johns. Self-supervised generalisation with meta auxiliary learning. arXiv preprint arXiv:1901.08933, 2019.
  • Long et al. [2015] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 97–105, 2015.
  • Long et al. [2017] M. Long, H. Zhu, J. Wang, and M. I. Jordan. Deep transfer learning with joint adaptation networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 2208–2217. JMLR.org, 2017.
  • Long et al. [2018] M. Long, Z. Cao, J. Wang, and M. I. Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), pages 1647–1657, 2018.
  • Maaten and Hinton [2008] L. v. d. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
  • Munkhdalai and Yu [2017] T. Munkhdalai and H. Yu. Meta networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2554–2563, 2017.
  • Netzer et al. [2011] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
  • Pan et al. [2010] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2010.
  • Paszke et al. [2017] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
  • Ravi and Larochelle [2017] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
  • Russakovsky et al. [2015] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
  • Saenko et al. [2010] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV, 2010.
  • Saito et al. [2018] K. Saito, S. Yamamoto, Y. Ushiku, and T. Harada. Open set domain adaptation by backpropagation. In The European Conference on Computer Vision (ECCV), September 2018.
  • Sankaranarayanan et al. [2018] S. Sankaranarayanan, Y. Balaji, C. D. Castillo, and R. Chellappa. Generate to adapt: Aligning domains using generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8503–8512, 2018.
  • Santoro et al. [2016] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 1842–1850, 2016.
  • Shrivastava et al. [2014] A. Shrivastava, S. Shekhar, and V. M. Patel. Unsupervised domain adaptation using parallel transport on Grassmann manifold. In IEEE Winter Conference on Applications of Computer Vision (WACV), pages 277–284. IEEE, 2014.
  • Shu et al. [2018] R. Shu, H. Bui, H. Narui, and S. Ermon. A DIRT-T approach to unsupervised domain adaptation. In International Conference on Learning Representations, 2018.
  • Sun and Saenko [2015] B. Sun and K. Saenko. Subspace distribution alignment for unsupervised domain adaptation. In BMVC, pages 24–1, 2015.
  • Sun and Saenko [2016] B. Sun and K. Saenko. Deep coral: Correlation alignment for deep domain adaptation. In European Conference on Computer Vision, pages 443–450. Springer, 2016.
  • Sun et al. [2017] B. Sun, J. Feng, and K. Saenko. Correlation alignment for unsupervised domain adaptation. In Domain Adaptation in Computer Vision Applications, pages 153–171. Springer, 2017.
  • Thopalli et al. [2019] K. Thopalli, R. Anirudh, J. J. Thiagarajan, and P. Turaga. Multiple subspace alignment improves domain adaptation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3552–3556. IEEE, 2019.
  • Toshniwal et al. [2017] S. Toshniwal, H. Tang, L. Lu, and K. Livescu. Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 3532–3536, 2017.
  • Tzeng et al. [2017] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In Computer Vision and Pattern Recognition (CVPR), volume 1, page 4, 2017.
  • Venkateswara et al. [2017] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • Vinyals et al. [2016] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016.