
Latent Domain Learning with Dynamic Residual Adapters

by Lucas Deecke, et al.

A practical shortcoming of deep neural networks is their specialization to a single task and domain. While recent techniques in domain adaptation and multi-domain learning enable the learning of more domain-agnostic features, their success relies on the presence of domain labels, typically requiring manual annotation and careful curation of datasets. Here we focus on a less explored, but more realistic case: learning from data from multiple domains, without access to domain annotations. In this scenario, standard model training leads to the overfitting of large domains, while disregarding smaller ones. We address this limitation via dynamic residual adapters, an adaptive gating mechanism that helps account for latent domains, coupled with an augmentation strategy inspired by recent style transfer techniques. Our proposed approach is examined on image classification tasks containing multiple latent domains, and we showcase its ability to obtain robust performance across these. Dynamic residual adapters significantly outperform off-the-shelf networks with much larger capacity, and can be incorporated seamlessly with existing architectures in an end-to-end manner.





1 Introduction

While the performance of deep learning has surpassed that of humans in a range of tasks, machine learning models perform best when learning objectives are narrowly defined. Practical realities however often require the learning of joint models over semantically different information. In this case, best performances are usually obtained by fitting a collection of models, with each model solving an individual sub-task. This is somewhat disappointing, seeing how humans and other biological systems are capable of flexibly adapting to a large number of scenarios.

Past solutions to this problem tend to fall into some category of multi-domain learning, where – different from the broader multi-task scenario – a single loss function is shared across tasks. Multi-domain learning however relies firmly on the availability of domain annotations, for example to control modules in domain-specific architectures (Rebuffi et al., 2018; Liu et al., 2019).

The reliance on domain annotations is however not limited to the multi-domain scenario: it is also required in adversarial domain adaptation (Ganin et al., 2016) and multi-source domain adaptation (Xu et al., 2018; Peng et al., 2019a). Other examples of dependence on domain labels can be found in the continual learning literature, where task-specific memories guide the learning process (Lopez-Paz and Ranzato, 2017; Riemer et al., 2018), in domain generalization (Li et al., 2018, 2019b, 2019a), and in meta learning (Finn et al., 2017).

              Annot.   Airc.   C-100  Daim.   Dtd    Gtsrb   Omn.   Svhn   Ucf101  Vgg-F.   Avg.
Share                  0.052   0.156  0.091   0.028  0.122   0.100  0.406  0.016   0.030
9x ResNet26     ✓      39.48   77.96  99.95   38.19  99.95   87.62  95.12  73.00   65.20    87.01
ResNet26        ✗      31.35   70.71  99.49   33.67  99.87   87.80  94.64  58.25   60.39    84.73
Relative [%]          -20.59  -10.25  -0.46  -13.42  -0.08    0.21  -0.51 -25.32   -7.96    -2.69

Table 1: Domain-wise and weighted accuracies for 9x ResNet26 learned individually on each domain, versus a single ResNet26 that learns on all domains jointly. ✓ and ✗ denote presence or absence of domain annotations. The Share row indicates the overall share of each domain; on smaller domains (e.g. Aircraft) performance losses are significant.

The above approaches are successful when domain annotations are available. In the real world however, these can be difficult or expensive to obtain. Consider images that were scraped from the web. Existing multi-domain approaches would require that these scraped images are further annotated for the mixture of content types they will necessarily contain, such as real world images or studio photos (Saenko et al., 2010), clipart or sketches (Li et al., 2017).

In this paper we relax this requirement, and consider the alternative scenario of latent domain learning. This encompasses any task where we have reason to believe that some partitioning of the data would make sense, but we are uncertain about what a good partitioning might look like, or have inadequate resources to label all data. And as our experiments show, even when domain labels exist, there is no guarantee that these were chosen optimally, nor that learning them isn’t the better option.

Learning on latent domains poses a major problem for standard deep learning approaches, which have a strong tendency to overfit to large modes in data. This issue is displayed in Table 1: the first row shows the performance of 9x ResNet26 architectures (He et al., 2016) individually finetuned to all datasets in Visual Decathlon (Rebuffi et al., 2017).¹ The second row shows the performance of a single ResNet26 jointly learned on all these tasks. While the overall loss in weighted accuracy appears modest (-2.69%), domain-conditional accuracies reveal that the single model achieves this feat by overaccounting for large datasets (Svhn, Omniglot, etc.), while disregarding smaller ones (Aircraft, Dtd, Ucf101).

¹ Except ImageNet (Deng et al., 2009), which we omit due to its disproportionate size.

We illustrate the difference between multi-domain and latent domain learning scenarios via graphical models in Figure 1, and further discuss this difference in Section 3.1. We subsequently propose novel mechanisms designed to address the central issues in latent domain learning: dynamic residual adapters (Section 3.3) allow us to achieve robust performance metrics on small domains, all without trading performance on larger ones (see experiments in Section 4). Our proposed module is efficient, can be incorporated and trained seamlessly with existing architectures, and is able to surpass the performance of domain-supervised approaches that rely on human-annotated data (Section 4.2).

2 Related work

Multi-domain learning relates most closely to our paper. The state-of-the-art introduces small convolutional corrections in residual networks to account for individual domains (Rebuffi et al., 2017, 2018). Stickland and Murray (2019) extend this approach to obtain efficient multi-task models for related language tasks. Other recent work makes use of task-specific attention mechanisms (Liu et al., 2019), attempts to scale task-specific losses (Kendall et al., 2018), or addresses tasks individually at the level of gradients (Chen et al., 2017).

A lack of domain annotations has recently attracted interest in unsupervised domain adaptation. Mancini et al. (2018) estimate batch statistics of domain adaptation layers with Gaussian mixture models using only few domain labels. Peng et al. (2019b) study the shift from some source domain to a target distribution that contains multiple latent domains. In our setting, there is no shift between source and target distributions; instead the focus lies on learning parameter-efficient models that generalize well across multiple latent domains simultaneously.

Previous work that directly addresses latent domain learning can be found in Xu et al. (2014), who use exemplar SVMs to account for latent domains and thereby generalize to new ones, while Xiong et al. (2014) study the discovery of latent domains by clustering via maximization of mutual information.

Our work connects to multi-modal learning: Chang et al. (2018) propose an architecture for person reidentification that accounts for modality in distributions. Deecke et al. (2018) normalize data in separate batches to account for differences in feature distributions. Furthermore, our work is loosely related to learning universal representations (Bilen and Vedaldi, 2017), which Tamaazousti et al. (2019) use as a guiding principle in designing more transferable models.

Dynamic architectures have recently attracted considerable attention, with solutions from reinforcement learning (Zoph and Le, 2016; Pham et al., 2018) or Bayesian optimization (Kandasamy et al., 2018). For differentiable dynamic architectures, two components are commonly used: Gumbel-softmax sampling (Jang et al., 2016), e.g. leveraged in dynamic computer vision architectures (Veit and Belongie, 2018; Sun et al., 2019), and mixtures of experts (Jacobs et al., 1991; Jordan and Jacobs, 1994), used to scale deep learning models to large problem spaces (Shazeer et al., 2017) and for universal object detection (Wang et al., 2019).

From the perspective of algorithmic fairness, a desirable property for models is to ensure consistent predictive equality across different identifiable subgroups in the data (Zemel et al., 2013; Hardt et al., 2016; Fish et al., 2016; Corbett-Davies et al., 2017). This relates to the central goal in latent domain learning, which is to ensure robustness on small latent domains.

3 Method

3.1 Problem setting

In multi-domain learning, it is assumed that data is sampled i.i.d. from some mixture of domain distributions p_1, …, p_D, with domain labels d ∈ {1, …, D}. Together, they constitute the underlying distribution as p = Σ_d α_d p_d, where each domain is associated with a relative share α_d = n_d / n, with n the total number of samples, and n_d those belonging to the d'th domain. The target space is usually made up of the mutually exclusive classes of the underlying domains, i.e. Y = Y_1 ∪ … ∪ Y_D. In multi-domain learning, the domain label d is always available.
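To make the setting concrete, the following numpy sketch draws data from such a mixture of domain distributions and then discards the domain index, leaving only the unlabeled samples a latent domain learner observes. The shares and the Gaussian domain distributions are illustrative assumptions, not taken from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relative shares alpha_d of D = 3 latent domains (sum to 1).
alpha = np.array([0.70, 0.25, 0.05])

def sample(n):
    """Draw n samples from the mixture p = sum_d alpha_d * p_d.

    Each p_d is sketched as a 1-D Gaussian with a domain-specific mean.
    In multi-domain learning the pair (x, d) would be observed; in latent
    domain learning only x is, so the domain index d is discarded.
    """
    d = rng.choice(len(alpha), size=n, p=alpha)  # latent domain index
    x = rng.normal(loc=2.0 * d, scale=0.5)       # x ~ p_d
    return x                                     # d is never revealed

xs = sample(10_000)
# Standard training on xs alone tends to fit the alpha = 0.70 mode and
# neglect the alpha = 0.05 one -- the failure mode shown in Table 1.
```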

While the two are closely related, in the broader multi-task scenario the nature of the underlying tasks is inherently different, and learning on each task distribution is associated with an individual loss function ℓ_t (for example, one task may be object classification, the other semantic segmentation). When the ℓ_t are equivalent across tasks, multi-task learning and multi-domain learning coincide.

Different from learning in a traditional multi-domain setting, in latent domain learning the information associating each sample to a domain is no longer available. As such, we cannot learn from sample–domain pairs (x, d), and are instead forced to learn a single model from pairs (x, y) alone.

A common baseline in multi-domain learning is to finetune a set of individual models, one for each domain. The task in the multi-domain literature is then to overcome this baseline (Rebuffi et al., 2018). We use this particular baseline throughout the paper, and show that in some cases, even when domain annotations were chosen carefully, a domain-unsupervised approach can surpass the performance of domain-supervised ones, see Section 4.2.1.

Figure 1: Graphical models for (a) multi-domain learning, (b) latent domain learning. (c) A ResNet block, equipped with a dynamic residual adapter (DRA). Incoming samples x pass down three streams: an identity function, a transformation via a large convolution f, as well as an evaluation by expert gates g, which dynamically assign (dashed arrows) small corrections h1 and h2.

3.2 Residual adaptation

A widely adopted approach in multi-domain learning builds on the assumption that features from large pretraining tasks are universal, and only small convolutional transformations at each layer of the network are needed to correct for domain-specific differences. Rebuffi et al. (2017, 2018) use this insight to extend the layer-wise transformation of the widely adopted residual architecture (He et al., 2016):

    x ↦ x + f(x; θ) + h(x; φ_d),    d ∈ {1, …, D},

where f(·; θ) are the main convolutions of the residual network, and h(·; φ_d) are light-weight (i.e. 1×1) convolutional corrections, one per domain. In this work access to d is removed, resulting in two new challenges: we have no a priori information about the right number of corrections, and we cannot use d to decide which one of these to apply. Throughout the next section, we present an alternative strategy of inspecting and choosing relevant corrections on the fly.
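A minimal numpy sketch of this layer-wise adaptation follows: the input plus a (frozen, pretrained) main convolution plus a 1×1 correction selected by the domain label. All shapes, weights, and names are illustrative, not the authors' implementation:

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution on a feature map x of shape (C, H, W): a linear
    map across channels, applied independently at each spatial position."""
    return np.einsum('oc,chw->ohw', w, x)

def adapted_block(x, f, h, d):
    """Domain-supervised residual adaptation: x -> x + f(x) + h_d(x).
    f stands in for the large pretrained main convolution; h[d] is the
    light-weight 1x1 correction selected via the domain label d."""
    return x + f(x) + conv1x1(x, h[d])

rng = np.random.default_rng(0)
C, H, W, D = 4, 8, 8, 3
x = rng.normal(size=(C, H, W))
fw = 0.1 * rng.normal(size=(C, C))                      # frozen main conv
h = [0.01 * rng.normal(size=(C, C)) for _ in range(D)]  # one h_d per domain
y = adapted_block(x, lambda z: conv1x1(z, fw), h, d=1)
# Selecting h[d] requires the domain label d -- exactly the ingredient
# that latent domain learning removes.
```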

3.3 Dynamic residual adapters

While there is no access to domain labels in latent domain learning, we may still assume that p is constituted by several domain distributions p_1, …, p_D. In order to account for these in a fully domain-unsupervised fashion, we propose the use of dynamic residual adapters (DRA): each incoming sample is first processed by a set of corrections h_1, …, h_K, which we parametrize with light-weight 1×1 convolutions. Next, a noisy mixture-of-experts (MoE) gating mechanism is responsible for weighing each correction h_k, under which the sample is then transformed. In the l'th layer of the network, the subsequent feature representation computes as

    x_{l+1} = x_l + f_l(x_l) + Σ_{k=1}^{K} g_{l,k}(x_l) h_{l,k}(x_l),

with g_{l,k} the k'th component of the l'th gating function. For an illustration, see Figure 1 (c). While we motivate DRA from learning on latent domains, note that many additional factors (e.g. shape, pose, color) may enter the gate assignments as well. Extending networks with dynamic residual adapters exhibits strong performance on smaller modes, while retaining performance on larger ones, see our experiments in Section 4.

We follow Shazeer et al. (2017) and parametrize the gating units via self-attention (Lin et al., 2017). The only learnable parameters are those of a small linear transformation W_l, resulting in a light-weight parametrization of the gates as

    g_l(x) = softmax(W_l π(x) + ε),

where π is a projection onto the channel dimension that averages out height and width (average pooling), and ε a channel-wise exploration noise.² A final softmax ensures the gating mechanism corresponds to a valid categorical distribution over latent domains, i.e. g_{l,k} ≥ 0 and Σ_k g_{l,k} = 1. How to choose K is discussed in more detail in Section 4.1.1.

² Exploration noise is active during training, and zero otherwise.
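The gate and the resulting DRA update can be sketched in a few lines of numpy. The pooling, the noise handling, and all shapes follow the description above, but they are our assumptions, not the reference implementation:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def gate(x, W, sigma=0.0, rng=None):
    """Noisy mixture-of-experts gate: g(x) = softmax(W pi(x) + eps).
    pi averages the feature map x of shape (C, H, W) over height and
    width; eps is exploration noise, active during training (sigma > 0)
    and zero otherwise. The softmax yields a distribution over K."""
    pooled = x.mean(axis=(1, 2))                     # pi(x) in R^C
    eps = sigma * rng.normal(size=W.shape[0]) if sigma else 0.0
    return softmax(W @ pooled + eps)

def dra_block(x, f, hs, W):
    """Dynamic residual adapter: x -> x + f(x) + sum_k g_k(x) h_k(x).
    No domain label is used anywhere; the gate weighs the K corrections."""
    g = gate(x, W)
    corr = sum(gk * np.einsum('oc,chw->ohw', wk, x) for gk, wk in zip(g, hs))
    return x + f(x) + corr

rng = np.random.default_rng(1)
C, K = 4, 2
x = rng.normal(size=(C, 8, 8))
fw = 0.1 * rng.normal(size=(C, C))                       # frozen main conv
hs = [0.01 * rng.normal(size=(C, C)) for _ in range(K)]  # K 1x1 corrections
W = rng.normal(size=(K, C))                              # learnable gate
y = dra_block(x, lambda z: np.einsum('oc,chw->ohw', fw, z), hs, W)
```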

Many other parametrizations of the gating function are possible. Self-attention gives rise to smooth assignments, allowing the weighted combination of different corrections h_k. Discrete assignments can be enforced through Gumbel-softmax sampling (Jang et al., 2016). In practice, we found the latter approach to be too restrictive, and a smooth interpolation via MoE to be the more favorable option. We compare gate parametrizations in Table 4 of the Appendix.

              Airc.   C-100  Daim.  Dtd    Gtsrb  Omn.   Svhn   Ucf101  Vgg-F.  Avg.
Share         0.052   0.156  0.091  0.028  0.122  0.100  0.406  0.016   0.030
9x ResNet26   39.48   77.96  99.95  38.19  99.95  87.62  95.12  73.00   65.20   87.01
ResNet26      31.35   70.71  99.49  33.67  99.87  87.80  94.64  58.25   60.39   84.73
ResNet56      34.62   71.63  99.52  34.79  99.90  87.72  95.12  60.66   57.55   85.22
Ours          38.28   78.16  99.13  40.64  99.77  86.61  94.17  63.88   69.80   86.46

Table 2: Performance of 9x ResNet26 individually finetuned to all domains, versus ResNet26, ResNet56, and our dynamic residual adapters (Ours). Best latent domain performance highlighted.

3.4 Style exchange augmentation

The central challenge in latent domain learning is the tendency of large domains in p to suppress smaller ones. Besides accounting for this via dynamic residual adapters, we introduce an augmentation technique that encourages information exchange between domains.

We are motivated by the following example: assume two classes (say, cats and dogs) each with two latent domains (sketches and photos). Ideally, we would want to encourage the model to learn a domain-agnostic representation of x, from which it may infer the label y, invariant of the domain d. We achieve this here by augmenting x with the style information of a second sample x′, drawn at random from p (so potentially, but not necessarily, crossing domains).

To ensure that this can be done with small computational overhead, we augment samples after they have been compressed into a dense representation, i.e. after the last convolutional layer of the network. Formally, we factorize the model (with convolutional parameters θ and classifier parameters ψ) into φ_ψ ∘ φ_θ. As shorthand for a sample's final representation we denote z = φ_θ(x), and map the low-level feature representation z′ = φ_θ(x′) of the perturbation onto the target via

    z ↦ (1 − λ) z + λ [ σ(z′) (z − μ(z)) / σ(z) + μ(z′) ],

borrowing from recent work in the style transfer literature (Huang and Belongie, 2017). λ is introduced to scale the augmentation (higher values will augment more aggressively); moments μ, σ are estimated across channel, height and width of z and z′, respectively.

In our experiments, we randomly pair samples in each mini-batch. We find that modest values for λ work best, as augmenting too aggressively risks breaking the relationship between x and its corresponding label y, see Figure 4 in the Appendix. When learning on latent domains, exchanging feature-level style information between samples works much better in practice than applying other recently proposed generic approaches, e.g. MixUp (Zhang et al., 2017), cf. the ablation in Table 4.
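The augmentation can be sketched in a few lines of numpy. The AdaIN-style renormalization follows Huang and Belongie (2017); the λ blend and the use of global moments are our assumptions about the elided equation, not the authors' exact formulation:

```python
import numpy as np

def style_exchange(z, z_prime, lam=0.1, eps=1e-5):
    """Blend z toward a restyled copy that carries the moments of a
    randomly paired sample z_prime (AdaIN-style). Higher lam augments
    more aggressively; lam = 0 returns z unchanged."""
    restyled = ((z_prime.std() + eps) * (z - z.mean()) / (z.std() + eps)
                + z_prime.mean())
    return (1 - lam) * z + lam * restyled

rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 64))      # dense features after the last conv
perm = rng.permutation(len(batch))    # random pairing within the mini-batch
augmented = np.stack([style_exchange(z, batch[j])
                      for z, j in zip(batch, perm)])
```

Note that pairing a sample with itself leaves its features (numerically) unchanged, so the augmentation only acts when the two samples' styles actually differ.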

4 Experiments

We consider two experimental settings to evaluate our proposed approaches: a traditional multi-domain scenario but without access to domain information (Section 4.1), and a single dataset that contains multiple latent domains (Section 4.2).

4.1 Latent multi-domain

The first trial combines nine datasets from the Visual Decathlon challenge (Rebuffi et al., 2017) that contain a variety of different images, with mutually exclusive labels, i.e. Y = Y_1 ∪ … ∪ Y_9.³ Unlike in multi-domain learning however, in latent domain learning models may erroneously classify samples from CIFAR-100 to SVHN, and vice versa. Note the goal here is not to compare to the performance of domain-supervised approaches that Visual Decathlon was designed for, but to show that deep networks struggle with incorporating small latent domains when no domain annotations are provided.

³ For SVHN and CIFAR-100, this would for example give rise to a 110-dimensional label space.

4.1.1 Optimization

Initial ResNet parameters were obtained from pretraining on ImageNet (Deng et al., 2009). For dynamic residual adapters, only gates and corrections are learned; the ResNet26 backbone remains fixed at its initial parameters. This requires fewer parameters to be learned (see Figure 2, left), while also benefiting performance (cf. Table 4). We followed the exact same optimization routine across models and experiments: we trained for 120 epochs using stochastic gradient descent (momentum parameter of 0.9), a batch size of 128, weight decay, and an initial learning rate of 0.1 (reduced by a factor of 10 at epochs 80 and 100). For dataset splits, we followed Rebuffi et al. (2017). Test accuracies displayed in tables were obtained by averaging over results from five random initializations.

We experimented with different values for the amount of style exchange λ. While a range of values improve over having no augmentation, the best results were obtained by limiting this to a modest amount (a fixed λ is used throughout the results shown), see Figure 4. A small K provides the network with KL corrections in total (where L denotes the number of layers), which was sufficient to achieve robust performance across latent domains. There is a small but limited performance gain from increasing K further, see Section 4.2.

                            Param.    Art Painting  Cartoon  Photo  Sketch  Avg.
Share                                 0.205         0.235    0.167  0.393
RA (Rebuffi et al., 2018)    2.6 mil  86.47         92.37    95.15  94.61   92.51
4x ResNet26                 24.8 mil  88.77         95.97    95.95  95.83   94.44
ResNet26                     6.2 mil  84.42         94.66    94.98  95.42   92.91
MLFN (Chang et al., 2018)    7.6 mil  78.50         91.29    89.97  93.20   89.20
Ours                         1.4 mil  92.15         96.62    97.09  95.34   95.28
Ours (larger K)              2.8 mil  92.75         95.97    97.73  95.53   95.43

Table 3: Results on the PACS dataset. Shown are performances for ResNet26, MLFN, a domain-supervised ensemble of 4x ResNet26, and DRA (Ours, shown for two settings of K). The Param. column lists the number of parameters that have to be learned in each approach.

4.1.2 Results

For a domain-supervised baseline, we finetune 9x ResNet26 (He et al., 2016) (four layers in each block; channel widths of 64, 128, 256), one individual model for each latent domain. We then use the exact same architecture to learn a single joint classifier on all domains. Next, we couple dynamic residual adapters with the very same ResNet26. For further comparison, we also include a significantly deeper ResNet56. Results are shown in Table 2.

Losing access to domain labels significantly harms performance (9x ResNet26 versus ResNet26). While performance on larger domains is not impacted, the performance on smaller domains (Dtd, Ucf101, Vgg-Flowers) suffers considerably.

This problem is not addressed by simply increasing the depth of the network: while weighted accuracy increases slightly, a ResNet56 suffers the same issues, losing performance on small domains. Without ever having access to domain annotations, the flexible assignments of convolutional corrections in our dynamic residual adapters close a large portion of this gap. For further analysis on how this is achieved, see Sections 4.2.2 and 4.2.3.

4.1.3 Memory requirements

Adding corrections to the residual network results in additional memory requirements.⁴ We show in Figure 2 (left) that these are very modest. In particular when comparing the number of learnable parameters, dynamic residual adapters have a significant advantage: the pretrained convolutions of the ResNet stay fixed throughout, and only the gates and corrections have to be learned, amounting to less than 20% of the parameters in the network.

⁴ At each layer, additional learnable parameters are needed to parametrize gates and corrections, respectively.


Figure 2: Left: memory requirements of ResNet26, ResNet56, and 9x ResNet26. For our dynamic residual adapters, only a small portion of parameters need to be learned. Right: Activations of the gating mechanism for samples from each domain distribution at different layers of the network. The domains that activate each gate strongest are highlighted.

4.2 Joint label space

The second trial examines performance on a dataset called PACS (Li et al., 2017), standing for its four constituting domains (art painting, cartoon, photo, sketch; examples shown in Figure 3, right). Each domain contains samples of equivalent classes (“giraffe”, “guitar”, etc.). The domains are unbalanced (see the shares in Table 3).

We reserved 20% of samples for evaluation, leaving the remainder for training. Splits were computed at random, as we assume no a priori knowledge of domain memberships. We make no changes to the optimization described in Section 4.1.1.

4.2.1 Results

The results in Table 3 show that dynamic residual adapters improve considerably over a single ResNet26 baseline. While the largest domain, sketch, is handled well by the traditional model, dynamic residual adapters can much better account for the small domains that also constitute p.

To the best of our knowledge, latent domain learning has not been targeted through customized deep learning architectures. A related baseline is MLFN (Chang et al., 2018), which builds on ResNeXt (Xie et al., 2017) to define a latent-factor architecture that accounts for multi-modality in data. Crucially, where we share small convolutional corrections at every layer, MLFN instead enables and disables entire network blocks. Arguably, our more fine-grained approach to parameter sharing allows us to outperform MLFN.

We also evaluate domain-supervised residual adapters (Rebuffi et al., 2018). While these have been shown to work extremely well in the multi-domain scenario, their performance here was sub-par. This is likely a result of their per-domain corrections h_d, which exhibit no cross-domain sharing of parameters. Dynamic residual adapters share parameters natively across domains: this substantially benefits performance on domains like art painting (92.15% versus 84.42%), which shares a lot of visual information with photo (cf. Figure 3, left).

Lastly, we finetune 4x individual ResNet26, one for each PACS domain, as a strong domain-supervised baseline. Unsurprisingly, this outperforms the single ResNet26 trained jointly on all domains. While requiring only a fraction of the learnable parameters (1.4 mil versus 24.8 mil, see Table 3), DRA nonetheless surpasses the performance of the fully domain-supervised ensemble by sharing model parameters across domains.

We further examine the benefit of introducing larger numbers of residual corrections K. While performance edges up slightly, the sequential nature of the network arguably already allows complicated partitionings of the residual corrections for small K, making larger values unnecessary.

Domain-supervised approaches can be incorporated into DRA by setting K to the number of annotated domains, and fixing each gate assignment to the corresponding one-hot vector across the network. This also encompasses any potential clustering of domains, which introduces a different, but fixed, set of global gate activations. Evident from the results shown here, such global assignments are not always optimal, even when domains have been assigned as carefully as in PACS. Dynamic residual adapters remove the need to decide a priori what constitutes good domain separations, and instead dynamically share or separate features at every layer. We analyze how this occurs in additional detail in the next two sections, by looking at activation paths across the network.

4.2.2 Gate activations

In Figure 2 (right) we show average per-layer activations of the gating mechanism. Different domains activate gates at different depths of the network, while visually similar domains (such as art painting and photo, cf. examples in Figure 3, left) tend to activate together. At some layers there is little need for multiple corrections h_k. In this case, dynamic residual adapters simply relax to a uniform gate: the activation becomes g_{l,k} = 1/K, joining the unit into a single residual correction.

4.2.3 Gate pathways


Figure 3: Left: PCA of samples represented by their activation paths. Gate paths are semantically meaningful: the visually similar domains art painting and photo cluster together, while cartoon resides between real world imagery and sketches. A sample with an erroneous ground-truth domain label is highlighted. Right: a group of samples (coloring corresponds to the left-hand side) that share similar gate activation paths.

Intuitively, one might expect the gating mechanism to assign different convolutional corrections to the different human-annotated domains in PACS. The visualization of gate activations in Figure 2 (right) seems to suggest the opposite: the purity (share of maximum activation relative to all activations) is relatively low across the network.

Arguably, the above intuition oversimplifies how the network processes different samples, and is further contradicted by the performance loss that results from enforcing discreteness (cf. Gumbel-softmax in Table 4). To understand better how partitioning occurs, it is helpful to inspect what happens across the entirety of the network, and compare sets of feature activations throughout their processing. For a sample x, we define its activation path across the L-layered network as

    γ(x) = ( g_{l,k}(x_l) )_{l ≤ L, k ≤ K}.

If samples have similar activation paths, this means they also share a large amount of parameters. As the group of samples with low pairwise distances in Figure 3 (right) shows, similar gate activations are indicative of visual similarity: pose, color or edges of samples that group together are visibly related; compare in particular the pose of elephants from the photo and sketch domains.
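This analysis can be mimicked with a small numpy sketch: stack a sample's per-layer gate activations into a single path vector and compare samples by Euclidean distance. The layer count, K, and the synthetic gate values are illustrative, not the paper's data:

```python
import numpy as np

def activation_path(gates):
    """Concatenate per-layer gate activations g_l(x_l) in R^K into a
    single L*K-dimensional path vector, as used for the PCA in Figure 3."""
    return np.concatenate(gates)

rng = np.random.default_rng(0)
L, K = 26, 2
a = [rng.dirichlet(np.ones(K)) for _ in range(L)]    # one sample's gates
b = [g + rng.normal(scale=0.01, size=K) for g in a]  # near-identical path
c = [rng.dirichlet(np.ones(K)) for _ in range(L)]    # unrelated sample

d_ab = np.linalg.norm(activation_path(a) - activation_path(b))
d_ac = np.linalg.norm(activation_path(a) - activation_path(c))
# Low pairwise path distance (d_ab well below d_ac) means two samples
# route through largely the same corrections, i.e. share most parameters.
```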

We collected gate activation paths for an equal number of samples from all domains, and visualize their principal components in Figure 3 (left). This reveals an intuitive clustering of domains: the visually similar domains art painting and photo share a region. The manifold describing sketches is arguably more primitive than those of the other domains, and indeed only maps to a small region. Cartoon lies somewhere between sketches and real world imagery. Under visual inspection (cf. examples in Figure 3, right) this makes perfect sense: a cartoon is, more or less, just a colored sketch. We highlight one particular elephant that resides amongst the cartoon domain, but has been annotated as photo in the PACS dataset. The ground-truth annotation is incorrect, but different from domain-supervised approaches, dynamic residual adapters are not irritated by this.

5 Conclusion

Recent work in multi-domain learning has been chiefly focused on a setting where domain annotations are assumed to be routinely available. As this requires careful curation of datasets, in real world scenarios this assumption can often be of limited merit. Dynamic residual adapters help inject adaptivity into networks, preventing them from overfitting to the largest domains in distributions, a common failure mode of traditional models. Not only does our approach successfully close a large amount of the performance gap to domain-supervised solutions, but in some scenarios – even when domains have been assigned very carefully – exceeds their performance.


  • Bilen and Vedaldi (2017) Hakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
  • Chang et al. (2018) Xiaobin Chang, Timothy M Hospedales, and Tao Xiang. Multi-level factorisation net for person re-identification. In Conference on Computer Vision and Pattern Recognition, 2018.
  • Chen et al. (2017) Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. arXiv preprint arXiv:1711.02257, 2017.
  • Corbett-Davies et al. (2017) Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017.
  • Deecke et al. (2018) Lucas Deecke, Iain Murray, and Hakan Bilen. Mode normalization. In International Conference on Learning Representations, 2018.
  • Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.
  • Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, 2017.
  • Fish et al. (2016) Benjamin Fish, Jeremy Kun, and Ádám D Lelkes. A confidence-based approach for balancing fairness and accuracy. In SIAM International Conference on Data Mining, 2016.
  • Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1), 2016.
  • Hardt et al. (2016) Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29, 2016.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition, 2016.
  • Huang and Belongie (2017) Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In International Conference on Computer Vision, 2017.
  • Jacobs et al. (1991) Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1), 1991.
  • Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
  • Jordan and Jacobs (1994) Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural computation, 6(2):181–214, 1994.
  • Kandasamy et al. (2018) Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric P Xing. Neural architecture search with bayesian optimisation and optimal transport. In Advances in Neural Information Processing Systems, 2018.
  • Kendall et al. (2018) Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Conference on Computer Vision and Pattern Recognition, 2018.
  • Li et al. (2017) Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In International Conference on Computer Vision, 2017.
  • Li et al. (2019a) Da Li, Jianshu Zhang, Yongxin Yang, Cong Liu, Yi-Zhe Song, and Timothy M Hospedales. Episodic training for domain generalization. In Conference on Computer Vision and Pattern Recognition, 2019a.
  • Li et al. (2018) Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In Conference on Computer Vision and Pattern Recognition, 2018.
  • Li et al. (2019b) Yiying Li, Yongxin Yang, Wei Zhou, and Timothy M Hospedales. Feature-critic networks for heterogeneous domain generalization. In International Conference on Machine Learning, 2019b.
  • Lin et al. (2017) Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. In International Conference on Learning Representations, 2017.
  • Liu et al. (2019) Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In Conference on Computer Vision and Pattern Recognition, 2019.
  • Lopez-Paz and Ranzato (2017) David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, 2017.
  • Mancini et al. (2018) Massimiliano Mancini, Lorenzo Porzi, Samuel Rota Bulò, Barbara Caputo, and Elisa Ricci. Boosting domain adaptation by discovering latent domains. In Conference on Computer Vision and Pattern Recognition, 2018.
  • Peng et al. (2019a) Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In International Conference on Computer Vision, 2019a.
  • Peng et al. (2019b) Xingchao Peng, Zijun Huang, Ximeng Sun, and Kate Saenko. Domain agnostic learning with disentangled representations. arXiv preprint arXiv:1904.12347, 2019b.
  • Pham et al. (2018) Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
  • Rebuffi et al. (2018) S-A. Rebuffi, H. Bilen, and A. Vedaldi. Efficient parametrization of multi-domain deep neural networks. In Conference on Computer Vision and Pattern Recognition, 2018.
  • Rebuffi et al. (2017) Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems, 2017.
  • Riemer et al. (2018) Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910, 2018.
  • Saenko et al. (2010) Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision, 2010.
  • Shazeer et al. (2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations, 2017.
  • Stickland and Murray (2019) Asa Cooper Stickland and Iain Murray. BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning. In Proceedings of the 36th International Conference on Machine Learning, 2019.
  • Sun et al. (2019) Ximeng Sun, Rameswar Panda, and Rogerio Feris. AdaShare: Learning what to share for efficient deep multi-task learning. arXiv preprint arXiv:1911.12423, 2019.
  • Tamaazousti et al. (2019) Youssef Tamaazousti, Hervé Le Borgne, Céline Hudelot, Mohamed El Amine Seddik, and Mohamed Tamaazousti. Learning more universal representations for transfer-learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
  • Veit and Belongie (2018) Andreas Veit and Serge Belongie. Convolutional networks with adaptive inference graphs. In European Conference on Computer Vision, 2018.
  • Wang et al. (2019) Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos. Towards universal object detection by domain attention. In Conference on Computer Vision and Pattern Recognition, 2019.
  • Xie et al. (2017) Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Conference on Computer Vision and Pattern Recognition, 2017.
  • Xiong et al. (2014) Caiming Xiong, Scott McCloskey, Shao-Hang Hsieh, and Jason J Corso. Latent domains modeling for visual domain adaptation. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
  • Xu et al. (2018) Ruijia Xu, Ziliang Chen, Wangmeng Zuo, Junjie Yan, and Liang Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In Conference on Computer Vision and Pattern Recognition, 2018.
  • Xu et al. (2014) Zheng Xu, Wen Li, Li Niu, and Dong Xu. Exploiting low-rank structure from latent domains for domain generalization. In European Conference on Computer Vision, 2014.
  • Zemel et al. (2013) Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, 2013.
  • Zhang et al. (2017) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
  • Zoph and Le (2016) Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.



The ablations in Table 4 show that latent domain learning benefits both from the addition of multiple dynamic residual adapters and from style-exchange augmentation between the feature maps of samples.

While adding a single residual adapter helps the model on smaller modes, it incurs performance losses on medium and larger ones. In line with the findings of Rebuffi et al. [2017], not fixing the parameters of the ResNet convolutions leads to overfitting.
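The gating mechanism at the core of dynamic residual adapters can be illustrated with a minimal NumPy sketch. This is a simplified, hypothetical rendering, not the authors' implementation: it assumes K adapters realized as 1x1 convolutions (here plain channel-mixing matrices) and a softmax gate computed from globally pooled features; all names are illustrative.

```python
import numpy as np

def dynamic_residual_adapter(x, adapters, gate_w):
    """Hypothetical sketch of a dynamic residual adapter layer.
    x: (C, H, W) feature map; adapters: list of K (C, C) matrices
    acting as 1x1 convolutions; gate_w: (K, C) gating weights."""
    pooled = x.mean(axis=(1, 2))             # global average pool, (C,)
    scores = gate_w @ pooled                 # one score per adapter, (K,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # softmax mixture weights
    flat = x.reshape(x.shape[0], -1)         # (C, H*W)
    # Residual branch: convex combination of the adapter outputs.
    res = sum(a * (A @ flat) for a, A in zip(alpha, adapters))
    return x + res.reshape(x.shape)          # residual connection
```

Because the adapter outputs enter through a residual connection, initializing the adapter matrices near zero leaves the backbone's features intact at the start of training.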

Without augmentation via style exchange, performance drops visibly. MixUp [Zhang et al., 2017], an alternative augmentation that interpolates between samples, is not equally well suited to latent domain learning. Results for additional choices of the augmentation strength are shown in Figure 4: modest values work best, as augmenting too aggressively risks breaking the relationship between image–label pairs.
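The style-exchange operation can be sketched as an AdaIN-style statistic swap [Huang and Belongie, 2017]: one sample's feature map keeps its normalized content but adopts the channel-wise mean and standard deviation of another sample. This is a simplified sketch under those assumptions, not the exact augmentation used in the paper.

```python
import numpy as np

def style_exchange(x, y, eps=1e-5):
    """AdaIN-style statistic swap: x keeps its normalized content but
    adopts the channel-wise mean/std of y. Both are (C, H, W) maps."""
    mu_x = x.mean(axis=(1, 2), keepdims=True)
    sd_x = x.std(axis=(1, 2), keepdims=True) + eps
    mu_y = y.mean(axis=(1, 2), keepdims=True)
    sd_y = y.std(axis=(1, 2), keepdims=True) + eps
    # Normalize x per channel, then rescale to y's statistics.
    return (x - mu_x) / sd_x * sd_y + mu_y
```

Applied between randomly paired samples in a batch, this exposes the network to content from one latent domain rendered in the low-order feature statistics of another.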

Replacing mixtures of experts with Gumbel-softmax sampling negatively impacts performance: while some domains (CIFAR-100, Omniglot) are handled well, discrete gates struggle in particular with smaller ones. Soft and straight-through Gumbel-softmax sampling performed on par; we report straight-through sampling here.
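For reference, the straight-through variant of a Gumbel-softmax gate can be sketched as follows. This is a generic NumPy illustration of the estimator [Jang et al., 2016], not the paper's implementation; in an autodiff framework the subtracted soft term would be wrapped in a stop-gradient so that the forward pass is hard while gradients flow through the relaxed sample.

```python
import numpy as np

def gumbel_softmax_st(logits, tau=1.0, rng=None):
    """Straight-through Gumbel-softmax gate: forward pass emits a hard
    one-hot choice; training would back-propagate through the soft sample."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    soft = np.exp((logits + g) / tau)
    soft /= soft.sum()                     # relaxed (soft) sample
    hard = np.zeros_like(soft)
    hard[soft.argmax()] = 1.0              # discrete (hard) sample
    # Straight-through trick: hard + (soft - stop_gradient(soft)); without
    # autodiff the correction term cancels and this returns the hard sample.
    return hard + (soft - soft)
```

Such a hard gate selects exactly one adapter per input, which is what makes it brittle on small latent domains compared to the soft mixture used by our approach.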

                 Airc.  C-100  Daim.  Dtd    Gtsrb  Omn.   Svhn   Ucf101 Vgg-F. Avg.
Single adapter   36.24  75.33  98.89  37.87  99.67  86.68  94.02  60.04  68.43  85.65
No style exch.   36.15  77.65  99.06  39.06  99.67  86.48  93.97  60.27  65.86  85.94
Learn            36.75  71.39  99.47  35.27  99.85  87.32  94.73  62.81  64.80  85.35
MixUp            30.66  67.40  97.09  36.49  99.72  86.03  93.41  56.25  65.78  83.47
Gumbel           33.21  76.43  98.88  37.45  99.64  86.83  93.67  60.81  69.31  85.56
Ours             38.28  78.16  99.13  40.64  99.77  86.61  94.17  63.88  69.80  86.46
Table 4: An ablation study of our approach. The first row uses a single residual adapter in each layer; the second row disables style-exchange augmentation. The third row finetunes all parameters, not just the dynamic residual adapters. The fourth row couples dynamic residual adapters with MixUp, and the fifth row parametrizes the gates via straight-through Gumbel-softmax sampling. Results for our full dynamic residual adapters are shown last.

Figure 4: Weighted average performance for DRA under different augmentation strengths.