Deep models have been shown to outperform human evaluators on image recognition tasks. However, a common assumption in such evaluations is that the training and the test data distributions are alike. In the presence of a larger domain-shift between the training and the test domains, the performance of deep models degrades drastically as a result of domain-bias [18, 50]. Moreover, the recognition capabilities of such models are limited to the set of learned categories, which further limits their generalizability. Thus, once a model is trained on a source training dataset (the source domain), it is essential to further upgrade the model to perform well in the test environment (the target domain).
For example, consider a self-driving car equipped with an object recognition model trained on urban scenes. Such a model will underperform in rural landscapes (the test environment), where objects differ in their visual appearance and surrounding context. Moreover, the model will also misclassify objects from unseen categories (a.k.a. target-private categories) into one of the learned classes. This is a direct result of the domain-shift between urban and rural environments. A naive approach to address this problem would be to fine-tune the model on an annotated dataset drawn from the target environment. However, this is often not a practical solution, as acquiring label-rich data is an expensive process. Moreover, for an efficient model upgrade, it is also imperative that the model supports adaptation to new domains and tasks without re-training on the source training data [7, 28] from scratch. Motivated by these challenges, in this paper we ask: “how to effectively upgrade a trained model to the target domain?”.
In the literature, this question has been long-standing. A line of work called Unsupervised Domain Adaptation (UDA) [2, 3, 5, 21, 25, 31, 22, 51] has emerged that offers an elegant solution to the domain-shift problem. In UDA, the usual practice [10, 37] is to obtain a labeled source dataset and unlabeled target samples, and to perform adaptation under the co-existence of samples from both domains. However, most UDA methods [10, 12, 44, 52] assume that the two domains share the same label space (as shown in Fig. 1A), making them impractical in real-world settings where a target domain potentially contains unseen categories (in the self-driving car example, novel objects occur in the deployed environment). To this end, open-set DA [1, 36, 24, 45] and universal DA [23, 54] have gained attention, where the target domain is allowed to have novel (target-private) classes not present in the source domain. These target-private samples are assigned an “unknown” label (see Fig. 1B). As a result, target-private samples with diverse semantic content get clustered together in a single “unknown” class in the latent space.
While UDA methods tackle the domain-shift problem, they require simultaneous access to both source and target domain samples, which makes them unsuitable in cases where the source training data is proprietary [24, 32, 34] (e.g. in a self-driving car), or simply unavailable during model upgrade [7, 23, 28]. Moreover, these methods can only detect new target categories as a single unknown class, and cannot assign individual semantic labels to such categories (Fig. 1C). Thus, these methods do not truly facilitate a model upgrade (e.g. adding new classes to the recognition model), which limits their practical use-case.
Another line of work consists of Class-Incremental (CI) learning methods [4, 28, 38, 42, 53], which aim at adding new classes to a trained model while preserving the performance on the previously learned classes. Certain methods achieve this even without accessing the source training data (hereon, we call such methods source-free). However, these methods are not tailored to address domain-shift (thus, in our example, the object recognition model would still underperform in rural scenarios). Moreover, many of these methods [4, 7, 41] require the target data to be labeled, which is impractical for real-world applications.
To summarize, UDA and CI methods address different challenges under separate contexts, and neither of them alone suffices for practical scenarios. A characteristic comparison against prior arts is given in Table 1. Acknowledging this gap between the available solutions and their practical usability, in this work we introduce a new paradigm called Class-Incremental Domain Adaptation (CIDA) with the best of both worlds. While formalizing the paradigm, we draw motivation from both UDA and CI and address their limitations in CIDA.
In CIDA, we aim to adapt a source-trained model to the desired target domain in the presence of domain-shift as well as unseen classes, using a minimal amount of labeled data. To this end, we propose a novel training strategy which enables a source-free upgrade to an unlabeled target domain by utilizing one-shot target-private samples. Our approach is motivated by prototypical networks, which exhibit a simpler inductive bias in the limited data regime. We now review the prior arts and identify their limitations to design a suitable approach for CIDA. Our contributions are as follows:
We formalize a novel Domain Adaptation paradigm, Class-Incremental Domain Adaptation (CIDA), which enables the recognition of both shared and novel target categories under a domain-shift.
We discuss the limitations of existing approaches and identify the challenges involved in CIDA to propose an effective training strategy for CIDA.
The proposed solution is motivated by theoretical and empirical observations and outperforms both UDA and CI approaches in CIDA.
Before formalizing the CIDA paradigm, we review the prior methods and study their limitations. In the UDA problem, we consider a labeled source domain with label-set $\mathcal{C}_s$ and an unlabeled target domain with label-set $\mathcal{C}_t$. The goal is to improve the task-performance on the target domain by transferring the task-relevant knowledge from the source to the target domain.
Most UDA methods learn a shared feature extractor that is common to both the domains, and a classifier which can be learned using source supervision. These methods align the latent features of the two domains and use the classifier to predict labels for the target samples. A theoretical upper bound for the target-domain risk of such predictors is as follows,

$$\epsilon_t(h) \;\le\; \epsilon_s(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_s, \mathcal{D}_t) + \lambda \qquad (1)$$

where, given a hypothesis space $\mathcal{H}$, $\epsilon_s(h)$ and $\epsilon_t(h)$ denote the expected risk of the classifier $h \in \mathcal{H}$ in the source and the target domains respectively, $d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_s, \mathcal{D}_t)$ measures the distribution shift (or the domain discrepancy) between the two domains, and $\lambda$ is a constant that measures the risk of the optimal joint classifier.
Notably, UDA methods aim to minimize the upper bound of the target risk (Eq. 1) by minimizing the distribution shift in the latent space, while preserving a low source risk. This works well under the closed-set assumption (i.e. $\mathcal{C}_s = \mathcal{C}_t$). However, in the presence of target-private samples (i.e. samples from $\mathcal{C}_t \setminus \mathcal{C}_s$), a direct enforcement of such constraints often degrades the performance of the model, even on the shared categories - a phenomenon known as negative transfer. This is due to two factors. Firstly, a shared feature extractor, which is expected to generalize across two domains, acts as a bottleneck to the performance on the target domain. Secondly, a shared feature extractor enforces a common semantic granularity in the latent space across both the domains. This is especially unfavorable in CIDA, where the semantic space must be modified to accommodate target-private categories (see Fig. 3).
Why are UDA methods insufficient? Certain UDA methods [29, 54] tackle negative transfer by detecting the presence of target-private samples and discarding them during domain alignment. As a result, these samples (with diverse semantic content) get clustered into a single unknown category. While this improves the performance on the shared classes, it disturbs the semantic granularity of the latent space, making the model unsuitable for a class-incremental upgrade. This additional issue must be tackled in CIDA.
To demonstrate this effect, we employ the state-of-the-art open-set DA method STA for image recognition on the Amazon→DSLR task of the Office dataset. A possible way to extend STA for CIDA would be to collect the target samples that are predicted as unknown (after adaptation) and obtain few-shot labeled samples from this set (by randomly labeling, say, 5% of the samples). One could then train a classifier using these labeled samples. We follow this approach and, over 5 separate runs, calculate the class-averaged accuracy. The model achieves a reasonable accuracy on the shared classes, but a substantially lower accuracy on the target-private classes (see Suppl. for experimental details). This clearly indicates that the adaptation disturbs the granularity of the semantic space, which is no longer useful for discriminating among novel target categories.
Why are CI methods insufficient? Works such as [4, 33, 41] use an exemplary set to receive supervision for the source classes along with labeled samples from target-private classes. Some of these works aim to address domain-shift using labeled target samples. However, the requirement of the source data during model upgrade is a severe drawback for practical applications. While certain methods are source-free, they still assume access to labeled target samples, which may not be viable in practical deployment scenarios. As we show in Sec. 4, these methods yield suboptimal results in the presence of limited labeled data. Furthermore, most CI methods are not geared to tackle domain-shift. Thus, the assumption that the source-model is proficient in classifying samples from the source classes will not hold for the target domain. To the best of our knowledge, the most closely related CI work is one that uses a reinforcement-learning based framework to select source samples during one-shot learning. However, it assumes non-overlapping label sets ($\mathcal{C}_s \cap \mathcal{C}_t = \emptyset$), and does not consider the effect of negative transfer during model upgrade.
Why do we need CIDA? Prior arts independently address the problems of class-incremental learning and unsupervised adaptation in separate contexts, by employing learning procedures specific to the problem at hand. As a result of this specificity, they are not equipped to address practical scenarios (such as the self-driving car example in Sec. 1). Acknowledging their limitations, we propose CIDA, where the focus is to improve the performance on the target domain to achieve class-incremental recognition in the presence of domain-shift. This makes CIDA more practical and more challenging than the available DA paradigms.
What do we assume in CIDA? To realize a concrete solution, we make the following assumptions that are within the bounds of a practical DA setup. Firstly, considering that the labeled source dataset may not be readily available to perform a model upgrade, we consider the adaptation step to be source-free. Accordingly, we propose an effective source-model training strategy which allows source-free adaptation to be implemented in practice. Secondly, as the target domain may be label-deficient, we pose CIDA as an Unsupervised DA problem wherein the target samples are unlabeled. However, conceding that it may be impractical to discover semantics for unseen target classes in a completely unsupervised fashion, we assume that we can obtain a single labeled target sample for each target-private class (one-shot target-private samples). This can be perceived as the knowledge of new target classes that must be added during the model upgrade. Finally, the overarching objective in CIDA is to improve the performance in the target domain while the performance on the source domain remains secondary.
The assumptions stated above can be interpreted as follows. In CIDA, we first quantify the upgrade that is to be performed. We identify “what domain-shift is to be tackled?” by collecting unlabeled target domain samples, and determine “what new classes are to be added?” by obtaining one-shot target-private samples. This deterministic quantification makes CIDA different from UDA and CI methods, and enhances the reliability of a source-free adaptation algorithm. In the next section, we formalize CIDA and describe our approach to solve the problem.
3 Class-Incremental Domain Adaptation
Let $\mathcal{X}$ and $\mathcal{Y}$ be the input and the label spaces. The source and the target domains are characterized by the distributions $p$ and $q$ on $\mathcal{X} \times \mathcal{Y}$. We denote the set of labeled source samples as $\mathcal{D}_s = \{(x_s, y_s)\}$ with label-set $\mathcal{C}_s$, and the set of unlabeled target samples as $\mathcal{D}_t = \{x_t\}$ with label-set $\mathcal{C}_t$, where $x_t$ is drawn from the marginal input distribution $q_{\mathcal{X}}$ and $\mathcal{C}_s \subset \mathcal{C}_t$. The set of target-private classes is denoted as $\mathcal{C}'_t = \mathcal{C}_t \setminus \mathcal{C}_s$. See Suppl. for a notation table. To perform a class-incremental upgrade, we are given one target sample from each target-private category (one-shot target-private samples). Further, we assume that source samples are unavailable during model upgrade [23, 24, 28]. Thus, the goal is to train a model on the source domain, and later, upgrade the model (address domain-shift and learn new classes) for the target domain. Accordingly, we formalize a two-stage approach as follows,
Foresighted source-model training. It is imperative that a source-trained model supports source-free adaptation. Thus, during source training, we aim to suppress the domain and category bias that culminates from overconfident class-predictions. Specifically, we augment the model with the capability of out-of-distribution detection. This step is inspired by prototypical networks, which have a simpler inductive bias in the limited data regime. Finally, the source-model is shipped along with prototypes as meta-data, for performing a future source-free upgrade.
Class-Incremental DA. During CIDA, we aim to align the target samples from shared classes with the high-source-density regions in the latent space, allowing the reuse of the source classifier. Further, we must accommodate new target classes in the latent space while preserving the semantic granularity. We achieve both these objectives by learning a target-specific latent space in which we obtain learnable centroids called guides that are used to gradually steer the target features into separate clusters. We theoretically argue and empirically verify that this enables a suitable ground for CIDA.
3.1 Foresighted source-model training
The architecture of the source model contains a feature extractor $F_s$ and a classifier $C_s$ over the $|\mathcal{C}_s|$ source classes (see Fig. 2A). We denote the corresponding latent space as the source latent space. A naive approach to train the source-model would be to use the cross-entropy loss,

$$\mathcal{L}_{ce} = \mathbb{E}_{(x_s,\, y_s) \sim \mathcal{D}_s}\; \ell_{ce}\big((C_s \circ F_s)(x_s),\; y_s\big) \qquad (2)$$
where $\circ$ denotes composition and $\ell_{ce}$ is the standard cross-entropy between the predicted class probabilities and the ground-truth label. However, enforcing cross-entropy alone biases the model towards source domain characteristics. As a result, the model learns highly discriminative features and mis-classifies out-of-distribution samples into one of the learned categories with high confidence. For example, an MNIST image classifier is shown to yield a predicted class-probability of 91% on random input. We argue that such effects are due to the domain and category bias culminating from overconfident predictions. Thus, we aim to suppress this bias in the presence of the source samples to enable a reliable source-free upgrade.
We note two requirements for a source-trained model suitable for CIDA. First, we must penalize overconfident predictions, which is a crucial step to enable generalization over unseen target categories. This aids in mitigating the effect of negative transfer (discussed in Sec. 2). Second, we aim for source-free adaptation in CIDA, which calls for an alternative to source samples. We satisfy both these requirements using class-specific Gaussian prototypes [9, 48] as follows.
a) Gaussian Prototypes. We define a Gaussian Prototype for a class $c$ as $\mathcal{P}_c = \mathcal{N}(\mu_c, \Sigma_c)$, where $\mu_c$ and $\Sigma_c$ are the mean and the covariance computed over the latent features of the samples in class $c$. In other words, a Gaussian Prototype is a multivariate Gaussian prior defined for each class in the latent space. Similarly, a global Gaussian Prototype is defined as $\mathcal{P}_g = \mathcal{N}(\mu_g, \Sigma_g)$, where $\mu_g$ and $\Sigma_g$ are computed over the latent features of all source samples. We hypothesize that, in the latent space, we can approximate the class semantics using these Gaussian priors, which can be leveraged for source-free adaptation.
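Concretely, the Gaussian Prototypes reduce to per-class sample statistics of the latent features. The following is a minimal NumPy sketch of this computation; the function name and array layout are our own illustration, not the paper's implementation:

```python
import numpy as np

def gaussian_prototypes(features, labels):
    """Per-class Gaussian Prototypes (mean, covariance) plus a global one.

    features: (N, d) latent features; labels: (N,) integer class ids.
    Returns ({class: (mu_c, cov_c)}, (mu_g, cov_g)).
    """
    protos = {}
    for c in np.unique(labels):
        fc = features[labels == c]
        protos[int(c)] = (fc.mean(axis=0), np.cov(fc, rowvar=False))
    global_proto = (features.mean(axis=0), np.cov(features, rowvar=False))
    return protos, global_proto
```

Since only these statistics (a mean vector and a covariance matrix per class) need to be shipped with the model, they serve as the cheap meta-data mentioned later.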
To ensure that this Gaussian approximation is accurate, we explicitly enforce the source features to attain a higher affinity towards these class-specific Gaussian priors. We refer to this as the class separability objective defined as,
where the term inside the logarithm is the posterior probability of a feature $u = F_s(x_s)$ corresponding to its class $c$, obtained as the softmax over the class-conditional Gaussian likelihoods, i.e. $\hat{p}(c \mid u) = \mathcal{N}(u; \mu_c, \Sigma_c) \,/\, \sum_{k \in \mathcal{C}_s} \mathcal{N}(u; \mu_k, \Sigma_k)$. In effect, the class separability objective drives the latent space to form well-separated, compact clusters for each class. We verify in Sec. 4 that compact clusters enhance the reliability of a source-free model upgrade, where the clusters must rearrange to attain a semantic granularity suitable for the target domain.
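The posterior used by the class separability objective is a softmax over class-conditional Gaussian log-likelihoods. A small NumPy illustration under the prototype definitions above (function names are hypothetical, and log-likelihoods are computed in log-space for numerical stability):

```python
import numpy as np

def class_posteriors(u, protos):
    """Softmax over class-conditional Gaussian log-likelihoods.

    u: (d,) latent feature; protos: {class: (mu, cov)}.
    Returns {class: posterior probability p(c | u)}.
    """
    classes = sorted(protos)
    logliks = []
    for c in classes:
        mu, cov = protos[c]
        diff = u - mu
        _, logdet = np.linalg.slogdet(cov)
        maha2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
        logliks.append(-0.5 * (maha2 + logdet + len(u) * np.log(2 * np.pi)))
    logliks = np.array(logliks)
    w = np.exp(logliks - logliks.max())             # numerically stable softmax
    w /= w.sum()
    return dict(zip(classes, w))

def class_separability_loss(u, label, protos):
    """Negative log posterior of the true class, as sketched above."""
    return -np.log(class_posteriors(u, protos)[label])
```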
b) Negative Training. While the class separability objective enforces well-separated feature clusters, it does not ensure tight decision boundaries, without which the classifier misclassifies out-of-distribution (OOD) samples with high confidence. This overconfidence issue must be resolved to effectively learn new target categories. Certain prior works suggest that a Gaussian Mixture Model based likelihood threshold could effectively detect OOD samples. We argue that, additionally, the classifier should also be capable of assigning a low confidence to OOD samples, forming tight decision boundaries around the source clusters (as in Fig. 2A).
We leverage the Gaussian Prototypes to generate negative feature samples that model the low-source-density (OOD) region. More specifically, we obtain samples from the global Gaussian Prototype that lie beyond the 3σ confidence interval of all class-specific Gaussian Prototypes (see Suppl. for an algorithm). These negative samples correspond to an additional negative class, and the classifier is trained to assign a low source-class confidence to such samples (see Fig. 2A). Thus, the cross-entropy loss in Eq. 2 is modified to include the negative samples under this additional class.
By virtue of this negative training, the classifier assigns a high source-class confidence to in-distribution source samples, and a low source-class confidence to samples from the OOD regime. Thus, the classifier learns compact decision boundaries (as shown in Fig. 2A).
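The negative feature generation described above amounts to rejection sampling: draw from the global prototype, and keep only samples far (in Mahalanobis distance) from every class prototype. A hedged NumPy sketch, where `sigma_level` mirrors the 3σ interval and the names are illustrative:

```python
import numpy as np

def sample_negatives(global_proto, class_protos, n, sigma_level=3.0, seed=0):
    """Rejection sampling of negative features.

    Draw from the global Gaussian Prototype and keep samples that lie
    beyond `sigma_level` (Mahalanobis) of EVERY class-specific prototype.
    """
    rng = np.random.default_rng(seed)
    mu_g, cov_g = global_proto
    keep = []
    while len(keep) < n:
        z = rng.multivariate_normal(mu_g, cov_g)
        far_from_all = all(
            np.sqrt((z - mu) @ np.linalg.solve(cov, z - mu)) > sigma_level
            for mu, cov in class_protos.values()
        )
        if far_from_all:
            keep.append(z)
    return np.array(keep)
```

Note that a practical implementation would cap the number of rejection rounds; the paper's actual algorithm is given in its Suppl.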
c) Optimization. We train the source model via alternate minimization of the modified cross-entropy loss and the class separability objective using Adam optimizers (see Suppl.). Effectively, the total loss enforces the Cluster Assumption in the latent space (via the class separability objective), which enhances the model's generalizability [6, 14], and mitigates the overconfidence issue (via negative training), thereby reducing the discriminative bias towards the source domain. We update the Gaussian Prototypes and the negative samples at the end of each epoch. Once trained, the source-model is ready to be shipped along with the Gaussian Prototypes as meta-data. Note that, in contrast to source data, Gaussian Prototypes are cheap and can be readily shared (similar to BatchNorm statistics).
3.2 Class-Incremental DA on the Target Domain
Following the CIDA paradigm, during the model upgrade we have access to a source model and its meta-data (the Gaussian Prototypes), unlabeled target samples, and one-shot target-private samples. We now formalize an approach that tightens the target risk bound (Eq. 1) by exploiting a foresighted source-model trained as described above. Recall that the bound comprises three terms - the source risk $\epsilon_s$, the distribution shift $d_{\mathcal{H}\Delta\mathcal{H}}$, and the constant $\lambda$.
a) Learning target features. A popular strategy for UDA is to learn domain-agnostic features [29, 45, 47]. However, as argued in Sec. 2, in CIDA we must learn a target-specific latent space (Fig. 2B) which attains a semantic granularity suitable for the target domain. To this end, we introduce a target-specific feature extractor $F_t$ that is initialized from the source feature extractor $F_s$. Informally, this process initializes the target latent space from the source latent space. Thereafter, we gradually rearrange the feature clusters in the target latent space to learn suitable target semantics. To receive stable gradients, $F_s$ is kept frozen throughout adaptation. Further, we introduce a classifier $C_t$ to learn the target-private categories (see Fig. 2B).
b) Domain projection. The key to effectively learning target-specific semantics is to establish a transit mechanism between the source latent space (capturing the semantics of the learned classes) and the target latent space (where the new semantics must be learned). We address this using a pair of domain projection networks that map features between the two spaces. Specifically, we obtain feature samples from the Gaussian Prototypes of each class (called proxy-source samples). Thereafter, we formalize the following losses to minimize the source risk (the first term in Eq. 1) during adaptation,
where $d$ is the euclidean distance and the output is the concatenation (Fig. 2B) of the logits of the shared classes (from $C_s$) and those of the target-private classes (from $C_t$). The total loss acts as a regularizer that preserves the semantics of the learned classes in the target latent space, while preventing degenerate solutions. In Sec. 4, we show that this regularization mitigates catastrophic forgetting (by minimizing the source risk in Eq. 1) that would otherwise occur in a source-free scenario.
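Since the adaptation stage is source-free, the proxy-source samples above are simply draws from the stored class-specific Gaussian Prototypes, standing in for the unavailable source data. A minimal sketch under that assumption (illustrative names; the actual sampler may differ):

```python
import numpy as np

def sample_proxy_source(class_protos, n_per_class, seed=0):
    """Draw proxy-source features from each class-specific Gaussian Prototype.

    class_protos: {class: (mu, cov)}. Returns (features, labels).
    """
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for c, (mu, cov) in class_protos.items():
        feats.append(rng.multivariate_normal(mu, cov, size=n_per_class))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)
```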
c) Semantic alignment using guides. We aim to align target samples from the shared classes with the high source-density region (the proxy-source samples) and disperse the target-private samples into the low source-density region (i.e. the negative regime). Note, as the source model was trained on the source distribution augmented with the negative samples, this process entails the minimization of the distribution shift (Eq. 1) measured between the target and the augmented source distributions in the latent space.
To achieve this, we obtain a set of guides (one per class) that act as representative centers for each class in the target latent space. We model the euclidean distance to a guide as a measure of class confidence, using which we can assign a pseudo class-label to the target samples. These pseudo-labels can be leveraged to rearrange the target features into separate compact clusters (Fig. 3B-F). Note that the class separability objective enforced during source training is crucial to improve the reliability of the guides during adaptation.
We consider the features of the one-shot target-private samples as the guides for the target-private classes. Further, since $F_t$ is initialized from $F_s$, one might consider the source class-means as the guides for the shared classes. However, we found that such a fixed guide representation hinders the placement of the target-private classes. Thus, we obtain trainable guides for the shared classes, initialized at the source class-means, by allowing the training to modify the placement of these guides in the target latent space (Fig. 3). This allows all the guides to rearrange and steer the target clusters as the training proceeds. To summarize, the guides for the shared classes are trainable, while the guides for the target-private classes are the frozen features of the one-shot samples.
To minimize the distribution shift (Eq. 1), we must first detect the target-shared and target-private samples, and then perform feature alignment. To this end, for a target feature $u = F_t(x_t)$, we obtain the euclidean distance to its nearest guide, $g^* = \mathrm{argmin}_g\, d(u, g)$, and assign a pseudo-label corresponding to the class represented by $g^*$.
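Pseudo-labelling by the nearest guide can be sketched as follows (NumPy, illustrative names):

```python
import numpy as np

def assign_pseudo_labels(target_feats, guides):
    """Assign each target feature the class of its nearest guide.

    target_feats: (N, d); guides: {class: (d,) guide vector}.
    Returns (pseudo_labels, distance to the nearest guide).
    """
    classes = sorted(guides)
    G = np.stack([guides[c] for c in classes])                        # (K, d)
    dists = np.linalg.norm(target_feats[:, None] - G[None], axis=-1)  # (N, K)
    nearest = dists.argmin(axis=1)
    return np.array([classes[i] for i in nearest]), dists.min(axis=1)
```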
Using the pseudo-labeled samples, we obtain Gaussian Prototypes in the target latent space (as done in Sec. 3.1a) and enforce the class separability objective. Further, for each guide, we define a confident set containing the closest few percent of target samples, based on the distance to that guide (see Suppl. for the algorithm). Notionally, the confident set represents the target samples whose pseudo-labels are reliable; these samples are then pulled closer to the corresponding guide. These two losses are defined as,
The total adaptation loss is the sum of these alignment losses and the domain projection losses. Overall, the guide-based alignment pulls the target-shared samples towards the high source-density region and separates the target-private clusters away from it (Fig. 3B-E). This results in a superior alignment, thereby minimizing the distribution shift. In particular, the separation minimizes the negative influence of target-private samples during adaptation, thereby preventing negative transfer, while the class separability objective ensures compact feature clusters, which aids in preserving the semantic granularity across the target classes.
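Selecting the confident set per guide — the closest fraction of pseudo-labeled target samples by distance — might look like the following sketch (illustrative; the paper's exact selection procedure is in its Suppl.):

```python
import numpy as np

def confident_sets(pseudo_labels, dists, fraction):
    """For each pseudo-class, keep the `fraction` of samples closest to its
    guide. dists holds each sample's distance to its nearest guide.
    Returns {class: array of sample indices}."""
    out = {}
    for c in np.unique(pseudo_labels):
        idx = np.flatnonzero(pseudo_labels == c)
        k = max(1, int(np.ceil(fraction * len(idx))))
        ordered = idx[np.argsort(dists[idx])]   # closest samples first
        out[int(c)] = ordered[:k]
    return out
```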
d) Learning target-private classes. Finally, to learn the new target classes, we apply a cross-entropy loss on the confident target samples, where the output is obtained similarly to that in Eq. 5, by concatenating the logits of $C_s$ and $C_t$. We verify in Suppl. that the precision of the pseudo-labels for the confident target samples is high. Thus, this loss, together with the guide-based alignment, can be viewed as conditioning the classifier to deliver a performance close to that of the optimal joint classifier (with the minimal risk $\lambda$).
e) Optimization. We pre-train the domain projection networks to near-identity functions using euclidean reconstruction losses (similar to an auto-encoder). The total loss employed during adaptation tightens the bound in Eq. 1 as argued above, yielding a superior adaptation guarantee. Instead of directly enforcing the total loss at each iteration, we alternately optimize each loss using separate Adam optimizers in a round-robin fashion (i.e. we cycle through the losses and minimize a single loss at each iteration). Since each optimizer minimizes its corresponding loss function independently, the gradients pertaining to each loss are adaptively scaled via the higher-order moment estimates. This allows us to avoid a hyperparameter search for loss scaling. See Suppl. for the training algorithm.
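The round-robin scheme — one optimizer per loss, one loss minimized per iteration — can be illustrated on a toy problem where two losses of very different scales share a parameter; each Adam instance's second-moment estimate normalizes its own gradients, so no manual loss weighting is needed. A toy sketch (not the paper's training loop; the minimal Adam below is our own):

```python
import numpy as np
from itertools import cycle

class Adam:
    """Minimal Adam state for a single scalar parameter."""
    def __init__(self, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = self.v = 0.0
        self.t = 0

    def step(self, theta, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return theta - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Two toy losses over a shared parameter (both minimized at theta = 1),
# with gradients differing in scale by 100x; each loss gets its OWN optimizer.
losses = [lambda th: (th - 1.0) ** 2, lambda th: 100.0 * (th - 1.0) ** 2]
grads = [lambda th: 2.0 * (th - 1.0), lambda th: 200.0 * (th - 1.0)]

theta = 5.0
opts = [Adam(), Adam()]
# Round-robin: cycle through the losses, one loss per iteration.
for _, k in zip(range(2000), cycle(range(len(losses)))):
    theta = opts[k].step(theta, grads[k](theta))
```

Despite the 100x gradient-scale mismatch, both optimizers take steps of comparable magnitude, which is the loss-scaling-free behavior the paragraph above describes.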
4 Experiments
We conduct experiments on three datasets. Office is the most popular benchmark, containing 31 classes across 3 domains - Amazon (A), DSLR (D) and Webcam (W). VisDA contains 12 classes with 2 domains - Synthetic (Sy) and Real (Re) - with a large domain-shift. The Digits dataset is composed of the MNIST (M), SVHN (S) and USPS (U) domains. See Suppl. for label-set details.
a) Evaluation. We consider two setups for the target-private samples - i) one-shot, and ii) few-shot (5% labeled). In both cases, we report the mean target accuracy over all target classes (ALL) and over the target-private classes (PRIV), across 5 separate runs (with randomly chosen one-shot and few-shot samples). We compare against the prior UDA methods DANN, OSBP, UAN, STA, and the CI methods E2E, LETR, iCaRL, LwF-MC, LwM. To evaluate UDA methods in CIDA, we collect the target samples predicted as unknown after adaptation. We annotate a few of these samples following the few-shot setting, and train a separate target-private classifier (TPC) similar in architecture to the source classifier. At test time, a target sample is first classified by the adapted model, and if predicted as unknown, it is further classified by the target-private classifier. We evaluate the prior arts only in the few-shot setting, since they require labeled samples for a reliable model upgrade.
b) Implementation. See Suppl. for the architectural details and an overview of the training algorithms for each stage. A fixed learning rate is used for the Adam optimizers (see Suppl. for the value). For source-model training, we use an equal number of source and negative samples per batch. For adaptation, the fraction of confident samples per guide is fixed (see Suppl.). At test time, the prediction for a target sample is obtained as the argmax over the concatenated logits of the shared and target-private classes.
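The test-time rule — argmax over the concatenation of shared-class and target-private logits — is essentially a one-liner (illustrative sketch):

```python
import numpy as np

def predict(shared_logits, private_logits):
    """Concatenate shared-class and target-private logits and take the argmax.

    Predicted indices >= len(shared_logits) denote target-private classes.
    """
    logits = np.concatenate([shared_logits, private_logits])
    return int(np.argmax(logits))
```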
a) Baseline Comparisons. To empirically verify the effectiveness of our approach, we implement the following baselines. See Suppl. for illustrations of the architectures. The results are summarized in Table 2.
i) Ours-a: To corroborate the need for a target-specific feature space, we remove the target-specific feature extractor and discard the corresponding alignment loss. Here, the latent space is common to both the target and the proxy-source samples. Thus, the guides for the shared classes are the fixed source class-means, leaving only the classifier networks as trainable components. In doing so, we force the target classes to acquire the semantics of the source domain, which hinders the placement of target-private classes and degrades the target-private accuracy. In contrast, in our approach (Ours), the trainable guides allow the rearrangement of features, which effectively minimizes the distribution shift (in Eq. 1).
ii) Ours-b: To study the regularization effect of the sampled proxy-source features, we modify our approach by removing the proxy-source regularization loss. We observe a consistent degradation in performance, resulting from a lower target-shared accuracy. This verifies the role of the proxy-source samples in mitigating catastrophic forgetting (i.e. by minimizing the source risk in Eq. 1).
iii) Ours-c: We modify our approach by removing the class separability objective that produces compact target clusters. We find that the target-private accuracy decreases, verifying the need for compact clusters to preserve the semantic granularity across the target classes. Note that Ours-c (having trainable guides for the shared classes) outperforms Ours-a (having frozen guides), even in the absence of the class separability objective.
iv) Ours-d: To establish the reliability of the Gaussian Prototypes, we perform CIDA using the source dataset, i.e. using the actual source features instead of the sampled proxy-source features. The performance is similar to Ours, confirming the efficacy of the Gaussian Prototypes in modelling the source distribution. This is owed to the class separability objective, which enhances the reliability of the Gaussian approximation.
[Table: comparison of methods on the Digits and VisDA tasks, under the few-shot (5% labeled) and one-shot target-private settings.]
b) Comparison against prior arts. We compare against prior UDA and CI approaches in Table 3. Further, we run a variation of our approach with few-shot (5% labeled) target-private samples (Ours*), where the guides for the target-private classes are obtained as the class-wise mean features of the few-shot samples.
UDA methods exploit unlabeled target samples but require access to labeled source samples during adaptation. They achieve a low target-private (PRIV) accuracy owing to the loss of semantic granularity. This effect is most evident in the open-set methods, where target-private samples are forced to cluster into a single unknown class. However, in DANN and UAN, such a criterion is not enforced; instead, a target sample is detected as unknown using confidence thresholding. Thus, DANN and UAN achieve a higher PRIV accuracy than STA and OSBP.
The performance of most CI methods in CIDA is limited due to their inability to address domain-shift. LETR, E2E and iCaRL require labeled samples from both the domains during the model upgrade. E2E exploits these labeled samples to re-train the source-trained model over all classes (both shared and target-private). However, the need to generalize across two domains degrades the performance on the target domain, where the target-shared samples are unlabeled. In contrast, LwM and LwF-MC learn a separate target model, by employing a distillation loss using the target samples. However, distillation is not suitable under a domain-shift, since the source model is biased towards the source domain characteristics that cannot be generalized to the target domain. In LETR, the global domain statistics across the two domains are aligned. However, such a global alignment is prone to the negative influence of target-private samples, which limits its performance.
Our method addresses these limitations and outperforms both UDA and CI methods. The foresighted source-model training suppresses domain and category bias by addressing the overconfidence issue. Then, a gradual rearrangement of features in a target-specific semantic space allows the learning of target-private classes while preserving the semantic granularity. Furthermore, the regularization from the proxy-source samples mitigates catastrophic forgetting. Thus our approach achieves a more stable performance in CIDA, even in the challenging source-free scenario. See Suppl. for a discussion from the theoretical perspective.
c) Effect of class separability objective. We run an ablation on the A→D task (Table 3) without enforcing the class separability objective during source-model training. The accuracy post adaptation is 68.6% (PRIV = 70.4%), as compared to 72.2% (PRIV = 72.6%) in Ours. This suggests that the class separability objective (which enforces the Cluster Assumption) helps in generalization to the target domain.
d) Effect of negative training. On the A→D task (Table 3), a source-model trained with negative samples achieves a source accuracy of 96.7%, while one trained without negative samples yields 96.9%. Thus, there is no significant drop in source performance due to negative training. However, negative training aids in generalizing the model to novel target classes: a source-model trained with negative samples (Ours) yields 72.2% (PRIV = 72.6%) after adaptation, while one trained without negative samples achieves 67.4% (PRIV = 62.3%). The performance gain in Ours is attributed to the mitigation of the overconfidence issue, which allows target-private samples to be classified reliably.
e) Sensitivity to hyperparameters. In Fig. 4, we plot the target accuracy post adaptation for various hyperparameter values on the task A→D. Empirically, we found that a 3-σ confidence interval for negative sampling was most effective in capturing the source distribution (Fig. 4A). We choose an equal number of source and negative samples in a batch during source training to avoid the bias caused by imbalanced data; Fig. 4C shows the sensitivity to this batch-size ratio. Further, performance is marginally stable around the hyperparameter value used across all experiments (Fig. 4D). Finally, the trend in Fig. 4B is a result of the challenging one-shot setting.
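As an illustration of the 3-σ rule above, negative samples can be drawn from a widened proposal distribution and kept only if they fall outside the 3-σ interval of a class-wise Gaussian. The following NumPy sketch (the function name and the 2× widening factor are our assumptions) conveys the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_negatives(mean, std, n, k=3.0):
    """Rejection-sample points lying outside the k-sigma confidence
    interval of a class-wise Gaussian N(mean, std^2) (illustrative)."""
    out = []
    while len(out) < n:
        # propose from a widened Gaussian so rejections are not too frequent
        x = rng.normal(mean, std * 2.0, size=(4 * n, mean.shape[0]))
        mask = np.any(np.abs(x - mean) > k * std, axis=1)  # outside k-sigma in some dim
        out.extend(x[mask])
    return np.asarray(out[:n])

# negatives for a class centered at the origin with unit variance
negs = sample_negatives(np.zeros(2), np.ones(2), 16)
```

Such samples populate the low-density regions between class clusters, which is where the negative outputs of the classifier receive supervision.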
f) Two-step model upgrade. We extend our approach to perform a two-step model upgrade under CIDA on Office (see Suppl. for details). First, a source model is trained on the 10 classes of Amazon (A) and upgraded to the 20 classes of DSLR (D), yielding a DSLR-specific model. We then upgrade this model to the Webcam (W) domain, which has 20 classes shared with (A+D) and 11 new classes, by learning a new feature extractor, a classifier, and domain-projection networks between the latent spaces of the two models. We observe an accuracy of 79.9% on W, which is close to that obtained by directly adapting from the 20 classes of DSLR to the 31 classes of Webcam (80.3%, Table 2). This corroborates the practical applicability of our approach to multi-step model upgrades. See Suppl. for a detailed discussion.
We proposed a novel Domain Adaptation paradigm (CIDA) addressing class-incremental learning in the presence of a domain-shift. We studied the limitations of prior approaches in the CIDA paradigm and proposed a two-stage approach to address it. We presented a foresighted source-model training that facilitates a source-free model upgrade. Further, we demonstrated the efficacy of a target-specific semantic space, learned using trainable guides, that preserves the semantic granularity across the target classes. Finally, our approach shows promising results on multi-step model upgrades. As future work, the framework can be extended to scenarios where a series of domain-shifts and task-shifts are observed.
Acknowledgement. This work is supported by a Wipro PhD Fellowship and a grant from Uchhatar Avishkar Yojana (UAY, IISC_010), MHRD, Govt. of India.
-  (2019) Learning factorized representations for open-set domain adaptation. In ICLR, Cited by: §1.
-  (2010) A theory of learning from different domains. Machine learning 79 (1-2), pp. 151–175. Cited by: §1, §2.
-  (2007) Analysis of representations for domain adaptation. In NeurIPS, Cited by: §1.
-  (2018) End-to-end incremental learning. In ECCV, Cited by: Table 1, §1, §2, §4.
-  (2019) Domain-specific batch normalization for unsupervised domain adaptation. In CVPR, Cited by: §1.
-  (2005) Semi-supervised classification by low density separation.. In AISTATS, Cited by: §3.1.
-  (2019) Learning without memorizing. In CVPR, Cited by: Table 1, §1, §1, §1, §2, §4.
-  (2018) Domain adaption in one-shot learning. In ECML-PKDD, Cited by: §2.
-  (2017) Gaussian prototypical networks for few-shot learning on omniglot. arXiv preprint arXiv:1708.02735. Cited by: §3.1.
-  (2015) Unsupervised domain adaptation by backpropagation. In ICML, Cited by: §1.
-  (2016) Domain-adversarial training of neural networks. JMLR 17 (1), pp. 2096–2030. Cited by: Table 1, §2, §4.
-  (2012) Geodesic flow kernel for unsupervised domain adaptation. In CVPR, Cited by: §1.
-  (2013) An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211. Cited by: §3.2.
-  (2005) Semi-supervised learning by entropy minimization. In NeurIPS, Cited by: §3.1.
-  (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In ICCV, Cited by: §1.
-  (2017) A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, Cited by: §3.1.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §3.1.
-  (2012) Undoing the damage of dataset bias. In ECCV, Cited by: §1, item 1.
-  (2014) Adam: a method for stochastic optimization. In ICLR, Cited by: §3.1, §3.2, §4.
-  (2019) GAN-Tree: an incrementally learned hierarchical generative framework for multi-modal data distributions. In ICCV, Cited by: §2.
-  (2019) UM-Adapt: unsupervised multi-task adaptation using adversarial cross-task distillation. In ICCV, Cited by: §1.
-  (2018) AdaDepth: unsupervised content congruent adaptation for depth estimation. In CVPR, Cited by: §1.
-  (2020) Universal source-free domain adaptation. In CVPR, Cited by: §1, §1, §3.
-  (2020) Towards inheritable models for open-set domain adaptation. In CVPR, Cited by: §1, §1, §3.1, §3.
-  (2019) Unsupervised domain adaptation based on source-guided discrepancy. In AAAI, Cited by: §1, §2.
-  (2013) Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning at ICML, Cited by: §3.2.
-  (2018) Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR, Cited by: item 1, §3.1.
-  (2017) Learning without forgetting. TPAMI 40 (12), pp. 2935–2947. Cited by: §1, §1, §1, §2, §3.
-  (2019) Separate to adapt: open set domain adaptation via progressive separation. In CVPR, Cited by: Table 1, §2, §2, §3.2, §3.2, §4.
-  (2015) Learning transferable features with deep adaptation networks. ICML. Cited by: §2.
-  (2016) Unsupervised domain adaptation with residual transfer networks. In NeurIPS, Cited by: §1.
-  (2017) Data-free knowledge distillation for deep neural networks. In LLD Workshop at NeurIPS, Cited by: §1.
-  (2017) Label efficient learning of transferable representations across domains and tasks. In NeurIPS, Cited by: Table 1, §2, §4.
-  (2019) Zero-shot knowledge distillation in deep networks. In ICML, Cited by: §1.
-  (2010) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering. Cited by: §2.
-  (2017) Open set domain adaptation. In ICCV, Cited by: §1, §1.
-  (2018) Multi-adversarial domain adaptation. In AAAI, Cited by: §1.
-  (2017) Incrementally learning the hierarchical softmax function for neural language models. In AAAI, Cited by: §1.
-  (2017) VisDA: the visual domain adaptation challenge. arXiv preprint arXiv:1710.06924. Cited by: §4.
-  (2017) Regularizing neural networks by penalizing confident output distributions. In ICLR, Cited by: §3.1.
-  (2017) iCaRL: incremental classifier and representation learning. In CVPR, Cited by: Table 1, §1, §2, §4.
-  (2001) Incremental learning with support vector machines. In ICDM, Cited by: §1.
-  (2010) Adapting visual category models to new domains. In ECCV, Cited by: §1, §2, §4.
-  (2018) Maximum classifier discrepancy for unsupervised domain adaptation. In CVPR, Cited by: §1.
-  (2018) Open set domain adaptation by backpropagation. In ECCV, Cited by: Table 1, §1, §3.2, §4.
-  (2018) Generate to adapt: aligning domains using generative adversarial networks. In CVPR, Cited by: §2.
-  (2019) Transferable curriculum for weakly-supervised domain adaptation. In AAAI, Cited by: §3.2.
-  (2017) Prototypical networks for few-shot learning. In NeurIPS, Cited by: §1, item 1, §3.1.
-  (2016) Deep coral: correlation alignment for deep domain adaptation. In ECCV Workshops, Cited by: §2.
-  (2011) Unbiased look at dataset bias. In CVPR, Cited by: §1.
-  (2017) Adversarial discriminative domain adaptation. In CVPR, Cited by: §1.
-  (2014) Deep domain confusion: maximizing for domain invariance. arXiv preprint arXiv:1412.3474. Cited by: §1, §2.
-  (2019) Large scale incremental learning. In CVPR, Cited by: §1.
-  (2019) Universal domain adaptation. In CVPR, Cited by: Table 1, §1, §2, §4.
-  (2018) Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks. In NeurIPS, Cited by: §3.1.