Domain Adaptation (DA) methods aim to train a target-domain classifier with data from source and target domains [lu2015transfer]. Depending on the availability of labels in the target domain (i.e., fully-labeled, partially-labeled, and unlabeled), DA methods fall into three categories: supervised DA [motiian2017unified, zuo2018fuzzy01, zuo2017granular], semi-supervised DA [pereira2018semi, saito2019semi, zuo2018fuzzy02], and unsupervised DA (UDA) [liu2017heterogeneous, fang2019unsupervised]. In practice, UDA methods have been deployed to solve diverse real-world problems, such as object recognition [gopalan2011domain, kan2014domain], cross-domain recommendation [zhang2017cross], and sentiment analysis [liu2020heterogeneous].
There are two common settings in UDA: unsupervised closed set domain adaptation (UCSDA) and unsupervised open set domain adaptation (UOSDA). UCSDA is a classical scenario in which the source and target domains share the same label sets. By contrast, in UOSDA, the target domain contains some unknown classes that are not observed in the source domain, and the data with unknown classes are called unknown target data. In Fig. 1, the source domain contains four known classes (i.e., monitor, mug, stapler, and calculator), but the target domain contains some unknown classes in addition to the classes in the source domain.
UOSDA is more general than UCSDA, since the label sets of the source and target domains are usually not consistent in real-world scenarios. Namely, the target domain may contain classes that are not observed in the source domain. For example, a classifier trained with images of various kinds of cats is likely to encounter images of dogs or other animals in practice. In this case, UCSDA methods cannot recognize the unseen animals (i.e., unknown classes) as unknown. UOSDA methods, however, can establish a boundary between known classes and unknown classes.
Panareda Busto et al. [panareda2017open] were the first to propose the UOSDA setting, although in their formulation the source domain also contains some unknown classes. Since it is expensive and often prohibitive to obtain source data labeled as unknown classes, Saito et al. [saito2018open] proposed a new UOSDA setting in which the source domain only contains known classes. In this paper, we focus on the same setting as Saito et al., which is more realistic [saito2018open, fang2019open].
In UOSDA, we aim to train a target-domain classifier with labeled data in the source domain and unlabeled data in the target domain. The trained classifier is expected to accurately 1) recognize unknown target data, and 2) classify other target data. Existing UOSDA methods can be divided into two groups: shallow methods and deep methods. On the shallow side, a recent work [fang2019open] proved an upper bound on the target-domain risk, which provides a theoretical guarantee for the design of a shallow UOSDA method. On the deep side, since [long2013transfer, yosinski2014transferable, DBLP:conf/icml/DonahueJVHZTD14] have shown that DNNs can learn more transferable features, researchers have presented DNN-based methods to address the UOSDA problem [saito2018open, feng2019attract, liu2019separate]. Nevertheless, these deep UOSDA methods lack theoretical guarantees. Thus, bridging the theoretical bound and deep algorithms is both necessary and important for addressing the UOSDA problem.
To train an effective target-domain classifier, Fang et al. [fang2019open] proved an upper bound on the target-domain risk (Eq. (14)) for the UOSDA problem and proposed a shallow UOSDA method. Specifically, the bound consists of four terms: the source-domain risk, the distributional discrepancy between domains, the open set difference, and a constant. The open set difference, an important term in the upper bound, measures the risk of a classifier on unknown target data. The shallow method in [fang2019open] trains a target-domain classifier by minimizing the empirical estimate of the upper bound.
However, the theoretical bound presented in [fang2019open] is not adaptable to flexible classifiers such as deep neural networks (DNNs). In Fig. 2, we show that if the classifier is a DNN, the target-domain accuracy (OS in Fig. 2(b)) drops significantly (yellow line in Fig. 2(b)) when the empirical estimate of the upper bound is minimized. This phenomenon confirms that we cannot simply combine the existing theoretical bound with deep algorithms to address the UOSDA problem.
To reveal the nature of this phenomenon, we show that the distributional discrepancy is bounded from below by the negative of the open set difference. Since DNNs are very flexible and the empirical open set difference can be negative, it is quickly minimized to a large negative value (yellow line in Fig. 2(a)). By the lower bound above, if the empirical open set difference is a large negative number, the distributional discrepancy must exceed a large positive number. Consequently, we fail to align the distributions of the two domains, resulting in very low accuracy on the target domain (yellow line in Fig. 2(b)).
In this paper, we propose a new upper bound on the target-domain risk for UOSDA (Eq. (20)), which includes four terms: the source-domain risk, the -open set difference, the conditional distributional discrepancy between domains, and a constant. The -open set difference is a new risk estimator that limits the descent of the open set difference by a small negative constant, which promptly prevents the lower bound of the distributional discrepancy between the two domains from increasing significantly. Fig. 2 shows that minimizing the empirical estimate of the new upper bound achieves higher accuracy (green line in Fig. 2(b)).
We then propose a principle-guided deep UOSDA method that trains DNNs by minimizing the empirical estimate of the new upper bound. The network structure is shown in Fig. 3. We employ a generator to extract features from the input data, a classifier to classify the input data, and a domain discriminator to assist distribution alignment. The overall objective function consists of the source classification loss, a binary adversarial loss, a domain adversarial loss, and the empirical -open set difference. Specifically, the source classification loss and the empirical -open set difference are minimized by gradient descent, and a gradient reversal layer is adopted for the adversarial losses.
To effectively align the distributions of data from the known classes, we propose a novel open-set conditional adversarial training strategy based on the tensor product between the feature representation and the label prediction, which captures the multimodal structure of the distributions. According to [song2009hilbert, long2018conditional], capturing the multimodal structure of distributions via the cross-covariance dependency between features and classes is essential. However, existing deep UOSDA methods align distributions with either a binary adversarial net [saito2018open, feng2019attract] or a multi-binary classifier [liu2019separate], which is not adequate for distributions with multimodal structure. Furthermore, the novel training strategy also pushes unknown target data away from data of the known classes. As shown in Fig. 2(b), the novel distribution alignment strategy further boosts the performance of the classifier.
To validate the efficacy of the proposed method, we conduct extensive experiments on several standard benchmark datasets covering a large number of transfer tasks. Compared to existing shallow and deep UOSDA methods, our method shows state-of-the-art performance on digit recognition (MNIST, SVHN, USPS), object recognition (Office-31, Office-Home), and face recognition (PIE). The main contributions of this paper are:
A new theoretical bound on the target-domain risk for UOSDA is proposed. It is essential since the existing bound does not apply to flexible classifiers (e.g., DNNs). This work thus bridges the gap between the existing theoretical bound and deep algorithms for the UOSDA problem.
A DNN-based UOSDA method is proposed under the guidance of the new theoretical bound. With this guarantee, the method estimates the risk of the classifier on unknown data better than existing deep methods.
A novel open-set conditional adversarial training strategy is proposed to ensure that our method aligns the distributions of the two domains better than existing UOSDA methods.
Experiments on Digits, Office-31, Office-Home, and PIE show that the OS accuracy of our method significantly outperforms all baselines, demonstrating state-of-the-art performance.
This paper is organized as follows. Section II reviews the works related to UCSDA, open set recognition, and UOSDA. Section III introduces the definitions of notations and our problem. Section IV demonstrates the motivation of this paper. Theoretical results and the proposed method are shown in Section V. Experimental results and analyses are provided in Section VI. Finally, Section VII concludes this paper.
II Related Work
Unsupervised open set domain adaptation is a combination of unsupervised closed set domain adaptation and open set recognition. In this section, we present a systematic review of related studies.
II-A Closed Set Domain Adaptation
In [ben2007analysis], a theoretical bound for UCSDA is given, which indicates that minimizing the source risk and the distributional discrepancy is the key to the UCSDA problem. Accordingly, there are two kinds of UCSDA methods: one employs a statistical discrepancy measure to quantify the domain gap [pan2010domain]; the other adopts an adversarial training strategy [long2018conditional].
Transfer Component Analysis (TCA) [pan2010domain] utilizes MMD [gretton2012kernel] to learn domain-invariant features by aligning the marginal distributions. Joint Distribution Adaptation (JDA) [long2013transfer] aligns the marginal and conditional distributions simultaneously. To simplify the training of a classifier, Easy Transfer Learning (EasyTL) [wang2019easy] exploits intra-domain information to obtain a non-parametric feature transformation and classifier. CORrelation ALignment (CORAL) [sun2016return] aligns the second-order statistics of the source and target domains to minimize the domain divergence. Manifold Embedded Distribution Alignment (MEDA) [wang2018visual] performs dynamic distribution alignment in a Grassmann manifold subspace.
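As an illustration of the MMD criterion used by TCA and JDA, the following is a minimal numpy sketch of the (biased) empirical squared MMD with an RBF kernel; the bandwidth `gamma` is an arbitrary choice for illustration, not a value from the cited papers.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of the squared MMD between samples X and Y
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())
```

Identical sample sets give an MMD of zero, while shifting one set apart makes the estimate strictly positive, which is exactly the signal these methods minimize.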
Meanwhile, deep neural networks have also been introduced into domain adaptation and achieve competitive performance in UCSDA. Deep Adaptation Networks (DAN) [long2015learning] employ the multi-kernel MMD (MK-MMD) to align the features of layers 6-8 in AlexNet. Deep CORAL extends the shallow CORAL method to deep neural networks. Wasserstein Distance Guided Representation Learning (WDGRL) [shen2018wasserstein] employs the Wasserstein distance to learn an invariant representation in deep neural networks.
Representative adversarial-training-based methods are Domain-Adversarial Training of Neural Networks (DANN) [ganin2016domain] and Conditional Adversarial Domain Adaptation (CDAN) [long2018conditional]. DANN employs a domain discriminator to recognize which domain each sample comes from and deceives the discriminator by adapting the features, so that an invariant representation is learned during the adversarial process. CDAN further utilizes the tensor product between the feature and the classifier prediction to capture multimodal information, together with an entropy condition to control the uncertainty of the classifier. However, these methods can only cope with the UCSDA problem and are unable to address the UOSDA problem.
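The gradient reversal trick used by DANN can be sketched as a layer that is the identity in the forward pass and flips the gradient's sign in the backward pass. The toy numpy version below (with an assumed reversal weight `lam`) only illustrates the mechanism, not an actual DANN implementation.

```python
import numpy as np

class GradReverseLayer:
    """Identity in the forward pass; multiplies the gradient by -lam backward."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        # The discriminator's gradient is reversed before reaching the feature
        # extractor, so minimizing the discriminator loss over its own weights
        # simultaneously maximizes it over the feature extractor's weights.
        return -self.lam * np.asarray(grad_output)
```

This single sign flip is what turns an ordinary classification loss into a minimax game without an explicit alternating-optimization loop.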
II-B Open Set Recognition
Open set recognition allows unknown classes to appear at test time, but assumes no distributional discrepancy between the training and test data. Open Set SVM [jain2014multi] rejects the unknown classes via a fixed threshold. Open Set Nearest Neighbor (OSNN) [junior2017nearest] extends the nearest neighbor classifier to recognize unknown classes. Bendale et al. [bendale2016towards] introduce a layer named OpenMax to estimate the probability that an input sample belongs to an unknown class in DNNs. However, these methods do not consider the distributional discrepancy, so they are also unable to address the UOSDA problem.
II-C Open Set Domain Adaptation
Panareda Busto et al. [panareda2017open] were the first to propose the UOSDA setting. They employed a method named Assign-and-Transform-Iteratively (ATI) to assign labels to target data using a distance matrix between the target data and the source class centers, and aligned the distributions through a mapping matrix. In their setting, however, the source domain contains some unknown classes to help the classifier recognize unknown data. Since obtaining unknown samples for the source domain is expensive and time-consuming, Open Set Backpropagation (OSBP) [saito2018open] assumes a more realistic and more challenging scenario in which the source domain has no unknown classes. An adversarial network is used to recognize unknown samples and align the distributions during backpropagation.
Based on OSBP, Feng et al. [feng2019attract] proposed a method named SCI_SCM, which utilizes the semantic structure among data to align the distributions of the known classes and push the unknown classes away from the known classes. Separate to Adapt (STA) [liu2019separate] utilizes a coarse-to-fine weighting mechanism to separate unknown samples from the target domain. In Distribution Alignment with Open Difference (DAOD) [fang2019open], a theoretical bound is proposed for UOSDA and a risk estimator is used to recognize unknown target data.
However, existing deep UOSDA methods lack theoretical guidance, and the upper bound in [fang2019open] is not applicable to DNNs, which causes a large distributional discrepancy (details are given in Section IV). Clearly, for UOSDA, there is a gap between the existing theoretical bound and deep algorithms. In this paper, we aim to fill this gap.
III Preliminary and Notations
The definitions of the UOSDA problem and some important concepts are introduced in this section. The notations used in this paper are summarized in Table I.
III-A Definitions and Problem Setting
Important definitions are presented as follows.
Definition 1 (Domain [fang2019open]).
Given a feature space and a label space, a domain is a joint distribution over the two spaces, together with the associated random variables on the feature space and the label space.
In Definition 1, the feature space and the label space contain the image sets of the respective random variables. In this paper, we call the random variable on the feature space the feature vector and the random variable on the label space the label. Based on this definition, we have:
Definition 2 (Domains for Open Set Domain Adaptation [fang2019open]).
Given a feature space and source and target label spaces, the source and target domains are given by different joint distributions, where the source and target feature vectors take values in the shared feature space and the source label space is contained in the target label space.
From the definitions above, we note that: 1) this paper focuses on the homogeneous situation, so the source and target feature vectors belong to the same feature space; and 2) the target label space contains the source label space; the unknown target classes are the classes appearing only in the target label space, while the known classes are the classes from the source label space. Thus, the UOSDA problem is:
Problem 1 (Unsupervised Open Set Domain Adaptation (UOSDA) [fang2019open]).
Given labeled samples drawn i.i.d. from the joint distribution of the source domain and unlabeled samples drawn i.i.d. from the marginal distribution of the target domain, the aim of UOSDA is to find a target classifier such that
1) classifies the known target samples into the correct known classes;
2) recognizes the unknown target samples as unknown.
According to the problem definition, the target-domain classifier only needs to recognize unknown target data as unknown and classify the other target data. It is not necessary to classify unknown target data further: all unknown target data are recognized as a single "unknown class". In general, each label is represented as a one-hot vector, with one additional coordinate reserved for the unknown class; the vector with a one in the k-th position denotes the k-th class.
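As a concrete sketch of this labeling convention (placing the unknown class at the last coordinate is our assumption for illustration): with C known classes, labels are one-hot vectors of length C + 1, and the extra coordinate marks the unknown class.

```python
import numpy as np

def one_hot(k, num_known):
    """One-hot label for class k in {1, ..., num_known + 1};
    class num_known + 1 is the single 'unknown' class."""
    y = np.zeros(num_known + 1)
    y[k - 1] = 1.0
    return y
```

For example, with three known classes, `one_hot(4, 3)` marks a sample as belonging to the unknown class.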
Table I summarizes the notation used in this paper: the feature space; the source and target joint and marginal distributions; the source and target label sets; the random variables on the feature and label spaces; the open set difference; the source and target risks; the partial risks on the known and unknown target classes; the one-hot class vectors; the feature transformation and classifier; the risks of samples being regarded as unknown; the hypothesis space (a set of classifiers); the class-prior probability of the unknown class; samples and the corresponding empirical distributions and risks; and the discrepancy distance and tensor discrepancy distance.
III-B Concepts and Notations
It is necessary to introduce some important concepts and notations before demonstrating our main results. Unless otherwise specified, all the following notations are used consistently throughout this paper without further explanations.
III-B1 Notations for distributions
For simplicity, we use shorthand notation for the source and target joint distributions and the corresponding marginal distributions. We also write the target conditional distribution restricted to the known classes and restricted to the unknown classes, and denote the class-prior probability of the unknown target classes. Given a feature transformation, these distributions induce corresponding distributions in the transformed feature space. Lastly, a hat over a distribution denotes the corresponding empirical distribution.
III-B2 Risks and Partial Risks
In learning theory, risks and partial risks are two important concepts, which are briefly explained below.
Following the notation in [DBLP:conf/icml/0002LLJ19], consider a multi-class classification task with a hypothesis space of classifiers and a loss function. For convenience, we also require the loss to satisfy the following conditions in Theorem 1:
1. the loss is symmetric and satisfies the triangle inequality;
2. the loss is zero if and only if its two arguments are equal;
3. the loss equals one whenever its two arguments are distinct one-hot vectors.
Many losses satisfy these conditions, such as the 0-1 loss.
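These conditions can be checked numerically. The sketch below uses a halved L1 distance as a plausible example (our choice for illustration, since the concrete losses are garbled in the extracted text): it is symmetric, obeys the triangle inequality, vanishes only at equality, and reduces to the 0-1 loss on one-hot vectors.

```python
import numpy as np

def loss(y, yp):
    # Halved L1 distance between label vectors; equals the 0-1 loss on one-hots
    return 0.5 * np.abs(np.asarray(y) - np.asarray(yp)).sum()

e = np.eye(3)  # one-hot labels of a 3-class problem
```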
Then the risk of a classifier is its expected loss with respect to the source or target joint distribution. The partial risk of the classifier for the known target classes is its expected loss on target data from the known classes, and the partial risk for the unknown target classes is its expected loss on target data from the unknown classes. Lastly, we denote the risks that source and target samples are regarded as the unknown class. Given a risk, it is convenient to denote the corresponding empirical risk with a hat.
III-B3 Discrepancy Distance
Measuring the difference between domains plays a critical role in domain adaptation. To this end, a well-known distribution distance has been proposed as a measure of the difference between distributions.
Definition 3 (Distributional Discrepancy [DBLP:conf/colt/MansourMR09]).
Given a hypothesis space containing a set of functions defined on a feature space, a loss function, and two distributions on that space, the discrepancy distance between the two distributions over the hypothesis space is the largest gap, over all pairs of hypotheses, between the expected losses under the two distributions.
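For intuition, the discrepancy distance of Definition 3 can be estimated over a small finite hypothesis set; the 1-D threshold classifiers and the squared loss below are purely illustrative choices, not part of the paper's setup.

```python
import numpy as np

def disc_distance(Xp, Xq, hyps, loss):
    """max over pairs (h, h') of | E_P loss(h, h') - E_Q loss(h, h') |."""
    best = 0.0
    for h in hyps:
        for h2 in hyps:
            gap = abs(loss(h(Xp), h2(Xp)).mean() - loss(h(Xq), h2(Xq)).mean())
            best = max(best, gap)
    return best

# Illustrative ingredients: 1-D threshold classifiers and the squared loss.
hyps = [lambda x, t=t: (x > t).astype(float) for t in (-0.5, 0.0, 0.5)]
sq_loss = lambda a, b: (a - b) ** 2
```

Two identical samples have zero discrepancy, while a shifted sample set makes some pair of thresholds disagree on the two distributions, yielding a positive distance.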
In this paper, we use a tighter distance named the tensor discrepancy distance, first proposed in [long2018conditional]. The tensor discrepancy distance can further extract the multimodal structure of distributions, so that the knowledge related to the learned classifier and pseudo labels can be utilized during the distribution alignment process.
We consider a tensor mapping that pairs the feature representation with the classifier prediction, which induces two distributions in the tensor space. Using this mapping, we construct a new hypothesis set and define the tensor discrepancy distance between the induced distributions analogously to Definition 3.
It is easy to prove that, under conditions 1-3 on the loss, the tensor discrepancy distance never exceeds the distributional discrepancy for any pair of distributions.
III-B4 Existing Theoretical Bound
Fang et al. [fang2019open] first proved a theoretical bound for UOSDA (Eq. (14)). It contains four main terms: the source risk, the distributional discrepancy, a constant, and the open set difference. The fourth term, the open set difference, is designed to estimate the risk of the classifier on unknown data.
IV Motivation
In UOSDA, the target-domain classifier aims to accurately recognize unknown target data and classify the other target data. Since knowledge about the unknown classes is missing, the classifier is likely to be confused about the boundary between known and unknown target data. Thus, recognizing unknown target data plays a critical role in addressing the UOSDA problem.
To obtain an effective target-domain classifier, Fang et al. [fang2019open] proved an upper bound (Eq. (14)) for UOSDA and proposed a shallow method based on the bound. The bound consists of four terms: the source-domain risk, the distributional discrepancy, the open set difference, and a constant. In particular, the open set difference, as an important term, is leveraged to estimate the risk of the classifier on unknown target data.
To verify whether the open set difference works in DNNs, we introduced it into DNNs and conducted a group of experiments on the task Ar → Cl in Office-Home. The model consists of a backbone (ResNet50), a generator (two linear layers), and a classifier (one linear layer). Such a classifier is evidently very flexible. As shown in Fig. 2, the empirical open set difference converges to a negative value (yellow line in Fig. 2(a)), and the OS accuracy, the average accuracy over all classes including the unknown class (Eq. (29)), decreases significantly as it does so.
To reveal the nature of this phenomenon, we first investigate the distributional discrepancy and discover that it has a lower bound: the distributional discrepancy is greater than the negative of the open set difference (Eq. (18)). Based on this lower bound, if the open set difference is a large negative number, then the distributional discrepancy is greater than a large positive number. Hence, we may fail to align the distributions. Indeed, experiments show that the empirical open set difference may converge to a large negative value if the open set difference is introduced into DNNs.
Clearly, there is a gap between the existing theoretical bound and DNNs. To bridge the theoretical bound and deep algorithms, in this paper we propose a new practical upper bound (Eq. (20)) for UOSDA that applies to DNNs. The -open set difference term in the new bound effectively overcomes the defect of the open set difference. As shown in Fig. 2, the -open set difference guarantees that the estimated risk of the classifier on unknown data never drops below the prescribed lower bound (green line in Fig. 2(a)). Furthermore, the -open set difference significantly outperforms the open set difference (green line in Fig. 2(b)).
To sum up, the existing upper bound is not compatible with DNNs. We therefore propose a new upper bound that contains an amended risk estimator, the -open set difference. Details of the new upper bound and the amended estimator are given in Section V.
V The Proposed Method
In this section, we first propose a theoretical bound for UOSDA that applies to DNNs. Under the guidance of this bound, we then propose a DNN-based UOSDA method.
Table II summarizes the additional notation used in this section: the cross-entropy and mean-square-error loss functions; the sets of predicted unknown and predicted known target data with high confidence; and the numbers of source and target data.
V-A Theoretical Results
V-A1 An Analysis for Open Set Difference
Eq. (15) defines the open set difference, where the two risks involved are defined in Eq. (8). The positive term is used to recognize unknown data, and the negative term is designed to prevent known data from being classified as unknown. By combining these two terms, the classifier can recognize unknown target samples. According to [fang2019open], the open set difference satisfies the following inequality:
The proof of Eq. (16) can be found in Proposition 1 of Appendix A. Note that
hence, the distributional discrepancy is greater than the negative open set difference:
Theoretically, the optimized open set difference should not be a large negative value; otherwise, it is impossible to eliminate the distributional discrepancy. In practice, however, the empirical open set difference may converge to a large negative value (see Fig. 2), so the distributional discrepancy may remain large.
V-A2 -Open Set Difference
Based on the analyses above, we amend the open set difference to avoid the problem mentioned above. According to Eq. (18), the open set difference is bounded from below. One possibility is to limit the open set difference from below by a small negative constant. Hence, we propose an amended risk estimator, the -open set difference, to overcome the defect of the open set difference:
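A minimal way to realize such an amended estimator is to clamp the empirical open set difference from below. The constant's symbol is elided in the extracted text, so `rho` below is purely a placeholder name, and the clamp is one simple sketch of "limiting the descent", not necessarily the paper's exact estimator.

```python
def amended_open_set_difference(delta_hat, rho):
    """Clamp the empirical open set difference so it never drops below -rho.

    delta_hat: empirical open set difference (can be negative for flexible DNNs)
    rho:       small non-negative constant bounding the allowed descent"""
    return max(delta_hat, -rho)
```

With such a surrogate, over-minimizing the estimator can no longer force the lower bound of the distributional discrepancy above the small constant, which is exactly the failure mode observed in Fig. 2.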
If we optimize the empirical -open set difference, it is guaranteed to remain above the prescribed lower bound. Lastly, combining Eqs. (12) and (13) with Eq. (19), we develop a new theoretical bound for UOSDA.
Theorem 1. Given a feature transformation, a loss function satisfying conditions 1-3 introduced in Section III-B2, a non-negative constant, and a hypothesis space satisfying the mild condition that it contains the constant vector-valued functions, the following bound (Eq. (20)) holds for any hypothesis.
The proof is given in Appendix A. ∎
It is notable that the theoretical bound in Theorem 1 has two main differences from the learning bound introduced by [fang2019open]. The first is the -open set difference. As mentioned before, the -open set difference is designed to eliminate the large distributional discrepancy caused by the open set difference when the model is based on DNNs. The second is that we use the tensor distributional discrepancy to estimate the domain difference. The tensor distributional discrepancy has two advantages over the distributional discrepancy (Definition 3): 1) it is tighter (see Eq. (13)); and 2) it can extract the multimodal structure of distributions, so that knowledge related to the learned classifier and pseudo labels can be utilized during distribution alignment [long2018conditional].
V-B Method Description
According to Theorem 1, we formally present our method (see Fig. 3), which consists of three parts. Part 1) Binary adversarial domain adaptation. Following [saito2018open], we employ a binary adversarial module to find a rough boundary between the class-known data (known data) and the class-unknown data (unknown data); this module thus provides high-confidence target samples for the other modules. Part 2) -open set difference. This term is leveraged to estimate the risk of the classifier on unknown data so that the classifier can accurately recognize unknown target data. Part 3) Conditional adversarial domain adaptation. Existing deep UOSDA methods ignore the multimodal structure of the distributions while aligning the distributions of the known classes. Guided by the tensor distributional discrepancy, we design a novel open-set conditional adversarial strategy to align the distributions of the known classes. The notation used in this section is summarized in Table II.
V-B1 Binary adversarial domain adaptation (BADA)
According to our theoretical bound, the first term is the source risk. Since labels are available in the source domain, we utilize the cross-entropy loss for the classification of source samples:
For the target domain, it is imperative to recognize the unknown target data before aligning the distributions. Following [saito2018open], we employ a binary cross-entropy loss and a gradient reversal layer between the generator and the classifier to find a boundary between the known data and the unknown data, where the loss is computed on the classifier's predicted probability for the unknown class. The minimax game is described in Section V-C: during adversarial training, the classifier attempts to minimize this loss, while the generator attempts to maximize it. Recognition of unknown data is thus achieved through the adversarial training process.
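Such a binary objective can be sketched as follows. The boundary value t = 0.5 follows OSBP's choice and is an assumption here: the classifier drives the predicted unknown probability toward t, while the reversed gradient makes the generator push it away from t.

```python
import numpy as np

def binary_adv_loss(p_unknown, t=0.5, eps=1e-12):
    """Binary cross-entropy between the predicted unknown probability and a
    fixed boundary t; minimized (over the classifier) when p_unknown == t."""
    p = np.clip(np.asarray(p_unknown, dtype=float), eps, 1.0 - eps)
    return float(np.mean(-t * np.log(p) - (1.0 - t) * np.log(1.0 - p)))
```

Because the loss is smallest exactly at the boundary, the two players settle on predictions near t, from which a rough known/unknown boundary can be read off.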
However, this module can only find a coarse boundary between the known data and the unknown data, and cannot accurately recognize the unknown target data. Table VI verifies that binary adversarial domain adaptation alone cannot achieve satisfactory performance. Therefore, we employ the -open set difference to recognize unknown target data more accurately, and the open-set conditional adversarial strategy to further align the distributions.
V-B2 -open set difference
The principle of the -open set difference is demonstrated in Sections IV and V-A; here we introduce it to recognize unknown target data. According to Eqs. (19) and (23), we can calculate the empirical -open set difference as follows:
Without additional label information, the class-prior probability of the unknown class in Eq. (19) cannot be evaluated accurately; thus, we replace it with a hyperparameter, whose choice is analyzed in Section VI.
V-B3 Conditional adversarial domain adaptation
Here we utilize the tensor distributional discrepancy to align the distributions of the known classes. First, the empirical representations of the two induced distributions can be written as sums of Dirac measures over the source data and over the set of target data from the known classes.
Then, motivated by DANN [ganin2016domain] and CDAN [long2018conditional], we can reformulate the tensor distributional discrepancy between the known classes in terms of a domain discriminator that is trained to classify domains.
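Concretely, in the CDAN-style formulation the discriminator receives the flattened outer product of the feature and the class prediction, so every feature dimension is modulated by every class probability; a minimal numpy sketch (shapes are illustrative assumptions):

```python
import numpy as np

def tensor_input(features, class_probs):
    """Flattened outer product f(x) ⊗ g(x) used as discriminator input.

    features:    (batch, d) feature vectors from the generator
    class_probs: (batch, c) predicted class probabilities from the classifier
    returns:     (batch, d * c) conditional features"""
    return np.einsum('bd,bc->bdc', features, class_probs).reshape(len(features), -1)
```

Feeding this conditional feature to the discriminator is what lets the adversarial game distinguish the per-class modes of the two distributions rather than only their marginals.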
Since the target data are unlabeled, Eq. (25) cannot be computed directly. We therefore use the pseudo labels provided by BADA in place of the true labels. Since these pseudo labels are not completely accurate, we only select samples with a confidence higher than 0.9. We then formulate the domain adversarial loss below, where the target-domain samples are restricted to the set of high-confidence known-class samples.
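The selection step can be sketched as follows. The 0.9 threshold is from the text; treating the classifier's predicted unknown probability as the confidence score is our simplification for illustration.

```python
import numpy as np

def split_target_by_confidence(p_unknown, tau=0.9):
    """Indices of confidently-known and confidently-unknown target samples."""
    p = np.asarray(p_unknown)
    known_idx = np.where(p <= 1.0 - tau)[0]   # confidently known (p_unknown small)
    unknown_idx = np.where(p >= tau)[0]       # confidently unknown (p_unknown large)
    return known_idx, unknown_idx
```

Samples whose predicted unknown probability falls between the two cut-offs are simply left out of both adversarial losses.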
The domain adversarial loss is minimized over the discriminator and maximized over the generator; the gradient reversal layer between them leaves the discriminator confused about whether features come from the source or the target data. The minimax game is described in Section V-C: the discriminator aims to identify which domain each input belongs to, while the generator aims to deceive it by changing the features of the input data. Distribution alignment is achieved during this process.
Furthermore, the unknown data may disturb the distribution alignment of the known data, so they should be pushed away from the known data. We construct the loss function below over the set of high-confidence unknown target samples; note that there is no gradient reversal between the generator and the discriminator during backpropagation of this loss.
In summary, we construct a domain discriminator that aligns the distributions of the known data via a tensor product, which captures the multimodal structure of the distributions, and an additional loss that pushes the unknown data away from the known data to prevent them from affecting distribution alignment.
V-C Training Procedure
We introduce the gradient reversal layer for adversarial learning. The whole training procedure is shown in Algorithm 1. First, we initialize the parameters of the generator, the classifier, and the domain discriminator (line 1). In each epoch, we divide the data into minibatches (lines 4-5). Then we calculate the source risk, the binary adversarial loss, and the empirical -open set difference according to Eqs. (21), (22), and (23) (lines 6-7). After selecting target samples with high confidence (line 8), we calculate the two domain losses according to Eqs. (26) and (27) (line 9). Finally, the parameters are updated via the SGD optimizer (line 10).
With the proposed method, binary adversarial domain adaptation finds a coarse boundary between known and unknown data; the -open set difference adequately estimates the risk of the classifier on unknown data, enabling the classifier to accurately recognize unknown target data; and the domain discriminator further aligns the distributions of the known data while pushing the unknown data away from the known data. Combining these three modules, we can adequately solve the UOSDA problem.
VI Experiments and Evaluations
In this section, we conduct extensive experiments on standard benchmark datasets to demonstrate the effectiveness of our method. Several state-of-the-art UOSDA methods, such as ATI-λ [panareda2017open], OSBP [saito2018open], SCA_SCM [feng2019attract], STA [liu2019separate], and DAOD [fang2019open], are employed as our baselines.
Digits contains three digit datasets: MNIST (M) [lecun1998gradient], SVHN (S) [netzer2011reading], and USPS (U) [hull1994database]. We construct three open set domain adaptation tasks as in previous works [saito2018open]: S→M, M→U, and U→M. Following the protocol of [saito2018open], we select classes 0-4 as the known classes and classes 5-9 as the unknown classes of the target domain.
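Under this kind of open set protocol, raw target labels are remapped so that every unknown digit shares a single "unknown" index; a minimal sketch, assuming digits 0-4 are the known classes:

```python
KNOWN_CLASSES = set(range(5))       # assumed known classes: digits 0-4
UNKNOWN_LABEL = len(KNOWN_CLASSES)  # one shared index for all unknown digits

def to_open_set_label(y):
    """Map a raw digit label to its open set label (any unknown digit -> 5)."""
    return y if y in KNOWN_CLASSES else UNKNOWN_LABEL

labels = [0, 3, 7, 9, 4]
open_set = [to_open_set_label(y) for y in labels]
```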
Office-31 [saenko2010adapting] is an object recognition dataset consisting of three domains with slight discrepancy: amazon (A), dslr (D), and webcam (W). Each domain contains 31 kinds of objects, so there are six open set domain adaptation tasks on Office-31: A→D, A→W, D→A, D→W, W→A, and W→D. We follow the open set protocol of [saito2018open], selecting the first 10 classes in alphabetical order as the known classes and classes 21-31 as the unknown classes of the target domain.
Office-Home [venkateswara2017deep] is an object recognition dataset containing four domains with more obvious domain discrepancy than Office-31: Artistic (Ar), Clipart (Cl), Product (Pr), and Real-World (Rw). Each domain contains 65 kinds of objects, so there are 12 open set domain adaptation tasks on Office-Home: Ar→Cl, Ar→Pr, Ar→Rw, …, Rw→Pr. Following the standard protocol, we choose the first 25 classes as the known classes and classes 26-65 as the unknown classes of the target domain.
PIE [Rasouli_2019_ICCV] is a face recognition dataset containing face images of 68 individuals with multifarious poses, illuminations, and expressions. Following the protocol of [fang2019open], we performed open set domain adaptation among five poses, selecting a subset of the identities as the known classes and the remaining identities as the unknown classes of the target domain: PIE1 (left pose), PIE2 (upward pose), PIE3 (downward pose), PIE4 (frontal pose), and PIE5 (right pose). We construct 20 open set domain adaptation tasks, i.e., PIE1→PIE2, PIE1→PIE3, …, PIE5→PIE4.
Network structure. For the Digits datasets, we employ convolutional neural networks similar to [shu2018a, saito2018open] for S→M and the other tasks, respectively, and train the DNNs from scratch. For Office-31, we leverage VGGNet [simonyan2014very] as the backbone to extract image features; we employ two fully-connected layers as the generator and one fully-connected layer as the classifier. For Office-Home, we leverage ResNet-50 [he2016deep] as the backbone to extract image features; the network structures of the generator and the classifier are the same as those for Office-31. PIE provides valid features of all images, so a CNN is not necessary, and we adopt a generator and classifier similar to those for Office-31. Details about the networks can be found in Appendix B. In the same manner as [saito2018open, feng2019attract], we do not update the parameters of the backbone during training.
Parameter setting. The proposed method has two important parameters. The first, λ, is fixed to the same value in all experiments: the distributional discrepancy gradually approaches zero during domain adaptation, and λ should be greater than or equal to its limiting value when the distributional discrepancy is zero. The second parameter is set per dataset, with one value for Office-31, another for Digits and Office-Home, and another for PIE; when the distributional discrepancy is relatively large, we advise that it be smaller for stable training. All reported results are accuracies averaged over three independent runs.
We compare our method with five UOSDA methods: ATI-λ [panareda2017open], OSBP [saito2018open], SCA_SCM [feng2019attract], STA [liu2019separate], and DAOD [fang2019open]. We briefly introduce these baselines below.
ATI-λ [panareda2017open] employs integer programming to assign labels to the target domain and a mapping matrix to align the distributions.
OSBP [saito2018open] employs a classifier to align the distributions of data with known classes in the source and target domains and an adversarial network to reject unknown samples based on the predicted probabilities of target samples.
SCA_SCM [feng2019attract] aligns the class centroids between the source and target domains and pushes unknown samples away from the known classes to achieve good performance.
STA [liu2019separate] utilizes a coarse-to-fine weighting mechanism to separate unknown samples from the target domain while achieving distribution alignment simultaneously.
DAOD [fang2019open] trains a target-domain classifier via minimizing Eq. (14). An additional term, the open set difference, is used to estimate the risk of the classifier on unknown classes.
VI-D Evaluation Metrics
Following previous works [panareda2017open, saito2018open, fang2019open], we employ the two metrics below to evaluate our method. OS: average per-class accuracy over all classes, including the unknown class. OS*: average per-class accuracy over the known classes only. Concretely, with K known classes, OS = (1/(K+1)) Σ_{k=1}^{K+1} acc_k and OS* = (1/K) Σ_{k=1}^{K} acc_k, where acc_k is the accuracy of the target classifier on the set of target samples with label k, and label K+1 denotes the unknown class.
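These metrics can be computed as in the following sketch (a straightforward implementation, assuming open set labels in which index n_known denotes the single unknown class):

```python
def per_class_accuracy(y_true, y_pred, cls):
    """Accuracy of the predictions restricted to samples of class `cls`."""
    idx = [i for i, y in enumerate(y_true) if y == cls]
    if not idx:
        return 0.0
    return sum(y_pred[i] == cls for i in idx) / len(idx)

def os_scores(y_true, y_pred, n_known):
    """OS: mean per-class accuracy over known classes plus the unknown class.
    OS*: mean per-class accuracy over the known classes only."""
    known = [per_class_accuracy(y_true, y_pred, c) for c in range(n_known)]
    unknown = per_class_accuracy(y_true, y_pred, n_known)
    os_star = sum(known) / n_known
    os = (sum(known) + unknown) / (n_known + 1)
    return os, os_star
```

Averaging per class, rather than over all samples, prevents a large unknown class from dominating the score.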
[Table IV: OS and OS* (%) on the Office-31 tasks A→D, A→W, D→A, D→W, W→A, W→D, and the average (Avg).]
Results on the three Digits tasks are shown in Table III. Our method achieves the best performance on both OS and OS* across the three tasks. Moreover, compared to U→M and M→U, S→M is more challenging, since there is a bigger distribution gap between S and M. Even on this most difficult task, our method still outperforms the best baseline, STA, on both OS and OS*. It is worth noting that DAOD is a shallow method that cannot extract features with a convolutional neural network, so it is not compared on Digits. The results of ATI-λ are from [liu2019separate].
Results on standard benchmark object datasets (Office-31 and Office-Home) are recorded in Table IV. For Office-31, our method significantly outperforms baselines among