ProxyMix
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Owing to privacy concerns and heavy data transmission, source-free UDA, which exploits pre-trained source models instead of the raw source data for target learning, has been gaining popularity in recent years. Some works attempt to recover unseen source domains with generative models, which, however, introduces additional network parameters. Other works propose to fine-tune the source model with pseudo labels, but noisy pseudo labels may misguide the decision boundary, leading to unsatisfactory results. To tackle these issues, we propose an effective method named Proxy-based Mixup training with label refinery (ProxyMix). First, to avoid additional parameters and exploit the information in the source model, ProxyMix defines the weights of the classifier as the class prototypes and then constructs a class-balanced proxy source domain from the nearest neighbors of the prototypes, bridging the unseen source domain and the target domain. To improve the reliability of pseudo labels, we further propose a frequency-weighted aggregation strategy to generate soft pseudo labels for unlabeled target data. The proposed strategy exploits the internal structure of target features, pulls target features toward their semantic neighbors, and increases the weights of samples from low-frequency classes during gradient updating. With the proxy domain and the reliable pseudo labels, we employ two kinds of mixup regularization, i.e., inter- and intra-domain mixup, to align the proxy and the target domain and enforce consistency of predictions, thereby further mitigating the negative impact of noisy labels. Experiments on three 2D image and one 3D point cloud object recognition benchmarks demonstrate that ProxyMix yields state-of-the-art performance for source-free UDA tasks.
The standard practice in the deep learning era—learning with massively labeled data—becomes expensive and laborious in many real-world scenarios. Besides, the learned models often perform poorly in generalization to new unlabeled domains due to the domain discrepancy
[1]. Hence, considerable efforts are devoted to unsupervised domain adaptation (UDA) [10, 24, 12, 31], which aims to transfer knowledge from a labeled source dataset to an unlabeled target dataset. In recent years, UDA methods have been widely explored in various tasks such as image classification [12] and semantic segmentation [53]. The key problem of UDA is to alleviate the gap across different domains. Prior UDA methods mainly fall into three paradigms. The first paradigm aims to pull the statistical moments of different feature distributions closer
[72, 6], and the second paradigm introduces adversarial training with additional discriminators [12, 54]. The last paradigm adopts various regularizations on the target network outputs, such as self-training or entropy-related objectives [76, 9]. Despite the impressive progress, the source data is always necessary during domain alignment, which might raise data privacy concerns nowadays. This practical demand directly motivates a novel UDA setting named source-free domain adaptation (SFDA) [26, 23], where only the well-trained source model, instead of the well-annotated source dataset, is provided to the target domain. The booming efforts in the SFDA community are either generation-based or pseudo-label-based. The generation-based methods [23, 51, 41] introduce extra generative modules to recover the unseen source domain at the image or feature level, and then address the problem from a UDA perspective. Nevertheless, generative modules introduce additional parameters, and the recovered virtual source domain usually suffers from mode collapse, which results in low-quality images or features. The pseudo-label-based methods [41, 28, 60, 16] label the target samples based on the current model's predictions or feature structure. However, due to the extreme domain shift, label noise is inescapable, resulting in inaccurate decision boundaries.
To address the issues above (additional parameters and noisy labels), we propose a new and effective method called Proxy-based Mixup training with label refinery (ProxyMix) to deal with the source-free domain adaptation problem. To bridge the gap between the unseen source domain and the target domain while avoiding extra parameters, we first select a subset of source-similar samples from the target domain, rather than synthesizing virtual images, to construct a proxy source domain. Specifically, we define the weights of the source classifier as the class prototypes [36], then select the nearest neighbors of each class prototype in angle space to construct the proxy source domain. Prior methods with a proxy source domain primarily employ an entropy criterion [28, 11], which selects samples with lower entropy for each class from the pseudo-labeled target data. In practice, as shown in Fig. 2, we observe that the mean accuracy of our angle-induced proxy source domain is clearly higher than that of the entropy criterion. Another significant benefit is that our pseudo labels are determined by the corresponding prototype, rather than the predictions of the source model, allowing us to create a class-balanced proxy source domain.
To improve the reliability of pseudo labels, we propose a frequency-weighted aggregation pseudo-labeling strategy (FA) as a pseudo label refinery. FA applies three operations to the predictions: sharpening, re-weighting, and aggregation. Specifically, to avoid ambiguity, we first sharpen the predictions of the classifier. At the same time, we take the frequency of each class into account and re-weight the probability of each class, to increase the contribution of low-frequency classes and avoid bias toward majority and easy classes in the target domain during gradient updating. Then we introduce a non-parametric neighborhood aggregation strategy to pull the unlabeled target features close to their semantic neighbors, aiming to reduce the impact of outlier noisy labels and compact the semantic clusters.
With the proxy source domain, we tackle the challenging SFDA problem in a semi-supervised style with the aid of the refined pseudo labels. To align the proxy and target domains while alleviating the negative consequences of noisy labels, two mixup regularizations [73, 3, 2, 4], i.e., inter-domain and intra-domain mixup, are incorporated into our framework, enforcing prediction consistency and thus improving robustness against noisy labels. As illustrated in Fig. 1, the FA strategy refines the pseudo labels and compacts the feature clusters, while the mixup training aligns the two domains, yielding clear decision boundaries.
To summarize, the main contributions of this work are three-fold:
We propose a simple yet effective method, ProxyMix, for source-free domain adaptation, which aims to discover a proxy source domain and utilize mixup training to implicitly bridge the gap between the target domain and the unseen source domain.
To obtain a reliable proxy source domain, we exploit the network weights of the source model and select source-like samples from the target domain in an efficient and accurate way.
To refine the noisy pseudo labels during alignment, we further propose a new frequency-weighted aggregation strategy, compacting the target feature clusters and avoiding bias to majority and easy classes.
We conduct ablation studies to verify the contribution and effectiveness of both the proxy source domain construction and the pseudo label refinery. Extensive results on four datasets further validate that ProxyMix yields comparable or superior performance to state-of-the-art SFDA methods.
UDA aims to transfer knowledge from a label-rich source domain to an unlabeled target domain. UDA problems can be classified into four cases according to the relationship between the source and target domain, i.e., closed-set [44], partial-set [5], open-set [35], and universal [71]
. As a typical example of transfer learning, UDA provides methods to bridge domain gaps for various applications such as object recognition
[30, 12, 10, 22, 24, 62] and semantic segmentation [53, 76]. The most prevailing paradigm for UDA is to extract domain-invariant features to align different domains while preserving the category information from the labeled source domain. Roughly speaking, existing feature-level domain alignment methods fall into two categories. The first line [12, 54, 31] aligns representations by fooling a domain discriminator through adversarial training, while the second line [30, 48] directly minimizes discrepancy metrics (e.g., statistical moments) to match the feature distributions. Besides, another line [15] focuses on image-space alignment and converts target images into source-style images (and vice versa). By contrast, output-level regularization methods [9, 17] achieve implicit domain alignment by forcing the target outputs to be diverse one-hot encodings.
[27] proposes an auxiliary classifier for target data to obtain high-quality pseudo labels, and [29] introduces cycle self-training, which uses target pseudo labels to train another head and enforces it to perform well on the source domain. [63, 59] are the two most closely related works, introducing mixup training into adversarial UDA. However, our method does not require access to source data and develops a new pseudo label refinery strategy instead of focusing on the mixing manner.

SFDA aims to tackle the domain adaptation problem without accessing the raw source data. Before the deep learning era, a number of transfer learning works without source data [68, 52, 18, 8, 25] were empirically successful. In recent years, pioneering works [26, 23] discovered that a well-trained source model conceals sufficient source knowledge for the subsequent target adaptation stage, and [26] provides a clear definition of this problem. The last two years have witnessed an increasing number of SFDA approaches [41, 28, 60, 16], most of which are generation-based [23, 51, 41] or self-training-based [26, 69] methods. Generation-based methods [51, 41, 23, 66, 11] generate virtual high-level features of the source domain to bridge the unseen source and target distributions. Self-training-based methods seek to refine the source model with self-supervised techniques, the pseudo label technique [26, 69] being the most extensively employed. [60, 16] learn from target samples via distinct variants of contrastive learning. [69] mines hidden structure information, such as neighbor features, to obtain pseudo labels. However, generating source samples usually introduces additional modules such as generators or discriminators, while pseudo-labeling might produce wrong labels due to domain shift; both cause negative effects on the adaptation procedure.
Another practice [66, 11, 28] is selecting part of the target data as a pseudo source domain, to compensate for the unseen source domain. A typical method is entropy-criterion [28]
, which constructs the pseudo source domain by estimating a split ratio from the target dataset's mean and maximum entropy, and then uses the split ratio to choose the samples with lower entropy within each class from the pseudo-labeled target data. The entropy criterion provides a proxy source domain with a huge number of samples. However, owing to hard classes and domain shift, the entropy criterion suffers from a severe class-imbalance problem. Although [11] attempts to tackle this problem by simply choosing the same number of samples for each class, some hard classes contain no data at all, so the class-imbalance problem remains unavoidable. Unlike previous works, our method builds the proxy source domain directly from the target domain using the source classifier weights, which is flexible and works well for SFDA. Besides, our mixup training strategy is also different from theirs: it transfers label information from the proxy source to the unlabeled target domain.

SSL aims to combine supervised and unsupervised learning, leveraging a vast amount of unlabeled data together with limited labeled data to improve the classifier and to handle scenarios where labeled data is scarce
[55]. As opposed to the domain adaptation problem, SSL assumes the labeled and unlabeled samples come from the same domain. SSL has flourished in recent years [57, 40, 21]: temporal ensembling [19] introduces self-ensembling, forming a consensus prediction of the unknown labels using the outputs of the network-in-training across different epochs; MixMatch [3] proposes a holistic approach that applies data augmentation to unlabeled examples and mixes labeled and unlabeled data using mixup; ReMixMatch [2] aligns the distributions of labeled and unlabeled data; FixMatch [47] demonstrates the strong performance of consistency regularization and pseudo labels; AdaMatch [4] proposes a unified approach to unsupervised domain adaptation, semi-supervised learning, and semi-supervised domain adaptation. Existing methods demonstrate the usefulness of mixup training in aligning distributions, and the growing popularity of SSL motivates us to convert the SFDA problem into an SSL challenge. However, such methods rely on true labels, which provide strong and diverse supervision but are not available in our task. Our data is pseudo-labeled, with little diversity and a lot of noise, so these semi-supervised learning approaches cannot be directly applied to our problem.
This paper mainly follows the problem definition of SHOT [26] and focuses on a $C$-way visual classification task. We aim to learn a target model $f_t$ and predict the label for an input target image, given only the target data and the well-trained source model $f_s$. The model consists of two modules: the feature extractor $g$ and the classifier $h$, i.e., $f = h \circ g$.
Following the standard paradigm of SFDA [26], as a preliminary, we train the source model with the label smoothing [34] technique:
(1) $\mathcal{L}_{src} = -\mathbb{E}_{(x_s, y_s)} \sum_{k=1}^{C} \tilde{q}_k \log \delta_k(f_s(x_s))$

where $\tilde{q}_k = (1-\alpha)\, q_k + \alpha/C$, $q$ is the one-hot encoding of $y_s$, $\alpha$ is the smoothing parameter, and $\delta_k(\cdot)$ is the soft-max output of the $C$-dimensional vector $f_s(x_s)$. During adaptation, we directly initialize the target model $f_t$ with the well-trained source model $f_s$, then freeze the classifier $h_t$ and fine-tune the feature extractor $g_t$, so that the target features are implicitly aligned with the unseen source features via the same hypothesis. It is worth noting that, for simplicity and generality, we do not adopt the special normalization techniques of SHOT [26].
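The label-smoothed objective of Eq. (1) can be sketched as follows. This is a minimal NumPy illustration under our own naming (`label_smoothing_ce` is not the authors' code), not the paper's implementation:

```python
import numpy as np

def softmax(z):
    # Numerically stable soft-max over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def label_smoothing_ce(logits, labels, num_classes, alpha=0.1):
    """Cross-entropy with label smoothing: targets are (1-alpha)*one_hot + alpha/C."""
    q = np.eye(num_classes)[labels]                   # one-hot encoding of y_s
    q_smooth = (1 - alpha) * q + alpha / num_classes  # smoothed targets q~
    log_p = np.log(softmax(logits))
    return -(q_smooth * log_p).sum(axis=-1).mean()
```

With uniform logits the loss equals log C regardless of alpha, since the smoothed targets still sum to one.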
Recently, semi-supervised learning approaches [3, 2] have also shown impressive achievements on UDA problems, and Rukhovich et al. [42] even won the 2019 VisDA competition by directly exploiting MixMatch [3]. Inspired by them, we construct the proxy source domain by pseudo-labeling a portion of confident (source-similar) samples, and try to solve the SFDA task in a semi-supervised style. Since the source data is unavailable, we expect to mine the source information from the model $f_s$. Previous works [50, 70] leverage the weights of the classifier as class prototypes in other fields and obtain positive results. Another classical work [36] shows that the weight vectors of a well-trained last-layer classifier converge to a high-dimensional geometric structure which maximally separates the pair-wise angles of all classes. Therefore, inspired by these works, it is natural to select the nearest neighbors of the classifier's weights in angle space to construct the proxy source domain. Concretely, we first define the weights of the classifier $\{w_c\}_{c=1}^{C}$ as the class prototypes, where $C$ is the number of categories. We use each class prototype as a cluster centroid to search for and pseudo-label the $N$ nearest samples in the unlabeled target domain, forming the proxy source domain $\mathcal{D}_p$:
(2) $\mathcal{D}_p = \bigcup_{c=1}^{C} \left\{ (x, c) \mid x \in \mathrm{TOP}_N\!\left( \{ d(g_t(x), w_c) \}_{x \in \mathcal{X}_t} \right) \right\}$

where $\mathrm{TOP}_N$ denotes choosing the $N$ samples with the minimum distance for each class, and $N$ is a hyper-parameter deciding how many samples we select per class. To prevent the negative consequences of class imbalance, we select the same number of samples for each class. $d(\cdot, \cdot)$ measures the distance between $g_t(x)$ and $w_c$ in angle space; we use cosine similarity by default. For these proxy source data, we directly calculate the cross-entropy loss with label smoothing as follows:
(3) $\mathcal{L}_{cls} = -\mathbb{E}_{(x_p, y_p) \in \mathcal{D}_p} \sum_{k=1}^{C} \tilde{q}_k \log \delta_k(f_t(x_p))$

where $\tilde{q}$ is the smoothed label and $q$ denotes the one-hot encoding of $y_p$.
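The prototype-based selection of Eq. (2) can be sketched as below. This is a simplified NumPy sketch under our own naming (`build_proxy_source` is hypothetical); it ignores details such as de-duplicating samples selected by multiple prototypes:

```python
import numpy as np

def build_proxy_source(features, prototypes, n_per_class):
    """For each class c, pick the n target samples whose features are closest
    to prototype w_c in angle space (i.e., highest cosine similarity)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ w.T                                    # (n_samples, n_classes) cosines
    proxy_idx, proxy_lbl = [], []
    for c in range(w.shape[0]):
        top = np.argsort(-sim[:, c])[:n_per_class]   # nearest neighbors of w_c
        proxy_idx.extend(top.tolist())
        proxy_lbl.extend([c] * n_per_class)          # pseudo label = prototype's class
    return np.array(proxy_idx), np.array(proxy_lbl)
```

Selecting a fixed `n_per_class` per prototype is what makes the proxy domain class-balanced by construction.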
Pseudo-labeling is a heuristic approach to semi-supervised learning, which progressively treats the predictions on unlabeled data as true labels and often employs a cross-entropy loss during training. However, in an unsupervised setting, the class distribution is unknown and the model is biased towards easy classes. To mitigate the imbalance and the sensitivity of pseudo labels, inspired by several classical works [27, 61], we propose a new pseudo label refinery strategy to obtain reliable soft pseudo labels in the presence of domain shift. Specifically, we adjust the class distribution of the predictions to alleviate the class imbalance, and then use the center of semantic neighbors as the pseudo label, rather than depending on a single prediction. This compacts the clusters by pulling the unlabeled target features closer to their semantic neighbors, resulting in a clear classification boundary. Note that hard labels reinforce the confidence of the current model while losing some information. Hence we use soft predictions rather than one-hot vectors as the pseudo labels, which provide more distribution information and decrease the negative effect of corrupted one-hot labels.

Neighborhood Aggregation. To leverage the local data structure, we employ a neighborhood aggregation strategy, based on the idea of message passing among neighbors, to adjust the predictions of the input target data. Concretely, we construct a large memory bank to store both the features and the predictions of the target data. During pseudo-labeling, we retrieve the $K$ nearest neighbors from the memory bank for each sample in the current mini-batch according to their features, and calculate the soft label $\hat{y}_i$ of data point $x_i$ by aggregating the predictions of these feature-level neighbors:
(4) $\hat{y}_i = \frac{1}{K} \sum_{j \in \mathcal{N}_i} \bar{p}_j$

where $\mathcal{N}_i$ is the neighbor index set of the data $x_i$ and $\bar{p}_j$ are the frequency-weighted predictions of the neighbors stored in the bank; we explain next how these predictions are obtained.
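The aggregation step of Eq. (4) amounts to averaging the bank predictions of each sample's nearest feature-space neighbors. A minimal NumPy sketch (names and the cosine neighbor search are our assumptions):

```python
import numpy as np

def aggregate_soft_labels(batch_feats, bank_feats, bank_preds, k=5):
    """Soft pseudo label of each batch sample = mean of the stored predictions
    of its k nearest neighbors in the memory bank (cosine similarity)."""
    f = batch_feats / np.linalg.norm(batch_feats, axis=1, keepdims=True)
    b = bank_feats / np.linalg.norm(bank_feats, axis=1, keepdims=True)
    sim = f @ b.T                              # (batch, bank) similarities
    nn_idx = np.argsort(-sim, axis=1)[:, :k]   # k nearest neighbors per sample
    return bank_preds[nn_idx].mean(axis=1)     # Eq. (4): average neighbor predictions
```

Because the label is an average over neighbors, a single noisy bank entry is diluted by the other k-1 predictions.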
Frequency-weighted prediction. As illustrated in Fig. 4, to avoid ambiguity, we first sharpen the output predictions with a temperature $T$. Besides, the network will be empirically skewed towards the majority classes due to class imbalance, so we further multiply the predictions by a weight based on the frequency of each class. Specifically, given the soft-max output predictions $p$, the frequency-weighted predictions $\bar{p}$ can be obtained through

(5) $\bar{p}_k = \frac{\hat{p}_k / f_k}{\sum_{k'} \hat{p}_{k'} / f_{k'}}, \quad \hat{p}_k = \frac{p_k^{1/T}}{\sum_{k'} p_{k'}^{1/T}}$

where $f_k$ are the soft cluster frequencies calculated over the current batch of samples. Through the operation above, we expect to achieve class balance in the predictions. At each iteration, we update the features and predictions associated with the data at the corresponding locations in the memory bank.
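The sharpening-then-reweighting step can be sketched as follows. This is our reading of the description above, with inverse-frequency weighting as an explicit assumption (the paper's exact weighting may differ):

```python
import numpy as np

def frequency_weighted(preds, T=0.5, eps=1e-8):
    """Sharpen soft-max predictions with temperature T, then divide by the
    batch-level soft class frequency so rare classes are boosted."""
    sharp = preds ** (1.0 / T)
    sharp = sharp / sharp.sum(axis=1, keepdims=True)  # sharpened predictions p^
    freq = sharp.mean(axis=0)                         # soft cluster frequencies f
    weighted = sharp / (freq + eps)                   # up-weight low-frequency classes
    return weighted / weighted.sum(axis=1, keepdims=True)
```

The re-weighting counteracts the bias toward majority classes: classes with a small soft frequency receive a proportionally larger weight before renormalization.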
Two mixup training procedures are incorporated in our method. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels to regularize the network to support linear behavior in-between training samples. Pioneers have proved the effectiveness of mixup training on UDA and SSL tasks
[73, 3, 2, 42]. Such a simple regularization can improve generalization and robustness to noisy labels, so it is well suited to pseudo-label-based unsupervised learning tasks. Inspired by these methods, with the prototype-induced pseudo source domain $\mathcal{D}_p$ and the target domain, we introduce two different regularizations via mixup training.

Inter-domain Mixup. To align the proxy source domain and the target domain, we employ inter-domain mixup regularization. [3] mixes the labeled data with both unlabeled data and the labeled data itself. However, the "labeled" data in our case is not completely trustworthy. As a result, we do not apply mixup among the proxy source samples themselves, but only between the pseudo source domain and the target domain, constructing virtual training samples as follows:
$\tilde{x} = \lambda x_p + (1-\lambda) x_t, \quad \tilde{y} = \lambda q_p + (1-\lambda) \hat{y}_t$

where $q_p$ denotes the one-hot encoding of $y_p$, $\hat{y}_t$ is the soft label of $x_t$ calculated by Eq. (4), and $\lambda \sim \mathrm{Beta}(\beta, \beta)$ is the mixup coefficient sampled from a Beta distribution. Then we adopt the KL divergence to calculate the soft-label classification loss:

(6) $\mathcal{L}_{inter} = \mathbb{E}_{(\tilde{x}, \tilde{y})} \, D_{KL}\!\left( \tilde{y} \,\|\, \delta(f_t(\tilde{x})) \right)$
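The inter-domain mixing and its KL objective can be sketched as below; a NumPy sketch with our own function names, not the authors' implementation:

```python
import numpy as np

def inter_domain_mixup(x_p, y_p_onehot, x_t, y_t_soft, beta=0.3, rng=None):
    """Mix proxy-source samples (one-hot labels) with target samples (soft labels)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(beta, beta)                 # mixup coefficient lambda ~ Beta(beta, beta)
    x_mix = lam * x_p + (1 - lam) * x_t
    y_mix = lam * y_p_onehot + (1 - lam) * y_t_soft
    return x_mix, y_mix

def kl_loss(y_mix, pred, eps=1e-8):
    # KL(y_mix || pred), averaged over the batch, as in Eq. (6).
    return (y_mix * (np.log(y_mix + eps) - np.log(pred + eps))).sum(axis=1).mean()
```

Since both inputs to the mix are valid distributions, the mixed label is again a distribution, so the KL term is well defined.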
Intra-domain Mixup. To mine the inner structure of the target domain, we also adopt mixup regularization between different target data. As is typical in many SSL methods, we apply data augmentation to the target data. Specifically, for each mini-batch of target data $B_t$, we concatenate it with its augmented version $B_t'$ to construct $\hat{B}_t = [B_t; B_t']$. Then we mix $\hat{B}_t$ with its shuffled version to construct the virtual training samples below:

$\tilde{x}' = \lambda \hat{x}_i + (1-\lambda) \hat{x}_{\pi(i)}, \quad \tilde{y}' = \lambda \hat{y}_i + (1-\lambda) \hat{y}_{\pi(i)}$

where $\pi$ denotes a random permutation (the shuffled version of $\hat{B}_t$), and $\hat{y}_i$ and $\hat{y}_{\pi(i)}$ are the soft labels of $\hat{x}_i$ and $\hat{x}_{\pi(i)}$ calculated by Eq. (4), respectively. Then we formulate the intra-domain mixup regression loss as:

(7) $\mathcal{L}_{intra} = \mathbb{E} \left\| \tilde{y}' - \delta(f_t(\tilde{x}')) \right\|_2^2$
Note that here we use a square loss. Unlike the cross-entropy loss used in Eq. (6), it is bounded and more robust due to its insensitivity to corrupted labels.
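The intra-domain variant mixes the target batch with a shuffled copy of itself and penalizes the bounded squared error of Eq. (7). A NumPy sketch under our own naming (`predict_fn` stands in for the soft-max output of the network):

```python
import numpy as np

def intra_domain_mixup_loss(x_t, y_t_soft, predict_fn, beta=0.3, rng=None):
    """Mix target samples with a shuffled permutation of the same batch and
    compute the squared error between mixed soft labels and predictions."""
    if rng is None:
        rng = np.random.default_rng(0)
    perm = rng.permutation(len(x_t))                 # shuffled version of the batch
    lam = rng.beta(beta, beta)
    x_mix = lam * x_t + (1 - lam) * x_t[perm]
    y_mix = lam * y_t_soft + (1 - lam) * y_t_soft[perm]
    pred = predict_fn(x_mix)                         # model's soft-max output
    return ((pred - y_mix) ** 2).sum(axis=1).mean()  # bounded square loss, Eq. (7)
```

The square loss stays bounded even when a soft pseudo label is wrong, which is the robustness property noted above.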
Combining the proxy source classification loss and the two types of mixup loss, our overall objective is formulated as:

(8) $\mathcal{L} = \mathcal{L}_{cls} + \gamma_1 \mathcal{L}_{inter} + \gamma_2 \mathcal{L}_{intra}$

where $\gamma_1$ and $\gamma_2$ are trade-off parameters balancing the losses. The overall pipeline of ProxyMix is illustrated in Algorithm 1.
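Putting the pieces together, the overall objective with the linear ramp on the mixup weights (described in the implementation details) can be sketched as follows; the function names and the exact ramp shape are our assumptions:

```python
def ramp_weight(iter_num, max_iter):
    """Linear ramp from 0 to 1 over training, used to scale the mixup weights."""
    return min(1.0, iter_num / max_iter)

def proxymix_objective(l_cls, l_inter, l_intra, gamma1, gamma2, iter_num, max_iter):
    """Overall loss of Eq. (8): proxy classification loss plus ramped, weighted
    inter- and intra-domain mixup losses."""
    r = ramp_weight(iter_num, max_iter)
    return l_cls + r * gamma1 * l_inter + r * gamma2 * l_intra
```

Early in training only the proxy classification loss dominates; the mixup terms phase in as the pseudo labels become more trustworthy.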
Datasets. We conduct experiments on four popular benchmark datasets: (1) Office-31 [43] is a standard domain adaptation dataset consisting of three distinct domains, i.e., Amazon (A), DSLR (D), and Webcam (W), with 31 categories in the shared label space. The numbers of images are 2,817 (A), 498 (D), and 795 (W), so the dataset suffers from severe data imbalance. (2) Office-home [56] is a medium-sized domain adaptation dataset with 15,500 images collected from four domains: Art (Ar), Clipart (Cl), Product (Pr), and Real-World (Re). There are 65 categories per domain, many more than in Office-31. (3) VisDA [37] is a large-scale, challenging dataset consisting of a 12-class synthesis-to-real object recognition task. The source domain involves 152k synthetic images produced by 3D rendering models under various conditions; the target domain contains 55k images collected from real-world scenes. (4) PointDA-10 [39] is a commonly used 3D point cloud dataset extracted from three popular 3D object/scene datasets, i.e., ModelNet (M), ShapeNet (S), and ScanNet, for cross-domain 3D object recognition. Each domain contains its own training and testing sets. We train our models on the source and target domains' training sets and report results on the target domain's test set.
Baselines. We compare ProxyMix with the state-of-the-art source-free domain adaptation methods: SHOT [26], CPGA [41], [60], HCL [16], NRC [69], SSFT-SSD [66], PS [11]. Moreover, to illustrate the effectiveness of ProxyMix, we further compare our method with the state-of-the-art UDA methods: SymNets [74], TADA [58], BNM [9], BDG [67], SRDC [49], RSDA-MSTN [13], ADR [45], CDAN [31], CDAN+BSP [7], SAFN [65], SWD [20], MDD [75], DMRL [59], MCC [17], STAR [33], RWOT [64], ATDOC [27], MMD [32], DANN [12], ADDA [54], MCD [46], PointDAN [39]. We use bold to highlight the best results and underline to highlight the second best results among source-free methods.
Implementation Details.
We implement our method in PyTorch. For the network architecture, we adopt ResNet [14], pretrained on ImageNet, as the backbone, and replace the original fully-connected layer with a bottleneck layer followed by a task-specific linear layer. In the source model training stage, we use the SGD optimizer, with separate learning rates for the backbone and for the bottleneck and classifier. In the target adaptation stage, we use the SGD optimizer for the backbone and freeze the fully-connected classification layer. The numbers of epochs are set to 30, 50, and 5 in the training stage and 50, 50, and 1 in the adaptation stage for Office-31, Office-home, and VisDA, respectively. Specially, for PointDA-10, we follow the open-source code of NRC [69], using PointNet [38] as our backbone network and the Adam optimizer, with 100 epochs per stage. For the hyper-parameters, considering the confidence of the pseudo labels, we alter $\gamma_1$ and $\gamma_2$ linearly by multiplying them by a ratio that varies from 0 to 1 with the number of the current iteration. Besides, the remaining hyper-parameters (e.g., $N$ and the Beta distribution parameter $\beta$ in mixup) are set separately for Office-31, Office-home, PointDA-10, and VisDA. All results are averages of three random runs with seeds {0, 1, 2}.

2D image datasets. We first compare our method with the state-of-the-art methods on 2D image datasets in Tables I, II, and III. Note that the results of the other methods are from the original papers. On Office-home, we achieve the best results on three tasks and the highest mean accuracy, demonstrating the effectiveness of ProxyMix on a medium-sized, multi-class classification problem. On Office-31, we also achieve the highest mean accuracy among SFDA methods, validating the efficacy of ProxyMix on small datasets with fewer categories. On VisDA, we achieve the best results on four single tasks and a mean accuracy comparable to the state of the art. The reason the performance on VisDA is not as good as on the first two may be that the scale of the proxy source domain is too small relative to the entire dataset, which causes the network to develop a certain bias towards the proxy source domain. In summary, ProxyMix achieves competitive accuracy across the three benchmarks, demonstrating its effectiveness on the standard 2D image domain adaptation benchmarks. We achieve results similar to the state-of-the-art SFDA methods [60] (ICCV-21) and NRC [69] (NeurIPS-21), and the UDA method ATDOC [27] (CVPR-21). The presented results clearly demonstrate the efficacy of the proposed method in dealing with domain-imbalanced, multi-class, and large-scale challenges.
3D point cloud dataset. To explore the generalization of ProxyMix on 3D data, we also report results on the PointDA-10 dataset in Table IV. Without any extra modules, our method achieves the highest average accuracy on the benchmark, even compared with UDA methods and the 3D point cloud domain adaptation method PointDAN [39].
| Choices of soft label | Office-31 | Office-home | VisDA |
| --- | --- | --- | --- |
| MixMatch [3] | 88.4 | 72.4 | 83.0 |
| ReMixMatch [2] | 88.1 | 71.3 | 80.2 |
| ATDOC [27] | 88.5 | 72.2 | 84.7 |
| Ours | 90.1 | 72.8 | 85.7 |
| Variants | Office-31 | Office-home | VisDA |
| --- | --- | --- | --- |
| w/o aggregation | 88.4 | 71.3 | 82.4 |
| w/ aggregation (Ours) | 90.1 | 72.8 | 85.7 |
| Method | Office-31 | Office-home | VisDA |
| --- | --- | --- | --- |
| Random-selected | 83.9 | 69.0 | 81.9 |
| Entropy-guided | 86.3 | 70.5 | 72.6 |
| Ours | 90.1 | 72.8 | 85.7 |
| $\mathcal{L}_{cls}$ | $\mathcal{L}_{inter}$ | $\mathcal{L}_{intra}$ | Office-31 | Office-home | VisDA |
| --- | --- | --- | --- | --- | --- |
|  |  |  | 83.5 | 66.3 | 69.6 |
|  |  |  | 89.1 | 72.4 | 78.5 |
|  |  |  | 86.7 | 65.8 | 84.9 |
|  |  |  | 89.3 | 72.3 | 78.4 |
|  |  |  | 89.9 | 71.3 | 84.7 |
|  |  |  | 90.1 | 72.8 | 85.7 |
Ablation study on the loss functions.
To explore the effectiveness of the proposed pseudo-labeling strategy, the aggregation strategy, and the construction method of the proxy source domain, we conduct a series of ablation analyses on three commonly used 2D image classification datasets: Office-31, Office-home, and VisDA. We then explore the influence of the three loss functions in our method, the training stability, and the sensitivity of the important hyper-parameters. We also show t-SNE visualization results on the task Ar→Cl to clearly illustrate how the features change.
Effectiveness of the proposed frequency-weighted aggregation soft pseudo label. Our frequency-weighted aggregation strategy (FA) is a soft pseudo label generation method. To verify its influence, we compare it with three label refinery strategies. 1) MixMatch [3] calculates the soft pseudo label by directly sharpening and normalizing the predictions. 2) ReMixMatch [2] sharpens the predictions first, then multiplies them by a distribution alignment ratio calculated from the current batch of samples. 3) ATDOC [27] only uses the highest probabilities, multiplied by a balance ratio, so the sums do not equal 1, which is not conducive to calculating the KL divergence; we therefore normalize the predictions of ATDOC in our experiments. The results in Table V demonstrate that the proposed frequency-weighted aggregation module effectively improves the soft labels' reliability.
Fig. 6: (a) Influence of the weight $\gamma_1$. (b) Influence of the weight $\gamma_2$. (c) Influence of $N$.
Fig. 7: (a) Before adaptation. (b) After adaptation. (c) Before adaptation. (d) After adaptation.
Effectiveness of the aggregation strategy. Our aggregation technique pulls unlabeled target data toward their semantic neighbors, allowing us to exploit the target domain's structure information and mitigate the detrimental effects of noisy labels. Table VI compares ProxyMix with a variant without the aggregation strategy. The accuracy of standard ProxyMix is higher than that of the variant without aggregation, demonstrating that leveraging the semantic neighbors' center as the pseudo label is effective and reliable.
Analysis of the construction method of the proxy source domain. To study the influence of the proposed construction method of the class-balanced proxy source domain, we compare ProxyMix with two commonly used baseline criteria, i.e., the randomly-selected criterion and the entropy-guided criterion. 1) Randomly-selected: to ensure fairness, we randomly select $N$ samples for each class from the target data, based on the classification results of the source model, to generate a class-balanced proxy source domain. Because we cannot find $N$ examples for some difficult classes, we randomly choose the remaining samples from other classes as compensation. 2) Entropy-guided: following other works [28], we compare our method with the entropy-guided method. Specifically, we calculate the mean entropy $\bar{H}$ of the source model's predictions on the full target dataset, then obtain a split ratio $r = |\mathcal{S}| / n_t$, where $|\mathcal{S}|$ denotes the size of the subset formed by the samples satisfying $H(p(x)) < \bar{H}$, $H$ is the entropy function, and $n_t$ is the size of the target dataset. We then compute the class distribution according to the predictions of the source model, and select the samples with the lowest entropy for each class. The results are shown in Table VII. The randomly-selected criterion performs unsatisfactorily due to the poor confidence of the source model before adaptation. Although the entropy criterion reflects the confidence of the predictions, it exacerbates the class-imbalance problem and biases the model toward the easier classes, which is unsatisfactory in comparison to ours. The proposed prototype-induced method achieves the highest accuracy: we take both confidence and class balance into consideration, and as illustrated in Fig. 2, the accuracy of our proxy source domain is higher than that of the entropy criterion.
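The entropy-guided baseline described above can be sketched as follows; a NumPy sketch of our reading of the criterion (function names are ours, and details such as the per-class selection count may differ from [28]):

```python
import numpy as np

def entropy(p, eps=1e-8):
    # Shannon entropy of each row of a matrix of probability vectors.
    return -(p * np.log(p + eps)).sum(axis=1)

def entropy_guided_split(preds):
    """Split ratio r = fraction of samples whose prediction entropy is below
    the dataset mean; then keep the lowest-entropy samples per predicted class."""
    h = entropy(preds)
    ratio = (h < h.mean()).mean()                    # split ratio r
    labels = preds.argmax(axis=1)                    # pseudo labels from source model
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        n_c = max(1, int(round(ratio * len(idx))))   # per-class quota scales with r
        selected.extend(idx[np.argsort(h[idx])[:n_c]].tolist())
    return np.array(selected), ratio
```

Because the quota is proportional to how many samples the source model already predicts for each class, hard classes get few (or no) confident samples, which is exactly the class-imbalance weakness discussed above.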
Ablation studies on the proposed loss functions. To investigate the proposed loss functions, we show the results of variants with different combinations of loss functions in Table VIII. As shown, without the proxy source domain classification loss $\mathcal{L}_{cls}$, the accuracy on Office-31 drops the most. The accuracy on Office-home is most sensitive to the inter-domain mixup loss $\mathcal{L}_{inter}$. As for the large-scale dataset VisDA, the intra-domain mixup loss $\mathcal{L}_{intra}$ contributes a lot. The effectiveness of $\mathcal{L}_{inter}$ and $\mathcal{L}_{intra}$ also illustrates the reliability of the proposed frequency-weighted soft labels from another perspective.
Training stability. We show the accuracy curve of the task Ar→Cl on Office-home in Fig. 5; the accuracy rises quickly during training and then converges, as expected. The training procedure of ProxyMix is therefore stable and reliable.
Sensitivity of hyper-parameters. To better understand the effects of the hyper-parameters $\gamma_1$, $\gamma_2$, and $N$, we explore their performance sensitivity on the single task Ar→Cl of Office-home in Fig. 6. The accuracies around the chosen values of $\gamma_1$ and $\gamma_2$ fluctuate only slightly in (a) and (b). The results for the proxy source domain scale, provided in (c), show that the accuracies also change only slightly around the chosen $N$. In general, ProxyMix is not sensitive to these hyper-parameters.
t-SNE visualization. To evaluate the effectiveness of ProxyMix, we show the t-SNE visualization (https://lvdmaaten.github.io/tsne/) of target features on the task Ar→Cl in Fig. 7. To validate the effectiveness of domain alignment, we show the features of the unseen source domain (blue points) and the target domain (red points) in (a) and (b). As expected, the distribution of target features is closer to the source features after adaptation. We also show the target feature distribution of the first 10 classes of Office-home in (c) and (d). Benefiting from our frequency-weighted aggregation strategy, the feature clusters after adaptation are compact, and the classification boundary is clear.
In this paper, we focus on the source-free domain adaptation problem and propose a simple yet effective method named Proxy-based Mixup training with label refinery (ProxyMix). Specifically, we treat the weights of the fully-connected layer as class prototypes and choose a set of confident samples to construct a class-balanced proxy source domain. Label information is then expected to flow from the pseudo source domain to the unlabeled target domain via mixup training. To enhance mixup training, we further introduce a new pseudo label refinery strategy, which combines frequency-weighted sharpening and neighborhood aggregation to obtain reliable soft predictions for unlabeled target data. Experiments on four popular benchmarks prove the effectiveness of ProxyMix without access to source data. Although our method outperforms several UDA methods that rely on source data, we should recognize that removing all noisy labels in an unsupervised manner remains difficult. We believe that our work is an attempt in that direction, with the intention of inspiring others in the UDA community.
Unsupervised domain adaptation by backpropagation. In Proc. ICML.
Transfer feature learning with joint distribution adaptation. In Proc. ICCV.
Unsupervised deep embedding for clustering analysis. In Proc. ICML.