Semi-supervised Domain Adaptation via Minimax Entropy
Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision. However, we show that these techniques perform poorly when even a few labeled examples are available in the target. To address this semi-supervised domain adaptation (SSDA) setting, we propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model. Our base model consists of a feature encoding network, followed by a classification layer that computes the features' similarity to estimated prototypes (representatives of each class). Adaptation is achieved by alternately maximizing the conditional entropy of unlabeled target data with respect to the classifier and minimizing it with respect to the feature encoder. We empirically demonstrate the superiority of our method over many baselines, including conventional feature alignment and few-shot methods, setting a new state of the art for SSDA.
We propose a novel approach for SSDA that overcomes the limitations of previous methods and significantly improves the accuracy of deep classifiers on novel domains with only a few labels per class. Our approach, which we call Minimax Entropy (MME), is based on optimizing a minimax loss on the conditional entropy of unlabeled data, as well as the task loss; this reduces the distribution gap while learning discriminative features for the task.
We exploit a cosine similarity-based classifier architecture recently proposed for few-shot learning [11, 4]. The classifier (top layer) predicts a K-way class probability vector by computing the cosine similarity between K class-specific weight vectors and the output of a feature extractor (lower layers), followed by a softmax. Each class weight vector is an estimated "prototype" that can be regarded as a representative point of that class. While this approach outperformed more advanced methods in few-shot learning, and we confirmed its effectiveness in our setting, as we show below it is still quite limited: in particular, it does not leverage unlabeled data in the target domain.
Our key idea is to minimize the distance between the class prototypes and neighboring unlabeled target samples, thereby extracting discriminative features. The problem is how to estimate domain-invariant prototypes without many labeled target examples. The prototypes are dominated by the source domain, as shown in the leftmost side of Fig. 2 (bottom), as the vast majority of labeled examples come from the source. To estimate domain-invariant prototypes, we move weight vectors toward the target feature distribution. Entropy on target examples represents the similarity between the estimated prototypes and target features. A uniform output distribution with high entropy indicates that the examples are similar to all prototype weight vectors. Therefore, we move the weight vectors towards target by maximizing the entropy of unlabeled target examples in the first adversarial step. Second, we update the feature extractor to minimize the entropy of the unlabeled examples, to make them better clustered around the prototypes. This process is formulated as a mini-max game between the weight vectors and the feature extractor and applied over the unlabeled target examples.
Our method sets a new state of the art on SSDA; as reported below, in one adaptation scenario we reduce the error by 8.5% relative to baseline few-shot methods that ignore unlabeled data, by 8.8% relative to the current best-performing alignment methods, and by 11.3% relative to a simple model jointly trained on source and target. Our contributions are summarized as follows:
We highlight the limitations of state-of-the-art domain adaptation methods in the SSDA setting;
We propose a novel adversarial method, Minimax Entropy (MME), designed for the SSDA task;
We show our method’s superiority to existing methods on benchmark datasets for domain adaptation.
Semi-supervised domain adaptation is not a new problem; however, it has not been fully explored, especially with regard to deep learning based methods. We revisit this task and compare our approach to recent semi-supervised learning and unsupervised domain adaptation methods. The main challenge in domain adaptation (DA) is the gap in feature distributions between domains, which degrades the source classifier's performance. Most recent work has focused on unsupervised domain adaptation (UDA) and, in particular, feature distribution alignment. The basic approach measures the distance between feature distributions in source and target, then trains a model to minimize this distance. Many UDA methods utilize a domain classifier to measure the distance [10, 35, 18, 19]. The domain classifier is trained to discriminate whether input features come from the source or target, whereas the feature extractor is trained to deceive it, matching the feature distributions. UDA has been applied to various tasks such as image classification, semantic segmentation, and object detection [5, 28]. Some methods minimize the disagreement of task-specific decision boundaries on target examples [29, 27]
to push target features far from decision boundaries. In this respect, they increase the between-class variance of target features; we, on the other hand, propose to make target features well-clustered around estimated prototypes. Our MME approach can reduce within-class variance as well as increase between-class variance, which results in more discriminative features. Interestingly, we empirically observe that UDA methods [10, 19, 27] often fail to improve accuracy in SSDA.
Semi-supervised learning (SSL). Generative [6, 30], model-ensemble, and adversarial approaches have boosted performance in semi-supervised learning, but they do not address domain shift. Conditional entropy minimization (CEM) is a widely used method in SSL [12, 9]. However, we found that CEM fails to improve performance when there is a large domain gap between the source and target domains (see the experimental section). MME can be regarded as a variant of entropy minimization that overcomes the limitation of CEM in domain adaptation.
Few-shot learning (FSL). Few-shot learning [33, 37, 25] aims to learn novel classes given a few labeled examples and labeled "base" classes. SSDA and FSL make different assumptions: FSL does not use unlabeled examples and aims to acquire knowledge of novel classes, while SSDA aims to adapt to the same classes in a new domain. However, both tasks aim to extract discriminative features given a few labeled examples from a novel domain or novel classes. We employ a network architecture with ℓ2 normalization on features before the last linear layer and a temperature parameter T, which was proposed for face verification and later applied to few-shot learning [11, 4]. Generally, classifying a feature vector with a large norm results in confident output. To make the output more confident, networks can try to increase the norm of features. However, this does not necessarily increase the between-class variance, because increasing the norm does not change the direction of a vector. ℓ2 normalization on feature vectors solves this issue: to make the output more confident, the network must instead make the directions of features from the same class closer to each other while separating different classes. This simple architecture was shown to be very effective for few-shot learning, and we build our method on it.
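The effect of feature norm on confidence can be seen in a small numeric sketch (illustrative values only, not the paper's PyTorch implementation): doubling a feature's norm sharpens the softmax without changing the feature's direction, whereas ℓ2-normalizing the input makes the output invariant to scale.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logits(weights, feat):
    # plain linear layer: one dot product per class weight vector
    return [sum(w_i * f_i for w_i, f_i in zip(w, feat)) for w in weights]

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

W = [[1.0, 0.0], [0.0, 1.0]]   # two hypothetical class weight vectors
f = [0.9, 0.4]                 # a feature vector
f2 = [2 * x for x in f]        # same direction, doubled norm

p_small = softmax(logits(W, f))
p_large = softmax(logits(W, f2))
# larger norm -> more confident output, even though the direction is unchanged
assert p_large[0] > p_small[0]

# with l2 normalization, the output no longer depends on the norm at all
q1 = softmax(logits(W, l2_normalize(f)))
q2 = softmax(logits(W, l2_normalize(f2)))
assert all(abs(a - b) < 1e-9 for a, b in zip(q1, q2))
```

This is why, with normalized features, the only way to raise confidence is to rotate same-class features toward their prototype.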
In semi-supervised domain adaptation, we are given source images and their labels in the source domain, D_s = {(x_i^s, y_i^s)}. In the target domain, we are also given a limited number of labeled target images D_t = {(x_i^t, y_i^t)}, as well as unlabeled target images D_u = {x_i^u}. Our goal is to train the model on D_s, D_t, and D_u and evaluate it on D_u.
Inspired by few-shot learning architectures [11, 4], our base model consists of a feature extractor F and a classifier C. For the feature extractor F, we employ a deep convolutional neural network and perform ℓ2 normalization on the output of the network. The normalized feature vector f = F(x)/‖F(x)‖ is then used as input to C, which consists of weight vectors W = [w_1, w_2, …, w_K], where K represents the number of classes, and a temperature parameter T. C takes f as input and outputs (1/T) W^T f. The output of C is fed into a softmax layer to obtain the probabilistic output p(x) = σ((1/T) W^T F(x)/‖F(x)‖), where σ indicates a softmax function. In order to classify examples correctly, the direction of a weight vector w_k has to be representative of the normalized features of the corresponding class. In this respect, the weight vectors can be regarded as estimated prototypes for each class. The architecture of our method is shown in Fig. 3.
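The classification rule above can be sketched in a few lines (plain Python with made-up prototype values; the actual model is a trained PyTorch network): ℓ2-normalize the feature, take its dot product with each prototype, divide by the temperature, and apply a softmax.

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def prototype_classifier(feature, prototypes, T=0.05):
    """p(x) = softmax(W^T f / T), with f the l2-normalized feature."""
    f = l2_normalize(feature)
    scores = [sum(w_i * f_i for w_i, f_i in zip(w, f)) / T for w in prototypes]
    return softmax(scores)

# K = 3 hypothetical class prototypes (weight vectors of the last layer)
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
p = prototype_classifier([0.8, 0.3], W)

assert abs(sum(p) - 1.0) < 1e-9
# the predicted class is the prototype with the highest cosine similarity
assert p.index(max(p)) == 0
```

The small temperature (T = 0.05 in the paper) sharpens the softmax, so even modest cosine-similarity gaps produce confident predictions.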
We estimate domain-invariant prototypes by performing entropy maximization with respect to the estimated prototypes, and then extract discriminative features by performing entropy minimization with respect to the feature extractor. In our method, the prototypes are parameterized by the weight vectors of the last linear layer. First, we train F and C to classify labeled source and target examples correctly, and utilize an entropy minimization objective to extract discriminative features for the target domain. We use a standard cross-entropy loss to train F and C for classification:

L = E_{(x, y) ∈ D_s, D_t} [ L_ce(p(x), y) ],    (1)

where L_ce denotes the cross-entropy loss on the probabilistic output p(x).
With this classification loss, we ensure that the feature extractor generates discriminative features with respect to the source and the few labeled target examples. However, since the model is trained for classification on the source domain and only a small fraction of target examples, it does not learn discriminative features for the entire target domain. Therefore, we propose minimax entropy training using unlabeled target examples.
A conceptual overview of our proposed adversarial learning is illustrated in Fig. 2. We assume that there exists a single domain-invariant prototype for each class, which can serve as a representative point for both domains. The estimated prototypes will lie near the source distribution because source labels are dominant. We therefore propose to estimate the position of each prototype by moving its weight vector w_k toward target features using unlabeled data in the target domain. To achieve this, we increase the entropy measured by the similarity between the weight vectors and unlabeled target features. The entropy is calculated as

H = − E_{x ∈ D_u} Σ_{k=1}^{K} p(y = k | x) log p(y = k | x),    (2)

where K is the number of classes and p(y = k | x) represents the probability of predicting class k, namely the k-th dimension of p(x). To have higher entropy, that is, a more uniform output probability, each w_k should be similar to all target features. Thus, increasing the entropy encourages the model to estimate domain-invariant prototypes, as shown in Fig. 2.
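The entropy term for a single prediction can be computed as below (a minimal sketch; the batch average over D_u in Eq. 2 is elided):

```python
import math

def entropy(probs):
    """H(p) = -sum_k p_k log p_k, the entropy of one probabilistic prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

K = 4
uniform = [1.0 / K] * K
confident = [0.97, 0.01, 0.01, 0.01]

# uniform predictions attain the maximum entropy, log K ...
assert abs(entropy(uniform) - math.log(K)) < 1e-9
# ... while peaked predictions have entropy close to zero
assert entropy(confident) < entropy(uniform)
```

High entropy means the example is equally similar to all prototypes; low entropy means it has been pulled close to one of them.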
To obtain discriminative features on unlabeled target examples, we need to cluster the unlabeled target features around the estimated prototypes. We propose to decrease the entropy on unlabeled target examples by updating the feature extractor F: to decrease the entropy, each feature must be assigned to one of the prototypes, resulting in the desired discriminative features. Repeating this prototype estimation (entropy maximization) and entropy minimization process yields discriminative features.
To summarize, our method can be formulated as adversarial learning between C and F. The task classifier C is trained to maximize the entropy, whereas the feature extractor F is trained to minimize it. Both C and F are also trained to classify labeled examples correctly. The overall adversarial learning objectives are:

θ̂_F = argmin_{θ_F} L + λ H,
θ̂_C = argmin_{θ_C} L − λ H,    (3)

where λ is a hyper-parameter controlling the trade-off between minimax entropy training and classification on labeled examples. Our method can thus be formulated as iterative minimax training. To simplify the training process, we use a gradient reversal layer [10, 35] to flip the gradient between C and F with respect to H. With this layer, the minimax training can be performed with a single forward and backward pass, as illustrated in Fig. 3.
As shown in prior work, we can measure domain divergence using a domain classifier. Let h ∈ H be a hypothesis, and let ε_s(h) and ε_t(h) be the expected risk on source and target respectively. Then

ε_t(h) ≤ ε_s(h) + d_H(p, q) + C_0,

where C_0 is a constant accounting for the complexity of the hypothesis space and the risk of an ideal hypothesis for both domains, and d_H(p, q) is the H-divergence between the source feature distribution p and the target feature distribution q, defined as

d_H(p, q) ≜ 2 sup_{h ∈ H} | Pr_{f^s ~ p}[h(f^s) = 1] − Pr_{f^t ~ q}[h(f^t) = 1] |,    (4)

where f^s and f^t denote features in the source and target domain respectively. In our case, the features are outputs of the feature extractor. The H-divergence relies on the capacity of the hypothesis space H to distinguish the distributions p and q. This theory states that the divergence between domains can be measured by training a domain classifier, and that features with low divergence are the key to a well-performing task-specific classifier. Inspired by this, many methods [10, 2, 35, 34] train a domain classifier to discriminate between domains while optimizing the feature extractor to minimize the divergence.
Our proposed method is also connected to Eq. 4. Although we do not have a domain classifier or a domain classification loss, our method can be considered as minimizing domain divergence through minimax training on unlabeled target examples. We choose h to be a classifier that decides the binary domain label of a feature by the value of its entropy, namely

h(f) = 1 if H(C(f)) ≥ γ, and 0 otherwise,

where C denotes our classifier, H denotes entropy, and γ is a threshold determining the domain label. Here, we assume C outputs the probability of the class prediction for simplicity. Eq. 4 can then be rewritten as

d_H(p, q) = 2 sup_{C} | Pr_{f^s ~ p}[H(C(f^s)) ≥ γ] − Pr_{f^t ~ q}[H(C(f^t)) ≥ γ] |
          ≤ 2 sup_{C} Pr_{f^t ~ q}[H(C(f^t)) ≥ γ].

In the last inequality, we assume Pr_{f^s ~ p}[H(C(f^s)) ≥ γ] ≈ 0. This assumption should be realistic because we have access to many labeled source examples and train the entire network to minimize the classification loss; minimizing the cross-entropy loss (Eq. 1) on source examples ensures that the entropy on a source example is very small. Intuitively, this inequality states that the divergence can be bounded by the ratio of target examples having entropy greater than γ. Therefore, we can obtain the upper bound by finding a C that achieves maximum entropy over all target features. Our objective is to find features that achieve the lowest divergence. Supposing there exists a C that attains the maximum in the inequality above, the objective can be rewritten as

min_{q} max_{C} Pr_{f^t ~ q}[H(C(f^t)) ≥ γ].
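Under these assumptions, the bound takes a simple empirical form: twice the fraction of target features whose prediction entropy exceeds γ. A sketch with made-up entropy values:

```python
def divergence_upper_bound(target_entropies, gamma):
    """2 * Pr[H(C(f_t)) >= gamma]: the bound when source entropies are ~0."""
    frac = sum(1 for h in target_entropies if h >= gamma) / len(target_entropies)
    return 2.0 * frac

# hypothetical entropies of C's predictions on unlabeled target features
H_t = [0.05, 0.40, 0.90, 1.20, 0.10]
bound = divergence_upper_bound(H_t, gamma=0.5)
assert bound == 2.0 * 2 / 5   # two of the five samples exceed the threshold
```

Maximizing over C pushes this fraction up (measuring the divergence); updating F to lower the target entropies pushes it back down (minimizing the divergence).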
Finding the minimum with respect to q is equivalent to finding a feature extractor F that achieves that minimum. Thus, we arrive at the minimax objective of our proposed learning method in Eq. 3. To sum up, our entropy maximization process can be regarded as measuring the divergence between domains, whereas our entropy minimization process can be regarded as minimizing it. In our experimental section, we observe that our method actually reduces domain divergence (Fig. 5(c)). In addition, the target features produced by our method are well aligned with the source features and are just as discriminative; both effects stem from the domain-divergence minimization.
| Net | Method | R to C | R to P | P to C | C to S | S to P | R to S | P to R | MEAN |
We randomly selected one or three labeled examples per class as the labeled training target examples; we call these the one-shot and three-shot settings, respectively. We selected three other labeled examples as the validation set for the target domain. The validation examples are used for early stopping, choosing the hyper-parameter λ, and training scheduling. The remaining unlabeled target examples are used for training and for evaluating classification accuracy (%). All source examples are used for training.
Datasets. LSDAC is a benchmark dataset for large-scale domain adaptation with 345 classes and 6 domains. Since it has diverse classes and many examples, we mainly use this dataset in our experiments. As the labels of some domains and classes are very noisy, we pick 4 domains (Real, Clipart, Painting, Sketch) and 126 classes. We focus on adaptation scenarios where the target domain differs from the Real domain, and pick 7 scenarios from these domains; see our supplemental material for details. Office-Home contains 4 domains (Real, Clipart, Art, Product) with 65 classes and is a standard benchmark for unsupervised domain adaptation; we evaluate our method on 12 scenarios in total. Office contains 3 domains (Amazon, Webcam, DSLR) with 31 classes. Webcam and DSLR are small domains and some classes do not have many examples, while Amazon has many. To evaluate on a target domain with enough examples, we use 2 scenarios with Amazon as the target domain and DSLR or Webcam as the source.
All experiments are implemented in PyTorch. We employ AlexNet and VGG16 in the experiments. To investigate the effect of deeper architectures, we use ResNet34 in the experiments on LSDAC. We remove the last linear layer of these networks to build F, and add a K-way linear classification layer with a randomly initialized weight matrix W. The temperature T is set to 0.05 in all settings, following prior work. At every iteration, we prepare two mini-batches, one consisting of labeled examples and the other of unlabeled target examples; half of the labeled examples come from the source and the other half from the labeled target set. Using the two mini-batches, we calculate the objective in Eq. 3. To implement the adversarial learning in Eq. 3, we use a gradient reversal layer [10, 35] to flip the gradient of the entropy loss: the sign of the gradient is flipped between C and F
during backpropagation. In all experiments, we set the trade-off parameter λ in Eq. 3 to 0.1, chosen by validation performance on the Real to Clipart experiments. We also show the performance sensitivity to this parameter in our supplemental material. We adopt SGD with momentum 0.9; please see our supplemental material for more details, including learning rate scheduling. We will release our implementation and the dataset splits we used.
Baselines. S+T [4, 24] is a model trained with the labeled source and labeled target examples, without using unlabeled target examples. DANN employs a domain classifier to match feature distributions; it is the most popular UDA method. For fair comparison, we modify it so that it is trained with the labeled source, labeled target, and unlabeled target examples. ADR utilizes a task-specific decision boundary to align features and is designed to extract discriminative target features. CDAN is one of the state-of-the-art methods on UDA; it performs domain alignment on features conditioned on the output of the classifier, and additionally utilizes entropy minimization on target examples, thus integrating domain-classifier based alignment and entropy minimization. Comparison with the UDA methods (DANN, ADR, CDAN) reveals how much is gained over existing domain-alignment based methods. ENT is a model trained with labeled source, labeled target, and unlabeled target examples using entropy minimization: entropy is calculated on unlabeled target examples and the entire network is trained to minimize it. The difference from MME is that ENT has no maximization process, so comparison with this baseline clarifies the importance of entropy maximization.
Note that all methods except CDAN are trained with exactly the same architecture used by our method in this experiment; in the case of CDAN, we did not find any advantage in using this architecture. Details of the baseline implementations are in our supplemental material.
Overview. The main results on the LSDAC dataset are shown in Table 1. First of all, our method outperformed all baselines in all adaptation scenarios and all three networks, except for one case where it performed on par with ENT. On average, our method outperformed S+T by 9.5% and 8.9% with ResNet in the one-shot and three-shot settings, respectively. The results on Office-Home and Office are summarized in Table 2; due to limited space, we show results averaged over all adaptation scenarios. Our method outperformed all baselines on these datasets as well.
Comparison with UDA Methods. Generally, the baseline UDA methods (DANN, ADR, and CDAN) need strong base networks such as VGG or ResNet to perform better than S+T, and in some cases they fail to improve performance at all. The superiority of MME over existing UDA methods is supported by Tables 1 and 2. Since CDAN relies on entropy minimization, and ENT significantly hurts performance for AlexNet and VGG, CDAN likewise does not consistently improve performance for these networks.
Comparison with Entropy Minimization. ENT does not improve performance in some cases because it does not account for the domain gap. Comparing the one-shot and three-shot results, entropy minimization benefits from additional labeled examples: as we have more labeled target examples, the prototype estimates become more accurate even without any adaptation method. In the case of ResNet, entropy minimization often improves performance, for two reasons. First, ResNet pre-trained on ImageNet has discriminative representations, so given a few labeled target examples the model can extract discriminative features, which contributes to the gain from entropy minimization. Second, ResNet has batch-normalization (BN) layers, and BN has been reported to have the effect of aligning feature distributions [3, 17]; entropy minimization is thus performed on aligned feature representations, which improves performance. However, when there is a large domain gap, as in C to S, S to P, and R to S in Table 1, BN is not enough to handle the gap, and our proposed method performs much better than entropy minimization. We analyze BN in our supplemental material; the analysis reveals the effectiveness of BN for entropy minimization.
| Method | R to C | R to P | P to C | C to S | S to P | R to S | P to R | Avg |
| Method | R to C (1-shot) | R to C (3-shot) | R to S (1-shot) | R to S (3-shot) |
| S+T (Standard Linear) | 41.4 | 44.3 | 26.5 | 28.7 |
| S+T (Few-shot [4, 24]) | 43.3 | 47.1 | 29.1 | 33.3 |
| MME (Standard Linear) | 44.9 | 47.7 | 30.0 | 32.2 |
| MME (Few-shot [4, 24]) | 48.9 | 55.6 | 33.3 | 37.9 |
Varying Number of Labeled Examples. First, we show results in the unsupervised domain adaptation setting in Table 3. Our method performed better than the other methods on average, and was the only method to improve over the source-only model in all settings. Furthermore, we observe the behavior of our method as the number of labeled target examples varies from 0 to 20; the results are shown in Fig. 4. Our method works much better than S+T given only a few labeled examples, whereas ENT needs 5 labeled examples per class to improve performance. As more labeled examples become available, the performance gap between ENT and our method shrinks. This result is reasonable because prototype estimation becomes more accurate with more labeled examples, even without any adaptation method.
Effect of Network Architecture. We present an ablation study of the network architecture proposed in [4, 24], with AlexNet on LSDAC. As shown in Fig. 3, we employ ℓ2 normalization and temperature scaling. In this experiment, we compare against a model with a standard linear layer, without normalization and temperature. The results are shown in Table 4: the architecture proposed in [4, 24] improves the performance of both our method and the baseline S+T model. Since S+T is trained only on source examples and a few labeled target examples, we can argue that this network architecture is an effective technique for improving performance when only a few labeled target examples are available.
Feature Visualization. In addition, we plot the learned features with t-SNE in Fig. 5, using the Real to Clipart scenario of LSDAC with AlexNet. Fig. 5 (a-d) visualizes the target features and estimated prototypes; the color of each cross represents its class and the black points are the prototypes. With our method, the target features are clustered around their prototypes and do not have large within-class variance. We visualize features of the source domain (red crosses) and target domain (blue crosses) in Fig. 5 (e-h). As discussed in the method section, our method is designed to minimize domain divergence, and indeed the target features are well aligned with the source features. From Fig. 5 (f), entropy minimization (ENT) also tries to extract discriminative features, but it fails to find domain-invariant prototypes.
Quantitative Feature Analysis. We quantitatively investigate the characteristics of the features obtained in the same adaptation scenario. First, we analyze the eigenvalues of the covariance matrix of the target features, following the analysis done in prior work. Eigenvectors represent the components of the features, and eigenvalues their contributions: if the features are highly discriminative, only a few components are needed to summarize them, so the first few eigenvalues should be large and the rest small. As shown in Fig. 8(a), the features are indeed summarized by fewer components with our method. Second, we show the change of the entropy value on the target in Fig. 8(b). ENT diminishes the entropy quickly but results in poor performance, indicating that it increases the confidence of incorrect predictions, whereas our method lowers entropy while also achieving higher accuracy. Finally, in Fig. 5(c), we calculate the A-distance by training an SVM as a domain classifier, as proposed in prior work. Our method greatly reduces the distance compared to S+T, empirically supporting the claim that our method reduces domain divergence.
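The A-distance here follows the usual proxy computation: train a binary source-vs-target classifier (an SVM in our case) and convert its held-out error ε into d_A = 2(1 − 2ε). A small sketch with hypothetical error values (the SVM training itself is elided):

```python
def proxy_a_distance(domain_classifier_error):
    """d_A = 2 * (1 - 2 * err): 0 when domains are indistinguishable (err = 0.5),
    2 when a classifier separates them perfectly (err = 0)."""
    return 2.0 * (1.0 - 2.0 * domain_classifier_error)

assert proxy_a_distance(0.5) == 0.0   # features look identical across domains
assert proxy_a_distance(0.0) == 2.0   # features are perfectly separable
# a hypothetical held-out error of 0.35 gives a small residual distance
assert abs(proxy_a_distance(0.35) - 0.6) < 1e-9
```

A lower value means the domain classifier struggles to tell source features from target features, i.e., the feature distributions are better aligned.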
We propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model for semi-supervised domain adaptation (SSDA). Our model consists of a feature encoding network, followed by a classification layer that computes the features’ similarity to a set of estimated prototypes (representatives of each class). Adaptation is achieved by alternately maximizing the conditional entropy of unlabeled target data with respect to the classifier and minimizing it with respect to the feature encoder. We empirically demonstrate the superiority of our method over many baselines, including conventional feature alignment and few-shot methods, setting a new state of the art for SSDA.
This work was supported by Honda, DARPA and NSF Award No. 1535797.
First, we show examples of the datasets employed in our experiments in Fig. 7. We also attach a list of the classes used in our experiments on LSDAC.
We provide details of our implementation. We will publish our implementation upon acceptance. The reported performance in the main paper is obtained by one-time training. In this material, we also report both average and variance on multiple runs and results on different dataset splits (i.e., different train/val split).
Implementation of MME. For VGG and AlexNet, we replace the last linear layer with a randomly initialized linear layer. For ResNet34, we remove the last linear layer and add two fully-connected layers. We use the momentum optimizer with an initial learning rate of 0.01 for all fully-connected layers and 0.001 for the other layers, including convolution and batch-normalization layers, and employ a learning rate annealing strategy. Each mini-batch consists of labeled source, labeled target, and unlabeled target images; labeled and unlabeled examples are forwarded separately. We sample the same number of labeled (source plus target) images and unlabeled target images per mini-batch; this number is set to 32 for AlexNet, but 24 for VGG and ResNet due to GPU memory constraints. We use horizontal-flipping and random-cropping data augmentation for all training images.
Except for CDAN, we implemented all baselines ourselves. S+T. This approach uses only labeled source and target examples, trained with the cross-entropy loss. DANN. We train a domain classifier on the output of the feature extractor. It has three fully-connected layers with ReLU activations; the dimension of the hidden layers is set to 512, and a sigmoid activation is used only for the final layer. The domain classifier is trained to distinguish source examples from unlabeled target examples.
ADR. We put a dropout layer with a 0.1 dropout rate after the ℓ2-normalization layer. For unlabeled target examples, we calculate the sensitivity loss; the classifier is trained to maximize it whereas the feature extractor is trained to minimize it. We also experimented with deeper layers but found no improvement.
ENT. The difference from MME is that the entire network is trained to minimize entropy loss for unlabeled examples in addition to classification loss.
CDAN . We used the official implementation of CDAN provided in https://github.com/thuml/CDAN. For brevity, CDAN in our paper denotes CDAN+E in their paper. We changed their implementation so that the model is trained with labeled target examples. Similar to DANN, the domain classifier of CDAN is trained to distinguish source examples and unlabeled target examples.
Sensitivity to hyper-parameter λ. In Fig. 8, we show our method's performance when varying the hyper-parameter λ, which trades off the classification loss on labeled examples against the entropy on unlabeled target examples. The best validation result is obtained at λ = 0.1, so we set λ = 0.1 in all experiments.
Changes in accuracy during training. We show the learning curves during training in Fig. 9. Our method gradually increases performance whereas the others converge quickly.
| Network | Method | R to C | R to P | R to A | P to R | P to C | P to A | A to P | A to C | A to R | C to R | C to A | C to P | Mean |
Comparison with virtual adversarial training. Here, we present a comparison with a general semi-supervised learning algorithm. We select virtual adversarial training (VAT) as the baseline because it is one of the state-of-the-art algorithms for semi-supervised learning and works well in various settings. VAT introduces a virtual adversarial loss, defined as the robustness of the conditional label distribution around each input data point against local perturbation. We add the virtual adversarial loss on unlabeled target examples in addition to the classification loss, using the hyper-parameters of the original implementation since changing them brought no improvement. As shown in Table 7, we do not observe VAT to be effective in SSDA. This may be because the method does not consider the domain gap between labeled and unlabeled examples; accounting for that gap appears necessary to boost performance.
Analysis of Batch Normalization. We investigate the effect of BN and analyze the behavior of entropy minimization and our method with ResNet. When training all models, unlabeled target examples and labeled examples are forwarded separately, so the BN statistics are calculated separately for unlabeled target and labeled examples. Previous work [3, 17] has demonstrated that this operation can reduce the domain gap. We call this batch strategy "Separate BN". To analyze its effect, we compare it with "Joint BN", where unlabeled and labeled examples are forwarded at once, BN statistics are calculated jointly, and the domain gap is therefore not reduced. We compare our method with entropy minimization under both Separate BN and Joint BN. As shown in Table 8, entropy minimization with Joint BN performs much worse than with Separate BN, showing that entropy minimization does not reduce the domain gap by itself. In contrast, our method works well even with Joint BN, because our training method is explicitly designed to reduce the domain gap.
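The difference between the two strategies comes down to which batch the normalization statistics are computed over. A toy sketch (1-D "features" with an artificial domain offset; real BN also tracks running statistics and learns affine parameters, which are omitted here):

```python
def batch_norm(batch, eps=1e-5):
    """Normalize a 1-D batch to zero mean / unit variance using its own stats."""
    m = sum(batch) / len(batch)
    var = sum((x - m) ** 2 for x in batch) / len(batch)
    return [(x - m) / (var + eps) ** 0.5 for x in batch]

source = [2.0, 2.5, 1.5, 2.2]   # source features, centered near 2
target = [5.0, 5.5, 4.5, 5.2]   # target features shifted by a domain gap

mean = lambda b: sum(b) / len(b)

# Separate BN: each domain is normalized with its own statistics,
# so the domain offset is removed from both batches.
s_sep, t_sep = batch_norm(source), batch_norm(target)
assert abs(mean(s_sep)) < 1e-6 and abs(mean(t_sep)) < 1e-6

# Joint BN: one set of statistics for the concatenated batch,
# so the gap between the domain means survives normalization.
joint = batch_norm(source + target)
s_joint, t_joint = joint[:4], joint[4:]
assert mean(t_joint) - mean(s_joint) > 1.0
```

Separate BN thus performs an implicit per-domain re-centering, which explains why entropy minimization benefits from it.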
Results on Multiple Runs. We investigate the stability of our method and several baselines. Table 10 shows accuracy averaged over three runs with standard deviations. The deviations are small, indicating that our method is stable.
Results on Different Splits. We investigate the stability of our method with respect to the choice of labeled target examples. Table 9 shows results on different splits; sp0 corresponds to the split used in the experiments in the main paper. For each split, we randomly picked the labeled training and validation examples. Our method consistently performs better than the other methods.
| Network | Method | W to A | D to A |
| Method | R to C | R to P | P to C | C to P | C to S | S to P | R to S | P to R |
| Method | Joint BN | Separate BN |