The ability to model uncertainty is important in unsupervised domain adaptation (UDA). For example, self-training [16, 32] often requires the model to reliably estimate the uncertainty of its predictions on the target domain during the pseudo-label selection phase. However, traditional deep neural networks (DNNs) can easily assign high confidence to a wrong prediction [5, 18], and thus cannot reliably and quantitatively render the uncertainty of given data.
On the one hand, Bayesian neural networks (BNNs) [20, 5, 1, 12] tackle this problem by taking a Bayesian view of model training: instead of obtaining a point estimate of the weights, a BNN models the distribution over weights. We leverage BNNs as a powerful tool for uncertainty estimation. On the other hand, one can estimate the empirical uncertainty of the model from the variance of the network parameters, which we call gradient variance regularization (GVR).
Finally, our approach builds on the intuition that a model that gives similar uncertainty estimates on the two domains has learned to adapt well from source to target. Thus, we propose to directly calibrate the estimated uncertainties between the source and target domains during training. This calibration can be considered from three aspects, from which we list our contributions as follows:
We introduce variational Bayes neural networks to provide reliable uncertainty estimations.
We propose to calibrate the variance of network parameters as a model-and-objective-agnostic regularization (GVR) on the optimization dynamics.
2 Related Work
Shannon entropy is commonly used to quantify the uncertainty of a given distribution. Entropy-based UDA has already been proposed in . Unlike , we avoid adversarial learning, which tends to be unstable and hard to train. Entropy regularization was also proposed in  for semi-supervised learning and can be directly applied to UDA. However, our framework is more general, since the uncertainty need not be the Shannon entropy: we formalize uncertainty as the Rényi entropy, a generalization of Shannon entropy. Many other UDA methods can be modeled under this framework; for example, self-training [16, 32] can be viewed as minimizing the min-entropy, which is a special case of Rényi entropy.
As pointed out by , directly optimizing the estimated Shannon entropy given data requires the classifier to be locally Lipschitz. Co-DA  and DIRT-T  propose to address this problem by incorporating the locally-Lipschitz constraint via virtual adversarial training (VAT) .
Another complementary line of research employs self-ensembling and shows promising results . Indeed, BNNs  perform Bayesian ensembling by nature, which partly explains why BNNs provide better uncertainty estimates.
In supervised learning, regularization is used to avoid overfitting. Besides weight decay, typical regularization techniques include label smoothing [6, 26], network output regularization , and knowledge distillation . We believe our proposed gradient variance regularizer (GVR) can also be used in supervised settings.
3 Uncertainty in Deep Neural Networks
The Rényi entropy of order $\alpha \ge 0$ of a discrete distribution $p = (p_1, \ldots, p_K)$ is $H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_{i=1}^{K} p_i^\alpha$. The limiting value of $H_\alpha(p)$ when $\alpha \to 1$ is the Shannon entropy, and $\alpha \to \infty$ corresponds to the min-entropy, $H_\infty(p) = -\log\max_i p_i$. A typical deep neural network for classification produces a discrete distribution over possible classes given the input data. Thus, we quantify the predictive uncertainty by the Rényi entropy of this probability distribution.
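As an illustrative numerical sketch (not code from the paper), the following computes $H_\alpha$ for several orders, handling the two limiting cases explicitly:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha).

    alpha=1 is handled as the Shannon-entropy limit, and alpha=np.inf
    as the min-entropy H_inf(p) = -log(max_i p_i).
    """
    p = np.asarray(p, dtype=float)
    if alpha == 1:                      # Shannon entropy (limit alpha -> 1)
        return -np.sum(p * np.log(p + 1e-12))
    if alpha == np.inf:                 # min-entropy (limit alpha -> inf)
        return -np.log(np.max(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

p = np.array([0.7, 0.2, 0.1])
# Rényi entropy is non-increasing in alpha: H_inf <= H_2 <= H_1 <= H_0.5.
values = [renyi_entropy(p, a) for a in (0.5, 1, 2, np.inf)]
assert all(values[i] >= values[i + 1] for i in range(3))
```

The monotonicity in $\alpha$ is what makes the later bound between Shannon entropy and min-entropy immediate.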
Bayesian neural networks. A BNN estimates the posterior over network weights while optimizing the training objective. Given the dataset $\mathcal{D}$, the output of the BNN is denoted $f_w(x)$, where $x$ is the input data and $w$ are the weights. For a classification task, $f_w(x)$ are the logits of the predicted class distribution. The predictive distribution over labels given the input is $p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, dw$. We define the uncertainty evaluated by BNNs as the entropy $H_\alpha\big(p(y \mid x, \mathcal{D})\big)$.
We adopt the method from , where aleatoric and epistemic uncertainties are jointly modeled. In , the logits are assumed to be Gaussian and the reparameterization trick is utilized: the predicted logit is $\hat{y} = f_w(x) + \sigma_w(x)\,\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, where $f_w$ and $\sigma_w$ predict the mean and standard deviation of the logits. The final predicted probability vector is approximated by Monte Carlo sampling (with $T$ samples), $p(y \mid x) \approx \frac{1}{T}\sum_{t=1}^{T}\mathrm{softmax}\big(f_w(x) + \sigma_w(x)\,\epsilon_t\big)$.
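A minimal sketch of this Monte Carlo approximation, assuming two classifier heads that output the per-class logit mean `mu` and log-variance `log_var` (the names and toy values are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_predict(mu, log_var, n_samples=100):
    """Approximate E[softmax(mu + sigma * eps)] by Monte Carlo.

    mu, log_var: logit mean and log-variance predicted by the two
    heads (illustrative names, not the paper's notation).
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal((n_samples, mu.shape[-1]))
    probs = softmax(mu + sigma * eps)       # (n_samples, n_classes)
    return probs.mean(axis=0)

mu = np.array([2.0, 0.5, -1.0])             # predicted logit means
log_var = np.array([0.1, 2.0, 0.1])         # high uncertainty on class 1
p = mc_predict(mu, log_var, n_samples=5000)
assert abs(p.sum() - 1.0) < 1e-9            # valid probability vector
```

Averaging softmax outputs over sampled logits (rather than applying softmax to the mean logit) is what lets the predicted variance inflate the uncertainty of the final distribution.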
Variational inference. As estimating the posterior $p(w \mid \mathcal{D})$ is often intractable [1, 12], variational inference is commonly adopted, where the posterior of the weights is approximated by $q_\theta(w)$ with parameter $\theta$. Specifically, in supervised learning, $\theta$ is estimated by maximizing the evidence lower bound (ELBO) [13, 5]:
$$\mathcal{L}_{\mathrm{ELBO}}(\theta) = \underbrace{\mathbb{E}_{q_\theta(w)}\big[\log p(y \mid x, w)\big]}_{\text{(I)}} \; - \; \underbrace{\mathrm{KL}\big(q_\theta(w)\,\|\,p(w)\big)}_{\text{(II)}},$$
where $p(w)$ is the prior, and term (I) is the (negative) standard cross-entropy loss evaluated at weights sampled from $q_\theta(w)$. Gal et al. [4, 5] propose to view dropout together with weight decay as a Bayesian approximation, where sampling from $q_\theta(w)$ is equivalent to performing dropout and term (II) becomes $\ell_2$ regularization (or weight decay) on $\theta$.
Gradient variance. Rather than finding a variational approximation of the posterior $p(w \mid \mathcal{D})$, one can instead estimate the model-dependent uncertainty by the sample variance of $\theta$ (or the sample variance of $w$ in the case of non-Bayesian networks). To be precise, sampling mini-batches $B_1, \ldots, B_n$ from a batch $B$, one can compute the adapted parameters $\theta_i$ by performing one gradient step (at $\theta$): $\theta_i = \theta - \alpha\,\nabla_\theta \mathcal{L}(B_i; \theta)$, where $\mathcal{L}$ is the objective and $\alpha$ is the inner learning rate. Then the variance of $\theta$ can be defined as the trace of the covariance of the vectorized $\theta_i$'s:
$$\mathrm{Var}(\theta) = \mathrm{tr}\,\mathrm{Cov}\big(\{\mathrm{vec}(\theta_i)\}_{i=1}^{n}\big),$$
where $\mathrm{vec}(\cdot)$ denotes vectorization and $\{\cdot\}$ denotes a collection or a set. It can be easily seen that regularizing the variance of the parameters is essentially regularizing the variance of the gradients, since $\theta_i - \theta = -\alpha\,\nabla_\theta \mathcal{L}(B_i; \theta)$. We will discuss the use of this gradient variance as a regularizer, as well as its relationship with MAML , in the next section.
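The quantity above can be sketched on a toy least-squares objective (an illustrative implementation; the data, `grad_fn`, and batch split are our assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def gvr(theta, batches, grad_fn, inner_lr=0.1):
    """Trace of the covariance of the one-step adapted parameters
    theta_i = theta - inner_lr * grad(L, B_i; theta).

    Because theta is shared, this equals inner_lr**2 times the trace
    of the covariance of the per-minibatch gradients.
    """
    adapted = np.stack([theta - inner_lr * grad_fn(theta, b) for b in batches])
    centered = adapted - adapted.mean(axis=0)
    return np.trace(centered.T @ centered) / len(batches)

# Toy least-squares objective on random data, split into mini-batches.
X, y = rng.standard_normal((64, 3)), rng.standard_normal(64)
grad_fn = lambda theta, idx: 2 * X[idx].T @ (X[idx] @ theta - y[idx]) / len(idx)
batches = np.split(rng.permutation(64), 8)   # 8 mini-batches of 8 samples
theta = np.zeros(3)

v = gvr(theta, batches, grad_fn)
assert v >= 0.0
```

The scaling check below confirms the parameter-variance/gradient-variance equivalence: doubling the inner learning rate multiplies the variance by exactly four.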
4 Domain Adaptation via Calibrating Uncertainties
Rényi entropy regularization. Denote the source and target datasets as $\mathcal{D}_S = \{(x_s, y_s)\}$ and $\mathcal{D}_T = \{x_t\}$, respectively, where $x_s, x_t$ indicate the samples and $y_s$ is the label in the source domain; the target domain is unlabeled. We propose to calibrate the predictive uncertainty on the target dataset with the source-domain uncertainties. Concretely, we minimize the cross-entropy (CE) loss in the source domain while constraining the predicted entropy in the target domain:
$$\min_\theta\; \mathbb{E}_{(x_s, y_s) \sim \mathcal{D}_S}\big[\ell_{CE}(x_s, y_s; \theta)\big] \quad \text{s.t.} \quad \mathbb{E}_{x_t \sim \mathcal{D}_T}\big[H_\alpha\big(p(y \mid x_t, \mathcal{D})\big)\big] \le \epsilon,$$
where $\ell_{CE}$ is the cross-entropy and $\epsilon$ indicates the strength of the applied constraint. In practice, the network is first pretrained on the labeled source dataset using the ELBO in Equation 3. Then, unlabeled target data is introduced in Equation 5 above, and $p(y \mid x_t, \mathcal{D})$ is computed from Equation 2. Note that the resulting CE loss is no longer term (I) in the ELBO, since the expectation is inside the logarithm. We simply treat $q_\theta(w)$ "as is" as the true posterior and evaluate the CE using the resulting predictive distribution. For a non-Bayesian network, the softmax output is used as a replacement for the predictive distribution.
To solve Equation 5, rewrite it as a Lagrangian with a multiplier $\lambda \ge 0$,
$$\mathcal{L}(\theta, \lambda) = \mathbb{E}_{(x_s, y_s)}\big[\ell_{CE}\big] + \lambda\,\big(\mathbb{E}_{x_t}\big[H_\alpha\big] - \epsilon\big).$$
Since $\lambda\epsilon \ge 0$, an upper bound on $\mathcal{L}$ is obtained,
$$\tilde{\mathcal{L}}(\theta, \lambda) = \mathbb{E}_{(x_s, y_s)}\big[\ell_{CE}\big] + \lambda\,\mathbb{E}_{x_t}\big[H_\alpha\big] \;\ge\; \mathcal{L}(\theta, \lambda).$$
Ideally, Equation 6 can be optimized via dual gradient descent, with $\lambda$ jointly updated along with $\theta$. For simplicity, we follow the work of  and fix $\lambda$ as a hyper-parameter in the experiments, minimizing the upper bound $\tilde{\mathcal{L}}$.
Note that setting $\alpha = 1$ in Equation 7 recovers the (Shannon) entropy regularization described in [7, 8], except that here we consider a variational BNN. As pointed out in , directly optimizing Equation 7 can be difficult, and expectation maximization (EM) algorithms are often used. Deterministic annealing EM, proposed in [30, 8], anneals the predicted probabilities as soft labels and minimizes the resulting cross-entropy. In the extreme case, the soft labels become one-hot vectors and the algorithm turns out to be self-training with pseudo-labels . In our Rényi entropy regularization framework, self-training essentially optimizes the min-entropy ($\alpha \to \infty$). The objective then reads
$$\tilde{\mathcal{L}}_\infty(\theta, \lambda) = \mathbb{E}_{(x_s, y_s)}\big[\ell_{CE}\big] - \lambda\,\mathbb{E}_{x_t}\big[\log p\big(y = \hat{y}_t \mid x_t, \mathcal{D}\big)\big],$$
with $\hat{y}_t = \arg\max_k\, p(y = k \mid x_t, \mathcal{D})$ the pseudo-labels in the target domain; the subscript $k$ denotes the $k$-th element of a given $K$-dimensional vector. The relationship between $\tilde{\mathcal{L}}_1$ and $\tilde{\mathcal{L}}_\infty$ can be immediately realized by noticing that the Shannon entropy is an upper bound on the min-entropy: $H_\infty(p) \le H_1(p)$.
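The bound can be checked numerically (an illustrative verification, not part of the method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Min-entropy is a lower bound on Shannon entropy, so the pseudo-label
# (min-entropy) objective is bounded above by the Shannon-entropy one.
for _ in range(1000):
    p = rng.dirichlet(np.ones(5))        # random 5-class distribution
    h_shannon = -np.sum(p * np.log(p))
    h_min = -np.log(p.max())
    assert h_min <= h_shannon + 1e-12
```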
We build our method on top of class-balanced self-training (CBST) proposed in  and use it as the backbone of RER. CBST selects the most confident predictions as pseudo-labels in a self-paced ("easy-to-hard") scheme, since jointly learning the model and optimizing pseudo-labels on all unlabeled data is naturally difficult. The authors also propose to normalize the class-wise confidence levels during pseudo-label generation to balance the class distribution. For a detailed formulation, we refer readers to Sections 4.1 and 4.2 in .
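A simplified sketch in the spirit of CBST's class-balanced selection (not the authors' exact formulation; the `portion` ratio and per-class quantile threshold are our illustrative choices):

```python
import numpy as np

def class_balanced_pseudo_labels(probs, portion=0.2):
    """Simplified class-balanced selection: threshold each prediction's
    confidence by a per-class quantile, so no single class dominates
    the pseudo-label set. `portion` is the self-paced selection ratio,
    grown from easy to hard over training rounds.
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    labels = np.full(len(probs), -1)      # -1 = not selected
    for c in np.unique(preds):
        mask = preds == c
        # per-class threshold: keep the top `portion` most confident
        thresh = np.quantile(conf[mask], 1 - portion)
        keep = mask & (conf >= thresh)
        labels[keep] = c
    return labels

probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.45, 0.55]])
labels = class_balanced_pseudo_labels(probs, portion=0.5)
```

Normalizing per class means a weakly-confident class (class 1 here) still contributes pseudo-labels, instead of being crowded out by a confident class.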
Gradient variance regularization. The entropy regularization or self-training framework formulated above implicitly encourages cross-domain feature alignment. However, pseudo-labels can be quite noisy even when a BNN is employed to estimate their reliability. Trusting all selected pseudo-labels as one-hot-encoded "ground truth" is overconfident, and self-training with noisy pseudo-labels can lead to incorrect entropy minimization. Indeed, we observe that the model can quickly converge to its overconfident predictions. As a result, the parameter variance evaluated in the target domain using pseudo-labels via Equation 4 can be even smaller than that of the source domain. To address this problem, we again propose to regularize the self-training by maximizing the gradient variance. Algorithm 1 illustrates the regularized self-training procedure on the target domain (training on the source and target domains is performed alternately, which is omitted in the algorithm box). $\alpha$ and $\eta$ are the inner and outer step sizes, and $\beta$ is the hyper-parameter weighting the regularization term.
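The regularized update can be sketched on a toy objective (illustrative only: central finite differences stand in for autograd, and the data, step sizes, and `beta` weight are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for pseudo-labeled target mini-batches.
X, y = rng.standard_normal((64, 3)), rng.standard_normal(64)
loss = lambda th, idx: np.mean((X[idx] @ th - y[idx]) ** 2)
grad = lambda th, idx: 2 * X[idx].T @ (X[idx] @ th - y[idx]) / len(idx)

def gvr_penalty(th, batches, inner_lr=0.1):
    """Trace of the covariance of the one-step adapted parameters."""
    adapted = np.stack([th - inner_lr * grad(th, b) for b in batches])
    c = adapted - adapted.mean(axis=0)
    return np.trace(c.T @ c) / len(batches)

def outer_step(th, batches, outer_lr=0.05, beta=0.1, eps=1e-5):
    """One regularized update in the spirit of Algorithm 1: descend the
    task loss while ASCENDING the gradient variance (beta weights the
    regularizer). Central finite differences replace autograd here."""
    g_task = grad(th, np.arange(len(X)))
    g_var = np.zeros_like(th)
    for j in range(len(th)):
        e = np.zeros_like(th)
        e[j] = eps
        g_var[j] = (gvr_penalty(th + e, batches)
                    - gvr_penalty(th - e, batches)) / (2 * eps)
    return th - outer_lr * (g_task - beta * g_var)

batches = np.split(rng.permutation(64), 8)
th = np.zeros(3)
for _ in range(20):
    th = outer_step(th, batches)
```

The sign of the variance term is the point: the task loss is minimized while the gradient variance is pushed up, discouraging premature collapse onto overconfident pseudo-labels.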
Notice that the proposed GVR shares similarities with MAML  when compared from a dynamical-systems standpoint, despite the fact that MAML samples mini-batches from different tasks. Taking a first-order Taylor expansion of the loss function around $\theta$,
$$\mathcal{L}(B; \theta_i) \approx \mathcal{L}(B; \theta) - \alpha\,\nabla_\theta\mathcal{L}(B; \theta)^\top \nabla_\theta\mathcal{L}(B_i; \theta),$$
we see that MAML maximizes the sensitivity of the loss function with respect to the parameters by maximizing the norms of the gradients. In contrast, GVR maximizes the variance of the gradients, which intuitively encourages the model to escape from bad local minima.
It is worth mentioning that GVR is not only model-agnostic but also objective-agnostic. This is useful when the regularizer itself is the objective to be optimized. Moreover, GVR is complementary to VAT  since in VAT the gradient is computed with respect to input data. We conjecture that the data gradient somewhat captures the aleatoric (data-dependent) uncertainty, which we leave for future work.
We first show results on three digit datasets, MNIST , USPS, and SVHN , where we consider MNIST→USPS and SVHN→MNIST. Then we present preliminary results on a challenging benchmark, VisDA17 (classification) , which contains 12 classes. We follow the standard protocol in [22, 27, 24]. Classification accuracies on the source and target domains for the base models are reported in Table 3. We use DTN  as our base model for MNIST→USPS and SVHN→MNIST. To implement its Bayesian variant (BDTN), we add another classifier to predict the logarithm of the variance.
Domain adaptation results on the digit datasets are shown in Table 6. Our proposed Rényi entropy regularization methods with non-Bayesian and Bayesian base models are listed as RERs and BRERs, respectively. We see that self-training with pseudo-labels ((B)RER-$\infty$) is in general more stable than directly minimizing the Shannon entropy ((B)RER-1). Also, adding GVR to (B)RER-$\infty$ improves the performance. However, we also observe that GVR is not helpful in the (B)RER-1 settings.
Mean accuracies on the VisDA17 dataset are reported in Table 7. Following the protocol in , we train a standard ResNet101  as the base model and add a second classifier (denoted as BRes101) to predict the logarithm of the variance of the logits. Results show that BNN improves upon the non-Bayesian baseline by a large margin. GVR has not been tested on VisDA17 with (B)Res101, since the memory requirement exceeds our GPU capacity.
| Model | Target mean-Acc (%) | Acc Gain (%) |
In this work, we propose to approach unsupervised domain adaptation via calibrating the predictive uncertainties between the source and target domains. The uncertainty is quantified under a general Rényi entropy regularization framework, within which we introduce Bayesian neural networks for accurate and reliable uncertainty estimation. From a frequentist point of view, we additionally propose to approximate the model uncertainty via the sample variance of the network parameters (or gradients) during training. Results show that uncertainty estimation by Bayesian networks and gradient variances is effective and leads to stable performance in unsupervised domain adaptation.
- (2015) Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.
- (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126–1135.
- (2017) Self-ensembling for visual domain adaptation. arXiv preprint arXiv:1706.05208.
- (2015) Bayesian convolutional neural networks with Bernoulli approximate variational inference. arXiv preprint arXiv:1506.02158.
- (2016) Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059.
- (2016) Deep learning. MIT Press.
- (2005) Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems, pp. 529–536.
- (2006) Entropy regularization. In Semi-Supervised Learning, pp. 151–168.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2017) beta-VAE: learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, Vol. 3.
- (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
- (2017) What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems, pp. 5574–5584.
- (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
- (2018) Co-regularized alignment for unsupervised domain adaptation. In Advances in Neural Information Processing Systems, pp. 9345–9356.
- (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
- (2013) Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, Vol. 3, pp. 2.
- (2015) Learning transferable features with deep adaptation networks. arXiv preprint arXiv:1502.02791.
- (2017) Multiplicative normalizing flows for variational Bayesian neural networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 2218–2227.
- (2015) Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677.
- (2012) Bayesian learning for neural networks. Vol. 118, Springer Science & Business Media.
- (2011) Reading digits in natural images with unsupervised feature learning.
- (2018) VisDA: a synthetic-to-real benchmark for visual domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2021–2026.
- (2017) Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548.
- (2018) Generate to adapt: aligning domains using generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8503–8512.
- (2018) A DIRT-T approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735.
- (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
- (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176.
- (2018) ADVENT: adversarial entropy minimization for domain adaptation in semantic segmentation. arXiv preprint arXiv:1811.12833.
- (2018) Rényi entropy — Wikipedia, the free encyclopedia. Online; accessed 13-May-2019.
- (1994) Statistical physics, mixtures of distributions, and the EM algorithm. Neural Computation 6 (2), pp. 334–340.
- (2015) Deep transfer network: unsupervised domain adaptation. arXiv preprint arXiv:1503.00591.
- (2018) Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 289–305.