“Better than a thousand days of diligent study is one day with a great teacher.”
It is a common belief that human students require far fewer training examples than any learning machine. No doubt this has to do with the fact that effective teachers provide much more than the correct answer to their pupils; they provide an explanation in addition to the result.
In a typical machine learning setup, we present tuples (x, y) to a machine learning model. One way to introduce an “explanation” to a supervised learning system would be to provide some sort of privileged information, which we entitle x*. In practice, one can incorporate the triplets (x, x*, y) into a learning system at training time and continue to make use of only x in the testing stage, without any access to x*. In other words, the “Student” has access to privileged information while interacting with the “Teacher” during training, but in the test stage the “Student” operates without the supervision of the “Teacher”. This paradigm is called Learning Under Privileged Information (LUPI) and was introduced by Vapnik and Vashist.
Vapnik and Vashist provide a LUPI algorithm for Support Vector Machines (SVMs). From an algorithmic perspective, the privileged information is utilized to estimate slack values of the SVM constraints. From a theoretical perspective, this algorithm accelerates the rate at which the upper bound on error drops from O(1/√n) to a far steeper curve of O(1/n), where n is the number of required samples.
Privileged information is ubiquitous: it usually exists for almost any machine learning problem. However, we do not see wide adoption of such methods in practice. The major obstacle is the fact that the original LUPI framework of Vapnik and Vashist is only valid for SVM-based methods. Indeed, many have shown that the privileged information can be introduced into the loss function under a multi-task or a distillation loss in an algorithm-agnostic way. However, we raise the question: could it and should it be fed in as an input instead of an additional task? If so, how would we go about doing so in an algorithm-agnostic way?
We define a new class of LUPI algorithms by making a structural specification. We consider a hypothesis class such that each hypothesis is a combination of two functions: a deterministic function taking x as input, and a stochastic function taking x* as input. When x* is not available in the test stage, the “Student” simply makes a Bayes-optimal decision and marginalizes the model over x*. Our structural specification makes this marginalization straightforward while not compromising the expressiveness of the model. This structure is natural in the context of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) thanks to dropout. Dropout is a widely adopted tool to regularize neural networks by multiplying the activations of a neural network at some layer with a random vector. We simply extend dropout to heteroscedastic dropout by making its variance a function of the privileged information. In other words, dropout becomes the stochastic function taking x* as input, and marginalizing this function corresponds to not utilizing dropout in the test phase. In order to be able to train the heteroscedastic dropout, we use Gaussian dropout instead of Bernoulli, because the key technical tool we use is the re-parameterization trick, which is only available for some specific distributions, including the Gaussian.
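To make the construction concrete, here is a minimal numpy sketch of heteroscedastic Gaussian dropout with the re-parameterization trick. The linear maps `W` and `V` are hypothetical stand-ins for the deterministic network g and the variance network σ; this is an illustration of the idea, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x, W):
    """Deterministic branch g(x): a hypothetical linear layer standing in for a CNN."""
    return x @ W

def sigma(x_star, V):
    """Variance branch sigma(x*): maps privileged information to per-unit std-devs."""
    return np.exp(x_star @ V)  # exponentiation keeps the std-devs positive

def heteroscedastic_dropout(x, x_star, W, V, train=True):
    h = g(x, W)
    if not train:
        # Test time: x* is unavailable; since E[z] = 1, the marginalized
        # (Bayes-optimal) prediction is simply g(x), i.e. dropout is off.
        return h
    # Re-parameterization trick: z = 1 + sigma(x*) * eps with eps ~ N(0, I),
    # so z ~ N(1, diag(sigma(x*)^2)) and gradients could flow through sigma.
    eps = rng.standard_normal(h.shape)
    z = 1.0 + sigma(x_star, V) * eps
    return h * z  # Hadamard product g(x) ⊙ z
```

At train time each activation is scaled by a noise multiplier whose spread depends on x*; at test time the multiplier is dropped entirely.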
The rationale behind heteroscedastic dropout follows the close relationship between Bayesian learning and dropout presented by Gal and Ghahramani. Dropout can be considered a tool to approximate the uncertainty of the output of a neural network. In our proposed heteroscedastic dropout, the privileged information is used to estimate this uncertainty so that hard examples and easy examples are treated accordingly during training. Our theoretical study suggests that the accurate computation of a model’s uncertainty can accelerate the rate at which a CNN’s upper bound on error drops, from the typical rate of O(1/√N) to a faster O(1/N), where N is the number of training examples. In an oracle case, this theoretical upper bound would allow us to learn a model with identical generalization error from dramatically fewer samples and is thus hugely significant. Although the practical gain we observe is nowhere close, it is still very significant.
We evaluate our method in experiments with both CNNs and RNNs, and show a significant accuracy improvement over two canonical problems, image classification and multi-modal machine translation. As privileged information, we use a bounding box for image classification and an image of the scene described in a sentence for machine translation. Our method is problem- and modality-agnostic and can be incorporated as long as dropout can be utilized in the original problem and the privileged information can be encoded with an appropriate neural network.
2 Related Work
The key aspects that differentiate our work from the literature are: (i) our method is applicable to any deep learning architecture which can utilize dropout; (ii) we do not use a multi-task or distillation loss; (iii) we provide theoretical justification suggesting higher sample efficiency; and (iv) we perform experiments for both CNNs and RNNs. A thorough review of the related literature is provided below.
The original LUPI algorithm extends the Support Vector Machine (SVM) by empirically estimating the slack values via privileged information. This method has been further applied to various computer vision problems [33, 31, 11] as well as ranking, clustering, and metric learning problems. These methods are based on max-margin learning and are not applicable to CNNs or RNNs.
One closely related work extends Gaussian processes to the LUPI paradigm: Hernández-Lobato et al. use privileged information to estimate the variance of the noise in their model. Similarly, we use the privileged information to control the variance of the dropout in CNN and RNN models. However, their method only applies to Gaussian processes, whereas we target neural networks.
Learning CNNs Under Privileged Information: The LUPI paradigm has also been studied recently in the context of CNNs. In contrast to max-margin methods, the literature on learning CNNs under privileged information heavily uses the distillation framework, following the close relationship between distillation and LUPI studied in prior work.
Hoffman et al. demonstrated a multi-modal distillation approach to incorporating an additional modality as side information. They start with a pre-trained network and distill the information from the privileged network to a main neural network in an end-to-end fashion.
Multi-task learning is a straightforward approach to incorporating privileged information. However, it does not necessarily satisfy a no-harm guarantee (i.e., privileged information can harm the learning). More importantly, the no-harm guarantee is likely to be violated, since estimating the privileged information (i.e., solving the additional task) might be even more challenging than the original problem.
When the privileged information is binary and shares the same spatial structure as the original data, such as is the case with segmentation occupancy or bounding box information, it can also directly be incorporated into the training of CNNs by masking the activations. Group Orthogonal neural networks follow this approach. However, it is limited to a very specific class of problems.
The loss value of a CNN can be viewed as analogous to the SVM slack variables. Following this analogy, Yang et al.  use two networks: one for the original task, and one for estimating the loss using the privileged information. Learning occurs through parameter sharing between them.
Our method is different from the aforementioned works since we use neither a distillation nor a multi-task loss.
Learning Language under Privileged Visual Information: Using images as privileged information to learn language is not new. Chrupala et al. used a multi-task loss while learning word embeddings under privileged visual information. The embeddings are trained for the task of predicting the next word, as well as the representation of the image. Analysis of this model [5, 19] suggests that the embeddings learned by using vision as privileged information are significantly different from language-only ones and correlate better with human judgments. Recently, Elliott et al. collected a dataset of images with English captions as well as German translations of those captions. Using this dataset, a neural machine translation under privileged information model was developed following the multi-task setup.
Dropout and its Variants: Dropout is a well studied regularization technique for training deep networks. To the best of our knowledge, we are the first to specifically utilize privileged information to control the variance of a dropout function. Here, we summarize the existing methods which control the variance of the dropout using variational inference or information-theoretic tools. Although these tools have never been applied to the LUPI paradigm, we utilize some of the technical machinery developed in these works.
We use multiplicative Gaussian dropout instead of Bernoulli dropout. Gaussian dropout was first introduced in prior work; its variational extension uses local re-parameterization to perform Bayesian learning.
The Information Bottleneck (IB) is a powerful framework which can enforce various structural assumptions. The IB framework has been applied to CNNs and RNNs using stochastic gradient variational Bayes and the re-parameterization trick. Perhaps closest to our method, Achille and Soatto use the information bottleneck principle to learn disentangled representations when a CNN with Gaussian dropout is used. The authors introduce many ideas upon which we build; specifically, our hypothesis class (Eqn. 4) is very similar to the architecture they propose. The main architectural difference is their choice to define the variance as a function of x, whereas we make it a function of x*. We also use similar distributional priors and a similar training procedure. On the other hand, we apply these ideas to a completely different problem with a different theoretical analysis. The information bottleneck has been applied to LUPI for SVMs; however, this method does not apply to neural networks.
Consider a machine learning problem defined over a compact input space X and a label space Y. We also consider a loss function l(ŷ, y) which compares a prediction ŷ with a ground truth label y. In learning under privileged information, we also have additional information x* for each data point, defined over a space X*, which is only available during training. In other words, during training we have access to i.i.d. samples (x_i, x*_i, y_i) from the data distribution. However, in test we will only be given x. Formally, given a function class f_θ parameterized by θ and data {(x_i, y_i)}, a typical aim is to solve the following optimization problem:

min_θ (1/N) Σ_i l(f_θ(x_i), y_i)
We propose to do so by learning a multi-view model using both x and x*, and to use the marginalized model at test time when x* is not available. Consider a parametric function class f_θ(x, x*) for the multi-view data (x, x*). The training problem becomes:

min_θ (1/N) Σ_i l(f_θ(x_i, x*_i), y_i)
This is equivalent to a classical supervised learning problem defined over the space X × X*, and any existing method such as CNNs can be used. In order to solve the inference problem, we consider the following marginalization:

f̂(x) = E_{x* ∼ p(x*|x)} [ f_θ(x, x*) ]
The major problem in this formulation is the intractability of this expectation, as p(x*|x) is unknown. We propose to restrict the class of functions in a way that makes the expectation straightforward to compute. The form we propose is a parametric family such that the privileged information controls the variance, whereas the main information (i.e., information available in both training and test) controls the mean. The specific form we use is:

f_θ(x, x*) = g_{θ_g}(x) ⊙ z,   z ∼ N(1, σ_{θ_z}(x*))

where ⊙ represents the Hadamard product and the stochastic function z is a normal random variable with a constant mean function and a covariance function parameterized by x* and θ_z. We also decompose θ into two disjoint vectors as θ = [θ_g, θ_z]. Moreover, in this formulation, the expectation defined in (3) becomes straightforward and can be shown to be g_{θ_g}(x). We visualize this structural specification in Figure 2.
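Since z has mean one and is independent of g(x) given x*, the marginalization in (3) collapses to g(x). A quick Monte Carlo check of this identity, using hypothetical fixed vectors in place of the network outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed values standing in for g(x) and sigma(x*) on one example.
g_x = np.array([0.5, -1.2, 2.0, 0.1])          # deterministic branch output
sigma_xstar = np.array([0.3, 1.0, 0.05, 2.0])  # per-unit std-devs from x*

# Monte Carlo estimate of E_z[g(x) ⊙ z] with z ~ N(1, diag(sigma^2)).
n = 200_000
eps = rng.standard_normal((n, 4))
z = 1.0 + sigma_xstar * eps
mc_marginal = (g_x * z).mean(axis=0)  # converges to g_x as n grows
```

The empirical average of the stochastic forward passes matches the deterministic branch, which is exactly why test-time inference can skip the dropout entirely.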
We use neural networks to represent g and σ and learn their parameters using the information bottleneck. Since the output space is discrete (we address classification), we denote the representation of the data as f_θ(x, x*) and compute the output via a softmax over this representation. We explain the details of training in the following sections.
3.1 Information Bottleneck for Learning
We need to control the role of x* in LUPI. The information bottleneck has already been used for this purpose; however, we do not need this explicit specification because our structural specification directly controls the role of x*. We use the information bottleneck for a rather different reason, its original one: learning a minimal and sufficient joint representation of (x, x*) which captures all the information about y. This is similar to prior work, and we use the same log-Normal assumption. The Lagrangian of the information bottleneck can be written as:

L(θ) = H(y | z) + β I(z; x, x*)
where z is the joint representation of (x, x*) computed as f_θ(x, x*). These terms can be computed as:

H(y | z) ≈ −(1/N) Σ_i E_{z ∼ p_θ(z | x_i, x*_i)} [ log p_θ(y_i | z) ],   I(z; x, x*) ≈ (1/N) Σ_i KL( p_θ(z | x_i, x*_i) ‖ p(z) )
where p_θ represents the distributions computed over our model with parameters θ. In order to compute the KL divergence, we need an assumption about the prior over representations p(z). As suggested in prior work, the log-Normal distribution follows the empirical distribution of activations when the ReLU is used. Hence, we use the log-Normal distribution and compute the KL divergence in closed form (see the appendix for the full derivation). Combining these terms yields the final optimization problem.
This minimization is simply the cross-entropy loss with regularization over the logarithm of the computed variances of the heteroscedastic dropout, and can be performed in practice via the re-parameterization trick when g and σ are defined as neural networks. We further justify the choice of IB regularization via experimental observation: without it, optimization leads to NaN loss values. We discuss the details of the re-parameterization trick in the following sections.
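As a sketch of the resulting objective: the following assumes the closed-form KL term reduces to a penalty on the logarithm of the computed variances, here simplified to a squared log-variance penalty (our assumption; the exact expression is derived in the appendix).

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the class axis."""
    m = logits.max(axis=1, keepdims=True)
    s = logits - m
    return s - np.log(np.exp(s).sum(axis=1, keepdims=True))

def ib_loss(logits, y, log_var, beta):
    """Cross-entropy plus a beta-weighted penalty on the log of the computed
    dropout variances (a stand-in for the closed-form KL term)."""
    logp = log_softmax(logits)
    ce = -logp[np.arange(len(y)), y].mean()
    reg = np.mean(log_var ** 2)  # pulls sigma^2 toward 1; our simplified penalty
    return ce + beta * reg
```

With `beta=0` this reduces to the plain cross-entropy loss; the penalty discourages the variance network from drifting to the extreme values that cause the NaN losses observed without IB regularization.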
In this section, we discuss the practical implementation details of our framework, specifically pertaining to image classification with CNNs and machine translation with RNNs. For the classification setup, we use the image as x, object localization information as x*, and the image label as y. For the translation setup, we use the sentence in the source language as x, an image which is the realization of the sentence as x*, and the sentence in the target language as y.
We make a sequence of architectural decisions in order to design g and σ. For the classification problem, we design both of them as CNNs and share the convolutional layers. The inputs are x, an image, and x*, an image with a blacked-out background. We use the VGG network as the architecture and simply replace each dropout with our form of heteroscedastic dropout. We show the details of the architecture with the re-parameterization trick in Figure 4. We also normalize images with the ImageNet pixel mean and variance. As data augmentation, we horizontally flip images and take random crops.
We use a two-layered LSTM architecture with 500 units as our RNN cell and use heteroscedastic dropout between the layers of the LSTM. The main reason behind this choice is the fact that dropout in general has only been shown to be useful for connections between LSTM layers. We use attention and feed the image as a feature vector computed using the VGG architecture pre-trained on ImageNet. We give the details of the LSTM with the re-parameterization trick in Figure 3. For inference, we use beam search over 12 hypotheses. Our LSTM implementation directly follows the baseline implementation provided by OpenNMT.
Hyperparameter Settings: We use a standard learning rate schedule across all image classification experiments, tolerating 5 epochs of non-increasing validation set accuracy before decaying the learning rate. For multi-modal machine translation, we halve the learning rate every epoch after the 8th epoch. We use the ADAM optimizer in PyTorch for both image classification and multi-modal machine translation. All CNN weights are initialized according to the method of He et al., and weight decay was used for image classification. For multi-modal machine translation, we do not use any weight decay and initialize weights following the baseline implementation.
4 Experimental Results
In order to evaluate our method, we perform various experiments using both CNNs and LSTMs. We test our method with CNNs for the task of image classification and with LSTMs for the task of machine translation. In the rest of this section, we discuss the baselines against which we compare our algorithm and the datasets we use.
Datasets: We perform our experiments using the following datasets. ImageNet: a dataset of 1.3 million labelled images spanning 1000 categories; we only use the subset of 600 thousand images which include localization information. Multi-30K: a dataset of 30 thousand Flickr images which are captioned in both English and German; we use this dataset for the multi-modal machine translation experiments. In Multi-30K, whereas the English captions are directly annotated for images, the German captions are only translations of the English captions. Hence, during the ground truth translation, the images were privileged information never seen by the translators. This property makes this dataset a perfect benchmark for LUPI.
Baselines: We compare our method against the following baselines. No-x*: a baseline model not using any privileged information. Gaussian Dropout: a multiplicative Gaussian dropout with a fixed variance. Multi-Task: multi-task learning as a tool to utilize privileged information; we evaluate both regression to the bounding box coordinates (Multi-Task w/ B.Box) and direct estimation of the RGB mask (Multi-Task w/ Mask). We use this self-baseline only for CNNs, since there are many published multi-task methods for machine translation with multi-modal information, and we compare with them all. In addition to these self-baselines, we also compare with the following published work. GoCNN: a method for CNNs with segmentation as privileged information which proposes to mask convolutional weights with segmentation masks. Information Dropout: a regularization method that injects multiplicative noise into the activations of a deep neural network (but as a function of the input x, not x*). MIML-FCN: a CNN-based LUPI framework designed for multi-instance problems; our problem is not multiple-instance, but we still make the comparison for the sake of completeness. Modality Hallucination: a distillation-based LUPI method designed for multi-modal CNNs. Imagination: a distillation-based LUPI method designed for multi-modal machine translation (see the appendix for implementation details).
4.1 Effectiveness of Our Method
We compare our method with the No-x* baseline for image classification using the ImageNet dataset. We perform experiments by varying the number of training examples logarithmically. This is key, since the main motivation behind our LUPI method is learning with less data rather than achieving higher accuracy. We report several results in Table 1 and visualize additional data points in Figure 5.
Table 1 reports single-crop top-1 and top-5 accuracy as a function of the number of training images.
Our method is quite effective for small and medium training set sizes; however, it has no positive impact at the largest scales. Even more importantly, the smaller the training set, the larger the improvement: for the smallest training sets it yields a substantial single-crop top-1 accuracy improvement, whereas at full scale it matches the baseline performance. This simply suggests that our algorithm is particularly effective for low- and mid-scale datasets. This result is quite intuitive, since with increasing dataset size all algorithms can effectively learn and reach the optimal accuracy which is possible under the model class. Hence, the role of an “intelligent teacher” is to provide privileged information to learn with less data. In other words, LUPI is not a way to gain extra accuracy regardless of the dataset; rather, it is a way to significantly increase data efficiency. We do not perform a similar experiment for machine translation, since the available dataset is mid-scale and our LUPI method demonstrates asymptotic accuracy increases at the full dataset size.
4.2 Data Efficiency of Our Method and Baselines
In order to compare the data efficiency gain of our method against baselines, we perform image classification and multi-modal machine translation experiments. We use a subset of ImageNet images, since our main goal is to identify insights regarding data efficiency gains, and using a smaller training set makes this analysis possible. We summarize the image classification experiments in Table 4 and the multi-modal machine translation experiments in Table 3. Our method outperforms all baselines for both tasks, for image classification by a significant margin.
|Method|top-1|top-5|top-1|top-5|
|Modal. Hallucination|37.66|63.15|40.45|65.95|
|Info. Dropout|38.09|63.52|41.84|67.47|
|Gaussian Dropout|38.80|63.64|41.0|65.3|
|Multi-Task w/ Bbox|39.96|64.79|42.4|66.6|
|Multi-Task w/ Mask|40.48|65.62|43.18|67.68|
Table 3 reports BLEU and METEOR scores for the No-x* baseline, Toyama et al., Hitschler et al., and two models of Calixto et al., alongside our method.
Image Classification with Privileged Localization
One interesting result is that our network is clearly learning much more than to predict a random constant for its output σ(x*), the covariance used for re-parameterization; in fact, our network outperforms by a wide margin a network that produces a σ whose entries are drawn from pure Gaussian noise. We analyze our method for CNNs both theoretically and qualitatively in Section 5 and conclude that our method learns to control the uncertainty of the model and results in an order of magnitude higher data efficiency, explaining this large margin.
Furthermore, GoCNN, an architecture specifically designed for the problem of learning with segmentation data using various architectural decisions, results in a significant accuracy improvement competitive with our method in the small dataset regime. However, GoCNN’s performance relative to other baselines begins to degrade at a dataset size of 200K images, leading to a top-1 accuracy decrease in comparison with both Bernoulli dropout and our heteroscedastic dropout method. This is an intuitive result, because GoCNN’s rigid architectural decisions inject significant bias into the model.
Because Information Dropout relies upon sampling from a log-normal distribution with varying variance, it is heteroscedastic. However, compounded Information Dropout layers, which exponentiate samples from a normal distribution, lead to unbounded activations; thus, a suitable squashing function such as the sigmoid must be employed to bound the activations. We find this can actually decrease accuracy when compared with a ReLU nonlinearity.
Multi-modal Machine Translation: Our method results in a significant accuracy improvement as measured by both BLEU and METEOR scores. One interesting observation is that our method outperforms various multi-modal methods which use image information in both training and test. This counterintuitive result is due to the way in which the dataset was collected: the Multi-30K dataset was collected by simply translating the English captions of images into German. A LUPI model was already shown to perform better than multi-modal translation models which can use both images and sentences at test time [3, 4, 17, 38]. This surprising result is largely due to the fact that the translators did not see the images while providing ground truth translations. More importantly, the effectiveness of visual information in machine translation in a privileged setting is also intuitive following earlier results: Chrupala et al.
show that when image information is used as privileged information in the learning of word representations, the quality of such representations increases. Hence, a multi-modal paradigm for learning language (e.g., with privileged visual information), and vice versa, is a fruitful direction for both the natural language processing and computer vision communities, and our method performs quite effectively on this task.
In summary, our method outperforms all baselines in both the multi-modal machine translation and image classification experiments, using both CNNs and RNNs. These results suggest that our method is effective and generic.
4.3 Learning under Partial Privileged Information
Although privileged information naturally exists for many problems, it is typically not available for all data points. Thus, it is common to encounter a scenario in which the entire training set is labelled, but only a small portion includes privileged information. In other words, we typically have a dataset which is the union of triplets (x, x*, y) and pairs (x, y). In order to experiment with this setting, we vary the amount of available x*. We present the results in Figure 6 for machine translation and in the appendix for image classification.
The results in Figure 6 suggest that even when only a small portion of the data has privileged information, our method remains effective, resulting in a significant accuracy increase very similar to the one we obtained with 100% privileged information.
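One simple convention for handling a minibatch row with missing x* is to fall back to the deterministic mean multiplier. This is an illustrative sketch under our own assumptions, not necessarily the exact scheme used in the experiments; `V` is a hypothetical variance-network weight matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

def multiplier(x_star, V):
    """Heteroscedastic dropout multiplier for one example.

    When x* is missing (None), fall back to the mean multiplier 1, which
    makes the layer deterministic, exactly as at test time. A fixed-variance
    Gaussian dropout would be another reasonable fallback."""
    if x_star is None:
        return 1.0
    sig = np.exp(x_star @ V)                      # std-devs computed from x*
    return 1.0 + sig * rng.standard_normal(sig.shape)  # z ~ N(1, diag(sig^2))
```

Training then mixes stochastic updates for the examples that carry privileged information with deterministic updates for those that do not.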
5 Analysis of the Algorithm
Our empirical analysis suggests a strong data-efficiency increase when privileged information is incorporated using our method. It is interesting to quantify this increase in terms of the theoretical learning rate. For the case of SVMs, Vapnik et al. showed that utilizing privileged information can result in a generalization error bound with rate O(1/n) instead of O(1/√n), where n is the dataset size. Our experimental results suggest a similar story for CNNs, but the theoretical justification cannot be extended from their work, since their analysis is specific to SVMs. In this section, we endeavor to answer this question for our algorithm. We show that our method is capable of converting an O(1/√N) error rate (derived in Proposition 1) into O(1/N) in an oracle setting for CNNs. We rigorously prove that it is possible to reach an O(1/N) rate using our structural assumptions; however, we do not provide any argument for the optimization landscape. In other words, our results are only valid with an oracle optimizer which can find the solution satisfying our assumptions. The study of the loss function and the optimization remains an open problem; however, we provide strong empirical evidence that using SGD with information bottleneck regularization enables faster learning.
We start by presenting a bound over the generalization error of CNNs with no privileged information. This result directly follows from prior work, and we include it here for the sake of completeness. Distance-based loss functions of CNNs are Lipschitz continuous when the non-linearity is the rectified linear unit, the pooling operation is max-pooling, and the softmax function is used to convert activations into logits. Moreover, any learning algorithm with a Lipschitz loss function admits the following result:
Proposition ([43, Example 4])
Given N i.i.d. samples drawn from the data distribution, if a loss function is an L-Lipschitz continuous function of x for all y, is bounded, and has a finite covering number, then with probability at least 1 − δ, the generalization gap is bounded at an O(1/√N) rate.
This proposition simply details the baseline O(1/√N) error rate under no privileged information. In order to intuitively explain how our algorithm can accelerate this learning to O(1/N), consider the following oracle algorithm. Using the privileged information, one can estimate the uncertainty (variance) of the neural network and use the inverse of this estimate as the variance of the heteroscedastic dropout. Since the heteroscedastic dropout is multiplicative, this results in unit variance regardless of the input. In a similar fashion, this oracle algorithm can bound the variance with an arbitrary constant. Following this oracle algorithm, we show that when the variance is properly controlled, our method can reach an O(1/N) rate. Consider the gap between the population distribution of the number of images per class and its empirical counterpart. This value is purely a property of the way in which a dataset was collected and must be treated independently of the learning; hence, we do not study the rate at which it vanishes. We present the following proposition and defer its proof to the appendix:
Given N i.i.d. samples drawn from the data distribution and a loss function defined over a CNN f, assume that any path between input and output has bounded maximum weight, that the total number of paths between input and output is bounded, and that for all training points the output variance is bounded by a constant. Then, with probability at least 1 − δ, the generalization gap is bounded at an O(1/N) rate.
This proposition means that learning with O(1/N) sample efficiency is indeed possible as long as one can bound the variance of the output with an arbitrary constant. Hence, full control of the output variance makes learning with higher sample efficiency possible. A question remains: is it possible to learn this oracle solution by using SGD with information bottleneck regularization? Unfortunately, we have no theoretical answer to this question and leave it as an open problem. However, we study the problem empirically and show that there is strong empirical evidence suggesting that the answer is affirmative.
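The oracle argument can be checked numerically: if the per-example output standard deviations were known, a multiplicative factor scaled by their inverse (the role the heteroscedastic dropout multiplier plays in the oracle) equalizes all outputs to unit variance. The standard deviations below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-example output standard deviations an oracle could estimate from x*.
s = np.array([0.5, 2.0, 10.0])

# Noisy outputs whose variance differs wildly across the three examples.
outputs = s * rng.standard_normal((100_000, 3))

# Multiplying by the inverse oracle estimate bounds every output's variance
# by the same constant (here, 1).
normalized = outputs / s
```

This is the sense in which "full control of the output variance" is achievable: the multiplicative structure lets an oracle cancel heteroscedasticity exactly.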
A realistic estimate of variance is typically not possible without a strong parametric assumption; however, we can use the simple heuristic that samples from the validation set that our algorithm mis-classifies should have higher variance than the samples which are correctly classified. We plot the average energy of the computed dropout variances per fully connected neuron for mis-classified and correctly classified examples in Figure 7. Interestingly, our method consistently assigns larger multiplier (dropout) values to correctly classified samples and significantly smaller values to mis-classified samples. This strongly supports our hypothesis, since when the low heteroscedastic dropout is multiplied with the high-variance mis-classified examples, their final variance will be low, possibly bounded by the desired constant.
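The diagnostic above can be computed in a few lines; `sigma_sq` holds hypothetical per-example, per-neuron dropout variances and `correct` flags whether each example was classified correctly.

```python
import numpy as np

def variance_energy(sigma_sq, correct):
    """Average energy of the computed dropout variances, split by whether
    each example was classified correctly (the quantity plotted in Figure 7)."""
    sigma_sq = np.asarray(sigma_sq, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return sigma_sq[correct].mean(), sigma_sq[~correct].mean()
```

Comparing the two group averages over a validation set is all that is needed to reproduce the qualitative trend described above.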
We described a learning under privileged information framework for CNNs and RNNs. We proposed a heteroscedastic dropout formulation by making the variance of the dropout a function of privileged information.
Our experiments on image classification and machine translation suggest that our method significantly increases the sample efficiency of both CNNs and LSTMs. We further provide an upper bound over the generalization error of CNNs suggesting sample-efficient learning (with rate O(1/N)) in the oracle case when privileged information is available. We make our learned models as well as the source code available at http://svl.stanford.edu/projects/heteroscedastic-dropout and https://github.com/johnwlambert/dlupi-heteroscedastic-dropout.
We thank Alessandro Achille for his help on comparison with information dropout. We acknowledge the support of Toyota (1191689-1-UDAWF), MURI (1186514-1-TBCJE); Panasonic (1192707-1-GWMSX).
-  A. Achille and S. Soatto. Information dropout: Learning optimal representations through noisy computation. arXiv preprint arXiv:1611.01353, 2016.
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
-  I. Calixto, Q. Liu, and N. Campbell. Doubly-attentive decoder for multi-modal neural machine translation. arXiv preprint arXiv:1702.01287, 2017.
-  I. Calixto, Q. Liu, and N. Campbell. Incorporating global visual features into attention-based neural machine translation. arXiv preprint arXiv:1701.06521, 2017.
-  G. Chrupała, A. Kádár, and A. Alishahi. Learning language through pictures. arXiv preprint arXiv:1506.03694, 2015.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
-  M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014.
-  D. Elliott, S. Frank, K. Sima’an, and L. Specia. Multi30k: Multilingual English-German image descriptions. pages 70–74, 2016.
-  D. Elliott and Á. Kádár. Imagination improves multimodal translation. CoRR, abs/1705.04350, 2017.
-  J. Feyereisl and U. Aickelin. Privileged information for data clustering. Information Sciences, 194:4–23, 2012.
-  J. Feyereisl, S. Kwak, J. Son, and B. Han. Object localization based on structural svm using privileged information. In Advances in Neural Information Processing Systems (NIPS), pages 208–216. 2014.
-  S. Fouad, P. Tino, S. Raychaudhury, and P. Schneider. Incorporating privileged information through metric learning. IEEE Transactions on Neural Networks and Learning Systems, 24(7):1086–1098, 2013.
-  Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059, 2016.
-  Y. Gal and Z. Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019–1027, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, June 2016.
-  D. Hernández-lobato, V. Sharmanska, K. Kersting, C. H. Lampert, and N. Quadrianto. Mind the nuisance: Gaussian process classification using privileged noise. In Advances in Neural Information Processing Systems 27, pages 837–845. 2014.
-  J. Hitschler, S. Schamoni, and S. Riezler. Multimodal pivots for image caption translation. arXiv preprint arXiv:1601.03916, 2016.
-  J. Hoffman, S. Gupta, and T. Darrell. Learning with side information through modality hallucination. In Computer Vision and Pattern Recognition, June 2016.
-  A. Kádár, G. Chrupała, and A. Alishahi. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 2017.
-  K. Kawaguchi, L. P. Kaelbling, and Y. Bengio. Generalization in deep learning. arXiv preprint arXiv:1710.05468, 2017.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International Conference on Learning Representations, 2015.
-  D. Kingma and M. Welling. Auto-encoding variational bayes. In The International Conference on Learning Representations, 2014.
-  D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575–2583, 2015.
-  G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL, 2017.
-  P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, et al. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177–180. Association for Computational Linguistics, 2007.
-  D. Lopez-Paz, L. Bottou, B. Schölkopf, and V. Vapnik. Unifying distillation and privileged information. In International Conference on Learning Representations, 2016.
-  S. Motiian, M. Piccirilli, D. A. Adjeroh, and G. Doretto. Information bottleneck learning using privileged information for visual recognition. In Conference on Computer Vision and Pattern Recognition, pages 1496–1505, 2016.
-  H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In IEEE International Conference on Computer Vision, pages 1520–1528, Dec 2015.
-  K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
-  V. Sharmanska and N. Quadrianto. In the era of deep convolutional features: Are attributes still useful privileged data? In Visual Attributes, pages 31–48. Springer, 2017.
-  V. Sharmanska, N. Quadrianto, and C. H. Lampert. Learning to rank using privileged information. In Proceedings of the International Conference on Computer Vision, pages 825–832, 2013.
-  V. Sharmanska, N. Quadrianto, and C. H. Lampert. Learning to transfer privileged information. arXiv preprint arXiv:1410.0389, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
-  N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
-  N. Tishby and N. Zaslavsky. Deep learning and the information bottleneck principle. In IEEE Information Theory Workshop (ITW), pages 1–5, April 2015.
-  J. Toyama, M. Misono, M. Suzuki, K. Nakayama, and Y. Matsuo. Neural machine translation with latent semantic of image and text. arXiv preprint arXiv:1611.08459, 2016.
-  J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12(4):389–434, 2012.
-  A. van der Vaart and J. Wellner. Weak convergence and empirical processes with applications to statistics. Journal of the Royal Statistical Society, Series A, 160(3):596–608, 1997.
-  V. Vapnik and R. Izmailov. Learning using privileged information: Similarity control and knowledge transfer. Journal of Machine Learning Research, 16:2023–2049, 2015.
-  V. Vapnik and A. Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5–6):544 – 557, 2009.
-  H. Xu and S. Mannor. Robustness and generalization. Machine learning, 86(3):391–423, 2012.
-  H. Yang, J. Tianyi Zhou, J. Cai, and Y. Soon Ong. Miml-fcn+: Multi-instance multi-label learning via fully convolutional networks with privileged information. In Computer Vision and Pattern Recognition, July 2017.
-  Y. Chen, X. Jin, J. Feng, and S. Yan. Training group orthogonal neural networks with privileged information. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 1532–1538, 2017.
Appendix A Proof of Proposition 1
The proof of Proposition 1 is available in  as Example 4. However, we include a simpler proof here for the sake of completeness.
We will start with
In , we use the fact that the space has an -cover, and denote the cover as such that each has diameter at most . We further define an auxiliary variable and use the triangle inequality. In , we use to represent . Finally, in we use the fact that each ball has diameter at most and that the loss function is -Lipschitz.
We can bound with a maximum loss and use the Bretagnolle-Huber-Carol inequality (cf. Proposition A6.6 of ) in order to bound .
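For completeness, the Bretagnolle-Huber-Carol inequality (Proposition A6.6 of ) can be stated in the following common form: if $(N_1,\dots,N_k)$ is multinomially distributed with $n$ trials and cell probabilities $(p_1,\dots,p_k)$, then

```latex
\mathbb{P}\left(\sum_{i=1}^{k}\left|\frac{N_i}{n} - p_i\right| \geq \lambda\right)
\;\leq\; 2^{k}\exp\!\left(-\frac{n\lambda^{2}}{2}\right).
```

Applying this with the cells of the -cover yields the concentration of the empirical cell frequencies used above.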
Combining all, we observe that with probability at least ,
Appendix B Proof of Proposition 2
The proof of Proposition 2 closely follows the proofs of Proposition 4 and Lemma 5 in . Our main technical tool is controlling the variance in Bernstein-type bounds to obtain an upper bound with rate . Consider the output of a CNN, given an image, as , with a slight abuse of notation (we previously used to represent the representation layer; however, for consistency with , we denote as the output here). Every activation in the representation can be written as a sum over the paths between the input layer and the representation as , where is the input neuron connected to the path and is the weight of the path. One useful property is that this weight is simply the product of all weights along the path multiplied by a binary value. When only max-pooling and ReLU non-linearities are used, that binary value is 1 if all activations along the path are on and 0 if at least one of them is off. This is because max-pooling and ReLU either pass the input through (multiply by 1) or suppress it (multiply by 0). We call this binary variable . Hence, each entry is
We can denote as a vector with dimension equal to the number of paths. Next, we explicitly compute the generalization bound over the loss as
We bound each term separately in the following subsections. We first prove a useful lemma that will be used in these proofs.
Theorem 1.4 by Tropp  states that for all ,
By using the assumption that ,
and substituting , the result follows.
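For reference, the matrix Bernstein bound of Tropp (Theorem 1.4 in ) invoked above can be stated as follows: for independent, self-adjoint, zero-mean $d \times d$ random matrices $X_k$ with $\lambda_{\max}(X_k) \leq R$ almost surely and $\sigma^2 := \big\|\sum_k \mathbb{E}[X_k^2]\big\|$,

```latex
\mathbb{P}\left\{\lambda_{\max}\Big(\sum_k X_k\Big) \geq t\right\}
\;\leq\; d \cdot \exp\!\left(\frac{-t^{2}/2}{\sigma^{2} + Rt/3}\right).
```

The dimensional factor $d$ and the variance proxy $\sigma^2$ are what the substitution above trades off to obtain the stated rate.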
This follows directly from the matrix form of the Bernstein inequality, which is stated as Lemma 1. By using and as defined in the main text, Lemma 1 shows that with probability at least ,
By using the definition of the matrix norm and the Cauchy-Schwarz inequality, one can show that
We need to bound . In order to bound this term, we first use the fact that is a one-hot vector and the fact that if , and otherwise. Hence,
where . Using this fact, we can state;
In (a), we noted the training examples from class as . We can use the Bernstein inequality, which states that
where . Using the assumption that , we can state that with probability at least ,
Here, is a term which bounds the variance of the class-label distribution and is defined in the next section.
This term is independent of both the learning algorithm and the learned weights, and it can be made to vanish if the number of samples per class directly follows the population densities. Hence, we do not include a specific rate for this quantity; we simply denote it by and assume that it goes to at a linear or better rate. See the main text for a detailed explanation of why we choose not to include in the analysis.
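For reference, the classical scalar Bernstein inequality invoked above can be stated as follows: for independent zero-mean random variables $X_1,\dots,X_n$ with $|X_i| \le M$ almost surely,

```latex
\mathbb{P}\left(\Big|\sum_{i=1}^{n} X_i\Big| \geq t\right)
\;\leq\; 2\exp\!\left(\frac{-t^{2}/2}{\sum_{i=1}^{n}\mathbb{E}[X_i^{2}] + Mt/3}\right).
```

Controlling the variance term in the denominator is what yields the improved rate compared to a Hoeffding-type bound, which ignores the variance entirely.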
After bounding each term in (2), we can now state the proof for Proposition 2.
By using the decomposition in (2) and the bounds in (4,7), we can state that
Appendix C Derivation of (7, Main Paper)
In equation (7) of the main paper, we stated that
In this section, we formally derive this claim using the log-uniform assumption. In order to compute , we need to choose a prior distribution for . As discussed in depth in , the use of ReLU activations empirically suggests that a good choice for this prior is the log-uniform distribution; hence, we adopt the log-uniform prior. We first use the definition of the KL-divergence as
Since we know the distribution of as , and using the assumption that the covariance matrix is diagonal,
If we use the log-uniform prior, the first term in the KL-divergence can be computed as;
where we use the fact that the logarithm of the pdf of a log-uniform distribution is , with appropriate constants. Furthermore, the norm of does not affect the output, since it is followed by a soft-max operation, which is invariant up to scalar multiplication. Hence, we can safely consider its norm to be a constant . Using both terms,
with appropriate constants and . We do not include in the optimization, since an additional constant does not change the result of the optimization, and we absorb into the trade-off parameter .
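As a sanity check, the computation above can be reproduced directly from standard log-normal identities (a sketch; all constants are absorbed into $C$, $C_1$). For $z$ with $\log z \sim \mathcal{N}(\mu, \sigma^2)$ and improper prior $p(z) \propto 1/z$, so that $\log p(z) = -\log z + \text{const}$,

```latex
\begin{aligned}
\mathbb{E}_q[-\log p(z)] &= \mathbb{E}_q[\log z] + C_1 = \mu + C_1,\\
H(q) &= \mu + \tfrac{1}{2}\log(2\pi e \sigma^2),\\
\mathrm{KL}(q \,\|\, p) &= -H(q) + \mathbb{E}_q[-\log p(z)]
  = -\tfrac{1}{2}\log \sigma^2 + C.
\end{aligned}
```

Note that the mean $\mu$ cancels, so the KL term depends only on the noise variance, consistent with the expression used in the optimization.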
Appendix D Additional Results
In this section, we provide two experimental results missing from the main paper: first, an analysis of accuracy versus the amount of privileged information provided for image classification, and second, a comparison of our method with the baselines on ImageNet image classification with 200K images. We also provide further qualitative analysis of the relationship between variance control and our method.
Accuracy vs. Partial Privileged Information for Image Classification:
In the main text, we studied the case where only partial privileged information is available and showed that even a small percentage is enough for the multi-modal machine translation experiments. Due to limited space, we provide the same experiment for image classification here in Figure 8 and show that our algorithm is effective as long as even a small percentage of the dataset has privileged information.
ImageNet with 200K Images
In the main paper, we performed the ImageNet image classification experiment with only 75K training images (a mid-sized dataset) and showed that our method learns significantly faster (reaching higher accuracy) than all baselines. One might ask: would the result still hold with up to 200K images (a larger dataset)? In order to answer this question, we carried out further experiments; the results in Table 4 suggest that our method matches the performance of the best baselines in the 200K case. Hence, we can conclude that our method, i.e. marginalization, does no harm in the large-scale data regime.
| Modal. Hallucination | 52.28 | 76.33 | 55.66 | 78.79 |
| Info. Dropout | 54.60 | 77.89 | 58.47 | 81.25 |
| Gaussian Dropout | 55.48 | 78.87 | 58.11 | 80.58 |
| Multi-Task w/ Mask | 56.22 | 79.56 | 59.39 | |
| Multi-Task w/ Bbox | | | | 81.48 |
Appendix E Additional Qualitative Analysis of the Method
In Figure 9, we visualize the computed variance of the heteroscedastic dropout.
Figure 9 supports our hypothesis that our algorithm controls the variance, since misclassified examples are expected to have high variance/uncertainty and need to be multiplied with a low value to be controlled. The visualization is fairly uniform, especially for misclassified examples, but we believe it contains interesting information that could be further utilized in applications like confidence estimation; this is an interesting direction for future work.
Appendix F Additional Implementation Details
In this section, we give all implementation details of our algorithm, as well as the implementation details of the baselines used in our experimental study. In order to ensure full reproducibility of all experiments, we share our source code at https://github.com/johnwlambert/dlupi-heteroscedastic-dropout. We found that in all experiments that converged, for training set sizes of up to images, an adaptive learning rate decay schedule significantly outperforms the traditional 30-epoch fixed learning rate decay schedule. We consistently observe performance gains of when the learning rate schedule is set adaptively according to whether or not performance on the held-out validation set has plateaued. Unless otherwise noted, we use SGD with momentum set to 0.9 for all models, and a learning rate schedule that starts at , as  suggests.
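The adaptive schedule can be sketched as follows. This is a minimal pure-Python stand-in for PyTorch's `ReduceLROnPlateau`; the decay factor and patience values here are illustrative choices, not the exact ones used in our experiments:

```python
class PlateauDecay:
    """Halve the learning rate when validation accuracy stops improving.

    A simplified stand-in for torch.optim.lr_scheduler.ReduceLROnPlateau;
    factor and patience below are illustrative, not our tuned values.
    """

    def __init__(self, lr=0.01, factor=0.5, patience=3):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("-inf")
        self.stale_epochs = 0

    def step(self, val_acc):
        if val_acc > self.best:
            self.best = val_acc
            self.stale_epochs = 0
        else:
            self.stale_epochs += 1
            if self.stale_epochs >= self.patience:
                self.lr *= self.factor  # decay once, then reset the counter
                self.stale_epochs = 0
        return self.lr


sched = PlateauDecay(lr=0.01)
for acc in [0.50, 0.55, 0.56, 0.56, 0.56, 0.56]:
    lr = sched.step(acc)
print(lr)  # -> 0.005 (decayed once after three stale epochs)
```

In contrast, the traditional fixed schedule simply multiplies the learning rate every 30 epochs regardless of validation behavior.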
Heteroscedastic Dropout Implementation:
We set in all experiments, although we found this not to be a sensitive hyperparameter. We found the training to be prone to converging to poor local optima and restarted training if the distribution over class logits was still uniform after 30 epochs. We use a weight decay of in all experiments, ADAM, and a learning rate of , as described in Section 3.2 of the paper. We cropped images to a standard size of before feeding them into the network.
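The core operation can be sketched as follows: multiplicative Gaussian noise with mean one, whose standard deviation is a learned function of the privileged information. The linear map `W_sigma` below is a hypothetical stand-in for the paper's tower over the privileged information, and the shapes are toy-sized:

```python
import numpy as np

rng = np.random.default_rng(7)

def heteroscedastic_dropout(h, x_star, W_sigma, train=True):
    """Multiply activations h by noise z with mean 1 and std sigma(x_star).

    sigma is produced from the privileged information x_star by a
    (hypothetical) linear map followed by a softplus to keep it positive.
    At test time no x_star is available, and since the noise has mean one,
    the layer reduces to the identity.
    """
    if not train:
        return h
    sigma = np.log1p(np.exp(x_star @ W_sigma))  # softplus, same shape as h
    eps = rng.standard_normal(h.shape)
    z = 1.0 + sigma * eps                       # reparameterization trick
    return h * z

h = rng.standard_normal((2, 5))        # activations of an fc layer
x_star = rng.standard_normal((2, 3))   # privileged-information features
W_sigma = 0.01 * rng.standard_normal((3, 5))
out_train = heteroscedastic_dropout(h, x_star, W_sigma, train=True)
out_test = heteroscedastic_dropout(h, x_star, W_sigma, train=False)
```

This illustrates why the student needs no privileged information at test time: the noise is marginalized out and the layer becomes the identity.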
We scale the batch size with respect to the size of the training set. For example, for the K model, we use a batch size of 64. For the K model, we use a batch size of 128. For the K model, we utilize curriculum learning and a batch size of 256. We first train the fc layers in the tower for 8 epochs with ADAM, a batch size of 128, and a learning rate of , and then fix those fc weights and fine-tune the fc layers of the tower with the ADAM optimizer, a learning rate of , and a batch size of 256.
A baseline model without access to any privileged information. We use a batch size of 256.
Gaussian Dropout :
We draw noise from because the authors of  state that should be set to . We did not include a regularization loss on the covariance matrices of the random noise. We use SGD with momentum set to 0.9, a learning rate of , and a batch size of 256.
Multi-Task with Bbox:
We add one extra head to the VGG network that, like the classification head, accepts pool5 activations. This regression head produces the center coordinates () and the width and height of a bounding box, all normalized to . As our loss function, we use a weighted sum of the cross-entropy loss and times the bounding-box regression loss. We use a batch size of 200 instead of 256 because of GPU RAM constraints of GB.
Multi-Task with Mask:
In order to predict pixel-wise probabilities between a background and a foreground (object) class, we require an auto-encoder network that can preserve spatial information. We experiment with two architectures: DCGAN  and DeconvNet . We chose the DeconvNet architecture for its superior performance, which we attribute to its far greater representational power: DeconvNet utilizes 15 convolutional layers instead of the much shallower 5-convolution architecture of the DCGAN generator/discriminator (VGG, for comparison, has 13 conv. layers). As our loss function, we use a weighted sum of the cross-entropy loss over classes and times the cross-entropy loss over masks. We use a batch size of 128 instead of 256 because of GPU RAM constraints of GB.
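The weighted multi-task loss can be sketched as below. The weight `lam` is an illustrative placeholder for the paper's mask-loss multiplier, and the logits are random toy tensors rather than real network outputs:

```python
import numpy as np

def softmax_xent(logits, labels):
    """Mean cross-entropy for integer labels; logits has shape (N, C)."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def multitask_loss(cls_logits, cls_labels, mask_logits, mask_labels, lam=1.0):
    """Weighted sum of the classification loss and a pixel-wise
    background/foreground cross-entropy over the predicted masks."""
    n, c, hgt, wid = mask_logits.shape            # c = 2: bg / fg
    pix_logits = mask_logits.transpose(0, 2, 3, 1).reshape(-1, c)
    pix_labels = mask_labels.reshape(-1)
    return (softmax_xent(cls_logits, cls_labels)
            + lam * softmax_xent(pix_logits, pix_labels))

rng = np.random.default_rng(0)
loss = multitask_loss(
    rng.standard_normal((4, 10)), rng.integers(0, 10, 4),     # class head
    rng.standard_normal((4, 2, 8, 8)), rng.integers(0, 2, (4, 8, 8)),  # mask head
    lam=0.5,
)
```

In the actual implementation both losses are computed by the framework's built-in cross-entropy; the sketch only makes the weighting between the two tasks explicit.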
We found that the models could not converge when the suppression loss (computed as the Frobenius norm of the masked activations) is multiplied only by 1/32, as the authors use in their work. We found that the model could learn if the suppression loss was multiplied by 1/320 or 1/3200 with ADAM, a learning rate of , and a batch size of 256. We use a black-and-white (BW) mask for .
Information Dropout :
As we note in the main paper, we found that a VGG-16 network with two Information Dropout layers, each succeeding one of the first two fully connected layers, could only converge with a sigmoid nonlinearity in the fc layers. We keep the ReLU nonlinearity in the convolutional layers. We train with a batch size of 128, set and , sample from a log-normal distribution (by exponentiating samples from a normal distribution), and employ an improper log-uniform distribution as our prior, as the authors used for their CIFAR experiments.
We compare the use of a VGG-16 or ResNet-50 architecture, with a batch size of 256 and , which we tuned manually by cross-validation. For the ResNet-50 architecture, we start the learning rate schedule at . We share the convolutional layer parameters across both towers, and thus find far superior performance when is provided as an RGB mask, rather than a black-and-white (BW) mask, because the privileged information is then more closely aligned with the input .
Modality Hallucination :
Due to the memory requirements of 3 VGG towers with independent parameters, we chose to share the feature representation in the convolutional layers and to incorporate the hallucination loss between the fc1 activations of the depth and hallucination networks. We use a batch size of 128. For the same reasons as stated in the previous paragraph, RGB masks are a superior representation for than BW masks for this model.