Deep Learning under Privileged Information Using Heteroscedastic Dropout

05/29/2018 ∙ by John Lambert, et al.

Unlike machines, humans learn through rapid, abstract model-building. The role of a teacher is not simply to hammer home right or wrong answers, but rather to provide intuitive comments, comparisons, and explanations to a pupil. This is what the Learning Under Privileged Information (LUPI) paradigm endeavors to model by utilizing extra knowledge only available during training. We propose a new LUPI algorithm specifically designed for Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We propose to use a heteroscedastic dropout (i.e. dropout with a varying variance) and make the variance of the dropout a function of privileged information. Intuitively, this corresponds to using the privileged information to control the uncertainty of the model output. We perform experiments using CNNs and RNNs for the tasks of image classification and machine translation. Our method significantly increases the sample efficiency during learning, resulting in higher accuracy by a large margin when the number of training examples is limited. We also theoretically justify the gains in sample efficiency by providing a generalization error bound decreasing with O(1/n), where n is the number of training examples, in an oracle case.







1 Introduction

“Better than a thousand days of diligent study is one day with a great teacher.”

— Japanese Proverb

It is a common belief that human students require far fewer training examples than any learning machine [42]. No doubt this has to do with the fact that effective teachers provide much more than the correct answer to their pupils; they provide an explanation in addition to the result.

In a typical machine learning setup, we present tuples (x_i, y_i) to a machine learning model. One way to introduce an “explanation” to a supervised learning system would be to provide some sort of privileged information, which we entitle x*. In practice, one can incorporate the triplets (x_i, x*_i, y_i) into a learning system at training time and continue to make use of only x in the testing stage, without any access to x*. In other words, the “Student” has access to privileged information while interacting with the “Teacher” during training, but in the test stage the “Student” operates without the supervision of the “Teacher”. This paradigm is called Learning Under Privileged Information (LUPI) and was introduced by Vapnik and Vashist [42].

Vapnik and Vashist [42] provide a LUPI algorithm for Support Vector Machines (SVMs). From an algorithmic perspective, the privileged information is utilized to estimate slack values of the SVM constraints. From a theoretical perspective, this algorithm accelerates the rate at which the upper bound on error drops from O(1/√n) to a far steeper curve of O(1/n), where n is the number of training samples.

Figure 1: In the Learning Under Privileged Information (LUPI) paradigm, a teacher provides additional information during training. In this work, we propose to utilize this information in order to control the variance of the Dropout. Since the Dropout’s variance is not constant, we call this a Heteroscedastic Dropout. Our empirical and theoretical analysis suggests that Heteroscedastic Dropout significantly increases the sample efficiency of both CNNs and RNNs, resulting in higher accuracy with much less data.

Privileged information is ubiquitous: it usually exists for almost any machine learning problem. However, we do not see wide adoption of such methods in practice. The major obstacle is the fact that the original LUPI framework proposed in [42] is only valid for SVM-based methods. Indeed, many have shown that the privileged information can be introduced into the loss function under a multi-task or a distillation loss in an algorithm-agnostic way. However, we raise the question: could it and should it be fed in as an input instead of an additional task? If so, how would we go about doing so in an algorithm-agnostic way?

We define a new class of LUPI algorithms by making a structural specification. We consider a hypothesis class such that each hypothesis is a combination of two functions: a deterministic function taking x as an input, and a stochastic function taking x* as an input. When x* is not available in the test stage, the “Student” simply makes a Bayes optimal decision and marginalizes the model over x*. Our structural specification makes this marginalization straightforward while not compromising the expressiveness of the model. This structure is natural in the context of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) thanks to dropout. Dropout is a widely adopted tool to regularize neural networks by multiplying the activations of a neural network at some layer with a random vector. We simply extend dropout to heteroscedastic dropout by making its variance a function of the privileged information. In other words, dropout becomes the stochastic function taking x* as an input, and marginalizing the function corresponds to not utilizing dropout in the test phase. In order to be able to train the heteroscedastic dropout, we use Gaussian dropout instead of Bernoulli because the key technical tool we use is the re-parameterization trick [22], which is only available for some specific distributions, including the Gaussian.
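The idea above can be sketched in a few lines. This is our own illustration, not the authors' released code: the name `variance_net` and its softplus parameterization are assumptions, and the mean-one Gaussian noise follows the multiplicative Gaussian dropout convention.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_net(x_priv, W):
    # Hypothetical network mapping encoded privileged info x* to per-unit
    # standard deviations; softplus keeps them strictly positive.
    return np.log1p(np.exp(x_priv @ W))

def heteroscedastic_dropout(h, x_priv, W, train=True):
    # Training: multiply activations h by z ~ N(1, sigma(x*)^2).
    # Test: x* is unavailable; since E[z] = 1, marginalizing out the
    # noise reduces to the identity, i.e. no dropout is applied.
    if not train:
        return h
    sigma = variance_net(x_priv, W)
    z = rng.normal(loc=1.0, scale=sigma)  # one noise sample per activation
    return h * z

h = np.ones((4, 8))               # activations from the x branch
x_priv = rng.normal(size=(4, 3))  # encoded privileged information x*
W = 0.1 * rng.normal(size=(3, 8))
out_train = heteroscedastic_dropout(h, x_priv, W, train=True)
out_test = heteroscedastic_dropout(h, x_priv, W, train=False)
```

At test time the layer is simply skipped, which is exactly the marginalization described above.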

The rationale behind heteroscedastic dropout follows the close relationship between Bayesian learning and dropout presented by Gal and Ghahramani [13]. Dropout can be considered a tool to approximate the uncertainty of the output of a neural network. In our proposed heteroscedastic dropout, the privileged information is used to estimate this uncertainty so that hard examples and easy examples are treated accordingly during training. Our theoretical study suggests that the accurate computation of a model's uncertainty can accelerate the rate at which a CNN's upper bound on error drops, from the typical rate of O(1/√n) to a faster O(1/n), where n is the number of training examples. In an oracle case, this theoretical upper bound would allow us to learn a model with identical generalization error using on the order of the square root of the original number of samples, and is thus hugely significant. Although the practical gain we observe is nowhere close, it is still very significant.
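To make the rate comparison concrete: equating the two bounds C/√n = C/m gives m = √n, so matching the error of a slow-rate model trained on n samples would, in the oracle case, require only √n samples (the sample sizes below are arbitrary illustrations).

```python
# If the O(1/sqrt(n)) rate needs n samples to reach a given error,
# the O(1/n) rate needs only sqrt(n) samples for the same bound.
def samples_needed_fast(n_slow):
    return n_slow ** 0.5

for n in (10_000, 90_000, 600_000):
    print(f"{n} samples at O(1/sqrt(n)) ~ {samples_needed_fast(n):.0f} at O(1/n)")
```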

We evaluate our method in experiments with both CNNs and RNNs, and show a significant accuracy improvement over two canonical problems, image classification and multi-modal machine translation. As privileged information, we offer a bounding box for image classification and an image of the scene described in a sentence for machine translation. Our method is problem- and modality-agnostic and can be incorporated as long as dropout can be utilized in the original problem and the privileged information can be encoded with an appropriate neural network.

2 Related Work

The key aspects that differentiate our work from the literature are: our method is applicable to any deep learning architecture which can utilize dropout; we do not use a multi-task or distillation loss; we provide theoretical justification suggesting higher sample efficiency; and we perform experiments for both CNNs and RNNs. A thorough review of the related literature is provided below.

Learning Under Privileged Information: Learning under Privileged Information (LUPI) was initially proposed by Vapnik and Vashist [42, 41]. It extends the Support Vector Machine (SVM) by empirically estimating the slack values via privileged information. This method was further applied to various computer vision problems [33, 31, 11] as well as ranking [32], clustering [10] and metric learning [12] problems. These methods are based on max-margin learning and are not applicable to CNNs or RNNs.

One closely related work is [16], extending Gaussian processes to the LUPI paradigm. Hernández-Lobato et al. [16] use privileged information to estimate the variance of the noise in their model. Similarly, we use the privileged information to control the variance of the dropout in CNN and RNN models. However, their method only applies to Gaussian processes, whereas we target neural networks.

Learning CNNs Under Privileged Information: The LUPI paradigm has also been studied recently in the context of CNNs. In contrast to max-margin methods, the literature on learning CNNs under privileged information heavily uses the distillation framework, following the close relationship between distillation and LUPI studied in [26].

Hoffman et al. demonstrated a multi-modal distillation approach to incorporating an additional modality as side information [18]. They start with a pre-trained network and distill the information from the privileged network to a main neural network in an end-to-end fashion.

Multi-task learning is a straightforward approach to incorporate privileged information. However, it does not necessarily satisfy a no-harm guarantee (i.e. privileged information can harm the learning). More importantly, the no-harm guarantee will very likely be violated since estimating the privileged information (i.e. solving the additional task) might be even more challenging than the original problem.

When the privileged information is binary and shares the same spatial structure as the original data, as is the case with segmentation occupancy or bounding box information, it can also directly be incorporated into the training of CNNs by masking the activations. Group Orthogonal neural networks [45] follow this approach. However, this approach is limited to a very specific class of problems.

The loss value of a CNN can be viewed as analogous to the SVM slack variables. Following this analogy, Yang et al. [44] use two networks: one for the original task, and one for estimating the loss using the privileged information. Learning occurs through parameter sharing between them.

Our method is different from the aforementioned works since we use neither a distillation nor a multi-task loss.

Learning Language under Privileged Visual Information: Using images as privileged information to learn language is not new. Chrupala et al. [5] used a multi-task loss while learning word embeddings under privileged visual information. The embeddings are trained for the task of predicting the next word, as well as the representation of the image. Analysis of this model [5, 19] suggests that the embeddings learned by using vision as privileged information are significantly different from language-only ones and correlate better with human judgments. Recently, Elliott et al. [8] collected a dataset of images with English captions as well as German translations of the captions. Using this dataset, a neural machine translation under privileged information model was developed following the multi-task setup [9].

Dropout and its Variants: Dropout is a well studied regularization technique for training deep networks. To the best of our knowledge, we are the first to specifically utilize privileged information to control the variance of a dropout function. Here, we summarize the existing methods which control the variance of the dropout using variational inference or information theoretic tools. Although these tools have never been applied to the LUPI paradigm, we utilize some of the technical tools developed in these works.

We use multiplicative Gaussian dropout instead of Bernoulli dropout. Gaussian dropout is first introduced in [35]. Its variational extension [23] uses local re-parameterization to perform Bayesian learning.

The Information Bottleneck (IB) [36] is a powerful framework which can enforce various structural assumptions. The IB framework has been applied to CNNs and RNNs using stochastic gradient variational Bayes and the re-parameterization trick [22]. Perhaps closest to our method, Achille and Soatto [1] use the information bottleneck principle to learn disentangled representations when a CNN with Gaussian dropout is used. The authors introduce many ideas upon which we build; specifically, our hypothesis class (Eqn. 4) is very similar to the architecture they propose. The main architectural difference is their choice to define the variance as a function of x, whereas we make it a function of x*. We also use similar distributional priors and a similar training procedure. On the other hand, we apply these ideas to a completely different problem with a different theoretical analysis. The information bottleneck has been applied to LUPI for SVMs [27]. However, this method does not apply to neural networks.

Although we use IB [37], Gaussian dropout [35] and the re-parametrization trick [22], we are the first to our knowledge to apply any of these methods to the LUPI problem.

3 Method

Consider a machine learning problem defined over a compact space X and a label space Y. We also consider a loss function l(·, ·) which compares a prediction with a ground truth label. In learning under privileged information, we also have additional information for each data point defined over a space X*, which is only available during training. In other words, we have access to i.i.d. samples from the data distribution as {(x_i, x*_i, y_i)}_{i∈[N]} during training. However, in test we will only be given x. Formally, given a function class f_θ parameterized by θ and data {(x_i, y_i)}_{i∈[N]}, a typical aim is to solve the following optimization problem:

min_θ (1/N) ∑_{i∈[N]} l(f_θ(x_i), y_i)    (1)
We propose to do so by learning a multi-view model using both x and x*, and then using the marginalized model in test when x* is not available. Consider a parametric function class f_θ(·, ·) for the multi-view data (x, x*). The training problem becomes:

min_θ (1/N) ∑_{i∈[N]} l(f_θ(x_i, x*_i), y_i)    (2)
This is equivalent to a classical supervised learning problem defined over the space X × X*, and any existing method like CNNs can be used. In order to solve the inference problem, we consider the following marginalization:

f̄_θ(x) = E_{x*|x}[ f_θ(x, x*) ]    (3)
The major problem in this formulation is the intractability of this expectation, as p(x*|x) is unknown. We propose to restrict the class of functions in a way that the expectation is straightforward to compute. The form we propose is a parametric family such that the privileged information controls the variance, whereas the main information (i.e. information available in both training and test) controls the mean. The specific form we use is:

f_θ(x, x*) = f_{θ¹}(x) ⊙ z(x*),  z(x*) ~ N(1, σ_{θ²}(x*))    (4)

where ⊙ represents the Hadamard product and the stochastic function z(x*) is a normal random variable with a constant mean function and a covariance function parameterized by θ² and x*. We also decompose θ as two disjoint vectors as θ = [θ¹, θ²]. Moreover, in this formulation, the expectation defined in (3) becomes straightforward and can be shown to be f̄_θ(x) = f_{θ¹}(x). We visualize this structural specification in Figure 2.

Figure 2: The structure we propose. Privileged information is only used for estimation of the variance of the heteroscedastic dropout.
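The marginalization claim is easy to verify numerically: in the mean-one Gaussian dropout convention, a Monte-Carlo average of f(x) ⊙ z converges to f(x). The vectors below are arbitrary stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
f_x = np.array([0.5, -1.2, 2.0])    # stand-in for f_{theta1}(x)
sigma = np.array([0.3, 0.7, 1.1])   # stand-in for sigma_{theta2}(x*)

# Draw many dropout masks z ~ N(1, sigma^2) and average f(x) * z.
samples = f_x * rng.normal(1.0, sigma, size=(200_000, 3))
mc_mean = samples.mean(axis=0)
assert np.allclose(mc_mean, f_x, atol=0.02)  # E[f(x) * z] = f(x)
```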

We use neural networks to represent f_{θ¹} and σ_{θ²} and learn their parameters using the information bottleneck. Since the output space is discrete (we address classification), we denote the representation of the data as ẑ = f_{θ¹}(x) ⊙ z(x*) and compute the output as softmax(ẑ). We explain the details of training in the following sections.

3.1 Information Bottleneck for Learning

We need to control the role of x* in LUPI. The information bottleneck has already been used for this purpose [27]; however, we do not need this explicit specification because our structural specification directly controls the role of x*. We use the information bottleneck for a rather different reason, its original one: learning a minimal and sufficient joint representation of (x, x*) which captures all the information about y. This is similar to [1], and we use the same log-Normal assumption. The Lagrangian of the information bottleneck can be written as (see [37] for details):

L = H(y | ẑ) + β I((x, x*); ẑ)

where ẑ is the joint representation of (x, x*) computed as ẑ = f_{θ¹}(x) ⊙ z(x*). These terms can be computed as:

H(y | ẑ) = E[ − log q_θ(y | ẑ) ],   I((x, x*); ẑ) = E_{x, x*}[ KL( p_θ(ẑ | x, x*) ‖ p(ẑ) ) ]
where q_θ represents the distributions computed over our model with parameters θ. In order to compute the KL divergence, we need an assumption about the prior over representations p(ẑ). As suggested by [1], a log-Normal distribution follows the empirical distribution of activations when ReLU is used. Hence, we use the log-Normal prior and compute the KL divergence as (see appendix for the full derivation):

KL( p_θ(ẑ | x, x*) ‖ p(ẑ) ) = − ∑_j log σ_{θ²}(x*)_j + const.
Combining them, the final optimization problem is:

min_θ (1/N) ∑_{i∈[N]} E_z[ − log q_θ(y_i | ẑ_i) ] − β ∑_{i∈[N]} ∑_j log σ_{θ²}(x*_i)_j

This minimization is simply the cross-entropy loss with regularization over the logarithm of the computed variances of the heteroscedastic dropout, and can be performed via the re-parameterization trick in practice when f_{θ¹} and σ_{θ²} are defined as neural networks. We further justify the choice of IB regularization via experimental observation: without it, optimization leads to NaN loss values. We discuss the details of the re-parameterization trick in the following sections.
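A sketch of one training step of this objective follows. It is illustrative only: the softmax head, the value of beta, and the exact form of the log-variance penalty are assumptions, and real training would backpropagate through these quantities with an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

h = rng.normal(size=(4, 10))                 # f_{theta1}(x): pre-dropout class scores
log_sigma = 0.1 * rng.normal(size=(4, 10))   # predicted by the x* (variance) branch
y = np.array([0, 3, 7, 2])                   # ground-truth class indices
beta = 1e-3

# Re-parameterization trick: z = 1 + sigma * eps makes the sample a
# deterministic, differentiable function of log_sigma given eps ~ N(0, 1).
eps = rng.normal(size=h.shape)
z = 1.0 + np.exp(log_sigma) * eps
probs = softmax(h * z)
cross_entropy = -np.log(probs[np.arange(4), y]).mean()
# IB term: a penalty on the log-variances of the heteroscedastic dropout
# (sign/scaling here is an illustrative choice).
loss = cross_entropy + beta * np.abs(2.0 * log_sigma).sum()
```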

3.2 Implementation

In this section, we discuss the practical implementation details of our framework, specifically pertaining to image classification with CNNs and machine translation with RNNs. For the classification setup, we use the image as x, object localization information as x*, and the image label as y. For the translation setup, we use the sentence in the source language as x, an image which is the realization of the sentence as x*, and the sentence in the target language as y.

We make a sequence of architectural decisions in order to design f_{θ¹} and σ_{θ²}. For the classification problem, we design both of them as CNNs and share the convolutional layers. The inputs are x, an image, and x*, an image with a blacked-out background. We use the VGG-Network [34] as an architecture and simply replace each dropout with our form of heteroscedastic dropout. We show the details of the architecture with the re-parameterization trick in Figure 4. We also normalize images with the ImageNet pixel mean and variance. As data augmentation, we horizontally flip images and make random crops.

We use a two-layered LSTM architecture with 500 units as our RNN cell and use heteroscedastic dropout between layers of LSTMs. The main reason behind this choice is the fact that dropout in general has only been shown to be useful for connections between LSTM layers. We use attention [2] and feed the image as a feature vector computed using the VGG[34] architecture pre-trained on ImageNet. We give the details of the LSTM with re-parametrization trick in Figure 3. For the inference, we use beam search over 12 hypotheses. Our LSTM implementation directly follows the baseline implementation provided by OpenNMT [24].

Figure 3: Multi-Modal Machine Translation We show the LSTM architecture we use, which incorporates the re-parameterization trick and heteroscedastic dropout connections. We use dropout only between layers and share the dropout masks across cells following [14]. We do not use any dropout in inference since the image is not available during test.
Figure 4: Image Classification We show the CNN architecture we used in our experiments, along with the re-parameterization trick and heteroscedastic dropout connections. We do not use any dropout in inference since localization bounding boxes are not available during test.

Hyperparameter Settings We use a standard learning-rate schedule across all image classification experiments, tolerating 5 epochs of non-increasing validation set accuracy before decaying the learning rate. For multi-modal machine translation, we halve the learning rate every epoch after the 8th epoch. We use the ADAM [21] optimizer in PyTorch for both image classification and multi-modal machine translation. All CNN weights are initialized according to the method of He et al. [15], and weight decay is used for image classification. For multi-modal machine translation, we do not use any weight decay and initialize weights according to [24].

4 Experimental Results

In order to evaluate our method, we perform various experiments using both CNNs and LSTMs. We test our method with CNNs for the task of image classification and with LSTMs for the task of machine translation. In the rest of this section, we discuss the baselines against which we compare our algorithm and the datasets we use.

Datasets: We perform our experiments using the following datasets; ImageNet [6]: A dataset of 1.3 million labelled images, spanning 1000 categories. We only use the subset of 600 thousand images which include localization information. Multi-30K[8]: A dataset of 30 thousand Flickr images which are captioned in both English and German. We use this dataset for multi-modal machine translation experiments. In Multi-30K, whereas the English captions are directly annotated for images, the German captions are only translations of the English captions. Hence, during the ground truth translation, the images were privileged information never seen by the translators. This property makes this dataset a perfect benchmark for LUPI.

Baselines: We compare our method against the following baselines. No-x*: a baseline model not using any privileged information. Gaussian Dropout [35]: a multiplicative Gaussian dropout with a fixed variance. Multi-Task: we perform multi-task learning as a tool to utilize privileged information; we consider both regression to bounding box coordinates, denoted Multi-Task w/ B.Box, and direct estimation of the RGB mask, denoted Multi-Task w/ Mask. We use this self-baseline only for CNNs since there are many published multi-task methods for machine translation with multi-modal information, and we compare with them all. In addition to these self-baselines, we also compare with the following published work: GoCNN [45]: a method for CNNs with segmentation as privileged information which proposes to mask convolutional weights with segmentation masks. Information Dropout [1]: a regularization method that injects multiplicative noise into the activations of a deep neural network (but as a function of the input x, not x*). MIML-FCN [44]: a CNN-based LUPI framework designed for multi-instance problems; our problem is not multiple-instance, but we still make a comparison for the sake of completeness. Modality Hallucination [18]: a distillation-based LUPI method designed for multi-modal CNNs. Imagination [9]: a distillation-based LUPI method designed for multi-modal machine translation (see appendix for implementation details).

4.1 Effectiveness of Our Method

We compare our method with the No-x* baseline for image classification using the ImageNet dataset. We perform experiments by varying the number of training examples logarithmically. This is key since the main motivation behind our LUPI method is learning with less data rather than achieving higher accuracy. We report several results in Table 1 and visualize additional data points in Figure 5.

Number of Training Images
Model              25K    75K     200K    600K
Single Crop top-1
  No-x*             -     37.85
  Our LUPI          -
Single Crop top-5
  No-x*             -
  Our LUPI          -
Multi-Crop top-1
  No-x*             -     39.99
  Our LUPI          -
Multi-Crop top-5
  No-x*             -     64.49   81.0
  Our LUPI          -
Table 1: Classification Test Accuracy on 1000 ILSVRC Classes. Because the ILSVRC server prohibits large numbers of test submissions, which we required to evaluate at different sizes of sample data, we use a hold-out set of images from ImageNet CLS-LOC as our test set. The authors of [34] report a Multi-Crop, top-5 error rate when training on 1.3M images. Where we report “No-x*,” we describe the results of a classical CNN learning method. All 1-crop evaluations were carried out with a center crop. All 25K models diverged.
Figure 5: Accuracy vs. training set size for ImageNet classification. Each data point denotes a VGG-16 network trained with batch normalization. The accuracies of models trained with x* are depicted in blue; those trained without x* are depicted in green (via an adaptive learning rate decay schedule) and red (via a fixed learning rate decay schedule). Adaptively modifying the learning rate according to performance on a hold-out set yields massive gains in low- and mid-scale data regimes when compared with decaying the learning rate at fixed intervals, e.g. every 30 training epochs.

Our method is quite effective for small and mid-scale training set sizes; however, it has no positive impact in the largest-data cases. Even more importantly, the smaller the training set, the larger the improvement: for the smallest training sets it yields a substantial single crop top-1 accuracy improvement, whereas at the largest sizes it merely matches the baseline performance. This simply suggests that our algorithm is particularly effective for low- and mid-scale datasets. This result is quite intuitive since with increasing dataset size, all algorithms can effectively learn and reach the optimal accuracy attainable under the model class. Hence, the role of an “intelligent teacher” is to provide privileged information to learn with less data. In other words, LUPI is not a way to gain extra accuracy regardless of the dataset; rather, it is a way to significantly increase data efficiency. We do not perform a similar experiment for machine translation since the available dataset is mid-scale and our LUPI method demonstrates accuracy increases even at the full dataset size.

4.2 Data Efficiency of Our Method and Baselines

In order to compare the data efficiency gain of our method against baselines, we perform image classification and multi-modal machine translation experiments. We use 75K ImageNet images since our main goal is to identify insights regarding data sample efficiency gains, and using a smaller training set makes this analysis possible. We summarize the image classification experiments in Table 2 and multi-modal machine translation experiments in Table 3. Our method outperforms all baselines for both tasks, for image classification with a significant margin.

Single Crop Multi-Crop
Model top-1 top-5 top-1 top-5
No-x* [34] 37.85 62.76 39.99 64.49
MIML-FCN [44]/ResNet 35.61 59.66 38.3 62.3
Modal. Hallucination [18] 37.66 63.15 40.45 65.95
Info. Dropout [1] 38.09 63.52 41.84 67.47
Gaussian Dropout [35] 38.80 63.64 41.0 65.3
MIML-FCN [44]/VGG 39.54 64.43 42.0 66.4
Multi-Task w/ Bbox 39.96 64.79 42.4 66.6
Multi-Task w/ Mask [28] 40.48 65.62 43.18 67.68
GoCNN 41.43 66.78 44.5
Table 2: We compare our method's performance with several baselines. We train with 75 images per each of the 1000 ImageNet classes, leaving us with 75K images in total. We outperform each model and are competitive with GoCNN, a model specifically designed for the problem of learning with segmentation data using various architectural decisions. Evaluation is carried out on our held-out set of ImageNet images.
                        en→de           de→en
Model                   BLEU  Meteor    BLEU  Meteor
No-x* (following [24])
Toyama et al. [38]
Hitschler et al. [17]
Calixto et al. [3]
Calixto et al. [4]
Imagination [9]
Table 3: We compare our method for multi-modal machine translation with several baselines. We report BLEU [29, 25] and METEOR [7] metrics. Some baselines only report English (en) → German (de) results and exclude de→en.

Image Classification with Privileged Localization

One interesting result is that our network is clearly learning much more than to predict a random constant for its output σ_{θ²}(x*), the covariance used for re-parameterization; in fact, our network outperforms by a wide margin a network whose dropout variances are drawn from pure Gaussian noise. We analyze our method for CNNs both theoretically and qualitatively in Section 5 and conclude that our method learns to control the uncertainty of the model and results in an order of magnitude higher data efficiency, explaining this large margin.

Furthermore, GoCNN [45], an architecture specifically designed for the problem of learning with segmentation data, achieves a significant accuracy improvement competitive with our method in the small dataset regime. However, GoCNN's performance relative to other baselines begins to degrade at a dataset size of 200K images, leading to a top-1 accuracy decrease in comparison with both Bernoulli dropout and our heteroscedastic dropout method. This is an intuitive result because GoCNN's rigid architectural decisions inject significant bias into the model.

Because Information Dropout relies upon sampling from a log-normal distribution with varying variance, it is heteroscedastic. However, compounded Information Dropout layers which exponentiate samples from a normal distribution lead to unbounded activations; thus, a suitable squashing function like the sigmoid must be employed to bound them. We find this can actually decrease accuracy when compared with a ReLU nonlinearity.

Multi-modal Machine Translation Our method results in a significant accuracy improvement as measured by both BLEU and METEOR scores. One interesting observation is that our method outperforms various multi-modal methods which use image information in both training and test. This counterintuitive result is due to the way in which the dataset was collected. The Multi-30k [8] dataset was collected by simply translating the English captions of images into German. A LUPI model [9] was already shown to perform better than multi-modal translation models which can use both images and sentences at test time [3, 4, 17, 38]. This surprising result is largely due to the fact that the translators did not see the images while providing ground truth translations. More importantly, the effectiveness of visual information in machine translation in a privileged setting is also intuitive following the results of [5]. Chrupala et al. [5] show that when image information is used as privileged information in the learning of word representations, the quality of such representations increases. Hence, a multi-modal paradigm for learning language (e.g. with privileged visual information) and vice versa is a fruitful direction for both the natural language processing and computer vision communities, and our method performs quite effectively on this task.

In summary, our method outperforms all baselines in both multi-modal machine translation and image classification experiments, using both CNNs and RNNs. These results suggest that our method is effective and generic.

4.3 Learning under Partial Privileged Information

Although privileged information naturally exists for many problems, it is typically not available for all data points. Thus, it is common to encounter a scenario in which the entire training set is labelled, but only a small portion includes privileged information. In other words, we typically have a dataset which is the union of triplets (x_i, x*_i, y_i) with privileged information and pairs (x_j, y_j) without it. In order to experiment with this setting, we vary the amount of x* available. We present the result in Figure 6 for machine translation and in the appendix for image classification.

The results in Figure 6 suggest that even when only a small portion of the data has privileged information, our method is effective, resulting in a significant accuracy increase very similar to the one we obtained with 100% of privileged information.

Figure 6: Accuracy vs. the fraction of training examples with privileged information, for multi-modal machine translation. An identical experiment on image classification is shown in the appendix.
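One simple way to implement the partial setting is to fall back to an ordinary fixed-variance Gaussian dropout whenever x* is absent. This is our own illustration of a plausible scheme; the paper's exact handling of missing privileged information may differ, and `default_sigma` is an assumed constant.

```python
import numpy as np

rng = np.random.default_rng(3)

def dropout_std(x_priv, W, default_sigma=0.5):
    # Heteroscedastic std when privileged info x* is present;
    # a fixed std (plain Gaussian dropout) when it is missing.
    if x_priv is None:
        return default_sigma
    return np.log1p(np.exp(x_priv @ W))  # softplus keeps stds positive

W = 0.1 * rng.normal(size=(3, 8))
with_priv = dropout_std(rng.normal(size=(2, 3)), W)  # per-unit stds
without_priv = dropout_std(None, W)                  # scalar fallback
```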

5 Analysis of the Algorithm

Our empirical analysis suggests a strong data-efficiency increase when privileged information is incorporated using our method. It is interesting to quantify this increase in terms of the theoretical learning rate. For the case of SVMs, Vapnik et al. [42] showed that utilizing privileged information can result in a generalization error bound with rate O(1/n) instead of O(1/√n), where n is the dataset size. Our experimental results suggest a similar story for CNNs, but the theoretical justification cannot be extended from [42] since their analysis is specific to SVMs. In this section, we endeavor to answer this question for our algorithm. We show that our method is capable of converting an O(1/√n) error rate (derived in Proposition 1) into O(1/n) in an oracle setting for CNNs. We rigorously prove that it is possible to reach an O(1/n) rate using our structural assumptions; however, we do not provide any argument for the optimization landscape. In other words, our results are only valid with an oracle optimizer which can find the solution satisfying our assumptions. The study of the loss function and the optimization remains an open problem; however, we provide strong empirical evidence that using SGD with information bottleneck regularization enables faster learning.

We start by presenting a bound over the generalization error of CNNs with no privileged information. This result directly follows from [43], and we include it here for the sake of completeness. Loss functions of CNNs based on a distance between the softmax output and the label are Lipschitz continuous functions of the input when the non-linearity is the rectified linear unit, the pooling operation is max-pooling, and the softmax function is used to convert activations into logits. Moreover, any learning algorithm with a Lipschitz loss function admits the following result.


Proposition 1 ([43, Example 4])

Given n i.i.d. samples drawn from the data distribution, if a loss function l(f(x), y) is an L-Lipschitz continuous function of x for all y, bounded by M, and the input space has covering number N(ε), then with probability at least 1 − δ,

|E[l(f(x), y)] − (1/n) Σᵢ l(f(xᵢ), yᵢ)| ≤ L·ε + M·√((2·N(ε)·ln 2 + 2·ln(1/δ)) / n).
This proposition simply details the baseline O(1/√n) error rate under no privileged information. In order to intuitively explain how our algorithm can accelerate this learning to O(1/n), consider the following oracle algorithm. Using the privileged information, one can estimate the uncertainty (variance) of the neural network output and use the inverse of this estimate as the variance of the heteroscedastic dropout. Since the heteroscedastic dropout is multiplicative, this results in unit variance regardless of the input. In a similar fashion, this oracle algorithm can bound the variance with an arbitrary constant. Following this oracle algorithm, we show that when the variance is properly controlled, our method can reach an O(1/n) rate. Consider the gap between the population distribution of the number of images per class and its empirical counterpart. This quantity is purely a property of the way in which the dataset was collected and must be treated independently of the learning; hence, we do not study the rate at which it vanishes. We present the following proposition and defer its proof to the appendix:
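As a toy numerical check of this intuition (purely illustrative; we take the multiplier to be the deterministic scale an oracle would choose), scaling a per-example output of standard deviation s by the inverse estimate 1/s yields unit variance regardless of the input:

```python
import numpy as np

rng = np.random.default_rng(1)

s = np.array([0.5, 1.0, 2.0, 4.0])        # hypothetical per-example output std
n = 100_000
h = rng.normal(0.0, s[:, None], (4, n))   # outputs with varying uncertainty
m = 1.0 / s                               # oracle multiplier: inverse uncertainty
prod = h * m[:, None]
print(prod.std(axis=1))                   # all close to 1.0, independent of s
```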


Proposition 2

Given n i.i.d. samples drawn from the data distribution and a loss function defined through a CNN f, assume that any path between the input and the output has bounded weight, that the total number of paths between input and output is finite, and that the variance of the output is bounded by an arbitrary constant for all training points. Then, with probability at least 1 − δ, the generalization error is bounded by a term decreasing at rate O(1/n), plus a class-imbalance term that depends only on how the dataset was collected.

This proposition means that learning with O(1/n) sample efficiency is indeed possible as long as one can bound the variance of the output with an arbitrary constant. Hence, full control of the output variance makes learning with higher sample efficiency possible. A question remains: is it possible to learn this oracle solution by using SGD with information bottleneck regularization? Unfortunately, we have no theoretical answer to this question and leave it as an open problem. However, we study this problem empirically and show that there is strong empirical evidence suggesting that the answer is affirmative.

Figure 7: For 8000 random samples from the validation set that our heteroscedastic dropout algorithm mis-classifies, as well as 8000 random samples it correctly classifies, we plot the average of activations per dimension (we sort the 4096 dimensions in terms of average energy over the full dataset for clarity).

A realistic estimate of variance is typically not possible without a strong parametric assumption; however, we can use the simple heuristic that samples from the validation set that our algorithm mis-classifies should have higher variance than the samples which are correctly classified. We plot the average energy of the computed dropout variances per fully connected neuron for mis-classified and correctly classified examples in Figure 7. Interestingly, our method consistently assigns larger multiplier (dropout) values to correctly classified samples and significantly smaller values to mis-classified samples. This strongly supports our hypothesis: when the low heteroscedastic dropout multiplier is applied to the high-variance mis-classified examples, their final variance will be low, possibly bounded by a constant.

6 Conclusion

We described a learning under privileged information framework for CNNs and RNNs. We proposed a heteroscedastic dropout formulation by making the variance of the dropout a function of privileged information.

Our experiments on image classification and machine translation suggest that our method significantly increases the sample efficiency of both CNNs and LSTMs. We further provide an upper bound over the generalization error of CNNs suggesting sample-efficient learning (with rate O(1/n)) in the oracle case when privileged information is available. We make our learned models as well as the source code available.

7 Acknowledgements

We thank Alessandro Achille for his help on the comparison with information dropout. We acknowledge the support of Toyota (1191689-1-UDAWF), MURI (1186514-1-TBCJE), and Panasonic (1192707-1-GWMSX).


Appendix A Proof of Proposition 1

The proof of Proposition 1 is available in [43] as Example 4. However, we include a simpler proof here for the sake of completeness.


We will start by decomposing the generalization gap over the elements of a cover of the input space. In step (a), we use the fact that the space has an ε-cover, denoted C₁, …, C_N, such that each Cᵢ has diameter at most ε; we further define auxiliary variables counting the samples falling in each Cᵢ and use the triangle inequality. In step (b), we use nᵢ to represent the number of samples falling in Cᵢ. Finally, in step (c) we use the fact that each ball has diameter at most ε and that the loss function is L-Lipschitz.

We can bound the first term by the maximum loss M, and use the Bretagnolle-Huber-Carol inequality (cf. Proposition A6.6 of [40]) in order to bound the deviation of the empirical cover-cell frequencies from their expectations.

Combining all of the above, we observe that the stated bound holds with probability at least 1 − δ.

Appendix B Proof of Proposition 2

The proof of Proposition 2 closely follows the proofs of Proposition 4 and Lemma 5 in [20]. Our main technical tool is controlling the variance in Bernstein-type bounds to obtain an upper bound with rate O(1/n). Consider the output of a CNN given an image x, denoted f(x), with abuse of notation (we earlier used f to represent the representation layer, but for the sake of consistency with [20] we use it to denote the output here). Every output activation can be written as a sum over the paths between the input layer and the representation, f(x)_j = Σ_p x_{i(p)} γ_p, where x_{i(p)} is the input neuron connected to path p and γ_p is the weight of the path. One interesting property is that this weight is simply the product of all weights along the path, multiplied by a binary value. When only max-pooling and ReLU non-linearities are used, that binary value is 1 if all activations along the path are on and 0 if at least one of them is off. This is due to the fact that max-pooling and ReLU either multiply their input by 0 or by 1. We call this binary variable a_p. Hence, each entry is

f(x)_j = Σ_p a_p x_{i(p)} γ_p.

We can collect the path weights into a vector γ whose dimension equals the number of paths. Next, we can explicitly decompose the generalization bound over the loss into three terms, which we bound separately below.
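The path decomposition above can be verified numerically on a toy two-layer ReLU network (a sketch with made-up weights; a_p is the ReLU gate of the hidden unit on the path):

```python
import numpy as np

# y = w2 . relu(W1 x): verify y equals the sum over paths p of
# x_{i(p)} * gamma_p * a_p, with gamma_p the product of the path's
# weights and a_p the 0/1 ReLU gate of the hidden unit on the path.
x = np.array([1.0, 2.0])
W1 = np.array([[0.5, 1.0],
               [-1.0, 0.3]])
w2 = np.array([2.0, -1.0])

pre = W1 @ x
y = w2 @ np.maximum(pre, 0.0)

a = (pre > 0).astype(float)              # gate per hidden unit
y_paths = sum(x[i] * W1[j, i] * w2[j] * a[j]
              for i in range(2) for j in range(2))
assert np.isclose(y, y_paths)            # both equal 5.0 here
```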


We will separately bound each term in the following sub-sections. We first prove a useful lemma used in these proofs.


Lemma 1 (Matrix Bernstein inequality with variance control; corollary to Theorem 1.4 in [39]). Consider a finite sequence {X_k} of independent, self-adjoint random matrices with dimension d. Assume that each random matrix satisfies E[X_k] = 0 and λ_max(X_k) ≤ R almost surely, and let σ² = ‖Σ_k E[X_k²]‖. Then, for any δ ∈ (0, 1), if σ² ≤ R·t/3 at t = (4R/3)·ln(d/δ); with probability at least 1 − δ,

λ_max(Σ_k X_k) ≤ (4R/3)·ln(d/δ).


Theorem 1.4 of Tropp [39] states that for all t ≥ 0,

P(λ_max(Σ_k X_k) ≥ t) ≤ d·exp(−(t²/2) / (σ² + R·t/3)).

Using the assumption that σ² ≤ R·t/3, the exponent is at most −3t/(4R); substituting t = (4R/3)·ln(d/δ) then implies the result.

Bounding the first term:

This follows directly from the matrix form of the Bernstein inequality, stated as Lemma 1. Using the quantities defined in the main text, Lemma 1 shows that with probability at least 1 − δ,


By using the definition of the matrix norm and the Cauchy-Schwarz inequality, one can show that:


Bounding the second term:

We need to bound this term. In order to do so, we first use the fact that the label vector is one-hot, i.e., the corresponding entry is 1 if the sample belongs to the class and 0 otherwise. Using this fact, we can state:


In (a), we denoted the training examples from class c by their index set. We can use the Bernstein inequality, which states that


where σ² denotes the variance of the summands. Using the assumption that this variance is bounded, we can state that with probability at least 1 − δ,


Here, the remaining term bounds the variance of the class-label distribution, which is defined in the next section.

Bounding the third term:

This term is independent of both the learning algorithm and the learned weights, and it can simply be made to vanish if the number of samples per class directly follows the population densities. Hence, we do not derive a specific rate for this quantity; we simply denote it by a single term and assume that it vanishes at a rate better than or equal to linear. See the main text for a detailed explanation of why we choose not to include it in the analysis.

After bounding each term in (2), we can now state the proof for Proposition 2.


By using the decomposition in (2) and the bounds in (4) and (7), we can state that

Appendix C Derivation of (7, Main Paper)

In equation (7) of the main paper, we stated that


In this section, we formally derive this claim using the log-uniform assumption. In order to compute the KL-divergence, we need to choose a prior distribution for the dropout variable. As discussed in depth in [1], the use of ReLU activations empirically suggests that a good choice for this prior is the log-uniform distribution. Hence, we consider the log-uniform prior. We first use the definition of the KL-divergence:


Since we know the distribution of the dropout variable, and using the assumption that the covariance matrix is diagonal,


If we use the log-uniform prior, the first term of the KL-divergence can be computed as:


where we use the fact that the logarithm of the pdf of a log-uniform distribution is the negative logarithm of its argument, up to additive constants. Furthermore, the norm of the activations does not affect the output, as it is followed by a soft-max operation, which the authors treat as invariant up to a scalar multiplication. Hence, we can safely consider this norm to be a constant. Using both terms,


with appropriate constants. We do not include the additive constant in the optimization, since an additional constant does not change the result of the optimization, and we fold the multiplicative constant into the trade-off parameter.

Appendix D Additional Results

In this section, we provide two experimental results missing from the main paper: first, an analysis of accuracy vs. the amount of privileged information provided for image classification and, second, a comparison of our method with baselines for the task of ImageNet image classification with 200K images. We also provide further qualitative analysis of the relationship between variance control and our method.

Accuracy vs. Partial Privileged Information for Image Classification:

In the main text, we already studied the case where privileged information is only partially available and showed that even a small percentage of it is enough in the multi-modal machine translation experiments. Due to limited space, we provide the same experiment for image classification here in Figure 8 and show that as long as a small percentage of the dataset has privileged information, our algorithm is effective.

Figure 8: Accuracy vs. amount of privileged information available for image classification on ImageNet. We plot top-1, single-crop accuracy.

ImageNet with 200K Images

In the main paper, we performed the ImageNet image classification experiment with only 75K training images (a mid-sized dataset) and showed that our method learns significantly faster (reaching higher accuracy) than all baselines. One might ask: would the result still hold with up to 200K images (a larger dataset)? In order to answer this question, we carried out further experiments, shown in Table 4; the results suggest that our method matches the performance of the best baselines in the 200K case. Hence, we can conclude that our method, i.e. marginalization, does no harm in the large-scale dataset regime.

| Model | Single Crop top-1 | Single Crop top-5 | Multi-Crop top-1 | Multi-Crop top-5 |
|---|---|---|---|---|
| No-privileged-information baseline [34] | 55.99 | 79.21 | 58.60 | 80.98 |
| GoCNN [45] | 50.73 | 75.39 | 53.37 | 77.61 |
| Modal. Hallucination [18] | 52.28 | 76.33 | 55.66 | 78.79 |
| Info. Dropout [1] | 54.60 | 77.89 | 58.47 | 81.25 |
| Gaussian Dropout [35] | 55.48 | 78.87 | 58.11 | 80.58 |
| MIML-FCN [44]/ResNet-50 | 56.00 | 78.83 | 59.14 | 81.05 |
| Multi-Task w/ Mask [28] | 56.22 | 79.56 | 59.39 | |
| MIML-FCN [44]/VGG | 56.23 | 79.51 | 58.85 | 81.23 |
| Multi-Task w/ Bbox | | | | 81.48 |
Table 4: We compare our method’s performance with several baselines. We train with 200 images per each of the 1000 ImageNet classes, leaving us with 200K images in total. Since we utilize only 600K (or fewer) of the 1.28M images from the CLS-LOC ImageNet dataset across all of our experiments, we use a randomly selected subset of the remaining images as a hold-out set for evaluation. Accuracy is given in %, from 0 to 100. Multi-crop accuracy is computed not via individual voting on the correct class by each crop, but rather by taking an arg max over classes after summing the softmax score vectors of the individual crops.
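The multi-crop rule from the caption can be sketched as follows (the logits are made up; note that summing softmax vectors can disagree with per-crop majority voting):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multicrop_predict(logits):
    # sum the per-crop softmax score vectors, then arg max over classes
    return int(softmax(logits).sum(axis=0).argmax())

logits = np.array([[2.0, 1.0, 0.0],     # crop 1 favors class 0
                   [0.0, 3.0, 0.0],     # crop 2 strongly favors class 1
                   [1.5, 1.4, 0.0]])    # crop 3 narrowly favors class 0
print(multicrop_predict(logits))        # prints 1; per-crop voting would say 0
```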

Appendix E Additional Qualitative Analysis of the Method

In Figure (9), we visualize the computed variance of the heteroscedastic dropout.

Figure 9: Visualization of the computed variance of our heteroscedastic dropout for 8000 random samples from the validation set that our algorithm mis-classifies, as well as 8000 random samples it correctly classifies. The plot is a heatmap of activations with dimensions (num_images × num_channels).

Figure 9 supports our hypothesis that our algorithm controls the variance, since mis-classified examples are expected to have high variance/uncertainty and need to be multiplied by a low value to be controlled. The visualization is fairly uniform, especially for mis-classified examples, but we believe the computed variance carries interesting information which could be further utilized in applications like confidence estimation; this is an interesting direction for future work.

Appendix F Additional Implementation Details

In this section, we give all of the implementation details of our algorithm, as well as the implementation details of the baselines used in our experimental study. In order to ensure full reproducibility of all experiments, we share our source code. We found that in all experiments that could converge, across all training set sizes we considered, an adaptive learning rate decay schedule significantly outperforms the traditional 30-epoch fixed learning rate decay schedule. We consistently observe performance gains when the learning rate schedule is set adaptively according to whether or not performance on the hold-out validation set has reached a plateau. Unless otherwise noted, we utilize SGD with momentum set to 0.9 for all models, and the initial learning rate that [34] suggests.
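A minimal sketch of such an adaptive schedule (the patience and decay factor here are illustrative, not the values used in the paper):

```python
class PlateauDecay:
    """Decay the learning rate when hold-out accuracy stops improving.

    A sketch of the adaptive schedule described above; the patience
    and factor values are assumptions, not the paper's settings.
    """
    def __init__(self, lr, factor=0.1, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_epochs = float("-inf"), 0

    def step(self, val_acc):
        if val_acc > self.best:
            self.best, self.bad_epochs = val_acc, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor   # decay on plateau
                self.bad_epochs = 0
        return self.lr

sched = PlateauDecay(lr=0.01)
for acc in [0.50, 0.55, 0.55, 0.55, 0.56]:
    lr = sched.step(acc)
print(lr)   # decayed once, after two epochs without improvement
```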

Heteroscedastic Dropout Implementation:

We use one fixed setting in all experiments, although we found this was not a meaningful hyperparameter. We found the training to be prone to converging to local optima and restarted training if the distribution over class logits was still uniform after 30 epochs. We use weight decay in all experiments, ADAM, and the learning rate described in Section 3.2 of the paper. We cropped images to a standard size before feeding them into the network.

We scale the batch size with the size of the training set. For example, for the smallest model we use a batch size of 64, and for the mid-sized model a batch size of 128. For the largest model, we utilize curriculum learning and a batch size of 256: we first train the fc layers in one tower for 8 epochs with ADAM and a batch size of 128, and then fix those weights and fine-tune the fc layers of the other tower with the ADAM optimizer and a batch size of 256.


No-privileged-information baseline [34]: A baseline model without access to any privileged information. We use a batch size of 256.

Gaussian Dropout [35]:

We draw the multiplicative noise from a Gaussian with the variance that the authors of [35] recommend. We did not include a regularization loss on the covariance matrices of the random noise. We use SGD with momentum set to 0.9 and a batch size of 256.

Multi-Task with Bbox:

We add one extra head to the VGG network that, like the classification head, accepts pool5 activations. This regression head produces the center coordinates as well as the width and height of a bounding box, all normalized to [0, 1]. As our loss function, we use a weighted sum of the cross-entropy loss and the bounding box regression loss. We use a batch size of 200 instead of 256 because of GPU RAM constraints.

Multi-Task with Mask:

In order to predict pixel-wise probabilities for a background and a foreground (object) class, we require an auto-encoder network that preserves spatial information. We experiment with two architectures: DeconvNet [30] and the DCGAN generator [28]. We chose the DeconvNet architecture for its superior performance, which we attribute to its far greater representational power (the DeconvNet architecture utilizes 15 convolutions instead of the much shallower 5-convolution architecture of the DCGAN generator/discriminator, versus 13 conv. layers in VGG) [28][30][34]. As our loss function, we use a weighted sum of the cross-entropy loss over classes and the cross-entropy loss over masks. We use a batch size of 128 instead of 256 because of GPU RAM constraints.

GoCNN [45]

We found that the models could not converge when the suppression loss (computed as the Frobenius norm of the masked activations) is multiplied by only 1/32, as the authors utilize in their work. We found that the model could learn if the suppression loss was multiplied by 1/320 or 1/3200 with ADAM and a batch size of 256. We use a black-and-white (BW) mask as the privileged information.

Information Dropout [1]

As we note in the main paper, we found that a VGG-16 network with two Information Dropout layers, each following one of the first two fully connected layers, could only converge with a sigmoid non-linearity in the fc layers. We keep the ReLU non-linearity in the convolutional layers. We train with a batch size of 128, sample the noise from a log-normal distribution (by exponentiating samples from a normal distribution), and employ an improper log-uniform distribution as our prior, as the authors used for their CIFAR experiments.

Miml-Fcn [44]:

We compare the use of a VGG-16 or ResNet-50 architecture, with a batch size of 256 and a loss weight tuned manually by cross-validation. For the ResNet-50 architecture, we use a different starting learning rate. We share the convolutional layer parameters across both branches, and thus find far superior performance when the privileged information is provided as an RGB mask rather than a black-and-white (BW) mask, because the privileged information is then more closely aligned with the input image.

Modality Hallucination [18]:

Due to the memory requirements of three VGG towers with independent parameters, we chose to share the feature representation in the convolutional layers and to incorporate the hallucination loss between the fc1 activations of the depth and hallucination networks. We use a batch size of 128. For the same reasons as stated in the previous paragraph, RGB masks are a superior representation of the privileged information for this model than BW masks.