Unifying Heterogeneous Classifiers with Distillation

by Jayakorn Vongkulbhisal, et al.

In this paper, we study the problem of unifying knowledge from a set of classifiers with different architectures and target classes into a single classifier, given only a generic set of unlabelled data. We call this problem Unifying Heterogeneous Classifiers (UHC). This problem is motivated by scenarios where data is collected from multiple sources, but the sources cannot share their data, e.g., due to privacy concerns, and only privately trained models can be shared. In addition, each source may not be able to gather data to train all classes due to data availability at each source, and may not be able to train the same classification model due to different computational resources. To tackle this problem, we propose a generalisation of knowledge distillation to merge heterogeneous classifiers (HCs). We derive a probabilistic relation between the outputs of HCs and the probability over all classes. Based on this relation, we propose two classes of methods based on cross-entropy minimisation and matrix factorisation, which allow us to estimate soft labels over all classes from unlabelled samples and use them in lieu of ground truth labels to train a unified classifier. Our extensive experiments on ImageNet, LSUN, and Places365 datasets show that our approaches significantly outperform a naive extension of distillation and can achieve almost the same accuracy as classifiers that are trained in a centralised, supervised manner.






1 Introduction

Figure 1: Unifying Heterogeneous Classifiers. (a) Common training approaches require transferring data from sources to a central processing node where a classifier is trained. (b) We propose to train a unified classifier from pre-trained classifiers from each source and an unlabelled set of generic data, thereby preserving privacy. The individual pre-trained classifiers may have different sets of target classes, hence the term Heterogeneous Classifiers (HCs).

The success of machine learning in image classification tasks has been largely enabled by the availability of big datasets, such as ImageNet [32] and MS-COCO [25]. As the technology becomes more pervasive, data collection is transitioning towards more distributed settings where the data is sourced from multiple entities and then combined to train a classifier in a central node (Fig. 1a). However, in many cases, transfer of data between entities is not possible due to privacy concerns (e.g., private photo albums or medical data) or bandwidth restrictions (e.g., very large datasets), hampering the unification of knowledge from different sources. This has led to multiple works that propose to learn classifiers without directly sharing data, e.g., distributed optimisation [4], consensus-based training [12], and federated learning [20]. However, these approaches generally require models trained by each entity to be the same both in terms of architecture and target classes.

In this paper, we aim to remove these limitations and propose a system for a more general scenario consisting of an ensemble of Heterogeneous Classifiers (HCs), as shown in Fig. 1b. We define a set of HCs as a set of classifiers which may have different architectures and, more importantly, may be trained to classify different sets of target classes. To combine the HCs, each entity only needs to forward their trained classifiers and class names to the central processing node, where all the HCs are unified into a single model that can classify all target classes of all input HCs. We refer to this problem as Unifying Heterogeneous Classifiers (UHC). UHC has practical applications for the cases when it is not possible to enforce every entity to (i) use the same model/architecture; (ii) collect sufficient training data for all classes; or (iii) send the data to the central node, due to computational, data availability, and confidentiality constraints.

To tackle UHC, we propose a generalisation of knowledge distillation [8, 17]. Knowledge distillation was originally proposed to compress multiple complex teacher models into a single simpler student one. However, distillation still assumes that the target classes of all teacher and student models are the same, whereas in this work we relax this limitation. To generalise distillation to UHC, we derive a probabilistic relationship connecting the outputs of HCs with that of the unified classifier. Based on this relationship, we propose two classes of methods, one based on cross-entropy minimisation and the other on matrix factorisation with missing entries, to estimate the probability over all classes of a given sample. After obtaining the probability, we can then use it to train the unified classifier. Our approach only requires unlabelled data to unify HCs, thus no labour is necessary to label any data at the central node. In addition, our approach can be applied to any classifiers which can be trained with soft labels, e.g., neural networks, boosting classifiers, and random forests.


We evaluated our proposed approach extensively on ImageNet, LSUN, and Places365 datasets in a variety of settings and against a natural extension of the standard distillation. Through our experiments we show that our approach outperforms standard distillation and can achieve almost the same accuracy as the classifiers that were trained in a centralised, supervised manner.

2 Related Work

There exists a long history of research that aims to harness the power of multiple classifiers to boost classification results. The most well-known approaches are arguably ensemble methods [19, 23, 30], which combine the output of multiple classifiers to make a classification. Many techniques, such as voting and averaging [23], can merge predictions from trained classifiers, while some train the classifiers jointly as part of the technique, e.g., boosting [13] and random forests [6]. These techniques have been successfully used in many applications, e.g., multi-class classification [15], object detection [34, 27], tracking [1], etc. However, ensemble methods require storing and running all models for prediction, which may lead to scalability issues when complex models, e.g., deep networks, are used. In addition, ensemble methods assume all base classifiers are trained to classify all classes, which is not suitable for the scenarios addressed by UHC.

To the best of our knowledge, the closest class of methods to UHC is knowledge distillation [8, 17]. Distillation approaches operate by passing unlabelled data to a set of pretrained teacher models to obtain soft predictions, which are used to train a student model. Albeit originally conceived for compressing complex models into simpler ones by matching predictions, distillation has been further extended to, for instance, matching intermediate features [31], knowledge transfer between domains [14], combining knowledge using a generative adversarial loss [35], etc. More related to UHC, Lopes et al. [26] propose to distill teacher models trained by different entities using their metadata rather than raw inputs. This allows the student model to be trained without any raw data transfer, thus preserving privacy while also not requiring any data collection from the central processing node. Still, no formulation of distillation can cope with the case where each teacher model has different target classes, which we tackle in this paper. We describe how distillation can be generalised to UHC in the next section.

3 Unifying Heterogeneous Classifiers (UHC)

Figure 2: UHC problem and approach overview. An input image x is drawn from an unlabelled set X and input to a set of pre-trained classifiers {f_i}, where each f_i returns a soft label over the classes in its own set C_i. Here, the classes may be different for each f_i. The goal of UHC is to train a classifier f_u that can classify all target classes in C = C_1 ∪ … ∪ C_N using the predictions of the f_i's on X instead of labelled data. Our approach to UHC involves using these predictions to estimate p̂, the soft label of x over all classes in C, then using x and p̂ to train f_u.

We define the Unifying Heterogeneous Classifiers (UHC) problem in this paper as follows (see Fig. 2). Let X be an unlabelled set of images (“transfer set”) and let {f_i}_{i=1}^N be a set of Heterogeneous Classifiers (HCs), where each f_i is trained to predict p_i(y|x), the probability of an image x belonging to class y ∈ C_i. Given X and the f_i's, the goal of this work is to learn a unified classifier f_u that estimates p(y|x), the probability of an input image x belonging to class y ∈ C, where C = C_1 ∪ … ∪ C_N. Note that the f_i's might be trained to classify different sets of classes, i.e., we may have C_i ≠ C_j or even C_i ∩ C_j = ∅ for i ≠ j.

Our approach to tackle UHC involves three steps: (i) passing each image x ∈ X to the f_i's to obtain the p_i(y|x)'s, (ii) estimating p(y|x) over all classes in C from the p_i(y|x)'s, then (iii) using the estimated p(y|x) to train f_u in a supervised manner. We note that it is possible to combine (ii) and (iii) into a single step for neural networks (see Sec. 3.5.1), but this 3-step approach allows it to be applied to other classifiers, e.g., boosting and random forests. To accomplish (ii), we derive a probabilistic relationship between each p_i(y|x) and p(y|x), which we leverage to estimate p(y|x) via the following two proposed methods: cross-entropy minimisation and matrix factorisation. In the rest of this section, we first review standard distillation, showing why it cannot be applied to UHC. We then describe our approaches to estimate p(y|x) from the p_i(y|x)'s. We provide a discussion on the computation cost in the supplementary material.

3.1 Review of Distillation

Overview Distillation [8, 17] is a class of algorithms used for compressing multiple trained models into a single unified model using a set of unlabelled data X. (Labelled data can also be used in a supervised manner.) Referring to Fig. 2, standard distillation corresponds to the case where C_i = C for all i. The unified f_u is trained by minimising the cross-entropy between the outputs of the f_i's and f_u as

f_u = argmin_f −Σ_{x∈X} Σ_{i=1}^N Σ_{y∈C} p_i(y|x) log p(y|x),   (1)

where p(y|x) denotes the output of f. Essentially, the outputs of the f_i's are used as soft labels for the unlabelled x ∈ X in training f_u. For neural networks, class probabilities are usually computed with the softmax function:

p(y|x) = exp(z_y/T) / Σ_{y'∈C} exp(z_{y'}/T),   (2)

where z_y is the logit for class y, and T denotes an adjustable temperature parameter. In [17], it was shown that minimising (1) when T is high is similar to minimising the ℓ2 error between the logits of the f_i's and f_u, thereby relating cross-entropy minimisation to logit matching.
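As a concrete illustration of the temperature-scaled softmax in (2), here is a minimal NumPy sketch; the function name and logit values are illustrative assumptions, not part of the paper.

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax: p(y|x) = exp(z_y/T) / sum_y' exp(z_y'/T)."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

z = [4.0, 2.0, 1.0]            # illustrative logits
p_sharp = softmax_T(z, T=1.0)   # peaked distribution
p_soft = softmax_T(z, T=10.0)   # a higher T yields softer labels for distillation
```

Raising T flattens the distribution, which is what lets distillation transfer the relative similarities between classes rather than only the top prediction.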

Issues The main issue with standard distillation stems from its inability to cope with the more general case of C_i ≠ C. Mathematically, Eq. (1) assumes f_u and the f_i's share the same set of classes. This is not true in our case, since each f_i is trained to predict only the classes in C_i; thus p_i(y|x) is undefined for y ∈ C̄_i, where C̄_i denotes the set of classes in C but outside C_i. A naive solution to this issue would be to simply set p_i(y|x) = 0 for y ∈ C̄_i. However, this could incur serious errors, e.g., one may set the probability of a cat image to zero when f_i does not classify cats, which would be improper supervision. We show in the experiments that this approach does not provide good results.

It is also worth mentioning that the f_i's in UHC are different from the Specialised Classifiers (SCs) in [17]. While SCs are trained to specialise in classifying a subset of classes, they are also trained with data from other classes, which are grouped together into a single dustbin class. This allows SCs to distinguish dustbin from their specialised classes, enabling the student model to be trained with (1). Using the previous example, the cat image would be labelled as the dustbin class, which is appropriate supervision for SCs that do not classify cats. However, the presence of a dustbin class imposes a design constraint on the f_i's, as well as requiring the data source entities to collect large amounts of generic data to train it. Conversely, we remove these constraints in our formulation, and the f_i's are trained without a dustbin class. Thus, given data from classes outside C_i, f_i will only provide probabilities over the classes in C_i, making it difficult to unify the HCs with (1).

3.2 Relating outputs of HCs and unified classifier

To overcome the limitation of standard distillation, we need to relate the output of each f_i to the probability over all classes in C. Since f_i is defined only over the subset C_i, we can consider p_i(y|x) as the probability of class y given that the class of x cannot be in C̄_i. This leads to the following derivation:

p_i(y|x) = p(y | x, y ∉ C̄_i) = p(y|x) / Σ_{y'∈C_i} p(y'|x),  for y ∈ C_i.   (6)

We can see that p_i(y|x) is equivalent to p(y|x) normalised by the classes in C_i. In the following sections, we describe two classes of methods that utilise this relationship for estimating p(y|x) from the p_i(y|x)'s.
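A tiny numeric check of relation (6), with illustrative class names and probabilities: an HC trained on a subset of classes sees the unified distribution renormalised over that subset.

```python
import numpy as np

# Unified distribution over C = {cat, dog, car, bus} (illustrative numbers)
p = np.array([0.4, 0.2, 0.3, 0.1])

# An HC trained only on C_i = {cat, dog} outputs, per relation (6),
# p_i(y|x) = p(y|x) / sum_{y' in C_i} p(y'|x)
C_i = [0, 1]
p_i = p[C_i] / p[C_i].sum()  # -> [2/3, 1/3]
```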

3.3 Method 1: Cross-entropy approach

Recall that the goal of (1) is to match the output of f_u to the p_i's by minimising the cross-entropy between them. Based on the relation in (6), we generalise (1) to tackle UHC by matching each p_i to the renormalised unified probability, resulting in:

p̂ = argmin_p −Σ_{i=1}^N Σ_{y∈C_i} p_i(y|x) log ( p(y|x) / Σ_{y'∈C_i} p(y'|x) ).   (7)

We can see that the difference between (1) and (7) lies in the normalisation of p(y|x). Specifically, the cross-entropy of each f_i (i.e., the second summation) is computed between p_i and the renormalised p over the classes in C_i only. With this approach, we do not need to arbitrarily define values of p_i(y|x) for y ∈ C̄_i, thus avoiding spurious supervision. We now outline optimality properties of (7).

Proposition 1 (Sufficient condition for optimality) Suppose there exists a probability p* over C such that p_i(y|x) = p*(y|x) / Σ_{y'∈C_i} p*(y'|x) for all i and all y ∈ C_i; then p* is a global minimum of (7).

Sketch of proof Consider the i-th term of (7), L_i(p) = −Σ_{y∈C_i} p_i(y|x) log ( p(y|x) / Σ_{y'∈C_i} p(y'|x) ). (Note that L_i is a function of p, whereas p_i is given.) As a cross-entropy, L_i achieves its minimum when the renormalised probability equals p_i, with a value equal to the entropy of p_i. Thus, the value of (7) is bounded below by the sum of these entropies. However, we can see that by setting p = p*, we achieve equality in the bound, and so p* is a global minimum of (7).

The above result establishes the form of a global minimum of (7), showing that minimising (7) may recover the true underlying probability if it exists. However, there are cases where the global solution may not be unique. A simple example is when there are no shared classes between the HCs, e.g., C_1 ∩ C_2 = ∅ for N = 2. It may be possible to show uniqueness of the global solution in some cases, depending on the structure of the shared classes between the C_i's, but we leave this as future work.
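The non-uniqueness for disjoint HCs can be seen numerically. In the sketch below (illustrative numbers), two different unified distributions renormalise to identical HC outputs, so an objective that matches only the per-HC views cannot distinguish them.

```python
import numpy as np

# Two HCs with disjoint class sets: C_1 = {0, 1}, C_2 = {2, 3}.
# Both full distributions below induce the SAME HC outputs under (6).
p_a = np.array([0.3, 0.3, 0.2, 0.2])
p_b = np.array([0.1, 0.1, 0.4, 0.4])

view = lambda p, idx: p[idx] / p[idx].sum()
# view(p_a, [0, 1]) == view(p_b, [0, 1]) == [0.5, 0.5], and likewise for [2, 3]
```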

Optimisation Minimisation of (7) can be transformed into a geometric program (see supplementary material), which can then be converted to a convex problem and efficiently solved [3]. In short, we define u_y = log p(y|x) for y ∈ C and replace p(y|x) with exp(u_y). Thus, (7) transforms to

û = argmin_u −Σ_{i=1}^N Σ_{y∈C_i} p_i(y|x) ( u_y − log Σ_{y'∈C_i} exp(u_{y'}) ),

which is convex in u, since it is a sum of scaled linear terms and log-sum-exps of u [5]. We minimise it using gradient descent. Once the optimal û is obtained, we transform it to p̂ with the softmax function (2).
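To make the optimisation concrete, the following NumPy sketch minimises the reparameterised objective by gradient descent; the function name, step size, iteration count, and toy problem are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_unified_soft_label(hc_outputs, n_classes, lr=0.5, n_iters=2000):
    """Estimate the unified soft label by minimising the cross-entropy
    objective over logits u, then applying a softmax.

    hc_outputs: list of (classes, probs) pairs, where `classes` indexes the
    subset C_i and `probs` is f_i's distribution over that subset.
    """
    u = np.zeros(n_classes)
    for _ in range(n_iters):
        grad = np.zeros(n_classes)
        for classes, probs in hc_outputs:
            # softmax of u restricted to this HC's classes
            e = np.exp(u[classes] - u[classes].max())
            q = e / e.sum()
            # gradient of -sum_y p_i(y) * (u_y - logsumexp_{C_i}(u))
            grad[classes] += q - probs
        u -= lr * grad
    e = np.exp(u - u.max())
    return e / e.sum()  # soft label over all classes

# Toy check: 4 classes, two HCs over {0,1,2} and {1,2,3}; their outputs are
# consistent with a "true" distribution p, which the method should recover.
p_true = np.array([0.1, 0.4, 0.3, 0.2])
hcs = [
    (np.array([0, 1, 2]), p_true[[0, 1, 2]] / p_true[[0, 1, 2]].sum()),
    (np.array([1, 2, 3]), p_true[[1, 2, 3]] / p_true[[1, 2, 3]].sum()),
]
p_hat = estimate_unified_soft_label(hcs, n_classes=4)
```

Because the two HCs share classes {1, 2}, the solution is unique up to the softmax's shift invariance, and the true distribution is recovered.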

3.4 Method 2: Matrix factorisation approaches

Our second class of approaches is based on low-rank matrix factorisation with missing entries. Indeed, it is possible to cast UHC as the problem of filling an incomplete matrix of soft labels. During the last decade, low-rank matrix completion and factorisation [10, 11] have been successfully used in various applications, e.g., structure from motion [18] and recommender systems [21]. They have also been used for multilabel classification in a transductive setting [9]. Here, we describe how we can use matrix factorisation to recover soft labels from the outputs of the HCs.

3.4.1 Matrix factorisation in probability space

Consider a matrix P where we set P_{y,i} (the element in row y and column i) to p_i(y|x) if y ∈ C_i and zero otherwise. This matrix is similar to the decision profile matrix in ensemble methods [23], but here we fill in zeros for the classes that the f_i's cannot predict. To account for these missing predictions, we define W as a mask matrix of the same size, where W_{y,i} is 1 if y ∈ C_i and zero otherwise. Using the relation between p_i and p in (6), we can see that P can be factorised into a masked product of vectors as:

W ∘ P = W ∘ (p c^T),

where ∘ is the Hadamard product. Here, p is the vector containing p(y|x) over all classes in C, and each element c_i of c contains the normalisation factor 1/Σ_{y∈C_i} p(y|x) for each f_i. In this form, we can estimate the probability vector p by solving the following rank-1 matrix completion problem:

min_{p, c}  ||W ∘ (P − p c^T)||_F^2   (12)
subject to  1^T p = 1,  p ≥ 0,  c ≥ 0,   (13)

where ||·||_F denotes the Frobenius norm, and 0 and 1 denote vectors of zeros and ones of appropriate size. Here, the constraints ensure that p is a probability vector and that c remains nonnegative so that the sign of the probabilities in p is not flipped. This formulation can be regarded as a non-negative matrix factorisation problem [24], which we solve using Alternating Least Squares (ALS) [2], where we normalise p to sum to 1 in each iteration. (We note there are more effective algorithms for matrix factorisation than ALS [7, 29, 11]; we use ALS due to its ease of implementation.) Due to gauge freedom [7], this normalisation of p does not affect the cost function.
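A minimal NumPy sketch of this masked rank-1 ALS; the initialisation, iteration count, and toy problem are illustrative assumptions.

```python
import numpy as np

def estimate_soft_label_mf(P, W, n_iters=200):
    """Rank-1 masked factorisation W∘P ≈ W∘(p cᵀ) via simple ALS,
    normalising p to a probability vector in each iteration."""
    n_classes, n_hcs = P.shape
    p = np.full(n_classes, 1.0 / n_classes)
    c = np.ones(n_hcs)
    for _ in range(n_iters):
        # least-squares update of each c_i with p fixed (masked column)
        for i in range(n_hcs):
            m = W[:, i] > 0
            c[i] = max((P[m, i] @ p[m]) / (p[m] @ p[m]), 0.0)
        # least-squares update of each p_y with c fixed (masked row)
        for y in range(n_classes):
            m = W[y, :] > 0
            p[y] = max((P[y, m] @ c[m]) / (c[m] @ c[m]), 0.0)
        p /= p.sum()  # keep p a probability vector (gauge freedom)
    return p

# Toy check: two HCs over classes {0,1,2} and {1,2,3} of a 4-class problem
p_true = np.array([0.1, 0.4, 0.3, 0.2])
W = np.array([[1, 0], [1, 1], [1, 1], [0, 1]], dtype=float)
P = np.zeros_like(W)
P[:3, 0] = p_true[:3] / p_true[:3].sum()
P[1:, 1] = p_true[1:] / p_true[1:].sum()
p_hat = estimate_soft_label_mf(P, W)
```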

3.4.2 Matrix factorisation in logit space

In Sec. 3.1, we discussed the relationship between minimising cross-entropy and logit matching under the ℓ2 distance. In this section, we consider applying matrix factorisation in logit space and show that our formulation is a generalisation of logit matching between f_u and the f_i's.

Let z_{y,i} be the given logit output of class y of f_i (for algorithms besides neural networks, we can obtain logits from the probabilities via the logarithm), and let z_y be that of f_u, to be estimated. Consider a matrix Z where the (y, i) entry is z_{y,i} if y ∈ C_i and zero otherwise. We can formulate the problem of estimating the vector of logits z as:

min_{z, c, b}  ||W ∘ (Z − z c^T − 1 b^T)||_F^2 + λ(||z||^2 + ||c||^2)   (15)
subject to  c ≥ 0,   (16)

where the vector b deals with shifts in the logits (recall that a shift in logit values has no effect on the probability output, but we need to account for the different shifts from the f_i's to cast the problem as matrix factorisation), and λ is a hyperparameter controlling the regularisation [7]. Here, optimising c is akin to optimising the temperature of the logits [17] from each source classifier, and we constrain it to be nonnegative to prevent flipping the sign of the logits, which could affect the probability.

Relation to logit matching The optimisation in (15) has three variables. Since b is unconstrained, we derive its closed-form solution and remove it from the formulation. This transforms (15) into:

min_{z, c}  Σ_i ||Π_i (ẑ_i − c_i z_i)||^2 + λ(||z||^2 + ||c||^2)   (17)
subject to  c ≥ 0,   (18)

where ẑ_i is the i-th column of Z restricted to C_i; z_i selects the elements of z which are indexed in C_i; and Π_i is the orthogonal projector that removes the mean from a vector. This transformation simplifies (15) to contain only z and c. We can see that this formulation minimises the ℓ2 distance between logits, but instead of considering all classes in C, each term in the summation considers only the classes in C_i. In addition, (17) also includes regularisation and optimises for the scaling in c. Thus, we can say that (15) is a generalisation of logit matching for UHC.

Optimisation While (17) has fewer parameters than (15), it is more complicated to optimise, as the elements of z are entangled by the projector. Instead, we solve (15) using ALS over z, c, and b. Here, there is no sum-to-one constraint on z, so we do not normalise it as in Sec. 3.4.1.

Alternative approach: setting c as a constant While setting c as a variable allows (15) to handle different scalings of the logits, it also introduces cumbersome issues. Specifically, the gauge freedom in (15) may lead to arbitrary scalings of z and c, i.e., (az)(c/a)^T = z c^T for any a > 0. Also, while the regularisers help prevent the norms of z and c from becoming too large, it is difficult to set a single λ that works well for all data in X. To combat these issues, we propose another formulation of (15) where we fix c = 1. With c fixed, we do not need to regularise z, since its scale is determined by Z. In addition, the new formulation is convex and can be solved to global optimality. We solve this alternative formulation with gradient descent.
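A NumPy sketch of the fixed-scaling variant: gradient descent on the unified logits, matching each HC's logits over its own classes after removing the per-HC shift. The function name, toy logits, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def unify_logits_fixed_scale(hc_logits, n_classes, lr=0.1, n_iters=2000):
    """Gradient descent on the unified logits z, matching each HC's logits
    over its own classes after removing the per-HC shift (fixed scaling)."""
    z = np.zeros(n_classes)
    for _ in range(n_iters):
        grad = np.zeros(n_classes)
        for classes, logits in hc_logits:
            r = z[classes] - logits
            r -= r.mean()          # remove the per-HC shift (projector)
            grad[classes] += 2.0 * r
        z -= lr * grad
    return z

# Two HCs over {0,1,2} and {1,2,3}; their logits equal z_true up to shifts
z_true = np.array([0.0, 1.5, 1.0, 0.5])
hcs = [(np.array([0, 1, 2]), z_true[[0, 1, 2]] + 3.0),
       (np.array([1, 2, 3]), z_true[[1, 2, 3]] - 1.0)]
z_hat = unify_logits_fixed_scale(hcs, n_classes=4)

# The unified soft label is the softmax of the recovered logits
p_hat = np.exp(z_hat - z_hat.max())
p_hat /= p_hat.sum()
```

Since the softmax is invariant to a global shift of the logits, recovering z up to a constant suffices to recover the unified soft label.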

3.5 Extensions

In Secs. 3.3 and 3.4, we have described methods for estimating the soft labels from the HC outputs and then using them to train f_u. In this section, we discuss two possible extensions applicable to all the methods: (i) direct backpropagation for neural networks and (ii) fixing imbalance in soft labels.

3.5.1 Direct backpropagation for neural networks

Suppose the unified classifier f_u is a neural network. While it is possible to use the estimated soft labels to train f_u in a supervised manner, we could also consider an alternative where we directly backpropagate the loss without having to estimate them first. In the case of cross-entropy (Sec. 3.3), we can think of p(y|x) in (7) as the probability output of f_u, through which we can directly backpropagate the loss. In the case of matrix factorisation (Sec. 3.4), we consider the vector of probability (Sec. 3.4.1) or logit (Sec. 3.4.2) outputs of f_u in the same role. Once this vector is obtained from f_u, we plug it into each formulation, solve for the other variables (e.g., c and b) with it fixed, then backpropagate the loss via f_u. Directly backpropagating the loss merges the steps of estimating the soft labels and using them to train f_u into a single step.

3.5.2 Balancing soft labels

All the methods we have discussed are based on individual samples: we estimate the soft label p̂ from the HC outputs of a single x from the transfer set and use it to train f_u. However, we observe that the set of estimated p̂'s over the whole transfer set could be imbalanced. That is, the estimated p̂'s may be biased towards certain classes more than others. To counter this effect, we apply the common technique of weighting the cross-entropy loss while training f_u [28]. The weight of each class is computed as the inverse of the mean of its estimated probability over all data in the transfer set.
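A small sketch of the balancing step (with illustrative soft labels): the per-class weight is the inverse of the mean estimated probability over the transfer set, and it scales the cross-entropy terms during training.

```python
import numpy as np

# Estimated soft labels for three transfer samples over three classes
# (illustrative numbers; rows sum to 1)
soft_labels = np.array([[0.7, 0.2, 0.1],
                        [0.6, 0.3, 0.1],
                        [0.5, 0.3, 0.2]])

# Inverse of the per-class mean soft label over the transfer set
class_weights = 1.0 / soft_labels.mean(axis=0)

# Weighted cross-entropy for one sample against a predicted distribution q
q = np.array([0.5, 0.3, 0.2])
loss = -(class_weights * soft_labels[0] * np.log(q)).sum()
```

Classes that receive little probability mass on average (here, the third class) get proportionally larger weights, counteracting the bias.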

4 Experiments

In this section, we perform experiments to compare different methods for solving UHC. The main experiments on ImageNet, LSUN, and Places365 datasets are described in Sec. 4.1, while sensitivity analysis is described in Sec. 4.2.

We use the following abbreviations to denote the methods: SD for the naive extension of Standard Distillation (Sec. 3.1) [17]; CE-X for Cross-Entropy methods (Sec. 3.3); MF-P-X for Matrix Factorisation in Probability space (Sec. 3.4.1); and MF-LV-X and MF-LF-X for Matrix Factorisation in Logit space with Variable and Fixed scaling (Sec. 3.4.2), respectively. The suffix ‘X’ is replaced with ‘E’ if we estimate the soft labels first before using them to train the unified classifier; with ‘BP’ if we perform direct backpropagation from the loss function (Sec. 3.5.1); and with ‘BS’ if we estimate and balance the soft labels before training (Sec. 3.5.2). In addition to these methods, we also include SD-BS, the SD method with balanced soft labels, and SPV, a benchmark trained directly in a supervised fashion with all training data of all sources. For MF-LV-X methods, we used a fixed regularisation weight λ. All methods use a temperature parameter to smooth the soft labels and logits (see (2) and [17]).

4.1 Experiment on large image datasets

In this section, we describe our experiment on the ImageNet, LSUN, and Places365 datasets. First, we describe the experiment protocols, providing details on the datasets, the architectures used as the HCs and the unified classifier, and the configurations of the HCs. Then, we discuss the results.

Dataset     #Classes in C   #HCs    #Classes per HC (Random)
ImageNet    20-50           10-20   5-15
LSUN        5-10            3-7     2-5
Places365   20-50           10-20   5-15
Table 1: HC configurations for the main experiment. Under the completely overlapping configuration, each HC is trained on all classes in C.

4.1.1 Experiment protocols

Datasets We use three datasets for this experiment. (i) ImageNet (ILSVRC2012) [32], consisting of 1k classes with ~700 to 1300 training and 50 validation images per class, as well as 100k unlabelled test images. In our experiments, the training images are used as training data for the HCs, the unlabelled test images as the transfer set, and the validation images as our test set to evaluate accuracy. (ii) LSUN [36], consisting of 10 classes with ~100k to 3M training and 300 validation images per class, with 10k unlabelled test images. Here, we randomly sample a set of 1k training images per class to train the HCs; a second randomly sampled set of 20k images per class, also from the training data, is used as the transfer set; and the validation data is used as our test set. (iii) Places365 [37], consisting of 365 classes with ~3k to 5k training and 100 validation images per class, as well as ~329k unlabelled test images. We follow the same usage as in ImageNet, but with 100k samples from the unlabelled test images as the transfer set. We pre-process all images by centre cropping and scaling to the networks' input size.

HC configurations We test the proposed methods under two configurations of HCs (see summary in Table 1). (i) Random classes. For ImageNet and Places365, in each trial, we sample 20 to 50 classes as C and train 10 to 20 HCs, where each is trained to classify 5 to 15 classes. For LSUN, in each trial, we sample 5 to 10 classes as C and train 3 to 7 HCs, where each is trained to classify 2 to 5 classes. We use this configuration as the main test for when the HCs classify different sets of classes. (ii) Completely overlapping classes. Here, we use the same configurations as in (i), except all HCs are trained to classify all classes in C. This case is used to test our proposed methods under the common configuration where all HCs and the unified classifier share the same classes. Under both configurations, the transfer set consists of a much wider set of classes than C. In other words, a large portion of the images in the transfer set does not fall under any of the classes in C.

Models Each HC is randomly selected from one of the following four architectures with ImageNet pre-trained weights: AlexNet [22], VGG16 [33], ResNet18, and ResNet34 [16]. For AlexNet and VGG16, we fix the weights of their feature extractor portion, replace their fc layers with two fc layers with 256 hidden nodes (with BatchNorm and ReLU), and train the fc layers with their training data. Similarly, in the ResNet models, we replace their fc layers with two fc layers with 256 hidden nodes as above; in addition, we also fine-tune the last residual block. As for the unified classifier, we use two models, VGG16 and ResNet34, with similar settings as above.

For all datasets and configurations, we train each HC with 50 to 200 samples per class; no sample is shared between any HCs in the same trial. These HCs, together with the transfer set, are then used to train the unified classifier. We train all models for 20 epochs with the SGD optimiser (step sizes of 0.1 and 0.01 for the first and latter 10 epochs, with momentum 0.9; for MF-P-BP, we use larger step sizes as its loss has a smaller scale). To control the variation in results, in each trial we initialise instances of the unified classifiers from the same architecture using the same weights and train them using the same batch order. In each trial, we evaluate the unified classifiers of all methods on the test data from all classes in C. We run 50 trials for each dataset, model, and HC configuration combination. The results are reported in the next section.

Random Classes Completely Overlapping Classes
Methods ImageNet LSUN Places365 ImageNet LSUN Places365
VGG16 ResNet34 VGG16 ResNet34 VGG16 ResNet34 VGG16 ResNet34 VGG16 ResNet34 VGG16 ResNet34
SPV (Benchmark) .7212 .6953 .6664 .6760 .5525 .5870 .7345 .7490 .6769 .7017 .5960 .6460
SD .5543 .5562 .5310 .5350 .4390 .4564 .7275 .7292 .7004 .7041 .6163 .6402
(A) Estimate methods
CE-E .6911 .6852 .6483 .6445 .5484 .5643 .7276 .7290 .7002 .7036 .6162 .6406
MF-P-E .6819 .6747 .6443 .6406 .5349 .5488 .7280 .7297 .7012 .7052 .6167 .6406
MF-LV-E .6660 .6609 .6348 .6330 .5199 .5414 .7231 .7242 .7031 .7043 .6129 .6374
MF-LF-E .6886 .6833 .6490 .6458 .5441 .5609 .7265 .7279 .7015 .7057 .6161 .6397
(B) Backprop. methods
CE-BP .6902 .6869 .6520 .6439 .5466 .5669 .7275 .7288 .7003 .7040 .6161 .6400
MF-P-BP .6945 .6872 .6480 .6417 .5471 .5609 .7277 .7287 .6999 .7019 .6146 .6384
MF-LV-BP .6889 .6847 .6495 .6389 .5467 .5681 .7229 .7225 .7001 .7046 .6113 .6369
MF-LF-BP .6842 .6840 .6523 .6445 .5383 .5624 .7239 .7252 .7020 .7034 .6104 .6366
(C) Balanced soft labels
SD-BS .6629 .6574 .6343 .6345 .5283 .5433 .7217 .7214 .6979 .7017 .6094 .6320
CE-BS .6928 .6856 .6513 .6464 .5548 .5687 .7215 .7213 .6979 .7018 .6094 .6323
MF-P-BS .6851 .6756 .6474 .6450 .5455 .5546 .7243 .7252 .6996 .7041 .6124 .6355
MF-LV-BS .6772 .6682 .6388 .6357 .5346 .5497 .7168 .7173 .7014 .7028 .6063 .6301
MF-LF-BS .6935 .6865 .6549 .6485 .5544 .5692 .7210 .7215 .6998 .7035 .6101 .6330
Table 2: Average accuracy of UHC methods over different combinations of HC configurations, datasets, and unified classifier models. (Underline bold: Best method. Bold: Methods which are not statistically significantly different from the best method.)

4.1.2 Results

Table 2 shows the results for this experiment. Each column shows the average accuracy of each method under each experiment setting, where the best performing method is shown in underlined bold. To test statistical significance, we choose the Wilcoxon signed-rank test over the standard deviation to cater for the vastly different settings (e.g., model architectures, number of classes and HCs, etc.) across trials. We run the test between the best performing method in each experiment and the rest. Methods whose performance is not statistically significantly different from the best method at the chosen significance level are shown in bold.

First, let us observe the result for the random classes case which addresses the main scenario of this paper, i.e., when each HC is trained to classify different sets of classes. We can make the following observations.

All proposed methods perform significantly better than SD. We can see that all methods in (A), (B), and (C) of Table 2 outperform SD by a large margin of 9-15%. This shows that simply setting the probability of undefined classes in each HC to zero may significantly deteriorate accuracy. On the other hand, our proposed methods achieve significantly better results and almost reach the accuracy of SPV, with a gap of 1-4%. This suggests the soft labels from HCs can be used for unsupervised training at little expense in accuracy, even though the transfer set contains a significant proportion of images that are not part of the target classes. Still, there are several factors that may prevent the unified classifier from reaching the accuracy of SPV, e.g., the accuracy of the HCs, their architectures, etc. We look at some of these in the sensitivity analysis section.

MF-LF-BS performs well in all cases. We can see that different algorithms perform best under different settings, but MF-LF-BS always performs best or shows no statistically significant difference from the best method. This suggests MF-LF-BS could be the best method for solving UHC. At the same time, CE methods offer a good trade-off between high accuracy and ease of implementation, which makes them a good alternative for the UHC problem.

Besides these main points, we also note the following small but consistent trends.

Balancing soft labels helps improve accuracy. While the improvement may be marginal (less than 1.5%), we can see that ‘BS’ methods in (C) consistently outperform their ‘E’ counterparts in (A). Surprisingly, SD-BS, which is SD with balanced soft labels, also significantly improves over SD, by more than 10%. These results indicate that it is good practice to use balanced soft labels to solve UHC. Note that while SD-BS receives a significant boost, it still generally underperforms the CE and MF methods, suggesting that it is important to incorporate the relation (6) between the HC outputs and the unified probability into training.

Nonconvex losses perform better with ‘BP’. Methods with suffixes ‘E’ and ‘BS’ in (A) and (C) are based on estimating the soft labels before training the unified classifier, while ‘BP’ methods in (B) directly perform backpropagation from the loss function. As seen in Sec. 3, the losses of CE and MF-LF are convex in their variables, while those of MF-P and MF-LV are nonconvex. Here, we observe a small but interesting effect: methods with nonconvex losses perform better with ‘BP’. We speculate that this is due to errors in the estimation of the soft labels trickling down to the training of the unified classifier when the two steps are separated. Conversely, in ‘BP’, where the two steps are merged into one, such an issue might be avoided. More research would be needed to confirm this speculation. For convex losses (CE and MF-LF), we find no significant pattern between ‘E’ in (A) and ‘BP’ in (B).

Figure 3: Sensitivity analysis results. (a) Size of unlabelled set. (b) Temperature. (c) Accuracy of HCs.

Next, we discuss the completely overlapping case.

All methods perform rather well. We can see that all methods, including SD, achieve about the same accuracy (within a ~1% range). This shows that our proposed methods also perform well in the common case where all HCs are trained to classify all classes, and it corroborates the claim that our proposed methods are generalisations of distillation.

Not balancing soft labels performs better. We note that balancing soft labels tends to slightly deteriorate the accuracy, the opposite of the result in the random classes case. Here, even SD-BS, which receives an accuracy boost in the random classes case, performs worse than its counterpart SD. This suggests that not balancing soft labels may be the better option in the overlapping classes case.

Distillation may outperform its supervised counterpart. For the LSUN and Places365 datasets, we see that the distillation methods often perform better than SPV. Especially in the case of VGG16, SPV consistently performs worse than the other methods by 1 to 3% in most of the trials. This shows that distillation-based methods can outperform their supervised counterparts.

4.2 Sensitivity Analysis

In this section, we perform three sets of sensitivity analyses on the effect of the size of the transfer set, the temperature parameter, and the accuracy of the HCs. We use the same settings as the ImageNet random classes experiment in the previous section, with VGG16 as the unified classifier. We run 50 trials for each test. We evaluate the following five methods as a representative set of SD and the top-performing methods from the previous section: SD, SD-BS, MF-P-BP, MF-LF-BS, and CE-BS.

Size of transfer set We use this test to evaluate the effect of the number of unlabelled samples in the transfer set. We vary the number of samples over a range of sizes; the result is shown in Fig. 3a. As expected, all methods deteriorate as the size of the transfer set decreases. In this test, MF-P-BP is the most affected, as its accuracy drops the fastest. Still, all other methods outperform SD over the whole test range, illustrating their robustness to transfer sets of different sizes.

Temperature In this test, we vary the temperature used to smooth the probabilities (see (2) or [17]) before using them to estimate the soft labels or train the unified classifier. The result is shown in Fig. 3b. We can see that the accuracies of SD and SD-BS drop significantly when the temperature is set to high and low values, respectively. The other three methods are less affected by the value of the temperature.
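The temperature smoothing used here follows Hinton et al. [17]: the probabilities are softened by re-applying a softmax to their log-probabilities divided by the temperature. A small sketch (the function name is ours):

```python
import numpy as np

def smooth(probs, T):
    """Soften a probability vector with temperature T, as in distillation
    [17]: softmax of log-probabilities divided by T. T > 1 flattens the
    distribution; T < 1 sharpens it; T = 1 leaves it unchanged."""
    logp = np.log(np.asarray(probs, dtype=float) + 1e-12) / T
    logp -= logp.max()          # numerical stability
    e = np.exp(logp)
    return e / e.sum()
```

High temperatures exaggerate the "dark knowledge" in the small probabilities, which is why the naive SD variants are more sensitive to the choice of T.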

HCs’ accuracies In this test, we evaluate the robustness of UHC methods against varying accuracy of the HCs. The test protocol is as follows. In each trial, we vary the accuracy of all HCs to 40-80%, obtain soft labels from them, and use these to perform UHC. To vary the accuracy of each HC, we hold out a number of samples per class from the training data as the adjustment set, completely train each HC on the remaining training data, then inject increasing Gaussian noise into the last fc layer until the accuracy on the adjustment set drops to the desired value. If the initial accuracy of an HC is already below the desired value, we simply use the initial HC. The result of this evaluation is shown in Fig. 3c. We can see that the accuracy of all methods increases as the HCs perform better, illustrating that HC accuracy is an important factor in the performance of UHC methods. We can also see that MF-P-BP is the most affected by low HC accuracy, while MF-LF-BS is the most robust.
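The noise-injection protocol above can be sketched as follows; the weight array, the `eval_acc` callable, and the step size are illustrative placeholders, not the paper's exact implementation:

```python
import numpy as np

def degrade_to_accuracy(weights, eval_acc, target_acc, step=0.01,
                        max_steps=1000, rng=None):
    """Sketch of the degradation protocol: inject Gaussian noise with an
    increasing standard deviation into the last-layer weights until the
    measured accuracy drops to the target. `eval_acc` is assumed to be a
    callable returning accuracy on the held-out adjustment set."""
    rng = np.random.default_rng(rng)
    w = np.asarray(weights, dtype=float)
    sigma = 0.0
    for _ in range(max_steps):
        if eval_acc(w) <= target_acc:
            break                      # desired accuracy reached
        sigma += step                  # increase the noise level
        w = np.asarray(weights, dtype=float) + rng.normal(0.0, sigma, size=w.shape)
    return w
```

Resetting from the clean weights at each step (rather than accumulating noise) keeps the degradation controlled by a single noise scale.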

Based on the sensitivity analysis, we see that MF-LF-BS is the most robust method against the number of samples in the transfer set, temperature, and accuracy of the HCs. This result provides further evidence that MF-LF-BS should be the suggested method for solving UHC. We provide the complete sensitivity plots with all methods in the supplementary material.

5 Conclusion

In this paper, we formalised the problem of unifying knowledge from heterogeneous classifiers (HCs) using only unlabelled data. We proposed cross-entropy minimisation and matrix factorisation methods for estimating soft labels of the unlabelled data from the outputs of the HCs, based on a derived probabilistic relationship. We also proposed two extensions: directly backpropagating the loss for neural networks, and balancing the estimated soft labels. Our extensive experiments on ImageNet, LSUN, and Places365 show that our proposed methods significantly outperform a naive extension of knowledge distillation. These results, together with three additional sensitivity analyses, suggest that matrix factorisation in logit space with balanced soft labels is the most robust approach to unify HCs into a single classifier.


  • [1] Shai Avidan. Ensemble tracking. IEEE TPAMI, 29(2):261–271, 2007.
  • [2] Michael W. Berry, Murray Browne, Amy N. Langville, Paul V. Pauca, and Robert J. Plemmons. Algorithms and applications for approximate nonnegative matrix factorization. Computational statistics & data analysis, 52(1):155–173, 2007.
  • [3] Stephen Boyd, Seung-Jean Kim, Lieven Vandenberghe, and Arash Hassibi. A tutorial on geometric programming. Optimization and engineering, 8(1):67, 2007.
  • [4] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, Jan. 2011.
  • [5] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
  • [6] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
  • [7] Aeron M. Buchanan and Andrew W. Fitzgibbon. Damped newton algorithms for matrix factorization with missing data. In CVPR, 2005.
  • [8] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In ACM SIGKDD, pages 535–541, 2006.
  • [9] Ricardo S. Cabral, Fernando De la Torre, João P. Costeira, and Alexandre Bernardino. Matrix completion for multi-label image classification. In NIPS, 2011.
  • [10] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717, 2009.
  • [11] Alessio Del Bue, Joao Xavier, Lourdes Agapito, and Marco Paladini. Bilinear modeling via augmented lagrange multipliers (BALM). IEEE TPAMI, 34(8):1496–1508, 2012.
  • [12] Pedro A. Forero, Alfonso Cano, and Georgios B. Giannakis. Consensus-based distributed support vector machines. JMLR, 11:1663–1707, Aug. 2010.
  • [13] Yoav Freund and Robert Schapire. A short introduction to boosting. Journal-Japanese Society For Artificial Intelligence, 14(771-780):1612, 1999.
  • [14] Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross modal distillation for supervision transfer. In CVPR, 2016.
  • [15] Trevor Hastie, Saharon Rosset, Ji Zhu, and Hui Zou. Multi-class adaboost. Statistics and its Interface, 2(3):349–360, 2009.
  • [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [17] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015.
  • [18] Qifa Ke and Takeo Kanade. Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming. In CVPR, 2005.
  • [19] Josef Kittler, Mohamad Hatef, Robert P. W. Duin, and Jiri Matas. On combining classifiers. IEEE TPAMI, 20(3):226–239, 1998.
  • [20] Jakub Konečný, Brendan H. McMahan, Felix X. Yu, Peter Richtarik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. In NIPS Workshop on Private Multi-Party Machine Learning, 2016.
  • [21] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, (8):30–37, 2009.
  • [22] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
  • [23] Ludmila I. Kuncheva. Combining pattern classifiers: methods and algorithms. John Wiley & Sons, 2004.
  • [24] Daniel D Lee and Sebastian H. Seung. Algorithms for non-negative matrix factorization. In NIPS, 2001.
  • [25] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
  • [26] Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. Data-free knowledge distillation for deep neural networks. In NIPS workshop on learning with limited labeled data, 2017.
  • [27] Tomasz Malisiewicz, Abhinav Gupta, and Alexei A Efros. Ensemble of exemplar-SVMs for object detection and beyond. In ICCV, 2011.
  • [28] Andrew Ng. Machine Learning Yearning, chapter 39, page 76. deeplearning.ai, 2018.
  • [29] Takayuki Okatani, Takahiro Yoshida, and Koichiro Deguchi. Efficient algorithm for low-rank matrix factorization with missing components and performance comparison of latest algorithms. In ICCV, 2011.
  • [30] Robi Polikar. Ensemble based systems in decision making. IEEE Circuits and Systems Magazine, 6(3):21–45.
  • [31] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. In ICLR, 2015.
  • [32] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
  • [33] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [34] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
  • [35] Zheng Xu, Yen-Chang Hsu, and Jiawei Huang. Learning loss for knowledge distillation with conditional adversarial networks. In ICLR Workshop, 2017.
  • [36] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
  • [37] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE TPAMI, 40(6):1452–1464, 2018.

Supplementary Material


This supplementary material contains the following contents.

  • Sec. A Cross-Entropy Method and Geometric Program

  • Sec. B Alternating Least Squares (ALS) for Matrix Factorisation Methods

  • Sec. C Computation cost

  • Sec. D Complete results for sensitivity analysis

Appendix A Cross-Entropy Method and Geometric Program

In this section, we show how to transform (7) in the main paper into a geometric program [3]. First, we rewrite the objective as follows:


Then, we can transform the problem as follows:


subject to (23)

where we add new variables that upper-bound each posynomial term in the numerator of the objective function. This turns the objective into the log of a monomial and adds inequality constraints to the formulation. Since the log is an increasing function and its argument in the objective function is a monomial, removing the log from the objective does not affect the minimum. This leads us to

subject to (25)

which is a geometric program in the original variables together with the newly introduced upper-bound variables [3]. With this formulation, we can further transform it into a convex problem via a logarithmic change of variables, replacing each positive variable by the exponential of an unconstrained one. Instead of applying this change of variables to (24), we apply it directly to (19). This transforms the problem to


which is (9) in the main paper.
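For reference, the generic form of this convexification, standard in geometric programming [3] and of which the steps above are an instance, can be written as:

```latex
% Generic GP convexification [3]; the derivation above instantiates this
% with the paper's own variables from (19) and (24).
\begin{align*}
\text{GP in posynomial form:}\quad
  & \min_{t > 0}\; f_0(t) \quad \text{s.t.}\quad f_k(t) \le 1,
    \qquad f_k(t) = \sum_j c_{kj} \prod_i t_i^{a_{kji}},\;\; c_{kj} > 0,\\
\text{after substituting } t_i = e^{y_i}:\quad
  & \min_{y}\; \log\sum_j e^{a_{0j}^{\top} y + \log c_{0j}}
    \quad \text{s.t.}\quad \log\sum_j e^{a_{kj}^{\top} y + \log c_{kj}} \le 0.
\end{align*}
```

The transformed problem is convex because the log-sum-exp function is convex in $y$.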

Appendix B Alternating Least Squares (ALS) for Matrix Factorisation Methods

In this section, we detail the Alternating Least Squares (ALS) [2] algorithms used for matrix factorisation in the main paper.

b.1 ALS for matrix factorisation in probability space

First, let us recall the formulation ((12) in the main paper):

subject to (28)

The ALS algorithm for solving the above formulation is shown in Alg. 1. Steps 4 and 12 are derived from the closed-form solutions of the two factors in the cost function, respectively. Steps 5 and 13 project the factors onto the nonnegative orthant to satisfy the constraints in (29). Steps 7 to 10 normalise the estimated probabilities to sum to one, per constraint (28). In fact, steps 5 and 13 are not actually necessary for this algorithm: the results of steps 4 and 12 are already nonnegative, since they are divisions of nonnegative numbers. As the termination criterion, we stop the algorithm when the RMSE between consecutive iterates falls below a fixed tolerance, or when a maximum number of iterations is reached.

In terms of implementation, each for-loop can be computed with vector operations (e.g., in MATLAB, or with NumPy in Python) instead of explicit loops. In addition, the factorisations of different samples can be performed in parallel on GPUs. These techniques give a significant speed-up over a naive implementation.
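The exact closed-form updates are given in the main paper; purely to illustrate the vectorised update / project / terminate pattern shared by Algs. 1 and 2, here is a generic alternating-least-squares factorisation (a stand-in, not the paper's algorithm):

```python
import numpy as np

def als_nmf(A, r, iters=200, tol=1e-6):
    """Generic ALS factorisation A ~ W @ H with nonnegativity projections,
    shown only to illustrate the vectorised closed-form-update / project /
    RMSE-terminate pattern; the paper's own updates operate on HC
    probabilities and differ in detail."""
    rng = np.random.default_rng(0)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    prev = np.inf
    for _ in range(iters):
        # closed-form least-squares update for H, then nonnegative projection
        H = np.maximum(np.linalg.lstsq(W, A, rcond=None)[0], 0.0)
        # closed-form least-squares update for W, then nonnegative projection
        W = np.maximum(np.linalg.lstsq(H.T, A.T, rcond=None)[0].T, 0.0)
        rmse = np.sqrt(np.mean((A - W @ H) ** 2))
        if abs(prev - rmse) < tol:   # RMSE-based termination, as in Sec. B.1
            break
        prev = rmse
    return W, H
```

Because each update is a single batched least-squares solve, the whole inner loop vectorises, and independent samples can be factorised in parallel exactly as described above.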

0:  Input: outputs of the HCs
0:  Output: the estimated factors
1:  Initialise the factors
2:  while not converged do
3:     for each component of the first factor do
4:        update the component via its closed-form solution
5:        project the component onto the nonnegative orthant
6:     end for
7:     compute the normalising sums of the estimated probabilities
8:     for each estimated probability vector do
9:        divide the vector by its normalising sum
10:     end for
11:     for each component of the second factor do
12:        update the component via its closed-form solution
13:        project the component onto the nonnegative orthant
14:     end for
15:  end while
Algorithm 1 Matrix factorisation in probability space

b.2 ALS for matrix factorisation in logit space

Again, let us recall the formulation ((15) in the main paper):

subject to (31)

The ALS algorithm for solving the above formulation is shown in Alg. 2. The derivation is similar to that in Sec. B.1: each step is derived via the closed-form solution of one variable, followed by appropriate projection steps. We use the same termination criteria as in the previous section.

0:  Input: outputs of the HCs
0:  Output: the estimated factors
1:  Initialise the first factor
2:  Initialise the second factor
3:  while not converged do
4:     for each component of the first factor do
5:        update the component via its closed-form solution and project
6:     end for
7:     for each component of the second factor do
8:        update the component via its closed-form solution
9:        apply the appropriate projection
10:     end for
11:     for each component of the third factor do
12:        update the component via its closed-form solution and project
13:     end for
14:  end while
Algorithm 2 Matrix factorisation in logit space

Appendix C Computation cost

Recall that to tackle UHC, our approach comprises three steps (Sec. 3 in the main paper, second paragraph): (i) obtaining the HC outputs on the transfer set, (ii) estimating soft labels from these outputs, and (iii) training the unified classifier from the transfer set and the soft labels. The methods in the main paper differ in cost only in step (ii); steps (i) and (iii) are the same for all methods. Focusing on (ii), standard distillation (Sec. 3.1) computes the soft labels in a single pass over the HC outputs, while the cross-entropy (Sec. 3.3) and matrix factorisation (Sec. 3.4) methods solve an optimisation problem, incurring a much higher cost that grows with the number of optimisation iterations. However, step (ii) is parallelisable for both the cross-entropy and matrix factorisation methods, and its cost is fixed, independent of the classifier models. In contrast, the cost of training the neural network in step (iii) dwarfs this fixed cost, so in practice the difference is almost negligible.

Appendix D Complete results for sensitivity analysis

In this section, we provide the sensitivity analysis results for all methods. Fig. 4 shows the sensitivity to the size of the transfer set; Fig. 5 shows that to the temperature; and Fig. 6 shows that to the accuracy of the HCs. Note that we use a different legend style from the main paper to accommodate more methods.

Figure 4: Sensitivity results on the size of the unlabelled set.
Figure 5: Sensitivity results on the temperature.
Figure 6: Sensitivity results on the accuracy of the HCs.