1 Introduction
Deep convolutional neural network (CNN) models are developing rapidly and have evolved into the state-of-the-art technique [1] for image classification tasks. However, when applied to real-time applications on embedded devices, where power and storage are limited, CNN models cannot meet real-time demands because of their large amount of computation. Therefore, optimizing and accelerating CNN models on embedded devices has become a challenge.
In view of this problem, researchers have proposed a variety of compression and acceleration methods, such as reducing the precision of multiplication and addition operations [2], setting the weights and inputs to binary codes [3], skillfully integrating several effective methods [4] and changing the network structure [5, 6, 7]. However, the computation of a CNN model is mainly induced by the computation of the fully-connected (FC) layers [4]. The methods mentioned above mainly focus on compressing the convolution layers and have not resolved the huge computation problem of the FC layers.
Another classification approach, based on a tree classifier, is appropriate for large-scale recognition problems with thousands of categories and has received extensive attention and substantial development. There are several methods to construct the structure of the tree classifier, such as leveraging semantic ontologies (taxonomies) [8, 9, 10], learning label trees [11] and probabilistic label trees [12], learning visual trees [13, 14] and the enhanced visual tree [15]. Compared with the FC layers in CNN models, the tree classifier has the advantage of a small amount of calculation, with a computation complexity that is only logarithmic in the number of categories [11]. However, there has been no work that replaces FC layers with a tree classifier, because most previous work constructs the structure of the tree classifier by clustering directly on the image dataset. Previous methods do not utilize the information in the FC layers, so their accuracy is limited, which restricts the application of tree classification to accelerating deep CNN models. Moreover, this limitation also separates research on tree classification from research on deep CNN models, so neither can benefit from the other.
[16] discovered that deep CNN models have visual confusions similar to those of human beings, and we believe this characteristic can be used as the metric to construct the Label Tree. Therefore, we propose to use a community detection algorithm to construct the Label Tree, called the Visual Confusion Label Tree (VCLT). With this method, we can fully utilize the information in the FC layers of CNN models. Compared with previous Label Tree building methods, the advantage of the VCLT is that there is no need to manually set parameters or run clustering tasks during tree construction. In addition, our VCLT is constructed directly from the features in deep CNN models, so it has a more reasonable structure, which is beneficial for improving the accuracy of the tree classifier. Moreover, to the best of our knowledge, the VCLT is the first effort that connects the CNN model and the Label Tree directly, so the tree structure fully inherits the information contained in the FC layers.
There are two main contributions in this paper as follows.

Visual Confusion Label Tree: Our construction method is based on a hierarchical community detection algorithm. Applying this algorithm to the output of the FC layers, we construct a tree classifier whose structure is more suitable for deep CNN models. With this method we improve the accuracy of the tree classifier compared with previous work, and we support this improvement in theory.

Replacing the FC layers with the tree classifier: After constructing the label tree, we replace the FC layers of the deep CNN models with our VCLT and propose an effective algorithm to train the tree classifier. With this replacement we substantially reduce the amount of computation in the FC layers without sacrificing the accuracy of the original CNN-based methods.
2 Label Tree in a Nutshell
The concept of the Label Tree was first proposed in [11] for classification. A label tree is a tree T = (V, E, F, L) with indexed nodes V = {0, ..., n}, where E = {(p, c)} is a set of edges that are ordered pairs of parent and child node indices, F = {f_1, ..., f_n} are label predictors, and label sets L = {l_0, ..., l_n} are associated to the nodes. Except for the root of the tree, all other nodes have one parent and an arbitrary number of children. The label set indicates the set of labels to which a point should belong if it arrives at the given node. Classifying an example begins at the root node; for each edge leading to a child, one computes the score of the label predictor, which predicts whether the example belongs to the label set of that child. One takes the most confident prediction, traverses to that child node, and then repeats the process. Classification is complete when one arrives at a node that identifies only a single label, which is the predicted class. More details about Label Trees can be found in [11, 14, 17].
3 Visual Confusion Label Tree and Training
3.1 Definition of the Visual Confusion Label Tree
Definition 1.
A Visual Confusion Label Tree is a tree with L hierarchical layers, where n_l denotes the number of nodes in the l-th layer; node sets V = {V_1, ..., V_L}, where V_l is the set of nodes in the l-th layer; branch edges E, which are ordered pairs of parent and child node indices; and label sets S = {S_1, ..., S_L}, where S_l is the label set of the nodes in the l-th layer and S_l = {s_{l,1}, ..., s_{l,n_l}}, where s_{l,i} denotes the label set of the i-th node in this layer.
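As a concrete illustration of Definition 1, the tree can be represented with a simple node structure. This is a minimal sketch of our own; the class and field names are not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class VCLTNode:
    """One node of a Visual Confusion Label Tree (illustrative sketch)."""
    level: int          # index of the hierarchical layer
    label_set: set      # labels reachable through this node
    children: list = field(default_factory=list)  # child VCLTNode objects

    def add_child(self, child):
        # A parent's label set is the union of its children's label sets.
        self.children.append(child)
        self.label_set |= child.label_set

# Tiny example: two fine-grained leaves under one coarse-grained node.
cat = VCLTNode(level=0, label_set={"cat"})
dog = VCLTNode(level=0, label_set={"dog"})
pets = VCLTNode(level=1, label_set=set())
pets.add_child(cat)
pets.add_child(dog)
print(pets.label_set)  # {'cat', 'dog'} (set order may vary)
```

The invariant maintained by `add_child` mirrors the definition: the label set of a parent is exactly the union of the label sets of its children.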
3.2 Visual Confusion Label Tree Establishing
Given a dataset and its corresponding classification model, Algorithm 1 establishes the VCLT defined in Definition 1 using confusion graph generation and a community detection algorithm. There are three main steps in Algorithm 1. The first is using the confusion graph generation algorithm to build a confusion graph. The second is using the hierarchical community detection algorithm to reveal communities in the confusion graph. The last is establishing a VCLT from the results of the second step.
Specifically, for the function “GenerateConfusionGraph”, we utilize the confusion graph establishing algorithm from [16]. This algorithm first normalizes the top classification scores of each test sample and then accumulates each normalized score into the weight of the edge that connects the labeled category and the predicted category. For the function “HierarchicalCommunityDetect”, we use the algorithm from [18]. This is an iterative algorithm that keeps running until the modularity converges. The function outputs the community hierarchy and the corresponding label sets. The community hierarchy is a set of arrays that record the community partition at each iteration, and it has the same structure as the node sets in Definition 1. The output label sets are almost the same as the label sets in Definition 1, except that their members refer to the label sets of communities at each iteration. In particular, in the corresponding line of Algorithm 1, we add the marks of the communities instead of the vertexes in these communities.
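A minimal sketch of the confusion-graph generation step, as we understand the algorithm of [16]: for each sample, the top-k scores are normalized and accumulated onto the edges between the true label and each predicted label. The function name, the choice of k, and the undirected-edge representation are our own simplifications.

```python
import numpy as np
from collections import defaultdict

def generate_confusion_graph(scores, labels, topk=5):
    """Accumulate normalized top-k scores into undirected edge weights.

    scores: (n_samples, n_classes) array of classification scores.
    labels: (n_samples,) ground-truth class indices.
    Returns a dict mapping unordered class pairs to confusion weights.
    """
    graph = defaultdict(float)
    for s, y in zip(scores, labels):
        y = int(y)
        top = np.argsort(s)[::-1][:topk]   # top-k predicted classes
        w = s[top] / s[top].sum()          # normalize the top-k scores
        for cls, wt in zip(top, w):
            cls = int(cls)
            if cls != y:                   # no self-loops
                graph[(min(cls, y), max(cls, y))] += float(wt)
    return graph

# Toy example with 3 classes and 2 samples.
scores = np.array([[0.6, 0.3, 0.1],
                   [0.1, 0.7, 0.2]])
labels = np.array([0, 1])
g = generate_confusion_graph(scores, labels, topk=3)
# g[(0, 1)] == 0.4: 0.3 from the first sample plus 0.1 from the second.
```

Strongly confused category pairs accumulate heavy edges, which is exactly what the community detection step then groups together.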
We use Algorithm 1 to construct a VCLT on the CIFAR-10 dataset; the construction process is shown in Fig. 1, where the confusion graph and the communities inside it are on the left and the corresponding VCLT is on the right. The left side of Fig. 1 is divided into four steps: the Initial step and three iterations. We apply the function “GenerateConfusionGraph” to generate a confusion graph, which is shown at the Initial step. Each vertex represents one category in the dataset and the weight of each edge quantifies the confusion between the two connected categories. For instance, the strong link between “dog” and “cat” denotes that the model may highly probably confuse dogs with cats. Contrarily, the weak edge connecting “dog” and “ship” indicates that the confusion between them is weak. Then we use the function “HierarchicalCommunityDetect” on the confusion graph and obtain the community structure of the graph, from fine-grained to coarse-grained, at each iteration of this algorithm. At the first iteration, as illustrated in Fig. 1, we get five fine-grained communities and add five corresponding nodes to the tree. As each member of a community refers to one category, we link the leaf nodes to these first-level nodes; for instance, we link the “cat” and “dog” leaf nodes to the same first-level node. At the second iteration, we get two coarse-grained communities based on the communities detected at the first iteration, and each fine-grained community from the first iteration is a member of a coarse-grained community at the second. We then link first-level nodes to second-level nodes according to this relationship. Similarly, at the last iteration, we link the second-level nodes to the root and finish the construction process.
As is proposed in [11], in order to achieve high classification accuracy, an ideal label tree should make the fine-grained categories contained in sibling leaf nodes under the same parent node as similar as possible while making the coarse-grained categories contained in parent nodes as dissimilar as possible. Our VCLT structure satisfies this because the categories in leaf nodes are strongly confused while those in parent nodes are weakly confused. Using Algorithm 1, we also construct a VCLT on the CIFAR-100 dataset, shown in Fig. 2. Compared with the Enhanced Visual Tree (EVT) structure in [15], our VCLT is more reasonable. For example, EVT puts “whale”, “shark”, “skyscraper”, “rocket” and “mountain” into one coarse-grained category while our VCLT divides them into three independent coarse-grained categories. Another example is that EVT divides “bicycle” and “motorcycle” into two different coarse-grained categories but our VCLT puts them into the same fine-grained category due to the strong visual similarity between them.
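The level-by-level linking described above can be sketched as follows. The input format (a list of partitions from fine to coarse, each community given as a list of member indices from the previous level) is our assumption about what the community detection step returns.

```python
def build_tree_from_hierarchy(n_leaves, partitions):
    """Build parent pointers level by level from a community hierarchy.

    partitions: list of partitions, fine to coarse; partitions[t] is a list
    of communities, each a list of node indices from level t.
    Returns a list of (child_id, parent_id) edges with globally unique ids.
    """
    edges = []
    prev_ids = list(range(n_leaves))      # ids of the current level's nodes
    next_id = n_leaves                    # next unused node id
    for communities in partitions:
        new_ids = []
        for members in communities:
            parent = next_id
            next_id += 1
            for m in members:             # link each member to its community
                edges.append((prev_ids[m], parent))
            new_ids.append(parent)
        prev_ids = new_ids                # communities become the next level
    return edges

# Toy hierarchy: 4 leaves -> 2 communities -> 1 root.
edges = build_tree_from_hierarchy(4, [[[0, 1], [2, 3]], [[0, 1]]])
# edges == [(0, 4), (1, 4), (2, 5), (3, 5), (4, 6), (5, 6)]
```

Because each iteration of the community detection only merges communities from the previous iteration, this linking always yields a proper tree with a single root.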
3.3 Visual Confusion Label Tree Classifier Training
Similar to [14], we develop a top-down approach to train classifiers on each node of the VCLT. One parent node contains a set of coarse-grained categories or a set of fine-grained categories. To make full use of these features, we apply a multi-kernel learning algorithm to train the classifier on each node. In order to control the inter-level error propagation, we add a constraint to our learning algorithm. The constraint guarantees that an image must first be assigned to its parent node (higher-level non-leaf node) correctly before it can be further assigned to a child node (lower-level non-leaf node or leaf node). All these methods make the tree classifier over the VCLT more discriminative.
In order to discriminate the coarse-grained or fine-grained categories on a given node from those on its sibling nodes under the same parent node, its multi-kernel SVM classifier is defined as:
(1)  f_j^l(x) = sum_i alpha_i y_i K(x_i, x) + b_j
where l is the level of node j and K is the multi-kernel, which is defined as:
(2)  K(x, x') = sum_{m=1}^{M} beta_m K_m(x, x')
with:
(3)  sum_{m=1}^{M} beta_m = 1,  beta_m >= 0
In our method, we use common kernels such as the linear kernel, the polynomial kernel, and the Gaussian kernel.
We train each classifier node by node from the root to the leaf nodes and use the strategy of SVM Plus [21] to train the multi-kernel SVM classifiers. Specifically, given a set of labeled training images for the B sibling nodes under the same parent node (B is the number of sibling nodes) and their training samples at level l (l is the level of the sibling nodes), training the multi-kernel SVM Plus classifiers for the sibling nodes is achieved by optimizing an objective function:
(4)  min  (1/2) sum_{j=1}^{B} ||f_j||^2 + C sum_i xi_i
subject to:
(5)  f_{y_i}(x_i) - f_j(x_i) >= 1 - xi_i,  xi_i >= 0,  for all i and all j != y_i
where xi_i indicates the slack variables and C is the positive regularization parameter that acts as the penalty term.
One problem of the label tree is that error propagation may negatively influence the classification result. If a classification error happens at a parent node, the prediction for the sample will be wrong, because the labels of the leaf nodes under the misclassified parent node are all incorrect. In order to resolve this problem, we add an inter-level constraint to the SVM Plus classifiers. Our strategy is that a sample should be classified correctly at the parent level (level l-1) before it is further classified at the current node level (level l). Thus, we add a constraint to guarantee that the score of the current node classifier is larger than the score of its parent node classifier, which is denoted by Eq. (8). Therefore, we extend Eq. (4, 5) to:
(6)  min  (1/2) sum_{j=1}^{B} ||f_j||^2 + C sum_i xi_i
subject to:
(7)  f_{y_i}(x_i) - f_j(x_i) >= 1 - xi_i,  xi_i >= 0,  for all i and all j != y_i
(8)  f_{y_i}^l(x_i) - f^{l-1}(x_i) >= 0,  for all i
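Top-down classification over the trained tree can be sketched as follows: at each node we descend to the child with the highest score, so an error at a parent cannot be recovered lower down, which is why the training constraint ties child scores to parent scores. The tree encoding and the scoring functions here are our own illustration.

```python
def predict(node, x):
    """Descend a label tree: at each internal node, take the child whose
    classifier is most confident; stop at a leaf (single-label node)."""
    while node["children"]:
        node = max(node["children"], key=lambda c: c["score_fn"](x))
    return node["label"]

# Toy tree: root -> {animals -> {cat, dog}, vehicles -> {car}}.
leaf = lambda lbl: {"children": [], "label": lbl, "score_fn": None}
tree = {"children": [
    {"children": [dict(leaf("cat"), score_fn=lambda x: x[0]),
                  dict(leaf("dog"), score_fn=lambda x: x[1])],
     "label": None, "score_fn": lambda x: x[0] + x[1]},
    {"children": [dict(leaf("car"), score_fn=lambda x: x[2])],
     "label": None, "score_fn": lambda x: x[2]},
], "label": None, "score_fn": None}

print(predict(tree, [0.1, 0.7, 0.4]))  # dog (animal branch wins, 0.8 > 0.4)
```

Only the classifiers along one root-to-leaf path are evaluated, which is the source of the speedup over evaluating all class scores in an FC layer.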
4 Experiment
4.1 Datasets and Experimental Settings
We use CIFAR-100 [22] and ILSVRC2012 [10] to evaluate the performance of the proposed classification method. CIFAR-100 has 60,000 images in 100 categories; each category has 600 images, of which 500 are for training and 100 for validation. Divided into a training set and a validation set, ILSVRC2012 has over a million images in 1,000 categories and is commonly used to evaluate image classification algorithms. We use the training set for training and the validation set for testing. The Mean Accuracy (MA) [15] is used to capture the performance of each method. A PC with an Intel Core i7 and 64 GB memory is used to run all experiments.
4.2 Comparison of different tree classifiers
In this section, we compare the classification accuracy of our proposed VCLT classifier with those of other state-of-the-art tree classifiers. Trained and tested on the CIFAR-100 and ImageNet datasets, we compare the MA of each of the following tree classifiers: semantic ontology [8], label tree [11], visual tree [14] and the enhanced visual tree [15]. In order to train and test each model, we employ the DeCAF [23] features extracted from the FC6 layer of the AlexNet model (its first FC layer), and the classification accuracy of each tree classifier (quantified in MA) is shown in Table 1.
Approaches  CIFAR100  ImageNet 

Semantic ontology  
Label tree  
Visual tree  
Enhanced visual tree  
Visual confusion label tree 
From Table 1
, we find that the performance of Semantic ontology is the worst, because its tree structure is constructed in semantic space while the image classification process operates in feature space. Among the other four methods based on feature space, the performance of Label tree is worse because it uses an OvR classifier to construct its tree structure, which suffers from sample imbalance and the limited performance of the classifier. Visual tree uses average features extracted directly from the dataset. Enhanced visual tree adopts a spectral clustering method that better reflects the diversity of categories, so its performance is better than that of Visual tree. Our VCLT constructs the tree structure based on the confusion of the CNN model, which makes sibling nodes as close as possible and parent nodes as dissimilar as possible. This tree structure is more proper, and we obtain a significant improvement over the Enhanced visual tree on both datasets.
4.3 Comparison between our tree classifier and CNN models
In this section, we compare the classification accuracy and test time of our VCLT with those of the corresponding CNN model. We choose AlexNet and VGG-Verydeep-16 (VGG16) for this comparison. In the AlexNet-based experiment, we first train an AlexNet on the CIFAR-100 dataset. Then we employ the DeCAF features extracted from the FC6 layer of AlexNet to train the corresponding VCLT classifier. The classification accuracy and test time of both are shown in Table 2. For a CNN model, the “test time” in Table 2 is the average running time of its FC layers when processing one image. For the VCLT classifier, the “test time” is the average running time of the whole tree classifier when processing one image. For the VGG16-based experiment, we do the same except that we use the features extracted from the FC14 layer of VGG16 to train the VCLT classifier. Table 3 shows this comparison on the ImageNet dataset.
Approaches  Accuracy  Test time (ms)  Speedup 

AlexNet    
VCLT_AlexNet  
VGG16    
VCLT_VGG16 
Approaches  Accuracy  Test time (ms)  Speedup 

AlexNet    
VCLT_AlexNet  
VGG16    
VCLT_VGG16 
From Table 2 and Table 3, we can see that the classification accuracy of the VCLT with AlexNet is around 3% higher than that of the original AlexNet on both the CIFAR-100 and ImageNet datasets. In addition, on both datasets, the speedup ratios achieved by replacing the FC layers of AlexNet with our tree classifier are significant. Though the accuracy improvement of the VCLT with VGG16 is trivial on CIFAR-100 and even negative on ImageNet, the speedup ratios are remarkable, which demonstrates the VCLT's promising potential to accelerate CNN-based applications.
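A sketch of how the per-image "test time" of a two-layer FC head could be measured. The layer sizes (4096-d features, 100 classes) match an AlexNet-style head on CIFAR-100; the measurement protocol is our assumption, not the paper's exact setup.

```python
import time
import numpy as np

def avg_fc_time(batch, w1, b1, w2, b2, repeats=3):
    """Average wall-clock time (ms) per image for two stacked FC layers."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        h = np.maximum(batch @ w1 + b1, 0.0)   # FC + ReLU
        _ = h @ w2 + b2                        # final FC scores
    t1 = time.perf_counter()
    return (t1 - t0) * 1000.0 / (repeats * batch.shape[0])

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4096)).astype(np.float32)
w1 = (rng.standard_normal((4096, 4096)) * 0.01).astype(np.float32)
w2 = (rng.standard_normal((4096, 100)) * 0.01).astype(np.float32)
ms = avg_fc_time(x, w1, np.zeros(4096, np.float32), w2, np.zeros(100, np.float32))
```

The same harness can time the tree classifier for a like-for-like per-image comparison; wall-clock results naturally depend on the BLAS backend and hardware.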
Approaches  Accuracy 

DeepCom  
VCLT_DeepCom  
BWN  
VCLT_BWN  
XNOR  
VCLT_XNOR 
Additionally, we compare the classification accuracy and the speedup ratio of our VCLT with those of compressed CNN models. Here we choose the Binary-Weights-Net (BWN), XNOR-Net (XNOR) [3] and the Deep-Compression network (DeepCom) [4] for comparison. These compressed CNN models are based on AlexNet, and we use their pretrained models on ImageNet in our experiment. The accuracy results are shown in Table 4; we find that our VCLT has no accuracy decline.
The speedup ratio results are shown in Fig. 3. For DeepCom, following [4], the speedup comparison mainly focuses on the FC layers; the results are shown as DeepCom_FC in Fig. 3. We find that the speedup ratio of our VCLT is higher than that of DeepCom when compared on the FC layers of the original AlexNet model. For BWN and XNOR, following [3], the speedup comparison covers both the FC layers and the entire network. The results are shown as BWN_FC, XNOR_FC, BWN_All and XNOR_All, respectively. We find that the speedup ratio of our VCLT is higher than those of BWN and XNOR when compared on the FC layers of the original AlexNet model. As for the entire network, our VCLT also obtains an improvement over both BWN and XNOR in terms of speedup ratio.
5 Conclusion
In this paper, we propose a method of replacing the fully-connected layers in CNN models with a tree classifier for image classification applications. We utilize a community detection algorithm to construct a Visual Confusion Label Tree based on the confusion characteristics of CNN models. Then, we use the multi-kernel SVM Plus classifier with hierarchical constraints to train the tree classifier on the Visual Confusion Label Tree. Finally, we use this tree classifier to replace the fully-connected layers in CNN models. The experimental results on CIFAR-100 and ImageNet demonstrate the advantages of the proposed method over other tree classifiers and over original CNN models such as AlexNet and VGG16.
6 Acknowledgements
This work was supported by the Natural Science Foundation of China under the grant No. U1435219, No. 61402507 and No. 61303070.
References
 [1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016, pp. 770–778.
 [2] M. Courbariaux, Y. Bengio, and J.-P. David, “Training deep neural networks with low precision multiplications,” arXiv preprint arXiv:1412.7024, 2014.
 [3] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” in ECCV, 2016, pp. 525–542.
 [4] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” in ICLR, 2016.
 [5] M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv preprint arXiv:1312.4400, 2013.
 [6] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.
 [7] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size,” arXiv preprint arXiv:1602.07360, 2016.
 [8] L.-J. Li, C. Wang, Y. Lim, D. M. Blei, and F.-F. Li, “Building and using a semantivisual image hierarchy,” in CVPR, 2010, pp. 3336–3343.
 [9] B. Zhao, F.-F. Li, and E. P. Xing, “Large-scale category structure aware image categorization,” in Advances in Neural Information Processing Systems, 2011, pp. 1251–1259.
 [10] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: A large-scale hierarchical image database,” in CVPR, 2009, pp. 248–255.
 [11] S. Bengio, J. Weston, and D. Grangier, “Label embedding trees for large multi-class tasks,” in Advances in Neural Information Processing Systems, 2010, pp. 163–171.
 [12] B. Liu, F. Sadeghi, M. Tappen, O. Shamir, and C. Liu, “Probabilistic label trees for efficient large scale image classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 843–850.
 [13] J. Fan, X. He, N. Zhou, J. Peng, and R. Jain, “Quantitative characterization of semantic gaps for learning complexity estimation and inference model selection,” IEEE Transactions on Multimedia, vol. 14, no. 5, pp. 1414–1428, 2012.
 [14] J. Fan, N. Zhou, J. Peng, and L. Gao, “Hierarchical learning of tree classifiers for large-scale plant species identification,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4172–4184, 2015.
 [15] Y. Zheng, J. Fan, J. Zhang, and X. Gao, “Hierarchical learning of multi-task sparse metrics for large-scale image classification,” Pattern Recognition, vol. 67, pp. 97–109, 2017.
 [16] R. Jin, Y. Dou, Y. Wang, and X. Niu, “Confusion graph: Detecting confusion communities in large scale image classification.”
 [17] J. Deng, S. Satheesh, A. C. Berg, and F.-F. Li, “Fast and balanced: Efficient label tree learning for large scale object recognition,” in Advances in Neural Information Processing Systems, 2011, pp. 567–575.
 [18] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast unfolding of communities in large networks,” Journal of Statistical Mechanics: Theory and Experiment, P10008, 2008.
 [19] Yueqing Wang, Xinwang Liu, Yong Dou, Qi Lv, and Yao Lu, “Multiple kernel learning with hybrid kernel alignment maximization,” Pattern Recognition, vol. 70, pp. 104–111, 2017.
 [20] Qiang Wang, Yong Dou, Xinwang Liu, Qi Lv, and Shijie Li, “Multi-view clustering with extreme learning machine,” Neurocomputing, vol. 214, pp. 483–494, 2016.
 [21] Alexander J. Smola, Peter Bartlett, Bernhard Schölkopf, and Dale Schuurmans, “Probabilities for SV machines,” in Advances in Large Margin Classifiers, 2000, pp. 61–74.
 [22] A. Krizhevsky, “Learning multiple layers of features from tiny images,” 2009.
 [23] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, et al., “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the ACM International Conference on Multimedia, 2014, pp. 675–678.
Appendix A Theoretical Analysis of the Effect on Accuracy when Tree Structure Changed
We consider a three-category classification problem. We assume that the categories are A, B and C, and that we can construct four different tree structures, which are shown in Fig. 1. According to the definition of the Label Tree, one of these tree structures is more reasonable than the others for the classification task.
In order to prove this, we assume that we have trained classifiers on these tree structures. At each node we use an SVM as the classifier, and the SVM classifiers on these nodes are the same. Here we assume given distances between every pair of the three categories, which indicate that category A is similar to B while C is different from both of them. Ideally, the distances among these categories in the feature space projected by the classifier (the SVM distance) are as shown in Fig. 2.
We will prove that the tree structure that first separates C from the pair {A, B} is the most ideal one. We compare it with only one of the two remaining hierarchical structures, because those two are actually the same by symmetry.
Proposition A.1.
The tree structure that first separates C from the pair {A, B} is better than the other structures.
Proof.
For the first structure, we assume that a probability denotes the chance of correctly separating the samples of A and B from all the samples at the SVM on the corresponding node; the probabilities for the other nodes are defined analogously. In addition, we define the probabilities of correctly classifying each of the three categories from all the samples. Here we know:
(1) 
Because the probability that an SVM makes a correct classification is proportional to the SVM distance, we get:
(2) 
Similarly, for the next tree structure, we get:
(3) 
For the remaining structure, we get:
(4) 
And we know that the probability of correct classification for a tree structure can be defined as:
(5) 
The probabilities for , and are:
(6) 
Since the SVM classifiers on the nodes of a tree structure are the same, the probabilities in Eq. (6) share a common proportionality constant, which we denote as:
(7) 
For and :
(8) 
For and :
(9) 
If the opposite held, it would demand a very large value of this constant. From the assumption we know that category C is far from both categories A and B in the SVM distance, so the claimed inequality holds.
In summary, the tree structure that first separates C from the pair {A, B} is better than the other structures.
∎
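As a toy numeric check of the argument above, consider a two-level tree over {A, B, C} and assume per-node accuracies that grow with the separation between the groups a node must distinguish; all the numbers below are our own assumptions, not from the paper.

```python
def tree_accuracy(p_root, p_leaf_split):
    """P(correct) for a 2-level, 3-class tree, averaged over the classes.

    Structure: the root separates a pair {X, Y} from a singleton {Z};
    a second classifier then splits X from Y. The singleton class needs
    only the root decision, the paired classes need both decisions.
    """
    return (2 * p_root * p_leaf_split + p_root) / 3

# Assumed accuracies: separating the dissimilar class C at the root is
# easy (0.99); splitting the similar pair A/B is hard (0.90). Grouping
# a dissimilar pair instead makes the root decision the hard one.
good = tree_accuracy(0.99, 0.90)   # root: {A, B} vs C, then A vs B
bad = tree_accuracy(0.90, 0.99)    # root groups a dissimilar pair
print(good > bad)  # True
```

Because the root decision gates every class while the leaf split gates only two of the three, placing the easy (dissimilar) split at the root yields the higher overall accuracy, matching the proposition.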
Appendix B Theoretical Analysis of the speedup ratio on replacing FullyConnected layers with the Tree Classifier
In this section we discuss the speedup ratio of our method in theory. Here we take AlexNet on the CIFAR-100 and ImageNet datasets as the analysis object, and we replace the fully-connected layers FC7 and FC8 with the tree classifier. Each FC layer is essentially a vector inner product process, which can be defined as:
(10)  y_c = w_c * x + b_c,  c = 1, ..., C
with:
(11)  w_c * x = sum_j w_{c,j} x_j
where y_c is the output of the FC layer, w_c is the kernel, x is the input of the FC layer and b_c is the bias; the subscript c denotes the output of the c-th channel, and there are C channels in total. The operator * denotes the convolution between a kernel and a feature map, which for an FC layer reduces to an inner product. For FC7 and FC8, the inputs are both 4096-dimensional feature vectors, and the outputs are a 4096-dimensional feature vector and a score vector whose length equals the number of categories. We can calculate the computation using the equations above.
If we replace the FC layers with our tree classifier, we should then calculate the computation of the tree classifier. Tree classifiers have a hierarchical structure: for each node, there is a classifier on each of its child nodes. Each classifier under the same parent node computes a score for a test image, and the parent node selects which branch to follow by comparing these scores. We repeat this process from the root node to the leaf nodes; finally, the category on the selected leaf node is the classification result. Therefore, the number of classifiers involved in the classification process equals the sum of the numbers of child nodes of all the nodes on the path from the root node to a specific leaf node. The worst case is the path with the most child nodes, and we determine this number for CIFAR-100 and ImageNet from the constructed trees. The dimension of the features used by the classifiers is 4096. Therefore, we can calculate the multiply-add computation of the tree classifier on the CIFAR-100 dataset as:
(12) 
Similarly, we can calculate the corresponding computation on the ImageNet dataset.
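The operation counting above can be reproduced with a short calculation. The per-layer dimensions are the standard AlexNet FC sizes; the number of node classifiers on the worst-case path is a hypothetical value chosen for illustration, since the exact counts depend on the constructed trees.

```python
def fc_ops(d_in, d_hidden, n_classes):
    """Multiplications + additions for FC7 (d_in x d_hidden) plus FC8."""
    return 2 * (d_in * d_hidden + d_hidden * n_classes)

def tree_ops(n_classifiers, feat_dim):
    """Each node classifier costs roughly one dot product over the features."""
    return 2 * n_classifiers * feat_dim

# AlexNet-style FC7/FC8 on CIFAR-100 (100 classes) and ImageNet (1000).
print(fc_ops(4096, 4096, 100))    # 34373632 (~34M operations)
print(fc_ops(4096, 4096, 1000))   # 41746432 (~42M operations)
# Hypothetical worst-case path with 18 classifiers over 4096-d features:
print(tree_ops(18, 4096))         # 147456 (~0.15M operations)
```

The two or three orders of magnitude between the FC count and the tree count is the source of the large speedup ratios reported in the table.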
We summarize the comparison in Table 1.
Classifier (millions of operations)  CIFAR-100  ImageNet

FC layers  34  42
Tree classifier  0.14  0.52
Speedup  233x  78x