1 Introduction
Image classification [akata2015evaluation, blot2016max, elsayed2018large, li2017improving, rastegari2016xnor, tang2015improving, wang2018ensemble, yan2012beyond]
, aiming at classifying an image into one of several predefined categories, is an essential problem in computer vision. For decades, researchers focused on representing images with handcrafted low-level descriptors [chen2009wld, lowe2004sift] and then discriminating them with a classifier (e.g., SVM [chang2011libsvm] or its variants [lu2007gait, maji2008classification]). However, due to the lack of high-level features, performance saturated. Thanks to the availability of huge labeled datasets [lu2014twoclass, russakovsky2015imagenet] and powerful computational infrastructures, convolutional neural networks (CNNs) can automatically extract discriminative high-level features from training images, significantly improving state-of-the-art performance.

Although high-level features are more discriminative, classifying images with them alone is still challenging, since the possibility of confusion grows with the number of categories. In addition, features in early layers have been shown to separate groups of classes at a higher level of the hierarchy [bilal2017convLearnHierarchy]. Therefore, researchers have attempted to combine high- and low-level features to exploit their complementary strengths [yu2017exploiting]. However, a simple combination yields features of relatively high dimension, hindering practical use.
Other researchers employ low-level features to make coarse decisions and then utilize high-level features to make finer ones, following the idea of divide-and-conquer. This can be achieved by designing deep decision trees that implement traditional decision trees [quinlan1986DecisionTree] with CNNs. Given a hierarchical structure of categories, a straightforward way is to let the network at the root node identify the coarsest category and then dynamically route to the network of a child node to determine the finer one recursively [kontschieder2015decForest]. However, hierarchical information about categories is not always available, so researchers must design suitable division schemes themselves, making the training process extremely complex (e.g., multi-staged). Besides, current deep-decision-tree-based methods face two other fatal weaknesses: (1) the network must store all the tree branches, making the number of parameters explosively larger than that of a single classification network; (2) once the decision routes down a false path, it can hardly be recovered.

To resolve these issues, we propose a novel Decision Propagation Module (DPM). The key idea is that if we use an early layer to generate a category-coherent decision and then propagate it along the network, the later layers can be guided to encode more discriminative features. By stacking a collection of DPMs into backbone network architectures for image classification, the generated Decision Propagation Networks (DP Nets) are explicitly formulated to progressively encode more descriptive features guided by the decisions made in early layers, and then to refine those decisions based on the newly generated features iteratively. From the viewpoint of residual learning [He2016ResNet], it is much easier to optimize this refining process than to make an unreferenced new decision from scratch. Besides easier optimization, this property enables DP Nets to naturally overcome the weaknesses of common deep decision trees. Firstly, in contrast to dynamically routing between several branches after making a decision, DPM applies the decision as a conditional code to the later layers, similar to [mirza2014conditionalGAN], such that the decision can be propagated without introducing additional network branches. Thanks to this decision propagation scheme, DP Nets can also recover from false decisions made earlier, since no hard routing is performed. Furthermore, instead of explicitly designating what each intermediate decision indicates, DPM, weakly supervised by three novel loss functions, automatically learns a more suitable and coherent division of the categories than the man-made category hierarchy, and can be trained fully end-to-end with the backbone networks. In total, our contribution is threefold:


We design a novel DPM, which propagates the decision made upon an early layer to guide the later layers.

We propose three novel loss functions to enforce DPM to make category-coherent decisions.

We demonstrate a general way to integrate DPMs into various backbone networks to form DP Nets.
Extensive comparison results on four publicly available datasets validate that DPM consistently improves classification performance and is superior to state-of-the-art methods. Code will be made public upon paper acceptance.
2 Related Work
Category Hierarchy, which indicates that categories form a semantic hierarchy with many levels of abstraction, has been well exploited [grauman2011learning, saha2018class2str, tousch2012semanticHierarchiesSurvey]. Deng et al. [deng2014LabelRelationGraphs] introduced hierarchy-and-exclusion graphs that capture semantic relations between any two labels to improve classification. Yan et al. [yan2015HierarchicalCNN] proposed a two-level hierarchical CNN, with the first layer separating easy classes using a coarse category classifier and the second layer handling difficult classes with fine category classifiers. To mimic the high-level reasoning ability of humans, Goo et al. [goo2016taxonomy] introduced a regulation layer that abstracts and differentiates object categories based on a given taxonomy, significantly improving performance. However, the man-made category hierarchy may not be a good division from the viewpoint of CNNs.
Deep Decision Trees/Forests.
The cascade of sample splitting in decision trees has been well explored by traditional machine learning approaches
[quinlan1986DecisionTree]. With the rise of deep networks, researchers have attempted to design deep decision trees or forests [zhou2017deepForest] to solve the classification problem. Since prevailing approaches for decision tree training typically operate in a greedy and local manner, hindering representation learning with CNNs, Kontschieder et al. [kontschieder2015decForest] introduced a novel stochastic routing for decision trees, enabling split-node parameter learning via backpropagation. Without requiring the user to set the number of trees, Murthy et al. [murthy2016decsionmininglowconfidentrecursive] proposed a "data-driven" deep decision network which stage-wise introduces decision stumps to classify confident samples and partitions the remaining, difficult-to-classify data into smaller clusters for learning successive expert networks in the next stage. Ahmed et al. [ahmed2016NetworkOfExperts] further proposed to jointly train a generalist that discriminates coarse groupings of categories and experts aimed at accurate recognition of classes within each specialty, obtaining substantial improvement. Instead of clustering data based on image labels, Chen et al. [chen2018SemisupervisedHierarchicalCNN] proposed a large-scale unsupervised maximum-margin clustering technique to iteratively split images into a number of hierarchical clusters, learning cluster-level CNNs at parent nodes and category-level CNNs at leaf nodes.
Different from the above approaches, which implement each decision branch with a separate routing network, Xiong et al. [xiong2015conditionalNetwork] proposed a conditional CNN framework for face recognition, which dynamically routes by activating a subset of kernels, making the deep decision tree more compact. Based on it, Baek et al. [baek2017deepDecisionJungle] proposed a fully connected "soft" decision jungle structure to make decisions recoverable, leading to more discriminative intermediate representations and higher accuracies.

Our DPM can be considered a deep-decision-tree-based approach, and the work most similar to ours is [baek2017deepDecisionJungle]. However, the differences are at least threefold. Firstly, instead of dynamically activating a subset of kernels to reduce parameters, which makes each kernel work for only a part of the decisions, our DPM adopts conditional codes to propagate decisions, enforcing every kernel to work for all decisions and thus making full use of the neurons. Secondly, their approach requires layers whose channel number is larger than the category number, which can hardly be satisfied in real cases with 1k or more categories, while our solution has no such restriction. Last but not least, we design three novel loss functions to enforce that DPM makes category-coherent decisions.
Belief Propagation in CNNs. Belief propagation has long been studied, especially by traditional methods [conitzer2019belief, Felzenszwalb2006EfficientBP]. The concept has also been exploited by various deep networks. Highway networks [Srivastava2015Highway] allow unimpeded information to propagate across several layers on information highways. ResNets [He2016identity] propagate identities via the well-defined res-block structures. Compared with those skip-connection-based methods, which propagate the identity feature maps directly, the intermediate decisions propagated by our approach are of much lower dimension but carry more explicit (category-coherent) guidance. Therefore, our DPM can be considered another feasible solution for belief propagation. Besides, we will also see that DPM can easily be integrated into skip-connection-based networks to further improve their performance.
3 Decision Propagation Networks
In this section, we first define the category-coherent decision, and then introduce the structure of the Decision Propagation Module (DPM) together with three corresponding loss functions for training it. In addition, we discuss the large category issue that hinders DPM training and our solution to it. Finally, we demonstrate several exemplars of Decision Propagation Networks obtained by integrating DPMs into popular backbone network architectures.
3.1 The Category-coherent Decision
Given inputs with the same object category, if their corresponding decisions are similar, then these decisions are called category-coherent decisions. Note that we also allow inputs of multiple categories to share the same decision result. In this paper, we set the category-coherent decision with $K$ ($K \geq 2$) auxiliary categories, namely $d = (d_1, \dots, d_K)$, with $d_j \in [0, 1]$ and $\sum_{j=1}^{K} d_j = 1$.
3.2 Structure of Decision Propagation Module
The Decision Propagation Module (DPM) is a computational unit devised to make a category-coherent early decision [zamir2017Feedback] based on the features encoded in an early layer and then propagate it to the following network layers to guide them. A diagram of DPM is shown in Fig. 1.
3.2.1 Make a Soft Decision
Given an intermediate feature map $X \in \mathbb{R}^{C \times H \times W}$, our aim is to make a category-coherent decision to guide subsequent network layers without introducing much additional computational cost. Therefore, instead of continuing to convolve on it, we propose to adopt global average pooling (GAP) to drastically reduce the feature dimensions. As verified in [lin2013NetInNet], the pooled feature map, with its channel-wise statistics, is usually discriminative enough for classification. We thus adopt a fully connected network with one or two layers to make a decision based on it. To allow this decision branch to be optimized with the whole network in an end-to-end manner, we apply the softmax function to the output and thus obtain a "soft" decision $d$.
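As an illustration, the decision branch above, together with the propagation step of Sec. 3.2.2, can be sketched in PyTorch as follows (module and parameter names are ours, not from the paper; the reduction ratio of 16 follows the implementation details in Sec. 4.1):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPM(nn.Module):
    """A minimal sketch of the Decision Propagation Module (hypothetical names)."""

    def __init__(self, channels, num_aux=2, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # two FC layers around a ReLU, as described in the implementation details
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_aux),
        )

    def decide(self, x):
        # global average pooling -> (B, C), then a small FC net + softmax
        s = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return F.softmax(self.fc(s), dim=1)  # "soft" decision, (B, K)

    def propagate(self, d, feat):
        # expand the (B, K) decision to (B, K, H, W) by copying the scores,
        # then concatenate with the target feature map as additional channels
        B, K = d.shape
        _, _, H, W = feat.shape
        code = d.view(B, K, 1, 1).expand(B, K, H, W)
        return torch.cat([feat, code], dim=1)
```

The propagated tensor simply carries `num_aux` extra channels, so no additional network branch is introduced.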
3.2.2 Decision Propagation
To make use of the information aggregated in the intermediate decision $d$, a straightforward idea is to dynamically route accordingly [murthy2016decsionmininglowconfidentrecursive], turning the network into a deep decision tree. Since a deep decision tree brings an explosive parameter increment and cannot recover from previous false decisions, we instead follow [mirza2014conditionalGAN] and treat the intermediate decision as a conditional code, such that the category prediction process is directed by conditioning on it. Specifically, we expand the decision vector $d \in \mathbb{R}^{K}$ to the same spatial resolution as a feature map $F$ by copying the decision scores directly, and then concatenate the expanded decision with $F$ as additional channels (see Fig. 1). Note that $F$ could be $X$ itself or any feature map output by a subsequent network layer, and we also allow propagating one decision to multiple layers.

3.3 Loss Functions for DPM
To enable DPM to make category-coherent decisions, we propose three novel loss functions to guide it, described one by one below.
Notations. We denote $D = \{d_{ij}\} \in \mathbb{R}^{B \times K}$ as all the decisions $d$ in a mini-batch of size $B$, where $d_{ij}$ is the decision score (confidence) of the $j$-th auxiliary category for the $i$-th instance in the batch.
Decision Explicit Loss. If the intermediate decision made by DPM is ambiguous, the following layers can hardly get any useful information from it. Therefore, we introduce a decision explicit loss to encourage the decision scores of one or several auxiliary categories to take relatively large values, while preventing all auxiliary categories from having similar scores. The loss function is defined as follows:
$\mathcal{L}_E = -\frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{K} d_{ij} \log d_{ij}$   (1)
which takes the form of entropy; minimizing it encourages the decision scores of different auxiliary categories to differ markedly.
Decision Consistent Loss. Simply enforcing the decision to be explicit is not enough; we also wish the decisions for different instances of the same original category to be consistent. Specifically, their decision scores for the same auxiliary category should be similar. Therefore, we propose a decision consistent loss, defined as follows:
$\mathcal{L}_C = \frac{1}{CK} \sum_{c=1}^{C} \sum_{j=1}^{K} V_{cj}$   (2)

where $C$ is the number of original categories and $V_{cj}$ is the variance of the decision scores of the $j$-th auxiliary category over the in-batch instances whose original category is $c$, derived as follows.
Denote $M \in \{0, 1\}^{B \times C}$ as the indicator matrix for a batch of data: if the original category of the $i$-th instance in the batch is $c$, then $M_{ic} = 1$; otherwise $M_{ic} = 0$. Thus the mean decision score $\bar{d}_{cj}$ of the $j$-th auxiliary category over all instances in the batch with original category $c$ can be calculated with the following equation:
$\bar{d}_{cj} = \frac{\sum_{i=1}^{B} M_{ic}\, d_{ij}}{\sum_{i=1}^{B} M_{ic} + \epsilon}$   (3)
where $\epsilon$ is a small value to avoid the divide-by-zero error. After that, we can calculate $V_{cj}$ with
$V_{cj} = \frac{\sum_{i=1}^{B} M_{ic} \left(d_{ij} - \bar{d}_{cj}\right)^{2}}{\sum_{i=1}^{B} M_{ic} + \epsilon}$   (4)
By substituting Equation 3 into Equation 4 and expanding the formulation, we obtain:
$V_{cj} = \frac{\sum_{i=1}^{B} M_{ic}\, d_{ij}^{2}}{\sum_{i=1}^{B} M_{ic} + \epsilon} - \left(\frac{\sum_{i=1}^{B} M_{ic}\, d_{ij}}{\sum_{i=1}^{B} M_{ic} + \epsilon}\right)^{2}$   (5)
However, calculating those $V_{cj}$ one by one is very time-consuming; we therefore leverage matrix operations to accelerate the computation. The derived equation is as follows:
$V = \frac{M^{\top} \otimes (D \odot D)}{N + \epsilon} - \left(\frac{M^{\top} \otimes D}{N + \epsilon}\right)^{\odot 2}$   (6)
which is a $C \times K$ matrix whose value in the $c$-th row and $j$-th column is $V_{cj}$, namely $V$. The operator $\otimes$ indicates matrix multiplication (cross-product), $\odot$ denotes the element-wise product, and all other operations are conducted element-wise. $N$ is a $C \times K$ matrix with $N_{cj} = \sum_{i=1}^{B} M_{ic}$ for arbitrary $j$. Although simple, this matrix form is critical for training the module efficiently.
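To make the matrix form concrete, here is a NumPy sketch of the vectorized per-category variance computation (the paper's implementation is in PyTorch; function and variable names here are ours):

```python
import numpy as np

def consistency_matrix(D, labels, num_cats, eps=1e-8):
    """Per-category variance matrix V, a sketch of the matrix-form computation.

    D: (B, K) soft decisions; labels: (B,) original category ids in [0, num_cats).
    Returns V of shape (num_cats, K), where V[c, j] is the variance of the
    j-th auxiliary score over the in-batch instances of original category c.
    """
    B, K = D.shape
    M = np.zeros((B, num_cats))
    M[np.arange(B), labels] = 1.0                 # indicator matrix
    N = np.tile(M.sum(axis=0)[:, None], (1, K))   # per-category counts, C x K
    mean = (M.T @ D) / (N + eps)                  # category-wise mean scores
    return (M.T @ (D * D)) / (N + eps) - mean ** 2  # E[x^2] - (E[x])^2
```

The decision consistent loss is then simply the mean of the entries of `V`, one matrix multiplication replacing the per-category loop.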
Decision Balance Loss. Besides the above two losses, we also propose a decision balance loss to avoid the degenerate situation in which DPM assigns every input instance to a single auxiliary category regardless of its original category. The decision balance loss therefore encourages assignments to be balanced across all auxiliary categories, taking the form of the reverse of entropy:
$\mathcal{L}_B = \sum_{j=1}^{K} \bar{d}_j \log \bar{d}_j, \quad \text{where} \quad \bar{d}_j = \frac{1}{B} \sum_{i=1}^{B} d_{ij}$   (7)
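For completeness, the two entropy-based losses can be sketched in NumPy as follows (hypothetical function names; a sketch of the verbal definitions above, not the authors' released code):

```python
import numpy as np

def explicit_loss(D, eps=1e-12):
    """Decision explicit loss sketch: mean per-instance entropy of the soft
    decisions. Minimizing it pushes each decision toward one auxiliary category."""
    return float(-(D * np.log(D + eps)).sum(axis=1).mean())

def balance_loss(D, eps=1e-12):
    """Decision balance loss sketch: negative entropy of the batch-mean decision.
    Minimizing it spreads assignments over all auxiliary categories."""
    m = D.mean(axis=0)
    return float((m * np.log(m + eps)).sum())
```

Note the opposite signs: the explicit loss penalizes high entropy per instance, while the balance loss penalizes low entropy of the batch average, so the two jointly rule out both ambiguous and collapsed decisions.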
3.4 Large Category Issue
For all the original categories that appear in a batch, we expect the decision scores to be consistently, explicitly, and balancedly distributed over the auxiliary categories. However, due to limited computational resources and the training issues of large-batch SGD [goyal2017largeBatchSGD], the batch size is normally set to a small number between 1 and 256. For tasks with 100, 1000, or even more original categories, randomly loading a batch of data results in only two or even fewer instances per original category, so that $\mathcal{L}_C$ cannot work.
Therefore, instead of simply increasing the batch size to maintain more samples per category, we propose a novel load-shuffle-split strategy which resolves the large category issue without enlarging the batch size significantly. Take Fig. 2 as an example, where the original category number is 4 and the mini-batch size is also 4. The load-shuffle-split strategy has three key steps: (1) instead of loading only 4 data samples, we load more samples in each iteration (e.g., 8); (2) we generate a list containing all the category IDs, shuffle it, and split the shuffled list into two halves; (3) we split the 8 data samples into two training batches according to the category IDs in the two split lists generated in the last step (see Fig. 2 again), and then train on the two batches separately. In this case, the number of instances per original category in a training batch is doubled.
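The three steps above can be sketched as follows (a minimal illustration with hypothetical helper names; in practice the resulting batches would be fed to the data loader in turn):

```python
import random

def load_shuffle_split(samples, num_splits=2):
    """Sketch of the load-shuffle-split strategy.

    samples: a list of (data, category_id) pairs, num_splits x batch_size long.
    Shuffle the category IDs, split them into num_splits groups, and form one
    training batch per group, so each batch covers fewer categories with more
    instances per category.
    """
    cats = sorted({c for _, c in samples})
    random.shuffle(cats)                      # step (2): shuffle category IDs
    n = len(cats) // num_splits
    batches = []
    for k in range(num_splits):               # step (3): split samples by ID group
        chosen = set(cats[k * n:(k + 1) * n] if k < num_splits - 1 else cats[k * n:])
        batches.append([s for s in samples if s[1] in chosen])
    return batches
```

With 8 loaded samples over 4 categories and `num_splits=2`, each resulting batch contains 2 categories, doubling the per-category instance count as described.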
3.5 Exemplar Decision Propagation Networks
Our DPM is very flexible and could be integrated into various classification network architectures to form Decision Propagation Networks (DP Nets). As it is straightforward to apply it to VGG network [Simonyan2014VGG] or AlexNet [Krizhevsky2012AlexNet], in this section we only illustrate how to integrate DPMs into modern sophisticated architectures.
For residual networks, we take ResNet [He2016ResNet] as an example. As ResNet is organized by stacking multiple residual blocks, we integrate a DPM into each residual unit; see Fig. 3 for a demonstration. The intermediate decision is propagated along the residual branch, so those neurons can be guided to learn better residuals. For other popular architectures, such as the Inception network, we demonstrate how to integrate DPM into the Inception module in Fig. 4. Integrating DPM(s) with many other ResNet and Inception variants, such as ResNeXt [Xie2016ResNext] and InceptionResNet [Szegedy2017InceptionResNet], can be done in similar ways.

4 Experiments
In this section, we evaluate the image classification performance of our approach on four publicly available datasets. Our main focus is on demonstrating that DPM improves the performance of backbone CNNs on image classification, not on pushing state-of-the-art results. Therefore, due to limited computational resources, we devote more space to comparing our approach with popular baseline networks on three relatively small-scale datasets, and finally report results on the ImageNet 2012 dataset [Deng2009Imagenet] to validate the scalability of our approach.

4.1 Implementation
We implement DPM and reproduce all the evaluated networks with PyTorch [paszke2017pytorch]. The decision network of DPM is constructed with two fully connected (FC) layers around a ReLU [Nair2010Relu], followed by a Softmax layer to normalize the decision scores. To limit the model complexity, we reduce the dimension in the first FC layer with a reduction ratio of 16. For all the DPMs integrated into a network, we set their auxiliary category numbers to be exactly the same to ease network construction, with a value of 2 if not specifically stated. For VGG, we add BatchNorm (BN) [ioffe2015batch] but no Dropout [Srivastava2014dropout], and use one FC layer. For Inception, we choose v1 with BN. All other models are identical to those in the original papers.

4.2 Dataset and Training Details
CIFAR10 and CIFAR100 [Krizhevsky2009Cifar] consist of 60k 32×32 images belonging to 10 and 100 categories respectively. We train the models on the whole training set of 50k images with a mini-batch size of 128, and evaluate them on the test set. We set the initial learning rate to 0.1, and multiply it by 0.2 at epochs 60, 120, and 160, for a total of 200 epochs. For data augmentation, we pad 4 pixels on each side of the image, randomly sample a 32×32 crop from the padded image or its horizontal flip, and then apply simple mean/std normalization.

CINIC10 [storkey2018cinic10] contains 270k 32×32 images belonging to 10 categories, equally split into three subsets: train, validation, and test. We train the models on the train set with a mini-batch size of 128 and evaluate them on the test set. Training starts with an initial learning rate of 0.1, cosine-annealed to zero over a total of 300 epochs, with the same data augmentation scheme as on the CIFARs.
ImageNet [Deng2009Imagenet] consists of 1.2 million training images and 50k validation images from 1k classes. We train the models with minimal data augmentation including random resized crop, flip and the simple mean/std normalization on the whole training set and report results on the validation set. The initial learning rate is set to 0.1 and decreased by a factor of 10 every 30 epochs to a total of 100 epochs.
During training, the three loss functions are calculated for all the DPMs in the network, and the average of each loss is added to the traditional cross-entropy classification loss. We set the weight of the three loss terms to 0.01 on ImageNet and 0.1 on the other datasets. All models are trained from scratch with SGD using default parameters as the optimizer, and the weights are initialized following [he2015Init]. We evaluate the single-crop performance at each epoch and report the best one.
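The loss accumulation just described can be sketched as follows (hypothetical function name; `weight` is 0.1 or 0.01 as stated above):

```python
def total_loss(ce_loss, dpm_losses, weight=0.1):
    """Combine cross-entropy with the average of each DPM loss over all DPMs.

    ce_loss: scalar classification loss.
    dpm_losses: list of (L_E, L_C, L_B) tuples, one per DPM in the network.
    """
    if not dpm_losses:
        return ce_loss
    n = len(dpm_losses)
    avg_e = sum(l[0] for l in dpm_losses) / n   # average explicit loss
    avg_c = sum(l[1] for l in dpm_losses) / n   # average consistent loss
    avg_b = sum(l[2] for l in dpm_losses) / n   # average balance loss
    return ce_loss + weight * (avg_e + avg_c + avg_b)
```

Because every term is differentiable, the whole objective can be minimized end-to-end with SGD as described.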
4.3 CIFAR and CINIC10 Experiments
To evaluate the effectiveness of DPM, we first perform extensive ablation experiments on three relatively small datasets to verify that DP Nets with DPMs integrated outperform the corresponding baseline networks without bells and whistles, and then compare them with state-of-the-art methods to demonstrate their superiority.
Tab. 1: Results on CIFAR100 with different category numbers in each training batch.

Architecture  #Cat./batch  #Loaded samples  Top-1 (%)  Top-5 (%)
ResNet20      –    –     68.82  91.03
ResNet20      10   1280  64.88  89.54
DPResNet20    10   1280  65.34  89.87
DPResNet20    25   512   70.51  92.25
DPResNet20    50   256   69.80  91.98
DPResNet20    100  128   69.50  91.69
ResNet56      –    –     72.23  92.36
ResNet56      10   1280  54.13  78.65
DPResNet56    10   1280  53.73  78.86
DPResNet56    25   512   73.76  93.39
DPResNet56    50   256   73.58  93.24
DPResNet56    100  128   73.41  93.18
Tab. 2: Results on CIFAR100 with different auxiliary category numbers.

Architecture  #Aux. categories  Top-1 (%)  Top-5 (%)
DPResNet20  2   70.51  92.52
DPResNet20  5   69.49  91.65
DPResNet20  10  69.91  91.56
DPResNet20  25  69.41  91.67
DPResNet56  2   73.76  93.39
DPResNet56  5   73.86  93.28
DPResNet56  10  73.51  93.32
DPResNet56  25  73.02  92.95
4.3.1 Category Number in Each Batch
As mentioned before, to handle the large category issue that affects the estimation of the decision consistent loss, we proposed a load-shuffle-split strategy. In this part, we evaluate the effects of this strategy. Since CIFAR100 consists of images of 100 categories while the mini-batch size is 128, we choose it for the investigation. Specifically, we use 4 different configurations, each with around 128 images in a training batch for fair comparison.

The results in Tab. 1 show that DPResNet20 and DPResNet56 both obtain their best results when each training batch is restricted to images of 25 categories, instead of the default 100. We therefore conclude that our load-shuffle-split strategy is useful for training DP Nets on large category datasets. However, when the number of categories in a training batch keeps decreasing, performance drops heavily. The reason is that SGD assumes the training data are i.i.d.; restricting the categories in each batch skews the sample distribution and thus degrades performance. To validate this, we also run the baseline ResNet20 and ResNet56 with 10 categories per batch and find that their performance also drops. Even so, our approach can leverage this strategy to handle the large category issue.
Tab. 3: Ablation of the three loss functions on CIFAR100.

Architecture  Configuration  Top-1 (%)  Top-5 (%)
DPResNet20  w/o $\mathcal{L}_E$  70.31  92.11
DPResNet20  w/o $\mathcal{L}_C$  70.07  92.05
DPResNet20  w/o $\mathcal{L}_B$  69.44  91.88
DPResNet20  all three            70.51  92.25
DPResNet56  w/o $\mathcal{L}_E$  73.27  93.19
DPResNet56  w/o $\mathcal{L}_C$  73.52  93.15
DPResNet56  w/o $\mathcal{L}_B$  72.87  92.73
DPResNet56  all three            73.76  93.39
4.3.2 Auxiliary Category Number
To investigate the effect of the auxiliary category number in DPM, we follow the best configuration of the above experiments, which sets the category number in each training batch to 25, and report the results in Tab. 2. It can be seen that the performance with 2 auxiliary categories is good and stable, while performance with larger auxiliary category numbers varies a lot between the two models. The reason is probably that the current supervision is not enough to enforce DPM to make use of more auxiliary categories. Besides, we will show that it is the decision scores that encode meaningful information about the original category, rather than the auxiliary category itself (see Sec. 4.3.6). Therefore, we simply set the auxiliary category number to 2.

4.3.3 Three Loss Functions
These experiments evaluate the influence of each loss function on training DPM by ablating one of them at a time. The results in Tab. 3 show that performance drops if any loss function is ablated. In particular, $\mathcal{L}_B$ has the largest influence on the classification results, with accuracies dropping by nearly 1% for both models, which probably indicates that DPM easily degenerates to consistently and explicitly assigning all pooled feature maps to a single auxiliary category. In addition, we would like to point out that DPResNets with one loss function ablated still outperform the baseline networks reported in Tab. 1, validating the effectiveness of DPM.
Architecture  #params  #MACs  CIFAR10  CIFAR100  CINIC10
NIN [lin2013NetInNet]  996.99k  0.22G  89.71  67.76  80.10
DDN [murthy2016decsionmininglowconfidentrecursive]*  –  –  90.32  68.35  –
DCDJ [baek2017deepDecisionJungle]*  –  –  –  68.80  –
DPNIN  997.9k  0.23G  90.87 (+1.16)  69.11 (+1.35)  81.07 (+0.97)
ResNet56 [He2016ResNet]  855.77k  0.13G  93.62  72.23  84.74
SEResNet56 [Hu2017SENet]  861.82k  0.13G  94.28  73.81  85.09
DPResNet56  894.96k  0.14G  94.35 (+0.73)  73.76 (+1.53)  85.50 (+0.76)
ResNet110 [He2016ResNet]  1.73M  0.26G  93.98  73.94  85.18
SEResNet110 [Hu2017SENet]  1.74M  0.26G  94.58  74.42  85.57
DPResNet110  1.81M  0.28G  94.56 (+0.58)  74.85 (+0.91)  86.34 (+1.16)
GoogLeNet [Szegedy2015Inception]  6.13M  1.53G  95.27  79.41  87.89
DPGoogLeNet  6.35M  1.54G  95.65 (+0.38)  80.73 (+1.32)  88.31 (+0.42)
VGG13 [Simonyan2014VGG]  9.42M  0.23G  94.18  74.42  85.04
DPVGG13  9.49M  0.23G  94.61 (+0.43)  74.94 (+0.52)  85.59 (+0.55)
The three dataset columns report Acc (%); * marks results reported in the original papers.
4.3.4 Comparisons with the Stateoftheart Methods
We conduct extensive experiments on three challenging datasets, CIFAR10, CIFAR100, and CINIC10, with various popular architectures as backbones, including ResNets [He2016ResNet], Network in Network (NIN) [lin2013NetInNet], GoogLeNet [Szegedy2015Inception], and VGG [Simonyan2014VGG]. The results in Tab. 4 show that by integrating DPMs, all the networks consistently obtain significantly better performance (e.g., more than 1.5% improvement for DPResNet56 on CIFAR100), validating the effectiveness and versatility of DPM. In particular, DPResNet56 outperforms the original ResNet110 on both the CIFAR10 and CINIC10 datasets with nearly half the number of parameters and multiply-and-accumulates (MACs). In addition, we compare our approach with two recent decision-tree-based methods: Deep Convolutional Decision Jungle (DCDJ) [baek2017deepDecisionJungle] and Deep Decision Network (DDN) [murthy2016decsionmininglowconfidentrecursive]. Since they have not released their code, we compare against the results reported in the original papers. Our DPNIN outperforms both DCDJ and DDN, with all three using NIN as the backbone network. Finally, we compare DPM with the SE block [Hu2017SENet], which improves various backbone architectures in the manner of attention. From Tab. 4, we can see that DPM is comparable with the SE block on classification and sometimes superior to it.
4.3.5 Complexity Analysis
To enable practical use, DPM is expected to provide an effective trade-off between complexity and performance. We therefore report complexity statistics in Tab. 4. By integrating DPMs, the increases in the numbers of parameters and multiply-and-accumulates (MACs) are less than 5% of the original values, while the previous subsections have shown that the resulting improvements are significant. We thus conclude that the overhead brought by DPM is worthwhile.
4.3.6 Visualization and Discussion
To investigate what DPM learns, we visualize the decisions made by the 9 DPMs of DPResNet20 on 512 images from the CIFAR10 dataset in Fig. 5, with the DPMs ordered from the earliest layer (left) to the latest layer (right). For the decisions made by each DPM, all 512 images are positioned according to the decision scores assigned to them: the larger the decision score assigned to the first auxiliary category (of two in total), the lower the image is placed, while the images are spread randomly along the horizontal direction. With only limited supervision, the decision scores concentrate in a small range instead of spanning the whole interval $[0, 1]$, but we will show this is enough to distinguish the categories.

We can see the images in the first column are distributed along the vertical direction with "blue" images above and "green" images below, indicating that the first DPM probably makes decisions based on color information. Although simple, frogs and airplanes are separated quite well, validating that low-level information is useful for classification. The second DPM seems not to work, assigning equal decision scores to the two auxiliary categories; this behavior is similar to ResNet allowing some gated shortcuts to be closed. Interestingly, the 3rd-5th DPMs make almost reversed decisions (i.e., each score is about one minus the score made by the other DPM), indicating that the auxiliary categories formed by different DPMs can differ, and that the network is able to decode these decisions. If we flip the decisions in the 5th column, they are roughly consistent with those in the first column but show better semantic clustering along the vertical direction. For example, all the airplanes (see the red circle in Fig. 5) are located in the upper part, whereas a "green" airplane is mistakenly placed in the lower part by the first DPM's decisions (see the black rectangle in the first zoomed view in Fig. 5), validating that our approach can recover from false decisions made earlier. We also visualize the decisions made by the last DPM and find that instances of the same category (e.g., airplane, automobile) fall within a quite small range along the vertical axis and are well separated from several other categories; we therefore conclude that it is the decision score that encodes meaningful information about the object category, rather than the auxiliary category itself. From the three zoomed views, we can see that the decisions are progressively refined, validating our intuition of propagating decisions.
In particular, trucks and automobiles are located in similar vertical ranges, and could be considered as belonging to the coarse category "man-made objects" in the category hierarchy. However, other "man-made objects", such as airplanes and ships, are mixed with objects belonging to the coarse category "animals". We therefore conclude that the decisions made by DPM are not based on the man-made category hierarchy, but on another division that is better from the viewpoint of CNNs.
Tab. 5: Results on ImageNet.

Architecture  #Cat./batch  Batch size  Top-1 (%)  Top-5 (%)
ResNet18     –    256   68.06  88.55
DPResNet18   –    256   68.65  88.83
DPResNet18   250  1024  69.10  89.03
ResNet50     –    256   74.42  91.97
DPResNet50   –    256   75.47  92.75
DPResNet50   250  768   76.15  92.98
GoogLeNet    –    256   70.68  90.08
DPGoogLeNet  –    256   71.22  90.37
DPGoogLeNet  250  1024  71.66  90.54
ResNet101    –    256   76.66  93.23
DPResNet101  –    256   77.59  93.81
DPResNet101  200  512   78.21  93.92
4.4 ImageNet Experiments
We also evaluate the performance of various DP Nets on ImageNet [Deng2009Imagenet]. The results in Tab. 5 show that DP Nets outperform all the baseline networks by large margins (e.g., 0.6% for ResNet18 and GoogLeNet, and 1.0% for ResNet50 and ResNet101) even when batches are randomly sampled over all 1000 categories, in which case $\mathcal{L}_C$ can hardly contribute to training. We therefore also conduct experiments with batch sizes of 1024, 768, and 512, while enforcing the category number in a batch to be 250, 250, and 200 respectively using the load-shuffle-split strategy. The improvements in Top-1 accuracy for all DP Nets are then enlarged to 1.0%-1.6%, validating the scalability of our approach.
5 Conclusion
We have presented the Decision Propagation Module (DPM), a novel drop-in computational unit that propagates the category-coherent decision made upon an early layer of a CNN to guide the later layers for image classification. Decision Propagation Networks, generated by integrating DPMs into existing classification networks, can be trained in an end-to-end fashion and bring consistent improvements with minimal additional computational cost. Extensive comparisons validate the effectiveness and superiority of our approach. We hope DPM becomes an important component of various networks for image classification. In the future, we plan to extend our approach to more vision tasks, e.g., detection and segmentation.