1 Introduction
This paper strives to count objects in images, whether they are people in crowds [38, 33], cars in traffic jams [7] or cells in petri dishes [21]. The leading approaches for this challenging problem count by summing the pixels in a density map [11] as estimated with a convolutional neural network, e.g. [3, 12, 9, 21]. While this line of work has been shown to be effective, the rich source of supervision from the point annotations is only used to construct the density maps for training. The premise of this work is that point annotations can be repurposed to further supervise counting optimization in deep networks, for free.
The main contribution of this paper is summarized in Figure 1. Besides creating density maps, we show that points can be exploited as a free supervision signal in two other ways. The first is focus from segmentation. From point annotations, we construct binary segmentation maps and use them in a separate network branch with an accompanying segmentation loss to focus on areas of interest only. The second is focus from global density. The relative amount of point annotations in images is used to focus on the global image density through another branch and loss function. Both forms of focus are integrated with the density estimation in a single network trained end-to-end with a multi-level loss. Different from standard attention [32, 4, 17, 10], where a form of focus needs to be learned for the task at hand and the learned weighting map implicitly guides the network to focus on task-relevant features, our proposed focus learns weighting maps with a specific supervision derived for free from point annotations. Focus for free allows the counting network to explicitly emphasize meaningful features and suppress undesired ones.
Overall, we make three contributions in this paper: (i) We propose supervised focus from segmentation, a network branch which guides the counting network to focus on areas of interest. The supervision is obtained from the already provided point annotations. (ii) We propose supervised focus from global density, a branch which regularizes the counting network to learn a matching global density. Again, the supervision is obtained for free from the point annotations. (iii) We introduce a new kernel density estimator for point annotations with non-uniform point distributions. For the deep network, we design an improved encoder-decoder network to deal with varying object scales in images. Experimental evaluation on four counting datasets shows the benefits of our focus for free, kernel estimation, and end-to-end network architecture, resulting in state-of-the-art counting accuracy. To further demonstrate the potential of our approach for counting under varying object scales and crowding levels, we provide the first counting results on WIDER FACE, normally used for large-scale face detection [33].
2 Related Work
Density-based counting. Deep convolutional networks are widely adopted for counting by estimating density maps from images. Early works, e.g. [35, 38, 23, 29], advocate a multi-column convolutional neural network to encourage different columns to respond to objects at different scales. Despite their success, these types of networks are hard to train due to structural redundancy [12] and conflicts resulting from optimization among different columns [26, 1].
Due to their architectural simplicity and training efficiency, single-column deep networks have received increasing interest, e.g. [12, 3, 19, 27, 18]. Cao et al. [3], for example, propose an encoder-decoder network to predict high-resolution and high-quality density maps using a scale aggregation module. Li et al. [12] combine a VGG network with dilated convolution layers to aggregate multi-scale contextual information. Liu et al. [19] rely on a single network by leveraging abundantly available unlabeled crowd imagery in a learning-to-rank framework. Shi et al. [27] train a single VGG network with a deep negative correlation learning strategy to reduce the risk of overfitting. We also employ single-column networks, but rather than focusing solely on density map estimation, we repurpose the point annotations in multiple ways to improve counting.
Recently, multi-task networks have been shown to reduce the counting error [25, 24, 28, 26, 1, 17]. Sam et al. [25], for example, train a classifier to select the optimal regressor from multiple independent regressors for particular input patches. Ranjan et al. [24] rely on one network to predict a high-resolution density map and a helper network to predict a density map at a low resolution. In this paper, we also investigate counting from a multi-task perspective, but from a different point of view. We posit that the point annotations serve more purposes than just constructing density maps, and we propose network branches with supervised focus from segmentation and global density to repurpose the point annotations for free. Our focus for free benefits counting regardless of the base network, and is complementary to other state-of-the-art solutions.
Counting with attention. Attention mechanisms [32] have enabled progress in a wide variety of computer vision challenges [4, 6, 13, 37, 39]. Soft attention is the most widely used since it is differentiable and can thus be directly incorporated in an end-to-end trainable network. The common way to incorporate soft attention is to add a network branch with one or more hidden layers to learn an attention map which assigns different weights to different regions of an image. Spatial and channel attention are two well-explored types of soft attention [4]. Spatial attention learns a weighting map over the spatial coordinates of the feature map, while channel attention does so for the feature channels of the map.
A few works have investigated density-based counting with spatial attention [17, 10, 8]. Liu et al. [17], for example, estimate the density of a crowd by generating separate detection- and regression-based density maps. They fuse these two density maps guided by an attention map, which is implicitly learned together with the density map regression loss. While we share the notion of assisting density-based counting with a focus, we show in this work that such an attention does not need to be learned from scratch and instead can be derived from the existing point annotations. More specifically, we construct a segmentation map and a global density derived from the ground-truth annotated points as two additional, yet free, supervision signals for better counting.
3 Focus for Free
We formulate the counting task as a density map estimation problem, see e.g. [11, 38, 27]. Given training images $\{I_i\}_{i=1}^{N}$, with $I_i$ the input image and $\mathcal{A}_i$ a set of point annotations, one for each object, we use the point annotations to create a ground-truth density map $D_i$ by convolving the points with a Gaussian kernel,

$$D_i(p) = \sum_{P \in \mathcal{A}_i} \mathcal{N}(p;\, P,\, \sigma_P^2 \mathbf{1}), \quad (1)$$

where $p$ denotes a pixel location, $P$ denotes a single point annotation and $\mathcal{N}(p; P, \sigma_P^2 \mathbf{1})$ is a normalized Gaussian kernel with mean $P$ and isotropic covariance $\sigma_P^2 \mathbf{1}$. The global object count $c_i$ of image $I_i$ can be obtained by summing all pixel values within the density map, i.e., $c_i = \sum_{p} D_i(p)$. Learning a transformation from input images to density maps is done through deep convolutional networks. Let $F(I_i)$ denote such a mapping given an arbitrary deep network for image $I_i$, with $w$ and $h$ the width and height of the image. In this paper, we investigate two ways to repurpose the point annotations to help supervise the network from input images to density maps. An overview of our approach, in which multiple branches are combined on top of a base network, is shown in Figure 2.
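As a concrete sketch of this construction, the following minimal NumPy example places a normalized Gaussian on every annotated point and verifies that summing the density map recovers the object count; the fixed sigma and image size are illustrative values, not settings from the paper:

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Ground-truth density map: a normalized Gaussian per point.
    Each kernel is normalized to sum to 1, so the map sums to the count."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for (py, px) in points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2.0 * sigma ** 2))
        g /= g.sum()  # normalize so each point contributes a count of exactly 1
        dmap += g
    return dmap

pts = [(10, 12), (30, 40), (50, 20)]
dm = density_map(pts, (64, 64))
```

Because each truncated kernel is renormalized over the image grid, the global count is recovered exactly rather than approximately.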
3.1 Focus from segmentation
The first way to repurpose the point annotations is to provide a spatial focus. Intuitively, pixels that are within a specific range of any point annotation should be of high focus, while pixels in undesired regions should be mostly disregarded. In the standard setup, where the optimization is solely dependent on the density map, each pixel contributes equally to the network loss. Given that only a fraction of the pixels are near point annotations, the loss will be dominated by the majority of irrelevant pixels. To overcome this limitation, we reuse the point annotations to create a binary segmentation map and exploit this map to provide the focused supervision through a standalone loss function.
Segmentation map.
The binary segmentation map $S_i$ is obtained as a function of the point annotations and their estimated variance. The binary value for each pixel $p$ in training image $I_i$ is determined as:

$$S_i(p) = \begin{cases} 1 & \text{if } \exists P \in \mathcal{A}_i : \lVert p - P \rVert_2 \le \sigma_P, \\ 0 & \text{otherwise}, \end{cases} \quad (2)$$

Equation 2 states that a pixel obtains a value of one if at least one point is within its variance range as specified by a kernel estimator.
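A minimal sketch of this map, assuming Euclidean distance to the annotations and per-point sigmas supplied by some kernel estimator (the point coordinates and sigmas below are illustrative):

```python
import numpy as np

def segmentation_map(points, sigmas, shape):
    """Binary map: a pixel is 1 iff some annotation lies within that
    annotation's variance range (Eq. 2)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    seg = np.zeros(shape, dtype=np.uint8)
    for (py, px), s in zip(points, sigmas):
        # mark the disk of radius sigma around each annotated point
        seg[(ys - py) ** 2 + (xs - px) ** 2 <= s ** 2] = 1
    return seg

seg = segmentation_map([(8, 8), (20, 25)], [3.0, 5.0], (32, 32))
```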
Segmentation focus. Let $F(I_i)$ denote the output of the base network. We add a new branch on top of the network, denoted as $G_S(F(I_i))$, with network parameters $\theta_S$. Furthermore, let $\theta$ denote the parameters of the base network. We propose a per-pixel focal loss [15] to obtain a supervised focus from segmentation for input image $I_i$:

$$\ell_S(\theta, \theta_S) = -\sum_{p} (1 - \hat{s}_p)^{\gamma} \log(\hat{s}_p), \quad (3)$$

where $\hat{s}_p$ denotes the predicted probability of the ground-truth segmentation class of pixel $p$. The focal parameter $\gamma$ is set to 2 throughout this network, as recommended by [15]. The segmentation branch is visualized at the top of Figure 2.
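The focal loss above can be sketched as follows; the toy probability maps are hypothetical inputs, and the reduction by mean (rather than sum) is an assumption made for a scale-free example:

```python
import numpy as np

def focal_loss(prob_fg, target, gamma=2.0, eps=1e-7):
    """Per-pixel focal loss: down-weights easy pixels by (1 - p_t)^gamma.
    prob_fg: predicted foreground probability per pixel; target: binary map."""
    p_t = np.where(target == 1, prob_fg, 1.0 - prob_fg)  # prob. of the true class
    p_t = np.clip(p_t, eps, 1.0 - eps)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

Confident correct predictions incur almost no loss, so the optimization concentrates on the hard, misclassified pixels.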
Network details. After the output of the base network, we perform a convolution layer with parameters $\theta_S$, followed by a softmax function, to generate a per-pixel probability map. From this probability map, the second value along the first dimension represents the probability of each pixel being part of the segmentation foreground. We furthermore tile this slice along the channel dimension to construct a separate output tensor, which will be used in the density estimation branch itself.
3.2 Focus from global density
Next to a spatial focus, point annotations can also be repurposed by examining their context. It is well known that low-density crowds exhibit coarse texture patterns while high-density crowds exhibit very fine texture patterns [20]. Here, we exploit this knowledge for the task of counting. Given a network output $F(I_i)$, we employ a bilinear pooling layer [16, 5] to capture the feature statistics in a global context, which is known to be particularly suitable for texture and fine-grained recognition [16, 5]. In this work, we match global contextual patterns to the distribution of points in training images to obtain a supervised focus from global density.
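Bilinear pooling over a feature map can be sketched as below: an outer product of the channel features at each location, mean-pooled over locations, followed by the signed square-root and L2 normalization used in bilinear pooling [16]; the feature shape and random input are illustrative:

```python
import numpy as np

def bilinear_pool(feat):
    """Global bilinear pooling sketch for an (h, w, c) feature map:
    mean outer product over locations, signed sqrt, L2 normalization."""
    h, w, c = feat.shape
    x = feat.reshape(-1, c)              # (h*w, c)
    b = x.T @ x / x.shape[0]             # (c, c) second-order statistics
    v = b.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))  # signed square root
    return v / (np.linalg.norm(v) + 1e-12)  # L2 normalization

v = bilinear_pool(np.random.default_rng(0).normal(size=(8, 8, 4)))
```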
Global density. For patch $j$ in training image $I_i$, its global density is given as:

$$d_i^j = \left\lceil \frac{c_i^j}{s} \right\rceil, \quad (4)$$

where $c_i^j$ denotes the number of point annotations in patch $j$ and $s$ denotes the global density step size, which is computed for a dataset as:

$$s = \frac{1}{K} \max_i \left( c_i \cdot \frac{m_p}{m_i} \right), \quad (5)$$

with $m_i$ and $m_p$ the number of pixels in image $I_i$ and patch $j$ respectively, and $K$ the number of global density levels. Intuitively, the step size computes the maximum global density over image patches, while $K$ states how many global density levels are used overall.
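Under our reading of the equations above, the step size and discrete density level can be sketched as follows; the counts, resolutions and number of levels are illustrative:

```python
import math

def density_step(image_counts, image_pixels, patch_pixels, num_levels):
    """Step size sketch: maximum per-patch density over the dataset,
    divided by the number of global density levels."""
    max_density = max(c * patch_pixels / m
                      for c, m in zip(image_counts, image_pixels))
    return max_density / num_levels

def density_level(patch_count, step):
    """Discrete global density of a patch: count over step size, rounded up."""
    return math.ceil(patch_count / step)

# Two images of 256x256 pixels with 100 and 50 objects, 128x128 patches, 4 levels.
step = density_step([100, 50], [256 * 256, 256 * 256], 128 * 128, 4)
```

A patch with zero annotations maps to level 0 and the densest patch in the dataset maps to the highest level.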
Global density focus. With $F(I_i)$ again the output of the base network, we add a second new branch $G_D(F(I_i))$ with network parameters $\theta_D$. We propose the following global density loss function:

$$\ell_D(\theta, \theta_D) = -(1 - \hat{d}_t)^{\gamma} \log(\hat{d}_t), \quad (6)$$

where $\hat{d}_t$ denotes the predicted probability of the ground-truth global density level and $\gamma$ is set to 2 as well. The above loss function aims to match the global density of the estimated density map with the global density of the ground-truth density map. The corresponding global density branch is visualized at the bottom of Figure 2.
Network details. For network output $F(I_i)$, we first perform an outer product of the features, followed by a mean pooling along the spatial dimension to aggregate the bilinear features over the image. The resulting bilinear vector is normalized by a signed square root followed by L2 normalization, which has been shown to be effective in bilinear pooling [16]. Then we use a fully connected layer with parameters $\theta_D$, followed by a softmax function, to make an individual prediction for the global density. Furthermore, another fully connected layer, followed by a sigmoid function, is added on top of the bilinear pooling layer to generate the global density focus output. We note that this results in a focus over the channel dimensions, complementary to the focus over the spatial dimensions from segmentation. Akin to the focus from segmentation, we tile the output vector over the spatial dimensions, also to be used in the density estimation branch.
3.3 Non-uniform kernel estimation
Both the density estimation itself and the focus from segmentation require a variance estimation for each point annotation, where the variance corresponds to the size of the object. Determining the variance for each point is difficult because of object-size variations caused by perspective distortions. A common solution is to estimate the size (i.e., the variance) of an object as a function of the nearest neighbour annotations, e.g., the Geometry-Adaptive Kernel of Zhang et al. [38]. However, this kernel is effective only under the assumption that objects in images are uniformly distributed, which typically does not hold in counting practice. As such, we introduce a simple kernel that estimates the variance of a point annotation $P$ by splitting an image into local regions:

$$\sigma_P = \frac{\beta}{|\mathcal{A}_R|} \sum_{P' \in \mathcal{A}_R} \bar{d}_{P'}, \quad (7)$$

where $R$ is the local region centered on point annotation $P$, whose width and height are hyper-parameters that we set to one-eighth of the image size in our experiments. $P'$ denotes an arbitrary point annotation located in $R$, $|\mathcal{A}_R|$ denotes the number of such annotations, $\bar{d}_{P'}$ indicates the average distance between annotated point $P'$ and its $k$ nearest neighbors, and $\beta$ is a user-defined hyper-parameter. By estimating the variance of point annotations locally, we no longer have to assume that points are uniformly distributed over the whole image.
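A hedged sketch of this local estimator: for each point, average the mean k-nearest-neighbour distance over the annotations inside a region centred on that point, and scale by beta. The values of k, beta and the region fraction below are illustrative, not the paper's settings:

```python
import numpy as np

def estimate_sigmas(points, image_shape, k=3, beta=0.3, frac=0.125):
    """Per-point variance sketch: beta times the mean k-NN distance,
    averaged over the annotations in a local region around each point."""
    pts = np.asarray(points, dtype=np.float64)
    h, w = image_shape
    ry, rx = h * frac / 2, w * frac / 2   # half-extents of the local region
    sigmas = []
    for p in pts:
        in_region = pts[(np.abs(pts[:, 0] - p[0]) <= ry) &
                        (np.abs(pts[:, 1] - p[1]) <= rx)]
        dists = []
        for q in in_region:
            d = np.sqrt(((pts - q) ** 2).sum(axis=1))
            d = np.sort(d)[1:k + 1]       # drop the zero self-distance
            if d.size:
                dists.append(d.mean())
        sigmas.append(beta * np.mean(dists) if dists else beta)
    return np.array(sigmas)

# A tight cluster and three spread-out points: the cluster gets smaller sigmas.
s = estimate_sigmas([(10, 10), (10, 11), (11, 10),
                     (50, 50), (50, 70), (70, 50)], (100, 100))
```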
3.4 Architecture and optimization
Network. To maximize the ability to focus and use the most accurate kernel estimation, we want the network output to be of the same width and height as the input image. Recently, encoder-decoder networks have been transferred from other visual recognition tasks [34, 14] to counting [26, 36, 24, 3]. We found that to make encoder-decoder architectures better suited for counting, the wide variation in object scale under perspective distortions needs to be addressed. As such, in our encoder-decoder architecture a distiller module is added between the encoder and the decoder. The purpose of this module is to aggregate multi-level information from the encoder by distilling the most vital information for counting.
For the encoder, we make the original dilated residual network [34] suitable for our task by changing the number of feature map channels after level 4 from 256/512 to 96, reducing the model's parameters to avoid overfitting given the low amount of training examples in counting. After the encoder, the distiller module fuses the features from levels 4, 5, 7 and 8 in the encoder module using skip connections and a concatenation operation. Four convolution layers then further process the fused features to obtain a more compact representation. We do not fuse the features from level 6 because it comprises convolution layers with large dilation rates, which are prone to cause gridding artifacts [34, 30]. Compared to other works which fuse multiple networks with different kernels to deal with object-scale variations [23, 38, 29], the proposed network aggregates features from different layers with different receptive fields, and is much more efficient and easy to train. The decoder module uses 3 deconvolution layers to progressively recover the spatial resolution. To avoid the checkerboard artifact problem caused by regular deconvolution operations [22, 30], we add two convolution layers after each deconvolution layer. We provide a detailed ablation on the encoder-distiller-decoder network in the supplementary material.
Multi-level loss. The final counting network with a focus for free contains three branches: for the pixel-wise density estimation, for the binary segmentation, and for the global density prediction. For the density estimation, we first combine the output of the base network with the tiled outputs of the segmentation and global density branches. We fuse the three sources of information by element-wise multiplication and feed the fusion to a convolution layer, resulting in an output density map.
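The fusion step can be sketched with broadcasting, which performs the tiling of both focus outputs implicitly; the shapes and toy tensors are illustrative:

```python
import numpy as np

def fuse(base, seg_focus, dens_focus):
    """Element-wise fusion sketch: base features times the spatial
    segmentation focus and the channel-wise global density focus.
    Broadcasting tiles seg_focus over channels and dens_focus over space."""
    h, w, c = base.shape
    assert seg_focus.shape == (h, w, 1) and dens_focus.shape == (1, 1, c)
    return base * seg_focus * dens_focus

base = np.ones((4, 4, 3))
seg = np.zeros((4, 4, 1)); seg[0, 0, 0] = 1.0   # only one pixel in focus
dens = np.full((1, 1, 3), 2.0)                  # uniform channel focus
out = fuse(base, seg, dens)
```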
For the density estimation, the $\ell_2$ loss is a common choice, but it is also known to be sensitive to outliers, which hampers generalization [2]. We prefer to learn the density estimation branch by jointly optimizing the $\ell_2$ and $\ell_1$ loss, which adds robustness to outliers:

$$\ell_{den}(\theta, \theta_{den}) = \lVert \hat{D}_i - D_i \rVert_2^2 + \lVert \hat{D}_i - D_i \rVert_1, \quad (8)$$
where $D_i$ denotes the ground-truth density map and $\hat{D}_i$ the estimated density map. The loss functions of the three branches are summed to obtain the final objective function:

$$\mathcal{L} = \ell_{den} + \lambda_S \ell_S + \lambda_D \ell_D, \quad (9)$$

where $\lambda_S$ and $\lambda_D$ denote the weighting parameters of the different loss functions. Throughout this work, the segmentation loss is weighted higher than the others, since its loss values are typically an order of magnitude lower.
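A minimal sketch of the density loss of Eq. 8, combining the squared L2 and the L1 differences between the predicted and ground-truth maps:

```python
import numpy as np

def density_loss(pred, gt):
    """Density estimation loss: squared L2 plus L1, the L1 term
    adding robustness to outlier pixels."""
    diff = pred - gt
    return float(np.sum(diff ** 2) + np.sum(np.abs(diff)))

gt = np.ones((2, 2))
```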
4 Experimental Setup
4.1 Datasets
ShanghaiTech [38] consists of 1,198 images with 330,165 people. This dataset is divided into two parts: Part_A with 482 images in which crowds are mostly dense (33 to 3,139 people), and Part_B with 716 images, where crowds are sparser (9 to 578 people). Each part is divided into a training and testing subset as specified in [38]. TRANCOS [7] contains 1,244 images from different roads to count vehicles, varying from 9 to 105. We train on the given training data (403 images) and validation data (420 images) without any other datasets, and we evaluate on the test data (421 images). Dublin Cell Counting (DCC) [21] is a cell microscopy dataset, consisting of 177 images, with a cell count from 0 to 100. For training 100 images are used; the remaining 77 form the test set. WIDER FACE [33] is a recent large-scale face detection benchmark. In this paper, we reuse this data for counting as a complementary crowd dataset. Compared to existing crowd datasets like ShanghaiTech [38], WIDER FACE is more challenging due to large variations in scale, occlusion, pose, and background clutter. Moreover, it contains more images, divided into 40% training, 10% validation and 50% testing. The ground truth of the test set is unavailable, so we use the validation set for testing. Each face is annotated with a bounding box, instead of a point annotation, which enables us to evaluate our kernel estimator and allows for ablations under varying object scales and crowding levels.
4.2 Implementation details
Preprocessing. For all datasets, we normalize the input RGB images by dividing all values by 255. During training, we augment the images by randomly cropping patches. No cropping is performed during testing.
Network. We implement the proposed method in TensorFlow on a machine with a single GTX 1080 Ti GPU. The network is trained using Adam with a mini-batch size of 16. We set $\beta_1$ to 0.9, $\beta_2$ to 0.999 and the initial learning rate to 0.0001. Training is terminated after a maximum of 1,000 epochs. Code and networks will be released.
Kernel computation. For datasets with dense objects, i.e. ShanghaiTech Part_A and TRANCOS, we use our proposed kernel. For ShanghaiTech Part_B and DCC, we set fixed Gaussian kernel variances, following [27, 12]. For WIDER FACE, we obtain the Gaussian kernel variance by leveraging the box annotations. For the focus from global density, we use 4 density levels for all datasets except ShanghaiTech Part_A.
4.3 Evaluation metrics
Count error. We report the standardized Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) metrics given count estimates and ground-truth counts [35, 38, 27]. Since these global metrics ignore where objects have been counted, we also report results using the Grid Average Mean absolute Error (GAME) metric [7]. GAME aggregates count estimates over local regions as:

$$\mathrm{GAME}(L) = \frac{1}{N} \sum_{n=1}^{N} \sum_{l=1}^{4^L} \left| c_n^l - \hat{c}_n^l \right|,$$

with $N$ the number of images and $c_n^l$ and $\hat{c}_n^l$ the ground-truth and estimated counts in region $l$ of image $n$. $4^L$ denotes the number of grids, non-overlapping regions which cover the full image. When $L$ is set to 0, GAME is equivalent to the MAE.
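For a single image, GAME(L) can be sketched as below; averaging the per-image error over the N test images yields the reported metric, and the toy density maps are illustrative:

```python
import numpy as np

def game(pred, gt, L):
    """Per-image GAME(L): split both maps into a 2^L x 2^L grid
    (4^L non-overlapping regions) and sum absolute regional count errors."""
    g = 2 ** L
    h, w = pred.shape
    err = 0.0
    for i in range(g):
        for j in range(g):
            ys, ye = i * h // g, (i + 1) * h // g
            xs, xe = j * w // g, (j + 1) * w // g
            err += abs(pred[ys:ye, xs:xe].sum() - gt[ys:ye, xs:xe].sum())
    return err

# Same global count, but placed in opposite corners: GAME(0) ignores
# the misplacement, GAME(1) penalizes it.
pred = np.zeros((8, 8)); pred[0, 0] = 5.0
gt = np.zeros((8, 8)); gt[7, 7] = 5.0
```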
Density map quality. Finally, we report two standard metrics, PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index [31]), to evaluate the quality of the predicted density maps. We only report these two metrics on ShanghaiTech Part_A because they are not commonly reported on the other datasets.
5 Results
5.1 Focus from segmentation
We first analyze the effect of the proposed focus from segmentation. This experiment is performed on both ShanghaiTech Part_A and WIDER FACE. We compare to two baselines. The first performs counting using the base network, where the loss is only optimized with respect to the density map estimation. Unless stated otherwise, the encoder-distiller-decoder network is used as the base network in the following experiments. The second baseline adds a spatial attention on top of this base network, as proposed in [4]. The results are shown in Table 1.
For ShanghaiTech Part_A, the base network obtains an MAE of 74.8 and the addition of spatial attention actually increases the count error to 84.5 MAE, as it fails to emphasize relevant features. In contrast, our proposed focus from segmentation explicitly guides the network to focus on task-relevant regions and reduces the count error from 74.8 to 72.3 MAE.
For WIDER FACE, the box annotations allow us to perform an ablation study on the accuracy as a function of the object scale. We define the scale level of each image as a function of face size and face number. We sort the test images in ascending order according to their scale level. Finally, the test images are divided uniformly into three sets: small, medium and large. In Table 1, we provide the results across multiple object scales. We observe that across all object scales our approach is preferred, reducing the MAE from 4.7 (base network) and 4.8 (with spatial attention) to 4.3. The ablation study also reveals why spatial attention is not very effective overall: while improvements are obtained when objects are small, spatial attention performs worse when objects are large. Our segmentation focus from reused point annotations avoids such issues.
                           Part_A            WIDER FACE
                           overall   small   medium   large   overall
Base network               74.8      9.2     2.7      2.2     4.7
w/ Spatial attention [4]   84.5      8.7     2.6      3.1     4.8
w/ Segmentation focus      72.3      8.6     2.3      2.0     4.3
5.2 Focus from global density
Next, we demonstrate the effect of our proposed focus from global density. For this experiment, we again compare to two baselines. Apart from the base network, we compare to the channel attention of [4]. For a fair comparison, we replace the mean pooling used in the channel attention of [4] with the bilinear pooling used in our method, to better encode global context cues. The counting results are shown in Table 2. Channel attention reduces the error on ShanghaiTech Part_A compared to using the base network only (from 74.8 to 73.4 MAE), since the attention map is learned on top of a bilinear pooling layer which encodes global context cues. Our focus from global density reduces the count error further to 71.7 MAE due to its more specific focus from free supervision.
To demonstrate that our focus has a lower error at different crowding levels, we perform a further ablation study on WIDER FACE. We define the crowding level of each image as a function of face size, image size, and face number. We then sort the test images in ascending order according to their global density level. Finally, the test images are divided uniformly into three sets: sparse, medium and dense. As shown in Table 2, our method achieves the lowest error, especially when scenes are sparse. This result highlights the potential complementary nature of the two forms of focus.
                           Part_A            WIDER FACE
                           overall   sparse   medium   dense   overall
Base network               74.8      2.1      2.5      9.5     4.7
w/ Channel attention [4]   73.4      1.6      2.3      7.8     3.9
w/ Global-density focus    71.7      0.9      1.6      8.0     3.5
5.3 Combined focus for free
In the aforementioned experiments, we have shown that each focus matters for counting. In this experiment, we combine the two focuses for more accurate counting, since they aid density map estimation from a local and a global perspective respectively, complementing each other. The results are shown in Table 3. The combination achieves a reduced count error of 67.9 MAE on ShanghaiTech Part_A, and obtains a reduced MAE of 3.2 on WIDER FACE. We compare our combined approach to an alternative combined attention baseline from Chen et al. [4]. While the combination of attentions achieves a better result than using the base network alone, our approach is preferred across datasets, object scales, and crowding levels.
                                     Part_A            WIDER FACE
                                     overall   small   medium   large   sparse   medium   dense   overall
Base network                         74.8      9.2     2.7      2.2     2.1      2.5      9.5     4.7
w/ Spatial & channel attention [4]   71.6      8.3     2.0      2.3     1.8      2.6      8.2     4.2
w/ Our combined focus                67.9      7.7     1.3      0.6     0.9      1.4      7.3     3.2
The focus for free is agnostic to the base network, and to demonstrate this capability, we have applied our approach to four different base networks. Apart from our base network, we consider the multi-column network of Zhang et al. [38], the deep single-column network of Li et al. [12] and the encoder-decoder network of Cao et al. [3]. We have reimplemented these networks and use the same experimental settings as for our base network. The results in Table 4 show that our focus for free lowers the count error for all these networks on ShanghaiTech Part_A and WIDER FACE.
                     Part_A                  WIDER FACE
Network from         base    w/ our focus    base    w/ our focus
Zhang et al. [38]    114.5   110.1           7.1     6.1
Cao et al. [3]       75.2    72.7            8.5     8.2
Li et al. [12]       74.0    72.4            4.3     3.9
This paper           74.8    67.9            4.7     3.2
5.4 Nonuniform kernel estimation
Next, we study the benefit of our proposed kernel for generating more reliable ground-truth density maps. For this experiment, we compare to the Geometry-Adaptive Kernel (GAK) of Zhang et al. [38]. For WIDER FACE, the spatial extent of objects is provided by the box annotations and we use this additional information to measure the variance quality of our kernel compared to the baseline. The counting and variance results are shown in Table 5. The proposed kernel has a lower count error than the commonly used GAK on both ShanghaiTech Part_A and WIDER FACE. To show that this improvement is due to better estimation of the object size of interest, we compare the estimated variances obtained by different methods with the ground-truth variance obtained by leveraging the box annotations of WIDER FACE. Our kernel reduces the MAE of the estimated variance from 2.6 to 2.2 compared to GAK.
                 Part_A        WIDER FACE
Kernel from      MAE (count)   MAE (count)   MAE (variance)
GAK [38]         67.9          4.2           2.6
This paper       65.2          3.6           2.2
Ground-truth     n.a.          3.2           n.a.
5.5 Comparison to the state-of-the-art
                       Part_A                        Part_B           TRANCOS   DCC    WIDER FACE
                       MAE     RMSE    PSNR   SSIM   MAE     RMSE     MAE       MAE    MAE    NMAE
Zhang et al. [38]      110.2   173.2   21.4   0.52   26.4    41.3     -         -      7.1    1.10
Marsden et al. [21]    85.7    131.1   -      -      17.7    28.6     9.7       8.4    -      -
Shen et al. [26]       75.7    102.7   -      -      17.2    27.4     -         -      -      -
Sindagi & Patel [29]   73.6    106.4   21.7   0.72   20.1    30.1     -         -      -      -
Ranjan et al. [24]     68.5    116.2   -      -      10.7    16.0     -         -      -      -
Issam et al. [9]       -       -       -      -      13.1    -        3.6       -      -      -
Li et al. [12]         68.2    115.0   23.8   0.76   10.6    16.0     3.6       -      4.3    0.53
Cao et al. [3]         67.0    104.5   -      -      8.4     13.6     -         -      8.5    1.10
This paper             65.2    109.4   25.4   0.78   7.2     12.2     2.0       3.2    3.2    0.40
Global count comparison. Table 6 shows that the proposed approach outperforms all other models in terms of MAE on all five datasets. The proposed method achieves a new state of the art on ShanghaiTech Part_B, and a competitive result on ShanghaiTech Part_A in terms of RMSE. Shen et al. [26] achieve the lowest RMSE on ShanghaiTech Part_A, but their approach is not competitive on Part_B. Moreover, they rely on four networks with a total of 4.8 million parameters, while our proposal just needs a single network with 2.6 million parameters. On TRANCOS, our method reduces the count error from 3.6 (by Issam et al. [9] and Li et al. [12]) to 2.0, a considerable reduction. For the challenging DCC dataset proposed by Marsden et al. [21], we predict a more accurate global count without any post-processing, reducing the error from 8.4 to 3.2. For WIDER FACE, we evaluate using MAE and a normalized variant (NMAE), for which we normalize the MAE of each test image by the ground-truth face count. We add this metric because counts in WIDER FACE vary from 1 to 1,965. Again, our method achieves the best results on both MAE and NMAE compared to existing methods.
Local count comparison. Figure 3 shows the results obtained by various methods in terms of the commonly used GAME metric on TRANCOS. The higher the GAME level, the more counting methods are penalized for local count errors. For all GAME settings, our method sets a new state-of-the-art. Furthermore, the difference to other methods increases as the GAME level increases, indicating that our method localizes and counts extremely overlapping vehicles more accurately than the alternatives.
Density map quality. To demonstrate that our method also generates better quality density maps, we provide comparative results on ShanghaiTech Part_A in terms of the PSNR and SSIM metrics. In agreement with the results in MAE and RMSE, our proposed method also achieves better performance along this dimension. Compared to counting methods such as [12], which produce a density map at a reduced resolution and recover the resolution by bilinear interpolation, our method directly learns full-resolution density maps with higher quality.
Success and failure cases. Finally, we qualitatively show some success and failure cases in Figure 4. Even in challenging scenes with relatively sparse small objects or relatively dense large objects, our method achieves an accurate count, as shown in the first two rows of Figure 4. Our approach fails when dealing with extremely dense scenes where individual objects are hard to distinguish, or where objects blend with the context, as shown in the last two rows of Figure 4. Such scenarios remain open challenges in counting, where further focus is required.
6 Conclusion
This paper introduces two ways to repurpose the point annotations used as supervision for density-based counting. Focus from segmentation guides the counting network to focus on areas of interest, and focus from global density regularizes the counting network to learn a matching global density. Our focus for free aids density estimation from a local and a global perspective, with the two forms of focus complementing each other. This paper also introduces a non-uniform kernel estimator. Experiments show the benefits of our proposal across object scales, crowding levels and base networks, resulting in state-of-the-art counting results on five benchmark datasets. The gap towards perfect counting and our qualitative analysis show that counting in extremely dense scenes remains an open problem. Further gains are possible when counting is able to deal with such extremely dense scenarios.
References
 [1] D. Babu Sam, N. N. Sajjan, R. Venkatesh Babu, and M. Srinivasan. Divide and grow: Capturing huge diversity in crowd images with incrementally growing CNN. In CVPR, 2018.
 [2] V. Belagiannis, C. Rupprecht, G. Carneiro, and N. Navab. Robust optimization for deep regression. In ICCV, 2015.
 [3] X. Cao, Z. Wang, Y. Zhao, and F. Su. Scale aggregation network for accurate and efficient crowd counting. In ECCV, 2018.
 [4] L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu, and T.-S. Chua. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In CVPR, 2017.
 [5] Y. Gao, O. Beijbom, N. Zhang, and T. Darrell. Compact bilinear pooling. In CVPR, 2016.
 [6] R. Girdhar and D. Ramanan. Attentional pooling for action recognition. In NeurIPS, 2017.
 [7] R. Guerrero-Gómez-Olmedo, B. Torre-Jiménez, R. López-Sastre, S. Maldonado-Bascón, and D. Onoro-Rubio. Extremely overlapping vehicle counting. In IbPRIA, 2015.
 [8] M. Hossain, M. Hosseinzadeh, O. Chanda, and Y. Wang. Crowd counting using scaleaware attention networks. In WACV, 2019.
 [9] I. H. Laradji, N. Rostamzadeh, P. O. Pinheiro, D. Vazquez, and M. Schmidt. Where are the blobs: Counting by localization with point supervision. In ECCV, 2018.
 [10] D. Kang and A. B. Chan. Crowd counting by adaptively fusing predictions from an image pyramid. In BMVC, 2018.
 [11] V. Lempitsky and A. Zisserman. Learning to count objects in images. In NeurIPS, 2010.
 [12] Y. Li, X. Zhang, and D. Chen. CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes. In CVPR, 2018.
 [13] Z. Li, K. Gavrilyuk, E. Gavves, M. Jain, and C. G. M. Snoek. VideoLSTM convolves, attends and flows for action recognition. CVIU, 166:41–50, 2018.
 [14] T.-Y. Lin, P. Dollár, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
 [15] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In ICCV, 2017.
 [16] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In ICCV, 2015.
 [17] J. Liu, C. Gao, D. Meng, and A. G. Hauptmann. DecideNet: Counting varying density crowds through attention guided detection and density estimation. In CVPR, 2018.
 [18] L. Liu, H. Wang, G. Li, W. Ouyang, and L. Lin. Crowd counting using deep recurrent spatial-aware network. In IJCAI, 2018.
 [19] X. Liu, J. van de Weijer, and A. D. Bagdanov. Leveraging unlabeled data for crowd counting by learning to rank. In CVPR, 2018.
 [20] A. Marana, L. da Costa, R. Lotufo, and S. Velastin. On the efficacy of texture analysis for crowd monitoring. In SIBGRAPI, 1998.
 [21] M. Marsden, K. McGuinness, S. Little, C. E. Keogh, and N. E. O’Connor. People, penguins and petri dishes: Adapting object counting models to new visual domains and object types without forgetting. In CVPR, 2018.
 [22] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10), 2016.

 [23] D. Oñoro-Rubio and R. J. López-Sastre. Towards perspective-free object counting with deep learning. In ECCV, 2016.
 [24] V. Ranjan, H. Le, and M. Hoai. Iterative crowd counting. In ECCV, 2018.
 [25] D. B. Sam, S. Surya, and R. V. Babu. Switching convolutional neural network for crowd counting. In CVPR, 2017.
 [26] Z. Shen, Y. Xu, B. Ni, M. Wang, J. Hu, and X. Yang. Crowd counting via adversarial cross-scale consistency pursuit. In CVPR, 2018.
 [27] Z. Shi, L. Zhang, Y. Liu, X. Cao, Y. Ye, M.-M. Cheng, and G. Zheng. Crowd counting with deep negative correlation learning. In CVPR, 2018.
 [28] Z. Shi, L. Zhang, Y. Sun, and Y. Ye. Multi-scale multi-task deep NetVLAD for crowd counting. IEEE TII, 14(11):4953–4962, 2018.
 [29] V. A. Sindagi and V. M. Patel. Generating high-quality crowd density maps using contextual pyramid CNNs. In ICCV, 2017.
 [30] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. In WACV, 2018.
 [31] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE TIP, 13(4):600–612, 2004.
 [32] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
 [33] S. Yang, P. Luo, C. C. Loy, and X. Tang. Wider face: A face detection benchmark. In CVPR, 2016.
 [34] F. Yu, V. Koltun, and T. A. Funkhouser. Dilated residual networks. In CVPR, 2017.
 [35] C. Zhang, H. Li, X. Wang, and X. Yang. Crossscene crowd counting via deep convolutional neural networks. In CVPR, 2015.
 [36] S. Zhang, G. Wu, J. P. Costeira, and J. M. F. Moura. Understanding traffic density from large-scale web camera data. In CVPR, 2017.
 [37] S. Zhang, J. Yang, and B. Schiele. Occluded pedestrian detection through guided attention in cnns. In CVPR, 2018.
 [38] Y. Zhang, D. Zhou, S. Chen, S. Gao, and Y. Ma. Single-image crowd counting via multi-column convolutional neural network. In CVPR, 2016.
 [39] Z. Zhu, W. Wu, W. Zou, and J. Yan. End-to-end flow correlation tracking with spatial-temporal attention. In CVPR, 2018.
Appendix A
In this appendix, we provide the architecture and an ablation study of the encoder-distiller-decoder network, the benefit of non-uniform kernel estimation across counting networks, and additional qualitative examples of (i) our encoder-distiller-decoder network, (ii) the effect of focus from segmentation, focus from global density, and our combined focus, and (iii) success and failure cases on four benchmark datasets, to better understand the benefits and limitations of the proposed method.
A.1 Encoder-Distiller-Decoder Network
The proposed encoder-distiller-decoder network (Section 3.4 in the main paper) is visualized in Fig. 5; an ablation study on it follows.
We perform the ablation on ShanghaiTech Part_A to analyze the encoder-distiller-decoder configuration, varying the architecture by including and excluding the distiller and the decoder. When relying on the encoder and distiller only, the predicted density maps are upsampled to full resolution using bilinear interpolation. Results are in Table 7.
Encoder-Distiller. Adding a distiller module on top of the encoder reduces the MAE from 114.8 to 82.5. The distiller fuses features from multiple convolution layers with varying dilation rates, which is beneficial when counting objects that appear at multiple scales in the image.
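To give intuition for why stacking convolutions with increasing dilation rates helps cover multiple object scales, the receptive field of a stack of stride-1 dilated convolutions can be computed as below. The dilation rates used here are illustrative assumptions, not the exact configuration of our distiller:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in pixels, along one dimension) of a stack of
    stride-1 convolutions with the given per-layer dilation rates."""
    rf = 1
    for d in dilations:
        # each layer extends the field by (kernel_size - 1) * dilation
        rf += (kernel_size - 1) * d
    return rf

# three plain 3x3 convs vs. three dilated 3x3 convs: same parameter
# count, but the dilated stack sees a much larger context
print(receptive_field(3, [1, 1, 1]))  # 7
print(receptive_field(3, [1, 2, 4]))  # 15
```

Fusing features from layers with different dilation rates thus combines responses computed over small and large contexts, matching small and large objects respectively.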
Encoder-Decoder. A traditional encoder-decoder network gives a better count than both the encoder-only and the encoder-distiller network. An encoder-only network compresses the target objects to a smaller size, losing fine details, and produces density maps at reduced resolution due to the downsampling strides in the convolution operations. The distiller can compete with the decoder to some extent, but it cannot recover spatial resolution and important details as well as the decoder.
Encoder-Distiller-Decoder. Incorporating the distiller between encoder and decoder in a single network gives the best counting results on all metrics, combining the merits of scale invariance and detail-preserving density maps. In Fig. 6 we show qualitatively that the network obtains a lower count error and generates higher-quality density maps with less noise.
Table 7: Ablation of the encoder-distiller-decoder network on ShanghaiTech Part_A.

Encoder  Distiller  Decoder  |  MAE    RMSE
   ✓                         |  114.8  178.2
   ✓         ✓               |   82.5  140.6
   ✓                   ✓     |   78.8  137.4
   ✓         ✓         ✓     |   74.8  131.0
A.2 Benefit of Non-Uniform Kernel across Counting Networks
Next, we study the benefit of our non-uniform kernel estimation for existing counting methods. Apart from our own network, we also evaluate the benefit on two other counting networks, i.e. [38] and [27], for which code is available. Results in Table 8 demonstrate that the proposed kernel achieves better MAE and RMSE than the commonly used geometry-adaptive kernel [38] for all three networks, showing that our non-uniform kernel is independent of the counting model.
Table 8: Benefit of our non-uniform kernel versus the geometry-adaptive kernel (GAK) [38] across three counting networks.

Kernel      |  Zhang et al. [38]  |  Shi et al. [27]  |  This paper
            |  MAE     RMSE       |  MAE    RMSE      |  MAE    RMSE
GAK [38]    |  110.2   173.2      |  73.5   112.3     |  67.9   115.6
This paper  |  107.0   156.5      |  71.7   109.5     |  65.2   109.4
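For reference, density-based counting hinges on the invariant that each annotated point contributes a kernel integrating to one, so the density map sums to the object count. The sketch below uses a fixed, uniform Gaussian width for simplicity; our non-uniform estimator and the geometry-adaptive kernel instead vary the width per point, and `sigma` here is an assumed hyperparameter, not a value from the paper:

```python
import math

def gaussian_density_map(points, shape, sigma=4.0):
    """Place a normalized 2D Gaussian at each annotated point (y, x) so
    the resulting map sums to the number of points."""
    h, w = shape
    dmap = [[0.0] * w for _ in range(h)]
    r = int(3 * sigma)  # truncate the kernel at 3 sigma
    for (py, px) in points:
        # accumulate the unnormalized kernel over a local window,
        # then normalize so each point contributes exactly one count
        window, total = [], 0.0
        for y in range(max(0, py - r), min(h, py + r + 1)):
            for x in range(max(0, px - r), min(w, px + r + 1)):
                v = math.exp(-((y - py) ** 2 + (x - px) ** 2) / (2 * sigma ** 2))
                window.append((y, x, v))
                total += v
        for y, x, v in window:
            dmap[y][x] += v / total
    return dmap
```

Summing the returned map recovers the number of annotated points, which is exactly the quantity the counting loss supervises; normalizing over the in-bounds window keeps this invariant even for points near the image border.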
A.3 Qualitative Results for Segmentation, Density & Combined Focus
To illustrate the beneficial effect of the proposed focuses in reducing the counting error and suppressing background noise, we refer to Fig. 7. As shown in Fig. 7 (c) and Fig. 7 (d), compared to Fig. 7 (b), both the segmentation focus and the global-density focus suppress noise and reduce the counting error. Combining the two focuses leads to the lowest counting error and higher-quality density maps with less noise, as shown in Fig. 7 (e).
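The segmentation focus discussed above is supervised by binary maps derived for free from the point annotations. A minimal sketch of one such construction, dilating each point into a foreground disk; the fixed `radius` is an assumed hyperparameter and the paper's exact construction may differ:

```python
def points_to_segmentation(points, shape, radius=8):
    """Binary foreground mask from point annotations: every pixel within
    `radius` of an annotated point (y, x) is marked foreground (1)."""
    h, w = shape
    mask = [[0] * w for _ in range(h)]
    for (py, px) in points:
        for y in range(max(0, py - radius), min(h, py + radius + 1)):
            for x in range(max(0, px - radius), min(w, px + radius + 1)):
                if (y - py) ** 2 + (x - px) ** 2 <= radius ** 2:
                    mask[y][x] = 1
    return mask
```

Such a mask, used as the target of a segmentation loss in a separate branch, pushes the network to emphasize annotated regions and suppress background clutter.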
A.4 Success and Failure Cases
We have shown some success and failure cases on WIDER FACE (Section 5.5 in the main paper). Here we provide more qualitative results on the other four datasets. Even in challenging scenes our method achieves an accurate count, as shown in the first two rows of Figs. 8, 9, 10 and 11. The failure cases, shown in the last two rows of Figs. 8, 9, 10 and 11, reveal that scenes with extremely dense small objects remain a major challenge, opening up opportunities for future work.