1 Introduction
Due to its significance in several applications such as video surveillance [kang2017beyond, toropov2015traffic, sindagi2019dafe], public safety monitoring [zhan2008crowd], microscopic cell counting [lempitsky2010learning], and environmental studies [lu2017tasselnet], crowd counting has attracted a lot of interest from the deep learning research community. Several convolutional neural network (CNN) based approaches have been developed that address various issues in counting such as scale variations, occlusion, and background clutter [li2015crowded, zhang2015cross, li2014anomaly, sam2017switching, sindagi2017generating, liu2018leveraging, shi2018crowd_negative, shen2018adversarial, cao2018scale, ranjan2018iterative, li2018csrnet, sindagi2017survey, sam2019locate, sam2020going, babu2018divide]. While these methods have achieved excellent improvements in terms of the overall error rate, they follow a fully-supervised paradigm and require large numbers of labeled data samples. There is a wide variety of scenes and crowded scenarios that these networks need to handle in the real world. Due to the distribution gap between training and testing environments, these networks have limited generalization abilities, and hence procuring annotations becomes especially important. However, annotating data for crowd counting typically involves obtaining point-wise annotations at head locations, which is a labour-intensive and expensive process. Hence, it is infeasible to procure annotations for all possible scenarios. Considering this, it is crucial to reduce the annotation effort, especially for crowd counting networks that get deployed in a wide variety of scenarios.

With the exception of a few works [change2013semi, liu2018leveraging, wang2019learning], reducing annotation efforts while maintaining good performance is relatively less explored for the task of crowd counting. Hence, in this work, we focus on learning to count using limited labeled data while leveraging unlabeled data to improve performance. Specifically, we propose a Gaussian Process (GP) based iterative learning framework that augments existing networks with the ability to leverage unlabeled data, resulting in an overall improvement in performance. The proposed framework follows a pseudo-labeling approach, where we estimate the pseudo ground truth (pseudo-GT) for the unlabeled data, which is then used to supervise the network. The network is trained iteratively on labeled and unlabeled data.
In the labeled stage, the network weights are updated by minimizing the error between predictions and the ground truth (GT) for the labeled data. In addition, we save the latent space vectors of the labeled data along with the ground truths. In the unlabeled stage, we first jointly model, using a GP, the relationship between the latent space vectors of the labeled images (along with the corresponding ground truths) and the unlabeled latent space vectors. Next, we estimate the pseudo-GT for the unlabeled inputs using the modeled GP. This pseudo-GT is then used to supervise the network on the unlabeled data. Minimizing the error between the unlabeled data predictions and the pseudo-GT results in improved performance. Fig. 1 illustrates the effectiveness of the proposed GP-based framework in exploiting unlabeled data on two datasets (ShanghaiTech-A [zhang2016single] and UCF-QNRF [idrees2018composition]) in the reduced data setting. It can be observed that the proposed method is able to leverage unlabeled data effectively, resulting in lower error across various settings.

The proposed method is evaluated on different datasets, namely ShanghaiTech [zhang2016single], UCF-QNRF [idrees2018composition], WorldExpo [zhang2015cross] and UCSD [chan2008privacy], in the reduced data settings. In addition to obtaining lower error as compared to existing methods [liu2018leveraging], the performance drop due to less data is reduced by a considerable margin. Furthermore, the proposed method is effective for learning to count from synthetic data as well. More specifically, we use a labeled synthetic crowd counting dataset (GCC [wang2019learning]) and unlabeled real-world datasets (ShanghaiTech [zhang2016single], UCF-QNRF [idrees2018composition], WorldExpo [zhang2015cross], UCSD [chan2009bayesian]) in our framework, and show that it generalizes better to real-world datasets as compared to recent domain adaptive crowd counting approaches [wang2019learning]. To summarize, the following are our contributions:


We propose a GP-based framework to effectively exploit unlabeled data during the training process, resulting in improved overall performance. The proposed method consists of iteratively training over labeled and unlabeled data. For the unlabeled data, we estimate the pseudo-GT using the GP modeled during the labeled phase.

We demonstrate that the proposed framework is effective in both semi-supervised and synthetic-to-real transfer settings. Through various ablation studies, we show that the proposed method generalizes to different network architectures and various reduced data settings.
2 Related Work
Crowd Counting. Traditional approaches in crowd counting ([li2008estimating, ryan2009crowd, chen2012feature, idrees2013multi, lempitsky2010learning, pham2015count, xu2016crowd]) typically involved feature extraction techniques and training regression algorithms. Recently, CNN-based approaches like [wang2015deep, zhang2015cross, sam2017switching, arteta2016counting, walach2016learning, onoro2016towards, zhang2016single, sindagi2017generating] have surpassed the traditional approaches by a large margin in terms of the overall error rate. Most of these methods focus on addressing the issue of large variations in scale. Approaches like [zhang2016single, sam2017switching, sindagi2017generating] focus on improving the receptive field. Different from these, approaches like [ranjan2018iterative, sindagi2017cnnbased, sindagi2019multi, sam2018top] focus on effective ways of fusing multi-scale information from deep networks. In addition to scale variation, recent approaches have addressed other issues in crowd counting, such as improving the quality of predicted density maps using adversarial regularization [sindagi2017generating, shen2018adversarial], the use of deep negative correlation-based learning for obtaining more generalizable features, and scale-based feature aggregation [cao2018scale]. Most recently, several methods have incorporated additional information into the network, such as segmentation and semantic priors [zhao2019leveraging, wan2019residual], attention [liu2018adcrowdnet, sindagi2019ha, sindagi2019inverse], perspective [shi2019revisiting], context information [liu2019context], multiple views [zhang2019wide], multi-scale features [jiang2019crowd], and adaptive density maps [wan2019adaptive]. In other efforts, researchers have made important contributions by creating large-scale datasets for counting like UCF-QNRF [idrees2018composition], GCC [wang2019learning] and JHU-CROWD [sindagi2019pushing, sindagi2020jhu].

Learning from limited data. Recent research in crowd counting has largely focused on improving counting performance in the fully-supervised paradigm. Very few works [change2013semi, liu2018leveraging, wang2019learning] have made efforts toward minimizing annotation effort.
Loy et al. [change2013semi] proposed a semi-supervised regression framework that exploits the underlying geometric structure of crowd patterns to assimilate the count estimates of two nearby crowd-pattern points on the manifold. However, this approach is specifically designed for video-based crowd counting.
Recently, Liu et al. [liu2018leveraging] proposed to leverage additional unlabeled data for counting by introducing a learning-to-rank framework. They assume that any sub-image of a crowded scene image is guaranteed to contain the same number of or fewer persons than the super-image. They employ a pairwise ranking hinge loss to enforce this ranking constraint on unlabeled data, in addition to the supervised error used to train the network. In our experiments, we observed that this constraint is almost always satisfied naturally, and hence it provides relatively weak supervision on the unlabeled data.
Babu et al. [sam2019almost] take a different approach, where they train 99.9% of their parameters from unlabeled data using a novel unsupervised learning framework based on the winner-takes-all (WTA) strategy. However, they still train the remaining set of parameters using labeled data.
Wang et al. [wang2019learning] take a totally different approach to minimizing annotation effort by creating a new synthetic crowd counting dataset (GCC). Additionally, they propose a CycleGAN-based domain adaptive approach for generalizing a network trained on the synthetic dataset to real-world datasets. However, there is a large gap in terms of both style and crowd count between the synthetic and real-world scenarios, and domain adaptive approaches have limited abilities in handling such gaps. In order to obtain successful adaptation, the authors in [wang2019learning] manually select samples from the synthetic dataset that are close to the real-world scenario in terms of crowd count for training the network. This selection is possible only when one has information about the counts in the real-world datasets, which violates the assumption of unsupervised domain adaptation that labels are unavailable in the target domain.
Considering the drawbacks of existing approaches, we propose a new GPbased iterative training framework to exploit unlabeled data.
3 Preliminaries
In this section, we briefly review the concepts used in this work: crowd counting, semi-supervised learning, and Gaussian Processes.
Crowd counting.
Following recent works [zhang2015cross, zhang2016single], we adopt the density estimation approach. That is, an input crowd image is forwarded through the network, which outputs a density map indicating the per-pixel count of people in the image. The total count in the image is obtained by integrating over the density map. For training the network using labeled data, the ground-truth density maps are obtained by imposing a 2D Gaussian at each head location: $D(\mathbf{x}) = \sum_{\mathbf{x}_g \in P} \mathcal{N}(\mathbf{x};\, \mathbf{x}_g, \sigma^2)$. Here, $\sigma$ is the Gaussian kernel's scale and $P$ is the list of all head locations.
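As a concrete illustration of the density-map construction described above, the following sketch places one normalized 2D Gaussian per annotated head. The function name, the fixed kernel scale, and the toy head locations are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def density_map(shape, head_points, sigma=4.0):
    """Ground-truth density map: one normalized 2D Gaussian per head.
    `sigma` (the kernel scale) and the head locations are illustrative."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros((h, w), dtype=np.float64)
    for (py, px) in head_points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2.0 * sigma ** 2))
        g /= g.sum()     # each head contributes exactly 1 to the total count
        dmap += g
    return dmap

dmap = density_map((64, 64), [(10, 12), (30, 40), (50, 20)])
count = dmap.sum()       # integrating the map recovers the head count (3)
```

Because each Gaussian is renormalized after any border truncation, integrating the map recovers the annotated head count exactly.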
Problem formulation. We are given a labeled dataset of input-GT pairs $\mathcal{D}_l = \{(x_i, y_i)\}_{i=1}^{N_l}$ and a set of unlabeled input samples $\mathcal{D}_u = \{x_j\}_{j=1}^{N_u}$. The objective is to fit a mapping function $f_\theta$ (with parameters $\theta$) that accurately estimates the target label for unobserved samples. Note that this definition applies to both the semi-supervised setting and the synthetic-to-real transfer setting. In the case of synthetic-to-real transfer, the synthetic dataset is labeled and hence serves as the labeled dataset $\mathcal{D}_l$; similarly, the real-world dataset is unlabeled and serves as the unlabeled dataset $\mathcal{D}_u$.
In order to learn the parameters, both labeled and unlabeled datasets are exploited. Typically, loss functions such as $\ell_1$, $\ell_2$, or cross-entropy error are used for the labeled data. For exploiting the unlabeled data, existing approaches augment the objective with information like the shape of the data manifold [oliver2018realistic] via different techniques such as consistency regularization [laine2016temporal], virtual adversarial training [miyato2018virtual], or pseudo-labeling [lee2013pseudo]. In this work, we employ a pseudo-labeling based approach: we estimate pseudo-GTs for the unlabeled data and then use them to supervise the network with traditional supervised loss functions.

Gaussian process. A Gaussian process (GP) is an infinite collection of random variables, any finite subset of which has a joint Gaussian distribution. A GP is fully specified by its mean function $m(\mathbf{x})$ and covariance function $k(\mathbf{x}, \mathbf{x}')$. These are defined below:

$$m(\mathbf{x}) = \mathbb{E}[f(\mathbf{x})], \qquad (1)$$

$$k(\mathbf{x}, \mathbf{x}') = \mathbb{E}\left[(f(\mathbf{x}) - m(\mathbf{x}))(f(\mathbf{x}') - m(\mathbf{x}'))\right], \qquad (2)$$
where $\mathbf{x}, \mathbf{x}'$ denote the possible inputs that index the GP. The covariance matrix is computed from the covariance function $k$, which expresses the notion of smoothness of the underlying function. The GP can then be formulated as follows:

$$f(\mathbf{x}) \sim \mathcal{GP}\!\left(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}') + \sigma_\epsilon^2 I\right), \qquad (3)$$

where $I$ is the identity matrix and $\sigma_\epsilon^2$ is the variance of the additive noise. Any collection of function values is then jointly Gaussian as follows:

$$\mathbf{f} = [f(\mathbf{x}_1), \dots, f(\mathbf{x}_n)]^\top \sim \mathcal{N}(\boldsymbol{\mu}, K + \sigma_\epsilon^2 I), \qquad (4)$$

with mean vector $\boldsymbol{\mu}$ and covariance matrix $K$ defined by the GP as mentioned earlier. To make predictions at unlabeled points, one can compute a Gaussian posterior distribution in closed form by conditioning on the observed data. For more details, we refer the reader to [rasmussen2003gaussian].
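The closed-form posterior mentioned above can be sketched in a few lines. This is textbook GP regression [rasmussen2003gaussian] with an RBF covariance, written for illustration rather than taken from the paper's code:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """RBF covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def gp_posterior(X, y, X_star, noise=1.0):
    """Closed-form GP posterior (mean, covariance) at X_star given (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))          # K + sigma_eps^2 I
    K_star = rbf(X_star, X)
    mean = K_star @ np.linalg.solve(K, y)
    cov = rbf(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov

# With near-zero noise, the posterior mean interpolates the observations.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
mean, cov = gp_posterior(X, y, np.array([[1.0]]), noise=1e-8)
```

With negligible noise the posterior mean reproduces the training targets at the training inputs, which is the behavior the pseudo-GT estimation in Sec. 4 relies on.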
4 GPbased iterative learning
Fig. 2 gives an overview of the proposed method. The network is constructed using an encoder $g_e$ and a decoder $g_d$, parameterized by $\Theta_e$ and $\Theta_d$, respectively. The proposed framework is agnostic to the encoder network, and we show in the experiments section that it generalizes well to architectures such as VGG16 [simonyan2014very], ResNet50 and ResNet101 [ren2015faster]. The decoder consists of a set of 2 conv-relu layers (see supplementary material for more details). Typically, an input crowd image $x$ is forwarded through the encoder network to obtain the corresponding latent space vector $v = g_e(x)$. This vector is then forwarded through the decoder network to obtain the crowd density output $\hat{y} = g_d(v)$, i.e., $\hat{y} = g_d(g_e(x))$.

We are given a training dataset $\mathcal{D} = \mathcal{D}_l \cup \mathcal{D}_u$, where $\mathcal{D}_l$ is a labeled dataset containing $N_l$ training samples and $\mathcal{D}_u$ is an unlabeled dataset containing $N_u$ training samples. The proposed framework effectively leverages both datasets by iterating the training process over the labeled and unlabeled data. More specifically, the training process consists of two stages: (i) Labeled training stage: in this stage, we employ a supervised loss function to learn the network parameters using the labeled dataset. (ii) Unlabeled training stage: we generate pseudo-GTs for the unlabeled data points using the GP formulation, which are then used to supervise the network on the unlabeled dataset. In what follows, we describe these stages in detail.
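The two-stage schedule described above can be summarized with the following toy sketch. All names and the linear "encoder"/"decoder" are illustrative stand-ins for the CNN, and the pseudo-GT here is simplified to a 1-nearest-neighbor lookup in the bank of labeled latent vectors rather than the full GP posterior of Sec. 4.2:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 8))   # toy linear "encoder"
W_dec = rng.normal(size=(1, 4))   # toy linear "decoder"

def encode(x): return np.tanh(W_enc @ x)
def decode(v): return W_dec @ v

labeled = [(rng.normal(size=8), rng.normal(size=1)) for _ in range(16)]
unlabeled = [rng.normal(size=8) for _ in range(32)]

for epoch in range(2):
    # Labeled stage: supervised loss, and save latent vectors + GTs.
    bank_v, bank_y = [], []
    for x, y in labeled:
        v = encode(x)
        loss_l = float(np.sum((decode(v) - y) ** 2))  # a gradient step on
        bank_v.append(v); bank_y.append(y)            # loss_l would go here
    bank_v, bank_y = np.stack(bank_v), np.stack(bank_y)

    # Unlabeled stage: pseudo-GT looked up from the bank
    # (a 1-NN stand-in for the GP posterior mean of Eq. (8)).
    for x in unlabeled:
        v = encode(x)
        nn = int(np.argmin(((bank_v - v) ** 2).sum(1)))
        pseudo = bank_y[nn]
        loss_u = float(np.sum((decode(v) - pseudo) ** 2))
```

The key structural point is that the latent bank built during the labeled stage is what makes the unlabeled stage possible.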
4.1 Labeled stage
Since the labeled dataset comes with annotations, we employ the $\ell_2$ error between the predictions and the GTs as the supervision loss for training the network. This loss objective is defined as follows:

$$\mathcal{L}_l = \|\hat{y}_l - y_l\|_2^2, \qquad (5)$$

where $\hat{y}_l = g_d(v_l)$ is the predicted output of the decoder $g_d$, $y_l$ is the ground truth, and $v_l = g_e(x_l)$ is the intermediate latent space vector produced by the encoder $g_e$. Note that the subscript $l$ in the above quantities indicates that these are defined for labeled data.
Along with performing supervision on the labeled data, we additionally save the feature vectors $v_l$ from the intermediate latent space in a matrix $F$. Specifically, $F = [v_{l,1}, \dots, v_{l,N_l}]$, where $N_l$ is the number of labeled samples. This matrix is used for computing the pseudo-GTs for unlabeled data at a later stage. The dimension of $F$ is $N_l \times d$, where $d$ is the dimension of the latent space vector $v_l$; the latent feature maps are reshaped into vectors (see supplementary material for the exact dimensions used in our experiments).
4.2 Unlabeled stage
Since the unlabeled data does not come with any GT annotations, we estimate pseudo-GTs, which are then used as supervision for training the network on unlabeled data. For this purpose, we jointly model, using a GP, the relationship between the latent space vectors of the labeled images (along with the corresponding GTs) and the unlabeled latent space vectors.
Estimation of pseudo-GT: As discussed earlier, the training process iterates over labeled and unlabeled data. After the labeled stage, the labeled latent space vectors $V_l = \{v_{l,i}\}_{i=1}^{N_l}$ and their corresponding GT density maps $Y_l$ are used to model the function $f$ that maps a latent vector to its output density map, i.e., $y = f(v)$. Using a GP, we model this function as an infinite collection of functions of which any finite subset is jointly Gaussian. More specifically, we jointly model the distribution of the function values of the latent space vectors of the labeled and unlabeled samples using the GP as follows:

$$\begin{bmatrix} \mathbf{f}_l \\ f_u \end{bmatrix} \sim \mathcal{N}\!\left(\mathbf{0},\; \begin{bmatrix} K(V_l, V_l) + \sigma_\epsilon^2 I & K(V_l, v_u) \\ K(v_u, V_l) & k(v_u, v_u) \end{bmatrix}\right), \qquad (6)$$

where $\mathbf{f}$ denotes the function values computed using the GP, $\sigma_\epsilon$ is set equal to 1, and $K$ is the kernel function. Based on this, the conditional distribution for the latent space vector $v_u$ of an unlabeled sample can be expressed as the following Gaussian distribution:

$$p(f_u \mid V_l, Y_l, v_u) = \mathcal{N}(\bar{y}_u, \Sigma_u), \qquad (7)$$
where

$$\bar{y}_u = K(v_u, V_l)\left[K(V_l, V_l) + \sigma_\epsilon^2 I\right]^{-1} Y_l, \qquad (8)$$

$$\Sigma_u = k(v_u, v_u) - K(v_u, V_l)\left[K(V_l, V_l) + \sigma_\epsilon^2 I\right]^{-1} K(V_l, v_u) + \sigma_\epsilon^2, \qquad (9)$$

$\sigma_k$ is set equal to 1, and $k$ is a kernel function with the following definition:

$$k(v_i, v_j) = \exp\!\left(-\frac{\|v_i - v_j\|^2}{2\sigma_k^2}\right). \qquad (10)$$
Considering the large dimensionality of the latent space vectors, $K(V_l, V_l)$ can grow quickly in size, especially if the number of labeled data samples is high. In such cases, the computational and memory requirements become prohibitively high. Additionally, not all latent vectors are necessarily effective, since these vectors correspond to different regions of images in terms of content and size/density of the crowd. In order to overcome these issues, we use only those labeled vectors that are similar to the unlabeled latent vector. Specifically, we consider only the $k_n$ nearest labeled vectors for each unlabeled vector. That is, we replace $V_l$ by $V_l^{nn}$ (and $Y_l$ by the corresponding $Y_l^{nn}$) in Eqs. (7)-(9). Here, $V_l^{nn} = \mathcal{N}_{k_n}(v_u, F)$, with $\mathcal{N}_{k_n}$ being a function that finds the top $k_n$ nearest neighbors of $v_u$ in $F$.
The pseudo-GT for an unlabeled data sample is given by the mean predicted in Eq. (8), i.e., $\bar{y}_u$. The distance between the predictions and the pseudo-GT is used as supervision for updating the parameters of the encoder $g_e$ and the decoder $g_d$.
Furthermore, the pseudo-GT estimated using Eq. (8) may not be perfect, and errors in the pseudo-GT will limit the performance of the network. To overcome this, we explicitly exploit the variance modeled by the GP. Specifically, we minimize the predictive variance by including Eq. (9) in the loss function. As discussed earlier, using all the latent space vectors of labeled data may not be effective. Hence, we minimize the variance computed between $v_u$ and its nearest neighbors among the labeled latent space vectors using the GP. Thus, the loss function during the unlabeled stage is defined as:

$$\mathcal{L}_u = \|\hat{y}_u - \bar{y}_u\|_2^2 + \Sigma_u^{nn}, \qquad (11)$$

where $\hat{y}_u$ is the crowd density map prediction obtained by forwarding an unlabeled input image through the network, $\bar{y}_u$ is the pseudo-GT (see Eq. (8)), and $\Sigma_u^{nn}$ is the predictive variance obtained by replacing $V_l$ in Eq. (9) with $V_l^{nn}$.
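A minimal sketch of the kNN-restricted pseudo-GT estimation along the lines of Eqs. (8)-(9); the RBF kernel form, the hyperparameter values, and the toy latent/density dimensions are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def rbf(A, B, sigma_k=1.0):
    """Assumed RBF kernel between row vectors of A and B (Eq. (10) form)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma_k ** 2))

def pseudo_gt(v_u, F, Y, k=5, noise=1.0):
    """GP posterior mean/variance for one unlabeled latent vector v_u,
    restricted to its k nearest labeled vectors (rows of F, targets Y),
    mirroring Eqs. (8)-(9) with V_l replaced by the nearest neighbors."""
    idx = np.argsort(((F - v_u) ** 2).sum(1))[:k]   # top-k nearest neighbors
    Fk, Yk = F[idx], Y[idx]
    K = rbf(Fk, Fk) + noise * np.eye(k)             # K(V_l, V_l) + sigma^2 I
    ks = rbf(v_u[None, :], Fk)                      # K(v_u, V_l), shape 1 x k
    mean = (ks @ np.linalg.solve(K, Yk))[0]         # pseudo-GT (Eq. (8))
    var = (rbf(v_u[None, :], v_u[None, :])
           - ks @ np.linalg.solve(K, ks.T) + noise)[0, 0]   # Eq. (9)
    return mean, var

rng = np.random.default_rng(0)
F = rng.normal(size=(20, 8))      # bank of labeled latent vectors
Y = rng.normal(size=(20, 4))      # flattened GT density maps (toy size)
mean, var = pseudo_gt(rng.normal(size=8), F, Y)
```

The mean would serve as the supervision target and the variance as the additional penalty in Eq. (11).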
4.3 Final objective function
The overall training objective is a weighted combination of the supervised loss on the labeled data and the loss on the unlabeled data:

$$\mathcal{L} = \mathcal{L}_l + \lambda \mathcal{L}_u, \qquad (12)$$

where $\lambda$ weighs the unsupervised loss; its effect is studied in the supplementary material.
5 Experiments and results
In this section, we discuss the details of the various experiments conducted to demonstrate the effectiveness of the proposed method. Since the proposed method is able to leverage unlabeled data to improve the overall performance, we performed the evaluation in two settings: (i) Semi-supervised setting: in this setting, we vary the percentage of labeled samples from 5% to 75%. We first show that the base network suffers a performance drop due to the reduced data, and then show that the proposed method is able to recover a major portion of this drop. (ii) Synthetic-to-real transfer setting: in this setting, the goal is to train on a labeled synthetic dataset while adapting to a real-world dataset; unlabeled images from the real world are available during training. In both settings, the proposed method achieves better results as compared to recent methods. Details of the datasets are provided in the supplementary material.
5.1 Semisupervised settings
In this section, we conduct experiments in the semi-supervised setting by reducing the amount of labeled data available during training. The rest of the samples in the dataset are treated as unlabeled samples wherever applicable. In the following subsections, we present a comparison of the proposed method in the 5% setting with other recent methods. For comparison, we used 4 datasets: ShanghaiTech (SHA/B) [zhang2016single], UCF-QNRF [idrees2018composition], WorldExpo [zhang2015cross] and UCSD [chan2008privacy]. This is followed by a detailed ablation study involving different architectures and various percentages of labeled data used during training. For the ablation, we chose the ShanghaiTech-A and UCF-QNRF datasets, since they contain a wide diversity of scenes and large variations in count and scale.
Implementation details. We train the network using the Adam optimizer on an NVIDIA Titan Xp GPU, with a batch size of 24. During training, random crops are used; during inference, the entire image is forwarded through the network. For evaluation, we use the mean absolute error, $MAE = \frac{1}{N}\sum_{i=1}^{N} |y_i - \hat{y}_i|$, and the mean squared error, $MSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}$. Here, $N$ is the total number of test images, $y_i$ is the ground-truth/target count of people in the $i$-th image, and $\hat{y}_i$ is the predicted count for the $i$-th image. We set aside 10% of the training set for the purpose of validation, and the hyperparameters were chosen based on the validation performance. More details are provided in the supplementary.
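For reference, the two evaluation metrics defined above can be computed as follows (note that, as is common in counting papers, the reported "MSE" is the root of the mean squared error):

```python
import numpy as np

def mae_mse(gt_counts, pred_counts):
    """MAE and MSE over per-image counts. As is common in counting papers,
    'MSE' here denotes the root of the mean squared error."""
    gt = np.asarray(gt_counts, dtype=float)
    pred = np.asarray(pred_counts, dtype=float)
    mae = np.abs(gt - pred).mean()
    mse = np.sqrt(((gt - pred) ** 2).mean())
    return mae, mse

mae, mse = mae_mse([10, 20], [12, 16])   # mae = 3.0, mse = sqrt(10)
```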
Table 1. Comparison with recent approaches in the 5% labeled data setting (L: labeled, U: unlabeled; AG: average gain in % over the $\ell_2$-only baseline).

Method | L | U | SHA: MAE / MSE / AG | SHB: MAE / MSE / AG | UCF-QNRF: MAE / MSE / AG | WExpo: MAE / AG | UCSD: MAE / MSE / AG
ResNet50 (Oracle) | 100% | - | 76 / 126 / - | 8.4 / 14.5 / - | 114 / 195 / - | 10.1 / - | 1.7 / 2.1 / -
ResNet50 ($\ell_2$ only) | 5% | - | 118 / 211 / - | 21.2 / 34.2 / - | 186 / 295 / - | 14.2 / - | 2.2 / 2.8 / -
ResNet50 + RL | 5% | 95% | 115 / 208 / 2.0 | 20.1 / 32.9 / 4.0 | 182 / 291 / 1.7 | 14.0 / 0.01 | 2.2 / 2.8 / 0
ResNet50 + GP (Ours) | 5% | 95% | 102 / 172 / 16 | 15.7 / 27.9 / 22 | 160 / 275 / 10 | 12.8 / 10 | 2.0 / 2.4 / 12
Comparison with recent approaches. Here, we compare the effectiveness of the proposed method with the recent method by Liu et al. [liu2018leveraging] on 4 different datasets. In order to get a better understanding of the overall improvements, we also provide the results of the base network with (i) 100% labeled data supervision, i.e., the oracle performance, and (ii) 5% labeled data supervision. For all the methods (except the oracle), we limited the labeled data used during training to 5% of the training dataset; the rest of the samples were used as unlabeled samples. We used ResNet50 as the encoder network. The results of the experiments are shown in Table 1. For all the experiments that we conducted, we report the average of the results over 5 trials; the standard deviations are reported in the supplementary. We make the following observations for all the datasets: (i) Compared to using the entire dataset, reducing the labeled data during training (to 5%) leads to a significant increase in error. (ii) The proposed GP-based framework is able to reduce this performance drop by a large margin. Further, the proposed method achieves an average gain (AG), i.e., the average percentage reduction in error relative to the $\ell_2$-only baseline, of anywhere between 10% and 22% across all datasets. (iii) The proposed method is able to leverage the unlabeled data more effectively as compared to Liu et al. [liu2018leveraging]. This is because the authors in [liu2018leveraging] use a ranking loss on the unlabeled data, which is based on the assumption that a sub-image of a crowded scene is guaranteed to contain the same or fewer number of people compared to the entire image. We observed that this constraint is satisfied naturally for most of the unlabeled images, and hence it provides relatively weak supervision (see supplementary material for a detailed analysis).

Table 2. Ablation over the percentage of labeled data (ResNet50 encoder). Each dataset column lists NoGP ($\ell_2$ only) MAE/MSE, GP (ours) MAE/MSE, and AG (%).

% labeled | SHA: NoGP | SHA: GP | AG | UCF-QNRF: NoGP | UCF-QNRF: GP | AG
5 | 118 / 211 | 102 / 172 | 16 | 186 / 295 | 160 / 275 | 10
25 | 110 / 160 | 91 / 149 | 12 | 178 / 252 | 147 / 226 | 14
50 | 102 / 149 | 89 / 148 | 6.1 | 158 / 250 | 136 / 218 | 13
75 | 93 / 146 | 88 / 139 | 4.7 | 139 / 240 | 129 / 210 | 9.8
100 | 76 / 126 | - | - | 114 / 195 | - | -
Ablation of labeled data percentage. We conducted an ablation study in which we varied the percentage of labeled data used during the training process. More specifically, we used four settings: 5%, 25%, 50% and 75%. The remaining data were used as unlabeled samples. We used ResNet50 as the encoder for all the settings. This ablation study was conducted on 2 datasets: ShanghaiTech-A (SHA) and UCF-QNRF. The results are shown in Table 2. It can be observed for both datasets that as the percentage of labeled data is reduced, the performance of the baseline network drops significantly. However, the proposed GP-based framework is able to leverage unlabeled data in all cases to reduce this performance drop by a considerable margin. Figs. 3 and 4 show sample qualitative results on the ShanghaiTech-A and UCF-QNRF datasets for the semi-supervised protocol with the 5% labeled data setting. It can be observed that the proposed method predicts density maps more accurately than the baseline method that does not use unlabeled data.
Table 3. Architecture ablation in the 5% labeled data setting. Each dataset column lists NoGP ($\ell_2$ only) MAE/MSE, GP (ours) MAE/MSE, and AG (%).

Net | % labeled | SHA: NoGP | SHA: GP | AG | UCF-QNRF: NoGP | UCF-QNRF: GP | AG
ResNet50 | 100 | 76 / 126 | - | - | 114 / 195 | - | -
ResNet50 | 5 | 118 / 211 | 102 / 172 | 16 | 186 / 295 | 160 / 275 | 10
ResNet101 | 100 | 76 / 117 | - | - | 116 / 197 | - | -
ResNet101 | 5 | 131 / 200 | 110 / 162 | 18 | 196 / 324 | 174 / 288 | 11
VGG16 | 100 | 74 / 118 | - | - | 120 / 197 | - | -
VGG16 | 5 | 121 / 205 | 112 / 163 | 14 | 188 / 316 | 175 / 291 | 7.4
Architecture ablation. We conducted an ablation study in which we evaluated the proposed method using different architectures. More specifically, we used ResNet50, ResNet101 and VGG16 as the encoder network. The ablation was performed on 2 datasets: ShanghaiTech-A (SHA) and UCF-QNRF. For all the experiments, we used 5% of the training dataset as labeled data, and the rest were used as unlabeled samples. The results of this experiment are shown in Table 3. Based on these results, we make the following observations: (i) Since networks like VGG16 and ResNet101 have more parameters, they tend to overfit more in the reduced-data setting as compared to ResNet50. (ii) The proposed GP-based method obtains consistent gains by leveraging the unlabeled dataset across different architectures.
Pseudo-GT analysis. In order to gain a deeper understanding of the effectiveness of the proposed approach, we plot histograms of the normalized errors of the network predictions and of the pseudo-GTs for the unlabeled data during the training process. Specifically, we plot histograms of the normalized prediction error $\|\hat{y}_u - y_u\| / \|y_u\|$ and the normalized pseudo-GT error $\|\bar{y}_u - y_u\| / \|y_u\|$, where $y_u$ is the actual GT corresponding to the unlabeled data sample. The plot is shown in Fig. 5. It can be observed that the pseudo-GT errors are concentrated in the lower end of the error range as compared to the prediction errors. This implies that the pseudo-GTs are closer to the GTs than the predictions are. Hence, the pseudo-GTs obtained using the proposed method provide good-quality supervision on the unlabeled data.
5.2 Synthetic-to-Real transfer setting
Recently, Wang et al. [wang2019learning] proposed a synthetic crowd counting dataset (GCC) that consists of 15,212 images with a total of 7,625,843 annotations. The primary purpose of this dataset is to reduce annotation effort by training networks on synthetic data, thereby eliminating the need for labeling. However, due to the gap between the synthetic and real-world data distributions, networks trained on the synthetic dataset perform poorly on real-world images. In order to overcome this issue, the authors in [wang2019learning] proposed a CycleGAN-based domain adaptive approach that additionally enforces SSIM consistency. More specifically, they first learn to translate synthetic crowd images to real-world images using an SSIM-based CycleGAN, which transfers the style of the synthetic images to a more real-world style. The translated synthetic images are then used to train a counting network (SFCN) based on the ResNet101 architecture.
While this approach improves the error over baseline methods, its performance is essentially limited in the case of a large distribution gap between real and synthetic images. Moreover, the authors in [wang2019learning] perform a manual selection of synthetic samples for training the network. This selection ensures that only samples that are close to the real-world images in terms of count are used for training. Such a selection is not feasible in unsupervised domain adaptation, where we have no access to labels in the target dataset.
Table 4. Synthetic-to-real transfer results (trained on labeled GCC, adapted to real-world datasets).

Method | SHA: MAE / MSE | SHB: MAE / MSE | UCF-QNRF: MAE / MSE | UCF-CC-50: MAE / MSE | WExpo: MAE
No Adapt | 160 / 217 | 22.8 / 30.6 | 276 / 459 | 487 / 689 | 42.8
CycleGAN [zhu2017unpaired] | 143 / 204 | 24.4 / 39.7 | 257 / 401 | 405 / 548 | 32.4
SE CycleGAN [wang2019learning] | 123 / 193 | 19.9 / 28.3 | 230 / 384 | 373 / 529 | 26.3
Proposed Method | 121 / 181 | 12.8 / 19.2 | 210 / 351 | 355 / 505 | 20.4
The proposed GP-based framework overcomes these drawbacks and extends naturally to the synthetic-to-real transfer setting. We consider the synthetic data as the labeled training set and the real-world training set as the unlabeled dataset, and train the network to leverage the unlabeled data. The results of this experiment are reported in Table 4. We used the same network (SFCN) and training process as described in [wang2019learning]. As can be observed, the proposed method achieves considerable improvements over the recent approach. Since we estimate the pseudo-GT for unlabeled real-world images and use it directly as supervision, the distribution gap that the network needs to handle is much smaller. This results in better performance compared to the domain adaptive approach [wang2019learning]. Unlike [wang2019learning], we train the network on the unlabeled data, and hence we do not need to perform any synthetic sample selection. Figs. 6 and 7 show sample qualitative results on the ShanghaiTech-A and UCF-QNRF datasets for the synthetic-to-real transfer protocol. The proposed method predicts density maps more accurately than the baseline.
6 Conclusions
In this work, we focused on learning to count in crowds from limited labeled data. Specifically, we proposed a GP-based iterative learning framework that estimates pseudo-GTs for unlabeled data using Gaussian Processes, which are then used as supervision for training the network. Through various experiments, we showed that the proposed method can be effectively used in a variety of scenarios that involve unlabeled data, such as learning with less data or synthetic-to-real transfer. In addition, we conducted detailed ablation studies to demonstrate that the proposed method generalizes well to different network architectures and achieves consistent gains for different amounts of labeled data.
Acknowledgement
This work was supported by the NSF grant 1910141.
Supplementary Material
Due to limited space in the main paper, we present additional details about the proposed method and experiments in the supplementary.
Encoder and Decoder Architecture
Here, we provide details of the encoder and decoder architecture for all the experiments.
Encoder: In the main paper, we conducted experiments with 4 different networks for the encoder. For the semi-supervised experiments, we used Res50, Res101 and VGG16; for learning from synthetic data, we used Res101-SFCN [wang2019learning]. Following are the details:
(i) Res50: First 3 layers of Res50 are used as the encoder.
(ii) Res101: First 3 layers of Res101 are used as the encoder.
(iii) VGG16: First 10 layers of VGG16 are used as the encoder.
(iv) Res101-SFCN: We use the network exactly as described in [wang2019learning]. In this network, the layers up to the final dilated conv layer are considered part of the encoder.
For all the above networks, the features of the final encoder layer are forwarded through a conv layer that reduces the dimensionality to 64 channels. The output of this conv layer is the feature embedding in the latent space, which is used in the GP modeling. The spatial dimension of the intermediate feature maps in the latent space is determined by the training crop size.
Decoder: We use the same decoder in all the semi-supervised learning experiments. The decoder consists of 2 conv-relu layers. The first is a conv layer that takes in 64 channels and outputs 64 channels. The final layer is a conv layer that takes in 64 channels and outputs 1 channel, which is the density map. The final conv layer is followed by a bilinear upsampling layer that upsamples the output density map to the resolution of the input image.
In the case of learning from synthetic data, since we use the same network as in [wang2019learning], all the layers after the dilated conv layers are used as the decoder.
Dataset Details
In this section, we provide details of the different datasets used for evaluating the proposed method in the main paper.
ShanghaiTech [zhang2016single]: This dataset contains 1,198 annotated images with a total of 330,165 people. It consists of two parts: Part A with 482 images and Part B with 716 images. Both parts are further divided into training and test sets, with the training set of Part A containing 300 images and that of Part B containing 400 images. The rest of the images are used as the test sets.
UCF-QNRF [idrees2018composition]: UCF-QNRF is a large crowd counting dataset with 1,535 high-resolution images and 1.25 million head annotations. There are 1,201 training images and 334 test images. It contains extremely congested scenes, where the count of a single image can reach 12,865.
WorldExpo [zhang2015cross]: The WorldExpo'10 dataset was introduced by Zhang et al. [zhang2015cross] and contains 3,980 annotated frames from 1,132 video sequences captured by 108 surveillance cameras. The frames are divided into training and test sets. The training set contains 3,380 frames, and the test set contains 600 frames from five different scenes, with 120 frames per scene. A Region of Interest (ROI) map is also provided for each of the five scenes.
UCSD [chan2008privacy]: The UCSD crowd counting dataset consists of 2,000 frames from a single scene. The scenes contain relatively sparse crowds, with the number of people ranging from 11 to 46 per frame. A region of interest (ROI) is provided for the scene. Of the 2,000 frames, frames 601 through 1,400 are used for training, while the remaining frames are held out for testing.
GCC [wang2019learning]: The GTA V Crowd Counting (GCC) dataset is a large-scale synthetic dataset based on an electronic game, consisting of 15,212 crowd images. GCC provides three evaluation strategies: random splitting, cross-camera evaluation, and cross-location evaluation.
Table 5: Effect of the unsupervised loss weight on the held-out validation set of ShanghaiTech A (5% labeled setting, Res50 encoder).

Weight   MAE   MSE
0.0      102   175
0.2      100   162
0.4       89   149
0.6       85   140
0.8       88   147
1.0       92   156
Hyperparameter
In this section, we study the effect of the unsupervised loss weight on the overall performance. This hyperparameter weighs the unsupervised loss function in Eq. 12 of the main paper. For this study, we use the ShanghaiTech A dataset, due to its wide variety of scenes and its diversity in crowd counts. We conducted this experiment in the 5% data setting, where 5% of the data was used as labeled data and the rest as unlabeled data, with the Res50 encoder. Note that we perform the evaluation on the held-out validation set (and not on the test set). The results for different values of the weight are shown in Table 5.
We observed that the performance peaks when the weight is set to 0.6. A weight of 0 corresponds to using only labeled data; this is the baseline performance. As we increase the weight, the error improves. However, beyond 0.6, we see a small drop. This is because the network has not yet learned to an optimal level at the initial stages of training. The pseudo-GT is therefore erroneous, and using a high weight for the unsupervised loss at the initial stages prevents the network from reaching optimal performance.
Based on this experiment, we use a weight of 0.6 for all the experiments.
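The combination studied above can be sketched as follows. The simple additive form is an assumption based on the description of Eq. 12 (the exact formula is in the main paper); the default weight of 0.6 is the value selected in the ablation.

```python
def combined_loss(sup_loss: float, unsup_loss: float, weight: float = 0.6) -> float:
    """Total loss = supervised loss + weight * unsupervised (pseudo-GT) loss.

    Assumed additive form of Eq. 12; weight=0.6 is the value chosen
    from the Table 5 ablation.
    """
    return sup_loss + weight * unsup_loss


# weight = 0.0 drops the unsupervised term, i.e., the labeled-data-only baseline
baseline_loss = combined_loss(1.0, 0.5, weight=0.0)
```

Setting the weight to 0 recovers the baseline row of Table 5, while overly large weights amplify errors in the pseudo-GT early in training.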
Table 6: Additional architecture ablation on ShanghaiTech A (SHA) and UCF-QNRF. Entries are MAE/MSE; "no GP" is training on the labeled subset only, "w/ GP" is the proposed method leveraging unlabeled data, and AG is the Average Gain (%). The 100% rows are the fully supervised reference.

Net           %     SHA: no GP   w/ GP     AG (%)   UCF-QNRF: no GP   w/ GP     AG (%)
Res101-SFCN   100   74/114       --        --       113/196           --        --
Res101-SFCN   5     128/199      109/160   17       193/323           172/282   12
CSRNet        100   71/112       --        --       123/195           --        --
CSRNet        5     120/200      111/159   14       187/310           171/293   7.0
Method: Ours (5% labeled / 95% unlabeled). Entries are MAE and MSE with the Average Gain (AG, %) over the labeled-only baseline.

Dataset     MAE          MSE          AG (%)
SHA         102 ± 0.8    172 ± 2.1    16
SHB         15.7 ± 0.9   27.9 ± 1.1   22
UCF-QNRF    160 ± 2.4    275 ± 3.1    10
WExpo       12.8 ± 0.5   --           10
UCSD        2.0 ± 0.05   2.4 ± 0.09   12
Method: Ours. Entries are MAE and MSE.

Dataset      MAE          MSE
SHA          121 ± 0.6    181 ± 1.6
SHB          12.8 ± 0.3   19.2 ± 0.9
UCF-QNRF     210 ± 2.7    351 ± 4.1
UCF-CC-50    355 ± 4.4    505 ± 5.9
WExpo        20.4 ± 0.9   --
Additional Architecture Ablation
In this section, we conducted additional architecture ablation experiments using two recent crowd counting techniques: CSRNet [li2018csrnet] and Res101-SFCN [wang2019learning]. We use the 5% data setting, where 5% of the data is used as labeled and the rest as unlabeled. We evaluated both methods on the ShanghaiTech A (SHA) and UCF-QNRF datasets. For CSRNet, we use the layers up to the last dilated conv as the encoder. For the decoder, we use 2 conv layers as described earlier.
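The 5% labeled / 95% unlabeled split used throughout can be sketched as random index sampling. This is an illustrative assumption (the exact sampling protocol and seed are not specified above); the 300-image figure is the ShanghaiTech A training set size given in the dataset details.

```python
import random


def split_labeled_unlabeled(n_images: int, labeled_frac: float = 0.05, seed: int = 0):
    """Randomly pick labeled_frac of the training indices as labeled, rest as
    unlabeled. The random-sampling protocol and seed are illustrative assumptions."""
    rng = random.Random(seed)
    indices = list(range(n_images))
    rng.shuffle(indices)
    k = max(1, round(labeled_frac * n_images))
    return sorted(indices[:k]), sorted(indices[k:])


# ShanghaiTech A has 300 training images, so the 5% setting uses 15 labeled images.
labeled, unlabeled = split_labeled_unlabeled(300)
```

The labeled subset supervises the network directly, while the unlabeled subset is supervised via the GP-estimated pseudo-GT.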
The results of this experiment are shown in Table 6. In addition to MAE/MSE, we report the Average Gain (AG), i.e., the percentage improvement over the baseline, averaged over MAE and MSE. We observed consistent gains in both cases when we used the proposed GP-based method to leverage unlabeled data.
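The AG values in Table 6 are consistent with averaging the percentage reductions in MAE and MSE relative to the labeled-only baseline; a sketch of that computation (this definition is an assumption, checked against the table entries):

```python
def average_gain(baseline_mae: float, baseline_mse: float,
                 mae: float, mse: float) -> float:
    """Mean percentage reduction in MAE and MSE over the labeled-only baseline.

    Assumed definition of AG; it reproduces the Table 6 entries.
    """
    def pct_reduction(base: float, new: float) -> float:
        return 100.0 * (base - new) / base

    return 0.5 * (pct_reduction(baseline_mae, mae) + pct_reduction(baseline_mse, mse))


# Res101-SFCN, 5% setting on SHA: 128/199 -> 109/160 gives AG of about 17%.
sha_gain = average_gain(128, 199, 109, 160)
```

For example, the CSRNet row on SHA (120/200 -> 111/159) yields an AG of about 14%, matching the table.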