Over the past few years, crowd understanding has become increasingly influential in computer vision because it plays an important role in social management, for example video surveillance, public area planning, crowd congestion warning, and traffic flow monitoring [16, 23, 21]. Crowd counting is a foundation of crowd understanding: the task is to understand people in images, predict density maps, and estimate the number of pedestrians in crowd scenes. At present, many CNN-based approaches [9, 25, 2, 5, 4] have achieved remarkable performance, and their success is driven by the availability of public crowd datasets. Unfortunately, the existing datasets (such as ShanghaiTech A/B and UCF-QNRF) are so small that trained models struggle to perform well in unseen scenarios. This high dependency on annotated data makes such models difficult to deploy for social management.
To address the problem of data scarcity in crowd counting, some works [1, 10, 15] explore unsupervised or weakly supervised crowd counting, but these methods do not completely remove the dependence on manually annotated data. Inspired by the application of synthetic data [13, 14] in other visual fields, Wang et al. established a large-scale crowd counting dataset named GCC, in which the data are generated and annotated automatically by a computer game mod. Although this data generation method removes the burden of manual labeling, it introduces a new problem: the gap between the synthetic scenes and real-world scenes is large, so a model trained only on GCC does not perform well in the real world. Recent work [6, 18, 11, 8, 12] provides a domain adaptation strategy that uses image style transfer networks to narrow the domain gap. For cross-domain crowd counting, a domain adaptation method was also proposed to make synthetic data closer to real data in visual perception via the SE Cycle GAN network. Such image style transfer achieves better results than CNN models without domain adaptation, but the regression error in the background area still reduces counting performance.
In this paper, the central issue is how to design a better domain adaptation scheme that reduces the counting noise in the background area. After observing a large number of images in the GCC dataset, we find that the people in the synthetic scenes and the real scenes have a high degree of visual similarity, while the backgrounds differ substantially. This similarity is even more obvious in high-level semantic representations. We assume that if the network pays more attention to the semantic consistency of the crowd, it will help to narrow the domain gap. To enable our adapted model to extract semantically consistent features from synthetic and real-world data, we first introduce a semantic extractor that further exploits the semantic labels. Since the GCC dataset provides a mask for the crowd area, we correspondingly train a segmentation model to obtain semantic labels for the real datasets for free. Furthermore, we adopt adversarial learning to align the semantic space. These two components form our domain adaptation framework, which is described in detail in Section 2.
In summary, the main contributions of this paper are as follows:
We exploit a large-scale human detection dataset to train a crowd semantic model, which can generate crowd semantic labels for all real crowd datasets.
We propose a domain adaptive network based on semantic consistency, which strives to focus on the consistent features of the cross-domain crowd.
We apply our framework to three real datasets, and it sets a new record on the cross-domain crowd counting problem.
2 Proposed Framework
The overall architecture of our proposed domain adaptation framework is illustrated in Fig. 1. In this section, we first describe the details of the architecture, then introduce the various loss functions, and finally give the training details of the framework.
2.1 Framework Details
For the sake of understanding, some symbolic definitions are given here. In this paper, the numbers of available annotated source-domain and target-domain images are denoted as $N_s$ and $N_t$, respectively, and the images themselves as $I_s$ and $I_t$. Each image has three RGB channels, height $H$, and width $W$. A source-domain image has a one-channel per-pixel label of head positions $M_s$ and a crowd mask $A_s$, while a target-domain image only has a rough crowd mask $A_t$. As shown in Fig. 1 (a), (b), (c) and (d), there are four sub-networks, namely the common feature map extractor $G_f$, the crowd density estimator $G_d$, the crowd semantic extractor $G_s$, and the feature discriminator $D$. They are parameterized by $\theta_f$, $\theta_d$, $\theta_s$, and $\theta_D$, respectively.
Feature Map Extractor Considering that the VGG network has low computational cost and strong feature representation ability, we choose a pre-trained VGG16 with batch normalization as the front-end feature map extractor; this also makes the comparison with previous work fair. Given a pair of images $I_s$ and $I_t$ as input, the output of the feature extractor can be represented by the following mapping:

$$F_s = G_f(I_s; \theta_f), \quad F_t = G_f(I_t; \theta_f)$$

where $G_f$ denotes the feature extractor and $F_s$, $F_t$ are the extracted feature maps.
Semantic Extractor (SE) The semantic extractor is designed to predict the crowd mask. We build it by slightly modifying the pyramid module of PSPNet proposed by Zhao et al., which fuses features under four different pyramid scales to aggregate context information from different regions. The fused feature is passed through the module to produce the final semantic prediction, defined as follows:

$$P_s = G_s(F_s; \theta_s), \quad P_t = G_s(F_t; \theta_s)$$

where $G_s$ denotes the semantic extractor and $P_s$, $P_t$ are the predicted crowd masks.
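As a rough illustration of the pyramid-pooling idea (not the authors' exact PSPNet configuration), the sketch below pools a feature map at several scales, upsamples each pooled map back to the input resolution with nearest-neighbour interpolation, and concatenates everything along the channel axis. The scales (1, 2, 3, 6) follow the original PSPNet paper; all function names are illustrative:

```python
import numpy as np

def adaptive_avg_pool(feat, out_size):
    """Average-pool a (C, H, W) feature map down to (C, out_size, out_size)."""
    c, h, w = feat.shape
    pooled = np.zeros((c, out_size, out_size))
    hb = np.linspace(0, h, out_size + 1).astype(int)
    wb = np.linspace(0, w, out_size + 1).astype(int)
    for i in range(out_size):
        for j in range(out_size):
            pooled[:, i, j] = feat[:, hb[i]:hb[i + 1], wb[j]:wb[j + 1]].mean(axis=(1, 2))
    return pooled

def pyramid_pooling(feat, scales=(1, 2, 3, 6)):
    """PSPNet-style fusion: pool at several pyramid scales, upsample each
    pooled map back (nearest neighbour), and concatenate with the input."""
    c, h, w = feat.shape
    branches = [feat]
    for s in scales:
        pooled = adaptive_avg_pool(feat, s)
        # nearest-neighbour upsample back to (h, w); -(-h // s) is ceil(h / s)
        up = pooled[:, np.repeat(np.arange(s), -(-h // s))[:h], :]
        up = up[:, :, np.repeat(np.arange(s), -(-w // s))[:w]]
        branches.append(up)
    return np.concatenate(branches, axis=0)  # (C * (1 + len(scales)), H, W)
```

In the real module these pooled branches would each pass through a 1x1 convolution before fusion; the sketch keeps only the multi-scale aggregation that motivates the design.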
Feature Discriminator (FD) The feature discriminator achieves domain adaptation by identifying whether the input features come from the source domain or the target domain. We use an architecture similar to , consisting of a stack of convolution layers with a fixed kernel size and stride. Given the features $F_s$ and $F_t$, we forward them to the feature discriminator $D$. The discriminant results can be described by the following mapping:

$$O_s = D(F_s; \theta_D), \quad O_t = D(F_t; \theta_D)$$
Density Estimator Since the core of this paper is not the design of the crowd density predictor, we only adopt a series of simple convolution and up-sampling layers to build it. As shown in Fig. 1 (b), each convolution layer is followed by an up-sampling layer. Finally, the source-domain density map participating in training is defined as:

$$\hat{M}_s = G_d(F_s; \theta_d)$$

where $G_d$ denotes the density estimator and $F_s$ is the source-domain feature map.
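Whatever the decoder architecture, the predicted crowd count is read off the density map by summation, since each annotated head contributes unit mass to the ground-truth map. A minimal sketch (the toy map and values are illustrative):

```python
import numpy as np

def count_from_density(density_map):
    """The crowd count is the integral (sum) of the predicted density map."""
    return float(density_map.sum())

# A toy density map whose mass is concentrated at two head locations.
toy = np.zeros((8, 8))
toy[2, 3] = 1.0   # unit mass for one head
toy[5, 6] = 1.0   # unit mass for a second head
```

In practice each unit of mass is spread by a Gaussian kernel around the head position, but the sum, and hence the count, is unchanged.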
2.2 Loss Functions
In Section 2.1, we defined the outputs $\hat{M}_s$, $P_s$, $P_t$, and $O$. Correspondingly, we define several loss functions to train our model. Training the proposed framework amounts to minimizing a weighted combination of loss functions with respect to the parameters of the sub-networks. The final objective function is:

$$\mathcal{L} = \mathcal{L}_{den} + \lambda_s \mathcal{L}^{s}_{seg} + \lambda_t \mathcal{L}^{t}_{seg} + \lambda_{adv} \mathcal{L}_{adv}$$

where $\lambda_s$ and $\lambda_t$ denote the weighting parameters of the two mask segmentation losses. Since the mask segmentation losses are designed to help the network focus on the semantic consistency of the crowd area, their weights should be set carefully; after experimental testing, we found it best to set them both to 0.01. The adversarial weight $\lambda_{adv}$ is chosen empirically to strike a balance with the model capacity. In the following, we elaborate on each of these loss functions.
Crowd Density Estimation Loss To predict density maps for source-domain images, the density estimation loss, given by the typical Euclidean distance to the source-domain ground truth, is used to supervise the training of the feature extractor $G_f$ and the crowd density predictor $G_d$. In symbols, it is defined as follows:

$$\mathcal{L}_{den} = \frac{1}{N_s} \sum_{i=1}^{N_s} \left\| G_d(G_f(I^i_s)) - M^i_s \right\|^2_2$$

where $M^i_s$ is the ground-truth density map of the $i$-th source-domain image.
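A minimal NumPy stand-in for this Euclidean density loss, averaging the per-image squared error over a batch (a sketch, not the authors' exact implementation):

```python
import numpy as np

def density_loss(pred, gt):
    """Squared Euclidean distance between predicted and ground-truth density
    maps of shape (batch, H, W), averaged over the batch."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(((pred - gt) ** 2).sum(axis=(1, 2)).mean())
```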
Crowd Semantic Segmentation Loss The segmentation loss is designed as an auxiliary loss that drives the extractor towards semantically consistent crowd features across domains. The goal of the semantic extractor is to learn to predict the crowd mask from the input features. To train the network, we first introduce the source-domain crowd segmentation loss $\mathcal{L}^{s}_{seg}$, a binary cross-entropy, defined as:

$$\mathcal{L}^{s}_{seg} = -\sum_{h,w} \left[ A_s^{(h,w)} \log P_s^{(h,w)} + \left(1 - A_s^{(h,w)}\right) \log\left(1 - P_s^{(h,w)}\right) \right]$$
where $A_s$ is the source-domain crowd semantic label and $P_s$ is the predicted mask.
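The binary cross-entropy over mask probabilities can be sketched in NumPy as follows (pixel-averaged here for readability; the clipping constant is a standard numerical-stability guard, not from the paper):

```python
import numpy as np

def bce_mask_loss(pred, mask, eps=1e-7):
    """Binary cross-entropy between predicted crowd-mask probabilities and
    the ground-truth crowd mask, averaged over all pixels."""
    pred = np.clip(np.asarray(pred, dtype=float), eps, 1.0 - eps)
    mask = np.asarray(mask, dtype=float)
    return float(-(mask * np.log(pred) + (1.0 - mask) * np.log(1.0 - pred)).mean())
```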
As mentioned in Section 1, we train a model on the CrowdHuman dataset to generate the target-domain semantic label for free. Since the focus of this paper is not on how to acquire the semantic label, we do not elaborate on it here due to limited space. The visualization of the label can be seen in Fig. 1. It can be observed that the target-domain semantic label is not as reliable as the source-domain one, because pedestrian labels in object detection are rectangular. To eliminate the negative effects of such inaccurate segmentation, we only use background labels to guide training; that is, we ignore the white region of the semantic label $A_t$. Mathematically, we design Equ. 8 to filter the prediction mask $P_t$.
Letting $\tilde{P}_t = P_t \odot (1 - A_t)$ denote the prediction restricted to the background region, we define the target-domain loss function as follows:

$$\mathcal{L}^{t}_{seg} = -\sum_{h,w} \left(1 - A_t^{(h,w)}\right) \log\left(1 - P_t^{(h,w)}\right)$$
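The background-only supervision can be sketched as follows: pixels inside the rough (rectangular) crowd mask are ignored, and the crowd probability on the remaining background pixels is pushed towards zero. A NumPy sketch under those assumptions:

```python
import numpy as np

def background_seg_loss(pred, rough_mask, eps=1e-7):
    """Target-domain segmentation loss: because the detector-derived crowd
    mask is only a rough rectangle, supervise ONLY the background pixels
    (rough_mask == 0), pushing the crowd probability there towards zero."""
    pred = np.clip(np.asarray(pred, dtype=float), eps, 1.0 - eps)
    bg = (np.asarray(rough_mask) == 0)
    if not bg.any():
        return 0.0
    return float(-np.log(1.0 - pred[bg]).mean())
```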
Table 1: Results of domain adaptation methods on the real-world datasets.

| Method | DA | ShanghaiTech Part A (MAE / MSE / PSNR / SSIM) | ShanghaiTech Part B (MAE / MSE / PSNR / SSIM) | UCF-QNRF (MAE / MSE / PSNR / SSIM) |
| --- | --- | --- | --- | --- |
| Cycle GAN | ✔ | 143.3 / 204.3 / 19.27 / 0.379 | 25.4 / 39.7 / 24.60 / 0.763 | 257.3 / 400.6 / 20.80 / 0.480 |
| SE Cycle GAN | ✔ | 123.4 / 193.4 / 18.61 / 0.407 | 19.9 / 28.3 / 24.78 / 0.765 | 230.4 / 384.5 / 21.03 / 0.660 |
Semantic Space Adversarial Loss In the hope of making the feature extractor $G_f$ extract consistent features for the source and target domains, we introduce an adversarial loss following . Specifically, we first train a discriminator $D$ to distinguish between the source feature $F_s$ and the target feature $F_t$ by minimizing a supervised domain loss (i.e., $D$ should ideally output 1 for the source feature $F_s$ and 0 for the target feature $F_t$). We then update $G_f$ to fool the discriminator by inverting its output from 0 to 1, that is, by minimizing

$$\mathcal{L}_{adv} = -\sum_{h,w} \log\left( D(F_t)^{(h,w)} \right)$$

where $h$ and $w$ index the spatial dimensions of the discriminator output $D(F_t)$.
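The two sides of this adversarial game can be sketched in NumPy, taking the discriminator's per-location probability maps as input (a simplified sum/mean treatment, not the authors' exact normalization):

```python
import numpy as np

def generator_adv_loss(disc_out_target, eps=1e-7):
    """Generator-side loss: the feature extractor tries to make the
    discriminator label target-domain features as source (output -> 1),
    so we minimise -log D(F_t) summed over the spatial output map."""
    d = np.clip(np.asarray(disc_out_target, dtype=float), eps, 1.0 - eps)
    return float(-np.log(d).sum())

def discriminator_loss(disc_src, disc_tgt, eps=1e-7):
    """Discriminator-side loss: source features should map to 1, target to 0."""
    ds = np.clip(np.asarray(disc_src, dtype=float), eps, 1.0 - eps)
    dt = np.clip(np.asarray(disc_tgt, dtype=float), eps, 1.0 - eps)
    return float(-(np.log(ds).mean() + np.log(1.0 - dt).mean()))
```

Training alternates between the two: a discriminator step on `discriminator_loss`, then a feature-extractor step on `generator_adv_loss` with the discriminator frozen.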
Scene Regularization GCC  is a large-scale synthetic dataset covering a variety of scenes, weather conditions, and times of day. Adding the entire dataset to training would therefore introduce negative effects. To eliminate this adverse effect and facilitate fair comparison, we adopt Scene Regularization  to select the images.
Training Details During the training phase, the goal is to optimize $\theta_f$, $\theta_d$, $\theta_s$, and $\theta_D$. Due to limited memory, we use a small batch size and randomly crop the images to a fixed size. We adopt the Adam algorithm to optimize the network with a fixed initial learning rate. Training and evaluation are performed on an NVIDIA GTX GPU.
3 Experimental Results
3.1 Evaluation Metrics
Following the convention of existing works, we adopt Mean Absolute Error (MAE) and Mean Squared Error (MSE) as count-error evaluation metrics, formulated as below:

$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| C_i - \hat{C}_i \right|, \quad MSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( C_i - \hat{C}_i \right)^2}$$
where $N$ is the number of testing images, $C_i$ is the ground-truth count, and $\hat{C}_i$ is the estimated count for the $i$-th test image.
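Both metrics operate on per-image counts and can be computed directly in NumPy. Note that, following the crowd-counting convention used here, "MSE" is the root of the mean squared count error:

```python
import numpy as np

def mae(gt_counts, pred_counts):
    """Mean absolute error over per-image crowd counts."""
    gt = np.asarray(gt_counts, dtype=float)
    pred = np.asarray(pred_counts, dtype=float)
    return float(np.abs(gt - pred).mean())

def mse(gt_counts, pred_counts):
    """Root mean squared error over per-image crowd counts
    (called "MSE" in the crowd-counting literature)."""
    gt = np.asarray(gt_counts, dtype=float)
    pred = np.asarray(pred_counts, dtype=float)
    return float(np.sqrt(((gt - pred) ** 2).mean()))
```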
3.2 Datasets

We conduct experiments on the ShanghaiTech Part A/B datasets  and the UCF-QNRF dataset . ShanghaiTech Part A contains 482 crowd images (300 training and 182 testing images), and the average number of pedestrians per image is 501. ShanghaiTech Part B contains 716 images (400 training and 316 testing images), and the average number of people per image is about 123. UCF-QNRF is a congested crowd dataset consisting of 1,535 images (1,201 training and 334 testing images), with counts ranging from 49 to 12,865 and an average of 815 pedestrians per image.
3.3 Performance Comparison
In this section, we perform experiments on three typical datasets for cross-domain crowd counting and compare the performance of our proposed method with the state-of-the-art SE Cycle GAN . The test results are shown in Table 1. Compared with Cycle GAN  and SE Cycle GAN , which adopt image style transfer, our approach is more practical and yields better results. For Shanghai B, with no domain adaptation, the results of our baseline model are not as good as SFCN , because we adopt a simpler crowd counter. However, after domain adaptation, our model surpasses it on all metrics, substantially reducing both MAE and MSE compared with the baseline model. In terms of image quality, we also obtain better SSIM and PSNR. For the Shanghai A and UCF-QNRF datasets, both of which are congested, our proposed method also outperforms the state-of-the-art SE Cycle GAN.
Fig. 2 shows the visualization results on the real datasets. It can be observed that Column 4 (with domain adaptation) is closer to the ground truth than Column 3 (without domain adaptation) in terms of both image quality and crowd counting. This improvement arises because our method focuses on the consistency of crowd features, thereby reducing the estimation error in the background.
4 Conclusion

In this paper, we aim to count people in real scenarios using synthetic datasets. To address the background estimation error of existing methods, we propose an effective domain adaptation framework that encourages the network to concentrate on the crowd-area semantic consistency between the source and target domains via two adaptation strategies. Experiments on high-density and low-density datasets show that our proposed method achieves state-of-the-art performance. In future work, we will further exploit high-level semantic information and domain transfer to achieve higher-precision crowd counting.
-  (2013) From semi-supervised to transfer counting of crowds. In ICCV, pp. 2256–2263. Cited by: §1.
-  (2018) A deeply-recursive convolutional network for crowd counting. In ICASSP, pp. 1942–1946. Cited by: §1.
-  (2019) C^3 framework: an open-source PyTorch code for crowd counting. arXiv preprint arXiv:1907.02724. Cited by: §2.3.
-  (2019) PCC Net: perspective crowd counting via spatial convolutional network. IEEE Transactions on Circuits and Systems for Video Technology. DOI: 10.1109/TCSVT.2019.2919139. Cited by: §1.
-  (2019) SCAR: spatial-/channel-wise attention regression networks for crowd counting. Neurocomputing 363, pp. 1–8. Cited by: §1.
-  (2018) CyCADA: cycle-consistent adversarial domain adaptation. In ICML, pp. 1994–2003. Cited by: §1.
-  (2018) Composition loss for counting, density map estimation and localization in dense crowds. In ECCV, pp. 532–546. Cited by: §1, §3.2.
-  (2019-06) Sliced wasserstein discrepancy for unsupervised domain adaptation. In CVPR, Cited by: §1.
-  (2018) CSRNet: dilated convolutional neural networks for understanding the highly congested scenes. In CVPR, pp. 1091–1100. Cited by: §1, §2.2.
-  (2018) Leveraging unlabeled data for crowd counting by learning to rank. In CVPR, pp. 7661–7669. Cited by: §1.
-  (2018) Image to image translation for domain adaptation. In CVPR, pp. 4500–4509. Cited by: §1.
-  (2019-06) Transferrable prototypical networks for unsupervised domain adaptation. In CVPR, Cited by: §1.
-  (2016) Playing for data: ground truth from computer games. In ECCV, pp. 102–118. Cited by: §1.
-  (2016) The synthia dataset: a large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, pp. 3234–3243. Cited by: §1.
-  (2019) Almost unsupervised learning for dense crowd counting. In AAAI, Vol. 33. Cited by: §1.
-  (2017) Switching convolutional neural network for crowd counting. In CVPR, pp. 4031–4039. Cited by: §1.
-  (2018) Crowdhuman: a benchmark for detecting human in a crowd. arXiv preprint arXiv:1805.00123. Cited by: §2.2.
-  (2017) Learning from simulated and unsupervised images through adversarial training. In CVPR, pp. 2107–2116. Cited by: §1.
-  (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §2.1.
-  (2018) Learning to adapt structured output space for semantic segmentation. In CVPR, pp. 7472–7481. Cited by: §2.1, §2.2.
-  (2020) Detecting coherent groups in crowd scenes by multiview clustering. T-PAMI 42 (1), pp. 46–58. Cited by: §1.
-  (2019) Learning from synthetic data for crowd counting in the wild. In CVPR, pp. 8198–8207. Cited by: §1, §1, §2.3, Table 1, §3.3.
-  (2019) Object counting in video surveillance using multi-scale density map regression. In ICASSP, pp. 2422–2426. Cited by: §1.
-  (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §3.1.
-  (2019) Adaptive scenario discovery for crowd counting. In ICASSP, pp. 2382–2386. Cited by: §1.
-  (2015) Cross-scene crowd counting via deep convolutional neural networks. In CVPR, pp. 833–841. Cited by: §3.1.
-  (2016) Single-image crowd counting via multi-column convolutional neural network. In CVPR, pp. 589–597. Cited by: §1, §3.1, §3.2.
-  (2017) Pyramid scene parsing network. In CVPR, pp. 2881–2890. Cited by: §2.1.
-  (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, pp. 2223–2232. Cited by: Table 1, §3.3.