Gun violence is a global problem, claiming many lives around the world every year [16, 15, 2]. Numerous administrative measures have been taken to minimize such incidents, yet despite all these efforts their frequency keeps increasing. Due to the lack of efficient, technology-based systems for large-scale firearm control, the number of casualties has grown over the years. There is an immense need for a scientific solution to this problem, especially given the huge number of cameras and other imaging systems available today.
Many countries and private security agencies have deployed surveillance systems in public places such as schools, colleges, parks, and malls for public safety. These systems require a large workforce to manually monitor the surveillance feed. The downside of manual operation is that it demands substantial human effort and is error-prone, since vigilance degrades over long working hours. Hence, there is a need for a method that can not only detect firearms but also identify the human who is carrying one. Such a solution could be embedded in surveillance systems to significantly improve the identification of potential gun violence incidents.
Deep learning based systems have performed quite well in key computer vision tasks. These systems learn distinctive visual features of different objects from a given set of images. Many object detection algorithms perform well on benchmark datasets such as PASCAL VOC and MS-COCO, but applying these algorithms directly to objects like firearms is not very effective. Taking Faster RCNN as the base architecture for object detection, firearm (gun and rifle) detection has been improved in OAOD.
Identification of firearms is a difficult task owing to the wide variety of firearms available around the globe. Inherent differences in shape and size make their identification even more challenging. Detecting whether a human is holding a firearm is tougher still because of clutter, small-sized guns, and extremely long but thin rifles, whose bounding boxes spread over a large window that may contain multiple humans. Due to this overlap of multiple humans with the firearm box, it is very hard to determine which human is actually holding the firearm.
Existing work still cannot identify who is carrying a firearm, so integrating such a module with firearm detection is a novel task addressed in this work. For that purpose, we take a subset of a publicly available firearms dataset and add further images. The new dataset is manually tagged with bounding boxes of humans and firearms (labels: gun and rifle), annotated as carried or non-carried. Along with the dataset, this paper proposes a method which detects whether a firearm is being carried by a specific human. Figure 1 shows the framework of the final model that classifies and localizes human-firearm pairs.
We integrate knowledge from different fields to develop a novel solution that detects objects such as firearms using OAOD and classifies their association with humans in images as carried. Incorporating OpenPose, we use pose estimation to locate hands and to associate a firearm with the person carrying it. As a baseline, we use OAOD for firearm detection and Faster RCNN (trained on MS-COCO with ResNet-101) for human detection. The enclosed overlap between detected humans and firearms is then computed, and a pair is considered a true positive if the firearm is carried (a metric used in Human Object Interaction (HOI) systems). To improve on these baseline results, we train a classification model on the carried/not-carried associations of humans and firearms annotated in our dataset. Our model is based on VGG-16 with adaptive average pooling (AAP) to handle multi-size proposals. Moreover, AAP also allows us to capture context while keeping maximum participation of the primary object in an image.
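The enclosed-overlap computation described above can be sketched as follows; the (x1, y1, x2, y2) box convention is an assumption, not stated in the paper:

```python
def enclosed_overlap(firearm, human):
    """Fraction of the firearm box area enclosed by the human box.

    Boxes are (x1, y1, x2, y2) tuples; this format is an assumed
    convention, not taken from the paper.
    """
    ix1, iy1 = max(firearm[0], human[0]), max(firearm[1], human[1])
    ix2, iy2 = min(firearm[2], human[2]), min(firearm[3], human[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = (firearm[2] - firearm[0]) * (firearm[3] - firearm[1])
    return inter / area if area > 0 else 0.0
```

Normalizing by the firearm area (rather than the union, as in IoU) reflects that the small firearm box should be judged by how much of it lies inside the human box.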
2 Related Work
Object Detection: With the advancement of deep learning, object detection research has achieved significant improvements [21, 20, 14, 12, 25, 4]. Generic object detection may be divided into two main categories: one-stage detectors (which excel in speed) [20, 19, 18, 14, 6, 25, 4] and two-stage detectors (which excel in accuracy) [21, 12, 24]. There is a speed-accuracy trade-off, suggesting that gaining accuracy and speed at the same time remains a challenging task.
Firearm Detection: Far fewer visual systems are dedicated specifically to firearm detection. Javed et al. developed a weakly supervised Orientation Aware Object (firearm) Detection system (OAOD), which detects guns and rifles in an image based on orientation information; instead of oriented bounding boxes (BB), axis-aligned BB are used during training. Olmos et al. applied Faster RCNN to the gun detection problem, and their system detects only handguns. Akcay et al. adopted different methods, including one- and two-stage detectors, for gun detection in x-ray baggage security imagery. None of these methods presents a solution for identifying firearm carriers.
Pose Estimation: Considerable progress has been made in human pose estimation, but no existing work uses pose estimation to predict whether a person is a firearm carrier. OpenPose uses part affinities between joints and keypoints to estimate poses. Simon et al. proposed hand keypoint estimation using multi-view bootstrapping.
Human Object Interaction: Work on Human Object Interaction (HOI) mainly exploits context and attention mechanisms to detect human-object interactions. This does not suffice for more complex tasks that require localizing the actor and recognizing the interaction. Exploiting human-centric cues such as pose and action information may improve HOI performance. By integrating knowledge from different fields, we propose a novel solution that detects objects such as firearms (gun and rifle) in images along with the human carrying them. This paper introduces the first method which uses human and hand detection followed by firearm detection to identify firearm carriers in crowded scenes.
3 Baseline Carrier Detection Approaches
No work in firearm detection research has yet addressed carrier detection. The OAOD method only detects firearms in images, whereas in the current work we integrate firearm location and human information to classify firearms as carried or not carried. For this classification we consider three baseline approaches: two based on the closeness of human hands to the firearm, and a third based on the overlap of human and firearm bounding boxes.
3.1 Detecting Hands Inside Firearm Bounding-boxes (HiFB)
In this approach, firearm bounding boxes are detected using the OAOD algorithm and then input to the multi-view bootstrapping algorithm to identify hand keypoints. From the predicted keypoints, we take the set of keypoints whose confidence is larger than a selected threshold; if the cardinality of this set is greater than two, we classify the bounding box as carried. This approach achieves reasonable accuracy for classifying a gun as carried/not-carried; however, for rifles the accuracy is low. The reason is that the extremely large bounding boxes of thin, long rifles are usually not hand-centered, and since the actual hands cover relatively few pixels, such boxes are poor indicators of probable hand locations. Gun bounding boxes largely satisfy this condition, and their hand-to-box pixel ratio is greater than that of rifle bounding boxes.
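The HiFB decision rule can be sketched as follows; the confidence threshold value (0.3 here) is an assumption, since the paper leaves it unstated:

```python
def classify_hifb(keypoints, conf_thresh=0.3, min_points=3):
    """HiFB rule: a firearm box is 'carried' if more than two hand
    keypoints predicted inside it exceed the confidence threshold.

    keypoints: list of (x, y, confidence) tuples predicted by the hand
    keypoint detector inside the firearm bounding box. The threshold
    value 0.3 is an assumed placeholder, not the paper's value.
    """
    confident = [kp for kp in keypoints if kp[2] > conf_thresh]
    return "carried" if len(confident) >= min_points else "not-carried"
```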
3.2 Body-pose Conditioned Firearm carried detection (BCFD)
In this approach, the full human body pose with its body parts is detected using OpenPose, while OAOD is used to independently detect firearms in the image. The detected firearm bounding boxes are then inspected against the estimated hand keypoints, and a firearm is categorized as carried if the overlap between the hand keypoints and the firearm location exceeds a threshold. The problem with this method is that the full human body is often not visible due to occlusion or partial appearance; if the elbow and wrist keypoints are not detected, the hand keypoints also remain undetected. Therefore, the performance of this approach degrades together with that of the pose detector.
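A minimal sketch of the BCFD rule, under the assumption that the overlap test reduces to checking whether estimated hand keypoints fall inside the firearm box (the paper's exact threshold was elided):

```python
def classify_bcfd(hand_keypoints, firearm_box, min_inside=1):
    """BCFD rule: a firearm is 'carried' if hand keypoints from the
    full-body pose overlap the detected firearm box.

    hand_keypoints: list of (x, y) hand positions from the pose estimator;
    firearm_box: (x1, y1, x2, y2). Requiring at least one keypoint inside
    the box is an assumed simplification of the paper's overlap test.
    """
    x1, y1, x2, y2 = firearm_box
    inside = sum(1 for (x, y) in hand_keypoints
                 if x1 <= x <= x2 and y1 <= y <= y2)
    return "carried" if inside >= min_inside else "not-carried"
```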
3.3 Measuring Overlap of Human and Firearms Bounding-boxes (OHFB)
In this approach, human detection is performed using Faster-RCNN with ResNet-101, pre-trained on MS-COCO, while firearm detection is performed using the OAOD algorithm. The intersection of every detected firearm with every detected human bounding box is computed, and each firearm is associated with the human of maximum overlap; associations with insufficient overlap are removed. The performance of this approach suffers in crowded scenes, where a firearm BB may have a larger overlap with a non-carrier (Fig. 2).
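The OHFB association step can be sketched as follows; the minimum-overlap cut-off of 0.1 is a placeholder for the value elided from the paper:

```python
def associate_firearms(firearm_boxes, human_boxes, min_overlap=0.1):
    """OHFB: pair each firearm with the human box of maximum overlap,
    dropping pairs whose overlap falls below min_overlap (an assumed
    cut-off). Boxes are (x1, y1, x2, y2); returns a list of
    (firearm_index, human_index or None) pairs.
    """
    def inter_area(a, b):
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        return iw * ih

    pairs = []
    for i, f in enumerate(firearm_boxes):
        f_area = (f[2] - f[0]) * (f[3] - f[1])
        best, best_ov = None, 0.0
        for j, h in enumerate(human_boxes):
            ov = inter_area(f, h) / f_area if f_area > 0 else 0.0
            if ov > best_ov:
                best, best_ov = j, ov
        pairs.append((i, best if best_ov >= min_overlap else None))
    return pairs
```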
4 Proposed Human Firearm Pair Detection (HFPD)
In addition to classifying a firearm as carried or not carried, we aim to identify its carrier. In the methods of Sec. 3.1 and 3.2, where we explicitly try to associate human pose with the firearm, the pairing fails due to errors in pose estimation. A naive idea, considered in Sec. 3.3, is to classify a firearm and a human as paired on the basis of their overlap; since this method does not take into consideration the association of the human body with the firearm, it fails to achieve good performance, e.g. in crowded scenes. Therefore, we propose to train a neural network that learns from training data the features necessary to identify whether a particular firearm and human are paired.
In a given image, human and firearm bounding boxes are detected separately as discussed in Section 3.2. Each detected human bounding box is paired with each detected firearm, creating a paired bounding box that contains both the firearm and the human. Some of these paired bounding boxes contain a firearm and its carrier; in others the pair is not associated. A network is trained to classify these paired bounding boxes into interaction or non-interaction. For training, the extended boxes are manually labeled to indicate whether the human in the box is carrying the firearm. Features from these multi-size pairwise-extended boxes are used to learn the associations between humans and firearms, and a classification model is trained on these instances. For the pair classification network, we use VGG-16 (five VGG blocks of convolutional layers) as the feature extractor, followed by adaptive average pooling (AAP) and two fully connected layers. AAP allows us to use multi-size proposals without resizing them. The cross-entropy loss function for pair classification is defined as:
$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log(p_{i,c})$$

where $p_{i,c}$ is the predicted probability of sample $i$ belonging to class $c$ (carried or not-carried), $y_{i,c}$ is the ground-truth class label, $C$ is the number of classes, and $N$ is the number of samples in the batch.
5 Experiments and Results
5.1 Dataset and Annotations
To the best of our knowledge, we are the first to release a dataset that contains bounding boxes of firearms and humans together with tagged associations between firearm and carrier. A subset of images containing multiple humans and one or more firearms was selected from the OAOD dataset, and 900 more images were added, bringing the size to 3128 images. For each image, the firearm and human bounding boxes are manually annotated, including the pair-bounding box, which is labelled 1 if it contains a valid interaction and 0 otherwise. All experiments are evaluated on this dataset. Some sample images from the dataset are shown in Figure 3.
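For illustration, one annotation record might look like the following; the field names and schema are hypothetical, not the dataset's actual format:

```python
# Hypothetical annotation for one image with one rifle and two humans.
# Pair labels: 1 = valid interaction (carrier), 0 = not associated.
annotation = {
    "image": "scene_0001.jpg",
    "firearms": [{"box": [120, 80, 340, 160], "label": "rifle"}],
    "humans": [{"box": [100, 40, 260, 420]},
               {"box": [300, 60, 450, 430]}],
    "pairs": [
        {"firearm": 0, "human": 0, "box": [100, 40, 340, 420], "label": 1},
        {"firearm": 0, "human": 1, "box": [120, 60, 450, 430], "label": 0},
    ],
}
```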
5.2 Comparison of Different Algorithms
We compare the performance of the different approaches along with the experimental details. Table 2 shows the experimental results and the comparison with the baselines.
5.2.1 Measuring Overlap of Human-Firearms Detection
Experiments were conducted using pre-trained models for human detection (Faster RCNN with ResNet-101 and with VGG-16, pre-trained on MS-COCO) and OAOD for firearm detection. We report results on both backbones, which indicate that ResNet-101 achieves better human detection. The model is evaluated end-to-end: both the predicted human and firearm bounding boxes must have IoU ≥ 0.5 with the ground truth for the paired-bounding box to be considered a positive sample. These results serve as the baseline for human-firearm interaction.
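The positive-sample criterion can be sketched as follows (the (x1, y1, x2, y2) box convention is assumed):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def pair_is_positive(pred_human, pred_firearm, gt_human, gt_firearm, thresh=0.5):
    """A predicted pair counts as positive only if BOTH the human and the
    firearm box reach IoU >= 0.5 with their ground-truth boxes."""
    return (iou(pred_human, gt_human) >= thresh
            and iou(pred_firearm, gt_firearm) >= thresh)
```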
5.2.2 Joint Human-firearm Interaction Bounding Boxes
For this experiment, the VGG-16 network, initialized with ImageNet weights, is fine-tuned on the annotated human-firearm association boxes from the proposed dataset. Note that we retain the two fully connected layers of the pre-trained VGG network and add a final classification layer on top, which would normally restrict the size of the input image. Adaptive average pooling (AAP) is therefore inserted between the last convolutional layer and the first FC layer to handle multi-size inputs. Without (w/o) AAP, results are reported with images resized to a fixed 224x224, as the plain VGG-16 model requires, which distorts the aspect ratio in most cases. With AAP (output = 7x7), multi-size boxes can be handled during both training and testing, since the original VGG-16 does not allow arbitrarily sized input; AAP also helps capture prominent details with maximal participation of the primary object. Note that multi-size pairwise-extended boxes are resized so that the longer dimension is 600 pixels and the shorter side is scaled accordingly. As can be seen in Table 2, using ResNet-101 for human detection yields better proposals with firearms for our model to classify. With AAP, the learning rate is 0.00001 with batch size 1, and dropout (50%) is used during training. We train the model with stochastic gradient descent (SGD) for 20 epochs with a momentum of 0.9.
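The training setup just described (SGD, lr = 1e-5, momentum 0.9, batch size 1, cross-entropy) can be sketched as follows; the tiny stand-in model is purely illustrative, standing in for the VGG-16-based pair classifier:

```python
import torch
import torch.nn as nn

# Stand-in model: pools any multi-size input to a 3-dim feature, then
# a binary head. It is NOT the paper's architecture, only a placeholder.
model = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    # one (proposal, label) pair per step, i.e. batch size 1
    image = torch.randn(1, 3, 600, 400)  # dummy multi-size proposal
    label = torch.tensor([1])            # 1 = carried (assumed encoding)
    optimizer.zero_grad()
    loss = criterion(model(image), label)
    loss.backward()
    optimizer.step()
```

In practice the loop would iterate over the annotated pair-bounding boxes rather than random tensors, with dropout (50%) active inside the real classifier head.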
6 Conclusion and Future Directions
In this paper we present a novel method for localizing a firearm and its carrier in a crowded scene. The problem is not only challenging but its solution is dearly needed in the current law-and-order situation, where intelligent cameras are required to perform surveillance. We exploit human and firearm detection to create paired bounding boxes for every possible human-firearm pair in the image, reducing our problem to classifying each pair as a valid interaction or not. Employing adaptive average pooling and multi-size proposals, we achieve 78.5 on the test dataset. Extensive comparative experiments were performed by designing various baseline strategies, including ones exploiting human pose and hand-keypoint information. The existing OAOD dataset was extended with more challenging images, which were hand-tagged to identify human-firearm pairs. In future work, we will explore the importance of salient areas of firearms and humans spatially correlated with their respective regions.
-  (2018) Using deep convolutional neural network architectures for object classification and detection within x-ray baggage security imagery. IEEE Transactions on Information Forensics and Security 13 (9), pp. 2203–2215. Cited by: §2.
-  (2020) Key facts about gun violence worldwide. Note: https://www.amnesty.org/en/what-we-do/arms-control/gun-violence/ Cited by: §1.
-  (2018) OpenPose: realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008. Cited by: §1, §2, §3.2.
-  (2019) Centernet: keypoint triplets for object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6569–6578. Cited by: §1, §2.
-  (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §1.
-  (2017) DSSD: deconvolutional single shot detector. arXiv preprint arXiv:1701.06659. Cited by: §1, §2.
-  (2018) Detecting and recognizing human-object interactions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8359–8367. Cited by: §2.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §1.
-  (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §1.
-  (2017) Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7310–7311. Cited by: §2.
-  (2019) Orientation aware object detection with application to firearms. arXiv preprint arXiv:1904.10032. Cited by: §1, §1, §2, §3.2, §5.1, §6.
-  (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §1, §2.
-  (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1, §1.
-  (2016) SSD: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §1, §2.
-  (2020) America’s gun culture in charts. Note: https://www.bbc.com/news/world-us-canada-41488081 Cited by: §1.
-  (2020) In a deadly start to 2020, new-year shootings. Note: https://www.nytimes.com/2020/01/01/us/new-year-shootings.html Cited by: §1.
-  (2018) Automatic handgun detection alarm in videos using deep learning. Neurocomputing 275, pp. 66–72. Cited by: §2.
-  (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §1, §2.
-  (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §1, §2.
-  (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §1, §2.
-  (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §1, §1, §2.
-  (2017) Hand keypoint detection in single images using multiview bootstrapping. In CVPR, Cited by: §2, §3.1.
-  (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §1.
-  (2018) An analysis of scale invariance in object detection snip. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3578–3587. Cited by: §2.
-  (2019) Fcos: fully convolutional one-stage object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9627–9636. Cited by: §1, §2.
-  (2019) Deep contextual attention for human-object interaction detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5694–5702. Cited by: §2.
-  (2016) Convolutional pose machines. In CVPR, Cited by: §2.