
Localizing Firearm Carriers by Identifying Human-Object Pairs

Visual identification of gunmen in a crowd is a challenging problem that requires resolving the association of a person with an object (firearm). We present a novel approach to this problem based on human-object interaction (and non-interaction) bounding boxes. In a given image, humans and firearms are detected separately. Each detected human is paired with each detected firearm, creating a paired bounding box that contains both the object and the human. A network is then trained to classify each paired bounding box according to whether the human is carrying the identified firearm. Extensive experiments were performed to evaluate the effectiveness of the algorithm, including exploiting the full human pose, hand keypoints, and their association with the firearm. Spatially localized features, obtained using multi-size proposals with adaptive average pooling, are key to the success of our method. We have also extended a previous firearm detection dataset by adding more images and tagging the human-firearm pairs (including bounding boxes for firearms and gunmen). The experimental results (78.5 AP_hold) demonstrate the effectiveness of the proposed method.





1 Introduction

Gun violence incidents occur across the globe and claim many lives every year [16, 15, 2]. Many administrative steps have been taken to minimize these incidents, but despite all these efforts the frequency of such events keeps increasing. In the absence of efficient, technology-based systems for large-scale firearm control, the number of casualties has grown over the years. There is an immense need for a scientific solution to this problem, especially given the huge number of cameras and other imaging systems available today.

Many countries and private security agencies have deployed surveillance systems in public places like schools, colleges, parks, and malls for public safety. These systems require a huge workforce to manually monitor the surveillance feed. The downside of manual operation is that it demands sustained human effort and is error-prone, because vigilance drops over long working hours. Therefore, a method is needed that can not only detect firearms but also identify the human who is carrying one. Such a scientific solution could be embedded in surveillance systems to significantly improve the identification of potential gun violence incidents.

Figure 1: Proposed firearm carrier detection network, which not only tells whether a firearm is carried but also identifies the carrier at the image level. Adaptive Average Pooling allows us to train the model with multi-size proposals.

In recent years, visual systems for classification [9, 8, 23] and detection [21, 20, 19, 18, 14, 12, 6, 25, 4] have become highly important. Deep learning based neural networks [9, 8, 23] have performed quite well on key computer vision tasks. These systems learn, from a given set of images, unique visual features associated with different objects. Many object detection algorithms perform well on benchmark datasets like PASCAL VOC [5] and MS-COCO [13], but applying them directly to objects like firearms is not very effective. Building on the Faster RCNN [21] architecture for object detection, firearm (gun and rifle) detection has been improved in OAOD [11].

Identifying firearms is a difficult task owing to the wide variety of firearms available around the globe; inherent differences in shape and size make it even more challenging. Detecting whether a human is holding a firearm is harder still, because of clutter, small guns, and extremely large but thin rifles whose bounding boxes spread over a large window that may contain multiple humans. With multiple humans overlapping the firearm box, it is very hard to determine which one is actually holding the firearm.

Existing work still cannot identify who is carrying a firearm, so integrating such a module with firearm detection is the novel task addressed in this work. For this purpose, we take a subset of the publicly available firearms dataset [11] and add further images. The new dataset is manually tagged with human and firearm (gun and rifle) bounding boxes, along with carried and non-carried labels. Alongside the dataset, this paper proposes a method that detects whether a firearm is being carried by a specific human. Figure 1 shows the framework of the final model, which classifies human-firearm pair localizations.

We integrate knowledge from different fields to develop a novel solution that detects objects such as firearms via OAOD and classifies their association with humans in images labeled as carried. Incorporating OpenPose [3], we use pose estimation to locate hands and establish whether a firearm is being carried. As baselines, we use OAOD for firearm detection and Faster RCNN [21] (trained on MS-COCO [13] with ResNet-101 [8]) for human detection. The enclosed overlap between detected humans and objects (firearms) is then computed, and a pair is considered a true positive if carried (a metric used in Human Object Interaction (HOI) systems). To improve on the baseline results, we train a classification model on the carried and not-carried human-firearm associations annotated in our dataset. Our model is based on VGG-16 with adaptive average pooling (AAP) to handle multi-size proposals. Moreover, AAP also allows us to capture context while keeping maximum participation of the primary object in an image.

2 Related Work

Object Detection: With the advancement of deep learning, object detection research has achieved significant improvement [21, 20, 14, 12, 25, 4]. Generic object detectors may be divided into two main categories: one-stage detectors (excelling in speed) [20, 19, 18, 14, 6, 25, 4] and two-stage detectors (excelling in accuracy) [21, 12, 24]. There is a speed-accuracy trade-off [10], suggesting that gaining accuracy and speed at the same time remains a challenging task.

Firearm Detection: Far fewer visual systems are dedicated specifically to firearm detection. Javed et al. [11] developed a weakly supervised Orientation Aware Object (firearm) Detection system (OAOD), which detects guns and rifles in an image based on orientation information; instead of oriented bounding boxes (BB), axis-aligned BBs are used during training. Olmos et al. [17] applied Faster RCNN to the gun detection problem; their system detects only handguns. Akcay et al. adopted several one- and two-stage methods for gun detection in x-ray baggage security imagery [1]. None of these methods offers a solution for identifying firearm carriers.

Pose Estimation: Numerous methods have been proposed to estimate human pose in 2D and 3D vision systems [22, 3, 27], but no existing work uses human pose estimation to predict whether a person is a firearm carrier. OpenPose [3] uses part affinities between joints and keypoints to estimate poses. Simon et al. [22] proposed hand keypoint estimation using multi-view bootstrapping.

Human Object Interaction: Work on Human Object Interaction (HOI) mainly exploits context and attention mechanisms to detect human-object interaction [26], which does not suffice for the more complex tasks of localizing the actor and recognizing the interaction. Exploiting human-centric cues such as pose and action information to detect humans and objects may improve HOI performance [7]. By integrating knowledge from different fields, we propose a novel solution that detects objects like firearms (gun and rifle) in images together with the humans carrying them. This paper introduces the first method that uses human and hand detection followed by firearm detection to identify firearm carriers in crowded scenes.

3 Baseline Carrier Detection Approaches

In firearm detection research, no method has yet been proposed for carrier detection. The OAOD method only detects firearms in images, whereas the current work integrates firearm locations and human information to classify firearms as carried or not carried. For this classification, we consider three baseline approaches: two based on the closeness of human hands to the firearm, and a third based on the overlap of human and firearm bounding boxes.

3.1 Detecting Hands Inside Firearm Bounding-boxes (HiFB)

In this approach, firearm bounding boxes are detected using the OAOD algorithm and then input to the multiview bootstrapping algorithm [22] to identify hand keypoints. From the predicted keypoints, we keep the set whose confidence exceeds a selected threshold; if the cardinality of this set is greater than two, we classify the bounding box as carried. This approach achieves reasonable accuracy for classifying a gun as carried/not-carried; for rifles, however, accuracy is low, because the extremely large bounding boxes of thin, long rifles are not usually hand-centered. Such large boxes are poor candidates for probable hand locations, since actual hands cover only a small fraction of their pixels. Gun bounding boxes satisfy this condition much better, with a larger hand-to-box pixel ratio than rifle bounding boxes.

3.2 Body-pose Conditioned Firearm Carried Detection (BCFD)

In this approach, the full human body pose, including body parts, is detected using OpenPose [3], and OAOD [11] is used to independently detect firearms in the given image. Detected firearm bounding boxes are checked against the estimated hand keypoints: a firearm is categorized as carried if the overlap between the hand keypoints and the firearm location exceeds a threshold. The problem with this method is that the full human body is often not visible due to occlusion or partial appearance; if the elbow and wrist keypoints are not detected, the hand keypoints also remain undetected. Therefore, this approach suffers performance degradation correlated with the performance of the pose detector.
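The BCFD overlap test can be sketched as a point-in-box check. This is a simplified sketch under assumptions: `min_hits` stands in for the paper's unstated overlap threshold, and hand keypoints are taken as already estimated (x, y) coordinates.

```python
def hand_overlaps_firearm(hand_points, firearm_box, min_hits=1):
    """BCFD sketch: count estimated hand keypoints (x, y) falling inside
    the detected firearm box (x1, y1, x2, y2). The firearm is labeled
    'carried' when enough keypoints land inside; min_hits is an assumed
    stand-in for the overlap threshold."""
    x1, y1, x2, y2 = firearm_box
    hits = sum(1 for (x, y) in hand_points if x1 <= x <= x2 and y1 <= y <= y2)
    return hits >= min_hits
```

Note that if the upstream pose detector misses the wrist, `hand_points` is empty and the firearm is necessarily labeled not-carried, which is exactly the failure mode described above.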

3.3 Measuring Overlap of Human and Firearms Bounding-boxes (OHFB)

In this approach, human detection is performed using Faster-RCNN with ResNet-101, pre-trained on MS-COCO, and firearm detection using the OAOD algorithm. The intersection of every detected firearm bounding box with every detected human bounding box is computed, and each firearm is associated with the human having maximum overlap; associations with insufficient overlap are removed. This approach suffers in crowded scenes, where a firearm BB may have larger overlap with a non-carrier (Fig. 2).
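The OHFB association step can be sketched as follows, measuring the fraction of the firearm box enclosed by each human box and keeping the best match. The `min_overlap` threshold is an assumed placeholder; the paper's exact cutoff is not given here.

```python
def associate_firearm(firearm_box, human_boxes, min_overlap=0.5):
    """OHFB sketch: assign a firearm to the human box with maximum
    enclosed overlap (intersection area / firearm area). Associations
    below min_overlap (an assumed threshold) are discarded.
    Boxes are (x1, y1, x2, y2); returns the index of the matched
    human, or None."""
    fx1, fy1, fx2, fy2 = firearm_box
    f_area = max(0, fx2 - fx1) * max(0, fy2 - fy1)
    best, best_ov = None, 0.0
    for i, (hx1, hy1, hx2, hy2) in enumerate(human_boxes):
        iw = max(0, min(fx2, hx2) - max(fx1, hx1))
        ih = max(0, min(fy2, hy2) - max(fy1, hy1))
        ov = (iw * ih) / f_area if f_area else 0.0
        if ov > best_ov:
            best, best_ov = i, ov
    return best if best_ov >= min_overlap else None
```

The crowded-scene failure is visible in this sketch: a long rifle box may enclose a bystander more than the actual carrier, so the maximum-overlap rule picks the wrong human.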

4 Proposed Human Firearm Pair Detection (HFPD)

In addition to classifying a firearm as carried or not carried, we aim to identify its carrier. In the methods (Sec. 3.1, 3.2) where we explicitly try to associate human pose with the firearm, pairing fails due to errors in pose estimation. The naive idea considered in Sec. 3.3, classifying a firearm and human as paired on the basis of their overlap, does not take the association of the human body with the firearm into account, and therefore fails to achieve good performance, e.g. in crowded scenes. We therefore propose to train a neural network that learns, from training data, the features necessary to decide whether a particular firearm and human are paired.
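The pairwise-extended boxes fed to this network can be constructed as the union box of each human-firearm detection pair, sketched here in plain Python (an illustration of the pairing step, not the paper's code):

```python
from itertools import product

def paired_box(human_box, firearm_box):
    """Smallest axis-aligned box enclosing both the human and the
    firearm detections; boxes are (x1, y1, x2, y2)."""
    return (min(human_box[0], firearm_box[0]),
            min(human_box[1], firearm_box[1]),
            max(human_box[2], firearm_box[2]),
            max(human_box[3], firearm_box[3]))

def all_pairs(humans, firearms):
    # Every detected human is paired with every detected firearm;
    # the classifier later decides which pairs are true interactions.
    return [paired_box(h, f) for h, f in product(humans, firearms)]
```

With H humans and F firearms this yields H x F candidate boxes per image, of which only the true carrier pairs are labeled as interactions.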

Figure 2: Output of our proposed model. Firearms are shown in green bounding boxes, humans in blue bounding boxes. Pairwise-extended boxes are drawn in their respective colors when our classification model returns carried.

In a given image, human and firearm bounding boxes are detected separately, as discussed in Section 3.2. Each detected human bounding box is paired with each detected firearm, creating a paired bounding box that contains both the firearm and the human. Some of these paired boxes contain a firearm and its carrier; in others the pair is not associated. A network is trained to classify these paired bounding boxes as interaction or non-interaction. For training, the extended boxes are manually labeled to indicate whether the human in the box is carrying the firearm. Features from these multi-size pairwise-extended boxes are used to learn associations between humans and firearms, and a classification model is trained on these instances. For the pair classification network, we use VGG-16 (five VGG blocks of convolutional layers) as the feature extractor, followed by Adaptive Average Pooling (AAP) and two fully connected layers; AAP allows us to use multi-size proposals without resizing them. The cross-entropy loss for pair classification is defined as:

L = -(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} y_{i,c} log(p_{i,c}),

where p_{i,c} is the predicted probability that sample i belongs to class c (carried or not-carried), y_{i,c} is the ground-truth class label, C is the number of classes, and N is the number of samples in the batch.
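As a quick numerical check of this loss, here is a minimal sketch using the equivalent picked-label form (with one-hot y_{i,c}, the inner sum reduces to the log-probability of the true class):

```python
import math

def batch_cross_entropy(probs, labels):
    """L = -(1/N) * sum_i log p_{i, y_i}: mean negative log-probability
    of the ground-truth class (carried / not-carried) over a batch.
    probs[i] is the per-class probability vector for sample i."""
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

# Two samples, both classified correctly with high confidence:
loss = batch_cross_entropy([[0.9, 0.1], [0.2, 0.8]], [0, 1])
```

For the two samples above the loss is -(ln 0.9 + ln 0.8)/2, approximately 0.164; confident wrong predictions would drive it sharply higher.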

Methods  Gun   Rifle  Overall
HiFB     71.9  37.5   49.2
BCFD     51.3  76.4   66.6

Table 1: Classification of a firearm as carried or not carried.

5 Experiments and Results

5.1 Dataset and Annotations

To the best of our knowledge, we are the first to release a dataset containing bounding boxes of firearms and humans together with tagged firearm-carrier associations. A subset of images containing multiple humans and one or more firearms was selected from [11], and 900 more images were added, bringing the dataset size to 3128. For each image, firearm and human bounding boxes are manually annotated, including the pair-bounding box, which is labeled 1 if it contains a valid interaction and 0 otherwise. All experiments are evaluated on this dataset. Sample images from the dataset are shown in Figure 3.

Figure 3: Sample images from the dataset. Firearms are shown in orange bounding boxes, humans in pink bounding boxes; red bounding boxes show the association of a human and a firearm. Negative associations are not shown to avoid clutter.

5.2 Comparison of Different Algorithms

We compare the performance of the different approaches along with their experimental details. Table 2 shows the experimental results and the comparison with the baseline.

5.2.1 Measuring Overlap of Human-Firearms Detection

Experiments were conducted using pre-trained models for human detection (Faster RCNN with ResNet-101 and VGG-16 backbones, pre-trained on MS-COCO) and firearm detection (OAOD). We report results for both backbones, which indicate that ResNet-101 achieves better human detection. The model is evaluated end-to-end: both the predicted human and firearm bounding boxes must have IoU ≥ 0.5 with the ground truth for the paired bounding box to be counted as a positive sample. These results serve as the baseline for human-firearm interaction.
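The IoU criterion used in this evaluation is the standard intersection-over-union of axis-aligned boxes, sketched here for reference:

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes; a detection
    counts as positive in this evaluation only when its IoU with the
    ground-truth box is at least 0.5."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

Unlike the enclosed-overlap measure used for the OHFB baseline, IoU normalizes by the union area, so a small firearm box fully inside a large human box still scores well below 1.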

Method         Backbone    Gun   Rifle  Overall
Baseline OHFB  VGG-16      42.4  62.6   54.64
HFPD w/o AAP   VGG-16      62.4  67.2   64.7
HFPD with AAP  VGG-16      64.8  73.4   69.3
Baseline OHFB  ResNet-101  65.0  74.5   70.6
HFPD w/o AAP   ResNet-101  72.3  78.9   76.0
HFPD with AAP  ResNet-101  75.3  81.1   78.5

Table 2: Results of human-firearm pair identification with and without Adaptive Average Pooling (AAP).

5.2.2 Joint Human-firearm Interaction Bounding Boxes

For this experiment, the VGG-16 network (initialized with ImageNet weights) is fine-tuned on the annotated human-firearm association boxes from the proposed dataset. Note that we reuse two fully connected layers of the pre-trained VGG network and add a final classification layer on top, which restricts the size of the input image. Adaptive average pooling (AAP) is therefore inserted between the last convolutional layer and the first FC layer to handle multi-size inputs. Without (w/o) AAP, results are reported with images resized to a fixed 224x224, as the plain VGG-16 model requires, which distorts the aspect ratio in most cases. With AAP (output = 7x7), multi-size boxes can be handled during training and testing, since the original VGG-16 does not allow arbitrarily sized input; AAP also helps capture prominent details with maximum participation of the primary object. Note that multi-size pairwise-extended boxes are resized so that the longer side is 600 pixels and the shorter side scales accordingly. As Table 2 shows, using ResNet-101 for human detection yields better proposals for our classifier. With AAP, the learning rate is 0.00001 with batch size 1, and 50% dropout is used during training. We adopt stochastic gradient descent (SGD) with a momentum of 0.9 to train the model for 20 epochs.
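The key property of AAP, reducing any input size to a fixed output size, can be sketched in one dimension (the 7x7 case applies the same binning independently along height and width). This is an illustrative pure-Python sketch; the bin boundaries follow the floor/ceil scheme used by common deep-learning frameworks.

```python
import math

def adaptive_avg_pool_1d(values, out_size):
    """Adaptive average pooling along one axis: an input of any length
    is reduced to out_size averaged bins, which is what lets the pair
    classifier accept multi-size proposals without fixed resizing.
    Bin i covers indices [floor(i*n/out), ceil((i+1)*n/out))."""
    n = len(values)
    out = []
    for i in range(out_size):
        start = (i * n) // out_size
        end = math.ceil((i + 1) * n / out_size)
        seg = values[start:end]
        out.append(sum(seg) / len(seg))
    return out
```

For example, both a length-4 and a length-3 input are reduced to 2 bins, so the downstream fully connected layers always see a fixed-size feature map regardless of the proposal's dimensions.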

6 Conclusion and Future Directions

In this paper we present a novel method for localizing a firearm and its carrier in a crowded scene. The problem is not only challenging but its solution is dearly needed in the current law-and-order situation, where intelligent cameras are required to perform surveillance. We exploit human and firearm detection to create paired bounding boxes for every possible human-object (firearm) pair in the image, reducing the problem to classifying each pair as a valid interaction or not. Employing adaptive average pooling and multi-size proposals, we achieve 78.5 AP on the test dataset. Extensive comparative experiments were performed with various baseline strategies, including ones exploiting human pose and hand-keypoint information. The existing OAOD dataset [11] was extended with more challenging images, which were hand-tagged to identify human-firearm pairs. In future work, we will explore the importance of salient areas of firearms and humans spatially correlated with their respective regions.


  • [1] S. Akcay, M. E. Kundegorski, C. G. Willcocks, and T. P. Breckon (2018) Using deep convolutional neural network architectures for object classification and detection within x-ray baggage security imagery. IEEE Transactions on Information Forensics and Security 13 (9), pp. 2203–2215. Cited by: §2.
  • [2] (2020) Key facts about gun violence worldwide. Cited by: §1.
  • [3] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh (2018) OpenPose: realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008. Cited by: §1, §2, §3.2.
  • [4] K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang, and Q. Tian (2019) Centernet: keypoint triplets for object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6569–6578. Cited by: §1, §2.
  • [5] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman (2010) The pascal visual object classes (voc) challenge. International journal of computer vision 88 (2), pp. 303–338. Cited by: §1.
  • [6] C. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg (2017) DSSD: deconvolutional single shot detector. arXiv preprint arXiv:1701.06659. Cited by: §1, §2.
  • [7] G. Gkioxari, R. Girshick, P. Dollár, and K. He (2018) Detecting and recognizing human-object interactions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8359–8367. Cited by: §2.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §1.
  • [9] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §1.
  • [10] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. (2017) Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7310–7311. Cited by: §2.
  • [11] J. Iqbal, M. A. Munir, A. Mahmood, A. R. Ali, and M. Ali (2019) Orientation aware object detection with application to firearms. arXiv preprint arXiv:1904.10032. Cited by: §1, §1, §2, §3.2, §5.1, §6.
  • [12] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §1, §2.
  • [13] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1, §1.
  • [14] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §1, §2.
  • [15] B. News (2020) America’s gun culture in charts. Cited by: §1.
  • [16] (2020) In a deadly start to 2020, new-year shootings. Cited by: §1.
  • [17] R. Olmos, S. Tabik, and F. Herrera (2018) Automatic handgun detection alarm in videos using deep learning. Neurocomputing 275, pp. 66–72. Cited by: §2.
  • [18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §1, §2.
  • [19] J. Redmon and A. Farhadi (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §1, §2.
  • [20] J. Redmon and A. Farhadi (2018) Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767. Cited by: §1, §2.
  • [21] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §1, §1, §2.
  • [22] T. Simon, H. Joo, I. Matthews, and Y. Sheikh (2017) Hand keypoint detection in single images using multiview bootstrapping. In CVPR, Cited by: §2, §3.1.
  • [23] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §1.
  • [24] B. Singh and L. S. Davis (2018) An analysis of scale invariance in object detection snip. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3578–3587. Cited by: §2.
  • [25] Z. Tian, C. Shen, H. Chen, and T. He (2019) Fcos: fully convolutional one-stage object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9627–9636. Cited by: §1, §2.
  • [26] T. Wang, R. M. Anwer, M. H. Khan, F. S. Khan, Y. Pang, L. Shao, and J. Laaksonen (2019) Deep contextual attention for human-object interaction detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5694–5702. Cited by: §2.
  • [27] S. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh (2016) Convolutional pose machines. In CVPR, Cited by: §2.