
Security Event Recognition for Visual Surveillance

With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by different practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time as well. If anyone moves an object, this person is verified as to whether he/she is its owner. If not, the event is further analyzed and distinguished between two different scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for these more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases related to the task of abandoned luggage detection. The experimental results show that the proposed approach outperforms the state-of-the-art methods and is effective in recognizing complex security events.


I Introduction

Security at public places has always been one of the most important social topics. With the rapidly increasing deployment of surveillance cameras in different scenes, tons of video data need to be analyzed every second. Conventional surveillance systems rely almost completely on security workers who keep watching the surveillance monitors and recognize suspicious persons and activities. This is inefficient, unreliable, and costly. Therefore, reliable methods for automatically analyzing surveillance videos and reporting special events are demanded by different practical applications, such as security monitoring [1, 2], traffic control [3, 4], crime prevention [5], etc. Due to their large market and practical impact, they have drawn much attention in the computer vision community for decades [6, 7, 8, 9, 10]. The task of security event analysis refers to suspicious object detection and anomaly detection in given videos.

Since the object categories occurring in a surveillance scene are unpredictable, traditional methods ignore the object type and use foreground/background extraction techniques to identify static foreground regions as suspicious object candidates. However, the object type provides very important information for video event analysis. For instance, a black suitcase left on the floor of an airport hall is more suspicious than a pink wallet. Only detecting static items is insufficient to analyze such complicated circumstances deeply and correctly. The main reason that previous works only focus on abandoned/left-luggage detection is the imperfection of object detectors, which could only detect a limited number of object categories with unsatisfactory accuracy. In recent years, convolutional neural networks (CNNs) have been driving advances in computer vision, such as image classification [11], detection [12, 13, 14, 15], semantic segmentation [16, 17], and pose estimation [18, 19]. CNNs have shown remarkable performance in the large-scale visual recognition challenge (ILSVRC2012) [20]. The success of CNNs is attributed to their ability to learn rich feature representations, as opposed to the hand-designed features used in traditional image classification methods. Therefore, deep learning methods are a good choice for detecting object types in the task of security event recognition.

Fig. 1: Flowchart of our framework.

Our goal in this work is to detect abandoned objects and then analyze the subsequent events related to them: is the owner taking the object, is someone else moving it somewhere, or is someone stealing it? These three security events are the most frequently occurring circumstances in daily life. In this paper, a CNN framework is used for object detection and verification. Because previous works only focus on left-object detection, an appropriate benchmark dataset for more complicated tasks is missing. Therefore, we construct a new video event dataset, the Security Event Recognition Dataset (SERD), containing various scenarios in real-world environments. We evaluate our method on the benchmark PETS2006. Besides, our framework is evaluated on our dataset SERD for further, more complicated tasks. Quantitative and qualitative comparisons with ground truth show that the proposed framework is effective for security event detection.

II Methodology

Our framework is described by its key components: person and object detection, ownership labeling, and security event analysis. An overview of our framework is illustrated in Fig. 1. In the following subsections, each component is discussed in detail.

II-A Background Model

Being static is the most obvious characteristic of abandoned objects. Thus, our framework applies a dual-background model to detect static regions as candidates of abandonment. The background is divided into a long-term model, which is used for detecting static foreground objects, and a short-term one for moving objects. The long-term background model at time point t is denoted as B^L_t and the short-term one as B^S_t. We denote F^L_t as the binary foreground image obtained via B^L_t, and F^S_t as the one obtained via B^S_t.

The background model proposed in [21] is utilized in our framework because of its high effectiveness and efficiency. In our application, every 50th frame is sampled and a buffer of 20 such frames is used for updating the long-term background model, while every 3rd frame updates the short-term background model. With a frame rate of 25 Hz, the long-term background is completely updated every 40 seconds and the short-term background every 2 seconds.
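The following is a minimal Python sketch of this dual-background bookkeeping. The background-subtraction model of [21] is abstracted behind a hypothetical bg_model_factory with update()/foreground() methods; these names are assumptions for illustration, not the paper's code.

from collections import deque

class DualBackground:
    """Long-term and short-term background models with the sampling
    scheme above: every 50th frame feeds a 20-frame buffer for the
    long-term model, every 3rd frame updates the short-term one."""

    def __init__(self, bg_model_factory, long_step=50, short_step=3,
                 buf_size=20):
        self.long_bg = bg_model_factory()   # slow updates: static objects persist
        self.short_bg = bg_model_factory()  # fast updates: only moving objects
        self.long_step, self.short_step = long_step, short_step
        self.long_buf = deque(maxlen=buf_size)

    def process(self, frame, frame_idx):
        if frame_idx % self.long_step == 0:
            self.long_buf.append(frame)
            self.long_bg.update(list(self.long_buf))
        if frame_idx % self.short_step == 0:
            self.short_bg.update([frame])
        # Return the binary foreground masks F^L_t and F^S_t.
        return self.long_bg.foreground(frame), self.short_bg.foreground(frame)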

Ii-B Person and Object Detection

In recent years, deep learning based algorithms have shown great power in object detection and classification tasks [20, 22, 13]. Considering the trade-off between "real-time" capability and accuracy, the faster region proposal convolutional neural network (FrRCNN) [13] is applied in our framework.

According to the ownership relationship, all objects of interest are divided into background objects and foreground objects. First, the FrRCNN is used to detect objects in the learned initial long-term background RGB image. These detected objects are registered in the background list L_bg, which indicates that they belong to the background.

As mentioned before, object abandonment and security events relate to static objects. There are three possible states for an object: if it is present in the long-term foreground but not in the short-term foreground, it is static. If it is present in both foreground masks, it is moving. If an object was present in the foregrounds but later disappears from both, it has been static for a very long time.

Therefore, an AND operation is conducted between F^L_t and the inverse of F^S_t to get the static foreground regions as candidates of static objects. Then, the FrRCNN is applied to detect objects of interest within those static regions. The FrRCNN is only applied within the foreground regions instead of the whole image to reduce computation, which is important for real-time applications. All the objects detected in this step are static objects and are registered in a list L_obj of entries O_i, where each O_i encodes the category, bounding box, and features of a single object, which will be discussed in Sec. II-D. We utilize the same FrRCNN mentioned above to detect persons in each RGB frame and denote them as P_j; P_j includes the extracted features of the detected person. Subsequently, the real-time tracking algorithm proposed by Bewley et al. [23] is utilized in our framework for tracking. The trace of each person is denoted as T_j.
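A minimal sketch of this static-region logic: pixels that are foreground in F^L_t but background in F^S_t are static, and the detector then runs only on those regions. The detect_objects callable stands in for the FrRCNN and is an assumption.

import numpy as np
import cv2  # used only for connected components / bounding boxes

def static_object_candidates(fg_long, fg_short, frame, detect_objects,
                             min_area=200):
    """fg_long, fg_short: binary masks F^L_t and F^S_t of one frame."""
    static_mask = np.logical_and(fg_long > 0, fg_short == 0).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(static_mask)
    detections = []
    for i in range(1, n):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area < min_area:  # suppress small noise blobs
            continue
        # Run the detector only on the static region, not the whole image.
        detections += detect_objects(frame[y:y + h, x:x + w])
    return detections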

II-C Ownership Labeling and Abandonment Detection

The owner of an object is one of the most important pieces of information for deciding whether the object is abandoned or just left provisionally. It is also a crucial cue for analyzing security events such as theft. Thus, to identify the owner, we compute the average distance between O_i and each person's trace T_j over time. The person with the smallest distance to O_i is labeled as the owner and denoted as P_own. Because the short-term background is updated every 2 seconds in our work, only the section of each trace from t_0 - 2s to t_0 is considered, where t_0 is the time point at which O_i is detected.
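A minimal sketch of this ownership labeling, assuming traces is a dict mapping a person id to a list of (t, x, y) tuples from the tracker; all names are illustrative.

import numpy as np

def label_owner(obj_xy, t0, traces, window=2.0):
    """Owner = person whose trace has the smallest average distance to
    the object position obj_xy within [t0 - window, t0]."""
    best_id, best_dist = None, float("inf")
    for pid, trace in traces.items():
        pts = np.array([(x, y) for (t, x, y) in trace
                        if t0 - window <= t <= t0])
        if len(pts) == 0:
            continue
        d = np.linalg.norm(pts - np.asarray(obj_xy), axis=1).mean()
        if d < best_dist:
            best_id, best_dist = pid, d
    return best_id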

Fig. 2: A deep CNN can be divided into two parts: the first part extracts deep features and is dubbed the feature network; the second part performs a specific task, such as object detection, and is dubbed the task network. Different tasks can use the same feature network to save training cost, particularly when the tasks are similar to each other.

(a) People detection (b) Object and owner detection (c) New people detection (d) Owner is leaving (e) Bag is taken by owner (f) Person switches the bag (g) Alarm is triggered (h) Stealing detection

Fig. 3: An example of experimental results on SERD. A man comes into the room (a). Then he leaves his bag on the table and begins to work (b). He is labeled as the owner, but the bag is not labeled as an abandoned object. Another man leaves his bag in front of the whiteboard (c) and leaves the scene (d). He is labeled as the owner and the bag is labeled as an abandoned object. The first owner takes his own bag away without an alarm (e). He then switches the other bag with his own (f), and a warning is issued that the bag does not belong to him, while he is labeled as the owner of the substituted object (g). He is recognized as a thief when he is leaving (h).

It is costly and unnecessary to watch over all objects appearing in the surveillance scene. Security events in public scenes normally relate to abandoned objects. Therefore, abandonment should be detected reliably. The basic rules for abandoned object detection were originally defined by PETS2006: from the temporal aspect, a bag is declared as abandoned if its owner leaves it unattended for more than t seconds; from the spatial aspect, an object is defined as abandoned if its owner is not within 3 meters. However, in practice the owner may stay in the scene for a very long time without touching his object. For instance, in the public rest area of a library, a student who wants a break puts his bag on a table and then goes to a vending machine for a while. This case satisfies the rules for abandonment, but the bag is not abandoned. Besides, the spatial rule requires high-quality calibration of the cameras. Therefore, the rules for abandonment detection are modified to fit practical applications better as follows (a sketch of the corresponding check is given after the list):

  • P_own is tracked going out of the surveillance scene, i.e., its trace extends to the edge area of the given scene.

  • If P_own’s trace does not reach the edge area but P_own disappears from the scene for longer than t consecutive seconds while O_i is still there, then O_i is labeled as an abandoned object.
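A minimal sketch of these two rules, under assumed helpers: in_edge_area tests whether a point lies in the border region of the scene, and t_max plays the role of the (unspecified) t seconds above.

def is_abandoned(owner_trace, object_still_present, t_now, in_edge_area,
                 t_max=30.0):
    """owner_trace: list of (t, x, y) points of P_own; t_max is an
    assumed default for the t-second rule."""
    if not owner_trace or not object_still_present:
        return False
    t_last, x, y = owner_trace[-1]
    # Rule 1: the owner's trace extends into the edge area of the scene.
    if in_edge_area(x, y):
        return True
    # Rule 2: the owner has been gone for more than t_max consecutive
    # seconds while the object is still there.
    return t_now - t_last > t_max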

II-D Security Event Analysis

Three kinds of security events are considered in our work: an object being abandoned, moved by an un-owner, or stolen. They cover most of the security events in real life. The key to recognizing these security events is the verification of persons and objects. Person verification judges whether the person handling the watched object is its owner. Object verification makes sure whether the object has been moved away or appears in the scene again. Here, we use an approach similar to the one proposed by Xiao et al. [24] for person and object verification. The difference is that we do not need to train such a complex CNN structure specifically for person re-identification.

To reduce unnecessary computation, only the objects which have been moved and the persons who are involved in the events are verified. When any object is being moved, the region indicated by its bounding box shows up in the short-term foreground image F^S_t. Therefore, an object whose bounding box contains foreground pixels exceeding a threshold fraction of its area is counted as a possibly moving object. This region is then cropped out from the RGB frame and input to the FrRCNN for object detection.
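A one-function sketch of this moving-object test; the threshold value is an assumption.

def is_possibly_moving(bbox, fg_short, thresh=0.3):
    """bbox = (x, y, w, h); fg_short is the binary mask F^S_t. An object
    counts as possibly moving if the foreground covers more than a
    threshold fraction of its bounding box."""
    x, y, w, h = bbox
    roi = fg_short[y:y + h, x:x + w]
    return roi.size > 0 and (roi > 0).mean() > thresh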

If the category of the newly detected object changes, or its bounding box varies too much (over a threshold), the object is denoted as a moved/missing object O_m, and the person who is now closest to it is labeled as the candidate P_c for this event. Next, P_c needs to be verified as to whether it is the owner of O_m. If O_m is registered in L_bg, P_c is labeled as a suspect, because background objects belong to the surveillance scene. If O_m is from L_obj, a person re-identification process is carried out as follows.

Owner verification

The pose and view angle of a person influence the verification results crucially. For example, two pictures captured from the front and the rear of the same man are easily identified as two different persons. To enhance the re-identification accuracy, 20 samples are taken of each person as follows. When a person is labeled as the owner P_own or as a candidate P_c, 20 frames are picked out at uniform time intervals from his first appearance until the present, and one sample is cropped out of each of them. In this way, the appearance of this person is captured as diversely as possible. Each sample of P_own is compared with each one of P_c using the CNN framework [24]. A confusion matrix M is thereby obtained to express the similarity of these two sets of samples, where M_ij denotes the similarity between the i-th sample of P_own and the j-th sample of P_c. The similarity score is calculated as the average over all entries of M. If it is greater than a threshold, P_c and P_own are considered the same person; O_m, P_own, and P_c are then removed from their respective lists, because it is not necessary to pay attention to O_m any more. Otherwise, P_c keeps the label of candidate for further watch.
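A minimal sketch of this verification score. Aggregating the 20x20 similarity matrix M as a mean is our reading of the original formula and thus an assumption; similarity stands in for the re-id CNN of [24].

import numpy as np

def same_person(samples_own, samples_cand, similarity, thresh=0.5):
    """samples_*: 20 image crops each; similarity(a, b) -> scalar."""
    M = np.array([[similarity(a, b) for b in samples_cand]
                  for a in samples_own])
    score = M.mean()  # S = (1/400) * sum_ij M_ij (assumed aggregation)
    return score > thresh, score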

In the later video frames, each newly detected object O_new is compared with each O_m: O_new and O_m are cropped out from their corresponding RGB images and put into the CNN framework [24] to verify whether O_new is O_m. If yes, P_c is recognized as having moved the object to a new place. If P_c disappears from the surveillance scene or reaches a predefined region, such as an exit, while O_m is never detected again, the event is recognized as stealing and P_c as the thief.
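A minimal sketch of the resulting event decision; the object records and helper predicates (verify_same for the CNN-based object verification of [24], left_scene_or_exit from the tracker) are assumed stand-ins.

def resolve_missing_objects(new_objects, missing, verify_same,
                            left_scene_or_exit):
    events = []
    for o_new in new_objects:
        for o_m in list(missing):
            if verify_same(o_new, o_m):  # O_new turns out to be O_m
                missing.remove(o_m)
                events.append((o_m, "moved to a new place"))
    for o_m in missing:  # never re-detected
        if left_scene_or_exit(o_m.candidate):  # P_c has left with it
            events.append((o_m, "stolen"))  # P_c is reported as the thief
    return events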

Method     [25]   [9]    [26]   [10]   ours
Precision  0.75   0.95   0.85   1.0    1.0
Recall     1.0    0.8    0.8    1.0    1.0
TABLE I: Comparison of different methods on the PETS2006 video dataset.
Event     | Abandoning | Moved by owner | Moved by un-owner | Theft
Scene     | GT TP FP   | GT TP FP       | GT TP FP          | GT TP FP
Lab1-v1   |  1  1  0   |  1  1  0       |  1  1  0          |  1  1  0
Lab1-v2   |  2  2  0   |  1  1  0       |  0  0  0          |  1  1  0
Library   |  1  1  2   |  0  0  1       |  1  1  1          |  1  1  0
Lab2-v1   |  1  1  0   |  1  1  0       |  0  0  0          |  0  0  0
Lab2-v2   |  1  1  2   |  2  1  0       |  0  0  1          |  0  0  1
Lab2-v3   |  2  2  0   |  2  1  1       |  0  0  1          |  2  1  0
Lab2-v4   |  2  1  2   |  2  0  1       |  2  0  0          |  0  0  0
Hall-v1   |  0  0  0   |  0  0  0       |  1  1  0          |  1  1  0
Hall-v2   |  1  1  1   |  0  0  0       |  0  0  0          |  1  0  0
Hall-v3   |  2  2  0   |  1  0  0       |  0  0  0          |  0  0  0
Hall-v4   |  1  1  1   |  0  0  1       |  1  0  0          |  1  0  0
Sum       | 14 13  8   | 10  5  4       |  6  3  3          |  8  5  1
Precision |   61.9%    |   55.6%        |   50%             |   83.3%
Recall    |   92.8%    |   50%          |   50%             |   62.5%
TABLE II: Experimental results on the video dataset SERD.
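As a reading aid for Tab. II, the scores follow the usual definitions, written out here for the abandonment column:

    Precision = TP / (TP + FP) = 13 / (13 + 8) = 61.9%
    Recall    = TP / GT        = 13 / 14       = 92.8%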

II-E CNN Training

Our neural network structure is shown in Fig. 2. First, we use the ImageNet [20] pretrained CNN model for object detection. Then the network branch for object detection is frozen, and the feature network and the branch for re-identification are trained jointly on the person re-id benchmark dataset CUHK [24]. The whole networks for object detection and person re-id are trained in alternating and iterative steps: each time, the feature network and one task network are trained together while the other task network is frozen. Finally, each of the task networks is fine-tuned with examples cropped from the datasets used in the respective experiments.
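A minimal PyTorch-style sketch of this alternating scheme, assuming a shared feature network with two task heads; the modules and the loss are placeholders, not the paper's actual architecture.

import torch
import torch.nn as nn

feature_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
det_head = nn.Conv2d(16, 8, 1)   # stands in for the FrRCNN branch
reid_head = nn.Conv2d(16, 8, 1)  # stands in for the re-id branch

def set_frozen(module, frozen):
    for p in module.parameters():
        p.requires_grad_(not frozen)

def alternate_step(batch, task):
    # Train the feature net jointly with one head; freeze the other head.
    set_frozen(det_head, task != "det")
    set_frozen(reid_head, task != "reid")
    head = det_head if task == "det" else reid_head
    loss = head(feature_net(batch)).pow(2).mean()  # placeholder loss
    loss.backward()
    return loss.item()

# e.g. alternate_step(torch.randn(1, 3, 32, 32), "det")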

III Experiments

In this section, the performance of the proposed framework is evaluated for security event recognition. In addition, the experimental results on abandoned luggage detection are compared with state-of-the-art methods.

The experiments are carried out on two datasets to evaluate the performance of our framework for detecting security events: abandoned object detection and the recognition of objects being moved by the owner or a non-owner, or stolen. PETS2006 is a benchmark dataset and consists of seven sequences of various scenarios. Except for the third one, each sequence includes an abandonment event. The Security Event Recognition video dataset (SERD) is collected from four different public scenes on a campus. It was constructed especially for the evaluation of the proposed framework for security event recognition. It contains eleven video sequences, each of which contains different security scenarios, such as object abandonment, theft, etc.

Our method is evaluated for detecting abandoned objects on the benchmark dataset PETS2006. The experimental results are compared with those of the state-of-the-art methods [25, 9, 26, 10]. From the comparison in Tab. I we can see that our results equal those of [10] and outperform the others. Furthermore, our method labels the owner of each abandoned object correctly.

We further validate the proposed method on our SERD dataset. Fig. 3 illustrates the whole process of a series of events around an abandoned object. Person A comes into the lab, puts his bag on the table, and then sits there for a long time. He is labeled as the owner of the bag, and the bag is not recognized as an abandoned object (Fig. 3(a) and (b)). Person B puts his bag on the oscilloscope and goes out of the camera view. He is labeled as the owner of his bag, and the bag is recognized as an abandoned object when he goes out of the scene. Subsequently, A takes his own bag away, which is recognized as allowable. A then exchanges his bag with the bag of B, which causes an alarm. Meanwhile, A is still labeled as the owner of his own bag. When he is detected going out of the room, an alarm of stealing is triggered. Each event is correctly recognized and no false alarm is triggered by our method in this video.

A summary of the experimental results is listed in Tab. II. We can see that the proposed work has similar performance on detecting the events "moved by owner" and "moved by un-owner". This indicates that our method does not work very well at owner labeling. After analyzing the videos and the experimental results, we find that the algorithm for object detection cannot provide satisfactory performance: sometimes it detects objects which do not exist and cannot detect the objects of interest precisely. A better object detection method would boost our framework's performance. For abandoned object detection and theft recognition, our framework provides good results.

IV Conclusion

In this work, we propose a novel framework for security event recognition in surveillance videos, which includes abandoned object detection and special event analysis. It is a significant extension of state-of-the-art works that only focus on abandoned luggage detection. Different from previous works, our approach uses an object detector, which benefits from the power of deep learning in visual tasks, instead of foreground/background extraction for static item detection. The proposed approach outperforms the state-of-the-art methods for abandoned luggage detection. The effectiveness of our approach for more complex security event recognition has also been verified in various scenarios. Furthermore, a new video event dataset, SERD, has been constructed especially for the task of security event detection. SERD is collected from four different public scenes on a university campus and contains eleven video sequences, each of which contains different security scenarios, such as object abandonment and theft.

References

  • [1] R. T. Collins, A. J. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, O. Hasegawa, P. Burt et al., “A system for video surveillance and monitoring,” Technical Report CMU-RI-TR-00-12, Robotics Institute, Carnegie Mellon University, Tech. Rep., 2000.
  • [2] W. Liao, B. Rosenhahn, and M. Y. Yang, “Gaussian process for activity modeling and anomaly detection,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Geospatial Week, 2015, pp. 467–474.
  • [3] X. Wang, X. Ma, and W. E. L. Grimson, “Unsupervised activity perception in crowded and complicated scenes using hierarchical bayesian models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 3, pp. 539–555, 2009.
  • [4] W. Liao, B. Rosenhahn, and M. Y. Yang, “Video event recognition by combining HDP and gaussian process,” in International Conference on Computer Vision Workshop, 2015, pp. 166–174.
  • [5] E. L. Piza, “The crime prevention effect of cctv in public places: a propensity score analysis,” Journal of Crime and Justice, pp. 1–17, 2016.
  • [6] Q. Fan and S. Pankanti, “Modeling of temporarily static objects for robust abandoned object detection in urban surveillance,” in Advanced Video and Signal-Based Surveillance, 2011, pp. 36–41.
  • [7] H.-H. Liao, J.-Y. Chang, and L.-G. Chen, “A localized approach to abandoned luggage detection with foreground-mask sampling,” in Advanced Video and Signal Based Surveillance, 2008, pp. 132–139.
  • [8] R. H. Evangelio, T. Senst, and T. Sikora, “Detection of static objects for the task of video surveillance,” in IEEE Winter Conference on Applications of Computer Vision, 2011, pp. 534–540.
  • [9] Q. Fan, P. Gabbur, and S. Pankanti, “Relative attributes for large-scale abandoned object detection,” in International Conference on Computer Vision (ICCV), 2013, pp. 2736–2743.
  • [10] K. Lin, S.-C. Chen, C.-S. Chen, D.-T. Lin, and Y.-P. Hung, “Abandoned object detection via temporal consistency modeling and back-tracing verification for visual surveillance,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 7, pp. 1359–1370, 2015.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
  • [12] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [13] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
  • [14] L. Liu, W. Lin, L. Wu, Y. Yu, and M. Y. Yang, “Unsupervised deep domain adaptation for pedestrian detection,” in European Conference on Computer Vision Workshop on Crowd Understanding, 2016, pp. 676–691.
  • [15] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate object detection and segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142–158, 2016.
  • [16] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
  • [17] S. K. Mustikovela, M. Y. Yang, and C. Rother, “Can ground truth label propagation from video help semantic segmentation?” in European Conference on Computer Vision Workshop on Video Segmentation, 2016, pp. 804–820.
  • [18] A. Toshev and C. Szegedy, “Deeppose: Human pose estimation via deep neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1653–1660.
  • [19] A. Krull, E. Brachmann, F. Michel, M. Y. Yang, S. Gumhold, and C. Rother, “Learning analysis-by-synthesis for 6d pose estimation in RGB-D images,” in International Conference on Computer Vision, 2015, pp. 954–962.
  • [20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
  • [21] D. M. Russell and S. Gong, “Minimum cuts of a time-varying background,” in British Machine Vision Conference (BMVC), 2006, pp. 809–818.
  • [22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
  • [23] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and realtime tracking,” arXiv preprint arXiv:1602.00763, 2016.
  • [24] T. Xiao, H. Li, W. Ouyang, and X. Wang, “Learning deep feature representations with domain guided dropout for person re-identification,” arXiv preprint arXiv:1604.07528, 2016.
  • [25] L. Li, R. Luo, R. Ma, W. Huang, and K. Leman, “Evaluation of an ivs system for abandoned object detection on pets 2006 datasets,” in Proc. IEEE Workshop PETS, 2006, pp. 91–98.
  • [26] Y. Tian, R. S. Feris, H. Liu, A. Hampapur, and M.-T. Sun, “Robust detection of abandoned and removed objects in complex surveillance videos,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 41, no. 5, pp. 565–576, 2011.