1 Introduction
Body-worn video (BWV) cameras are becoming increasingly popular tools for police departments [20]. They provide a record of police-public interactions and have been shown to increase accountability among officers [13]. Furthermore, BWV has recently become a topic of widespread interest among the general public, especially given the recent controversies regarding police-public relations and policy. To produce this video, police officers wear specially designed cameras on their chests to record their interactions with the public. However, large-scale BWV deployment produces terabytes of data per week, far too much for complete review by humans. This necessitates the development of effective computational methods to identify salient changes in video between various states — such as in or out of a building, interacting or not interacting with the public, and in and out of a car.
In early architectures in the literature, changes in videos are detected using a variety of statistical and image processing techniques based on computing differences in image feature representations [4]. Other methods extend these basic spatiotemporal models in interesting ways to produce video-specific changepoint detection algorithms. For example, in [24], the authors introduce a Bayesian method, applied to robotics, that segments videos containing specific scenes into clusters in an online, unsupervised way, accompanied by confidence probabilities. In [34], the authors extend a statistical changepoint detection algorithm to video in order to track 3D objects. In the recent deep learning literature, the authors in [12] propose a convolutional network with a sliding frame window input capable of creating spatiotemporal features to classify videos.
The changepoint detection literature informs part of our approach as well. Classic statistical methods range from simple sum- and mean-based thresholding algorithms for single changepoint detection in offline data [5] to nonparametric tests for changes in distributions [40]. Other statistical methods use Bayesian priors to incorporate time-dependent information into the probability of a changepoint occurring [2].
In this paper, we present a novel two-stage framework (summarized in Figure 1) for video changepoint detection which draws on methods from machine learning, computer vision, and changepoint detection. We begin with a video data set with ground truths — the times at which changes between the two predefined states occur. These states are mutually exclusive and collectively exhaustive, and we refer to them as the positive and negative states. Then, we selectively extract frames from each video to create a time series of frames. In the first stage, we utilize feature extraction and image representation methods to generate a compact representation of each frame, label each representation via classifiers — a support vector machine (SVM) and a convolutional neural network (CNN) — and ultimately construct a time series of scores. These scores measure the confidence of a classifier that a video frame corresponds to the positive state. In addition, by setting a threshold, we are able to convert these scores into binary labels (0, 1) corresponding to the positive and negative states. Finally, changepoint detection algorithms analyze the scores or labels to identify salient changes between the two states of interest, thereby locating the times at which changepoints occur. This modular format enables generalization to a variety of changepoint classes.
The paper is organized as follows. Section 2 presents the construction of video representations and classification approaches, turning to the computer vision literature for feature detection methods, SVMs, and CNNs. Changepoint detection methods are presented in Section 3, utilizing mean squared-error minimization, forecasting methods, hidden Markov models, and maximum likelihood estimation. Finally, we perform an experiment on a body-worn video data set provided by the Los Angeles Police Department (LAPD). We parameterize our framework to detect changes from in-car scenes to out-of-car scenes, and we achieve promising results, which are presented in Sections 4.3 and 4.4.
2 Video Preprocessing and Frame Classification
A video can be regarded as a sequence of frames. In a preprocessing step, we sample frames from videos and save them as JPEG images. The goal is to classify these frames as one of the two states — the states between which we wish to identify changepoints. We frame this problem as one of scene classification. Scene classification has been extensively studied by the computer vision community; consequently, we use methods from computer vision to classify scenes. Current state-of-the-art approaches use either keypoint detection and the Bag-of-Visual-Words (BoVW) technique with a classifier (such as an SVM) capable of comparing the histograms it produces [39], or a convolutional neural network (CNN) on raw pixel values [15]. To extend the SVM, we propose a novel technique for soft histogramming, which improves classification accuracy. We also modify the architecture of a pretrained CNN to create a CNN capable of two-state video frame classification. The details are described in the following sections.
2.1 Keypoint Detection and Support Vector Machine
Intuitively, keypoints are distinctive image features. After a keypoint is located by a keypoint detector, image features in the keypoint’s neighborhood can be described by a keypoint descriptor. We use the scale-invariant feature transform (SIFT) [18] for keypoint detection and description, because SIFT features are invariant to image scale and rotation and partially invariant to changes in illumination. The major steps of constructing SIFT can be summarized as:

1. Keypoint detection and localization
   - Apply Gaussian filters with different standard deviations to the input frame
   - In the differences of Gaussians, search for local extrema in scale and space; these extrema are potential keypoints
2. Orientation assignment
   - Assign one or more orientations to each keypoint based on the directions of pixel gradients in the keypoint’s neighborhood
3. Keypoint description
   - Compute gradients of pixels relative to the keypoint orientation in the 16-by-16 neighborhood around each keypoint
   - Divide this 16-by-16 patch into blocks of 4-by-4 in size and create an 8-bin orientation histogram for each block
   - Concatenate the histograms to get a 128-dimensional descriptor for each keypoint

Using this approach, each frame can be represented as a SIFT matrix, with each row being a 128-dimensional SIFT descriptor. However, since the number of SIFT descriptors extracted varies among frames and an SVM requires inputs of the same dimension, we use BoVW as an additional step to construct image representations.
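As a concrete illustration of the descriptor-assembly steps above, here is a simplified numpy sketch. It omits SIFT's Gaussian weighting, rotation alignment, and final descriptor normalization, and `sift_like_descriptor` is a hypothetical helper, not the implementation used in our experiments.

```python
import numpy as np

def sift_like_descriptor(patch):
    """Assemble a 128-D descriptor from a 16x16 grayscale patch: 4x4 blocks,
    each summarized by an 8-bin orientation histogram (simplified sketch)."""
    assert patch.shape == (16, 16)
    # Pixel gradients via finite differences.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    descriptor = []
    # Divide the 16x16 patch into sixteen 4x4 blocks.
    for by in range(0, 16, 4):
        for bx in range(0, 16, 4):
            m = mag[by:by + 4, bx:bx + 4].ravel()
            a = ang[by:by + 4, bx:bx + 4].ravel()
            # 8-bin orientation histogram, weighted by gradient magnitude.
            hist, _ = np.histogram(a, bins=8, range=(0, 2 * np.pi), weights=m)
            descriptor.append(hist)
    return np.concatenate(descriptor)  # 16 blocks x 8 bins = 128 dims

desc = sift_like_descriptor(np.random.rand(16, 16))
```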
Bag-of-Visual-Words and Vector Quantization
After extracting SIFT features from all video frames of interest, we took 20% of the frames of each of the two states in the training set and applied k-means clustering separately to their feature vectors, producing a set of clusters for each state. After the centroids of the clusters are computed, we assign each feature vector from frames in the testing set and the remaining part of the training set to its closest centroids based on Euclidean distance. This general technique is called BoVW, and the number of clusters is often referred to as the size of the vocabulary. BoVW is an example of vector quantization (VQ) in computer vision. After VQ, a feature vector is represented by the indices of the clusters to which it is assigned [14]. In general, there are two distinct forms of VQ: hard VQ and soft VQ. In hard VQ, a feature vector is assigned to exactly one cluster, which corresponds to the closest centroid, whereas in soft VQ, a feature vector can be assigned to more than one cluster [36]. In our work, a feature vector’s membership in each cluster depends on the feature vector’s distance to the corresponding centroid. We propose the following technique to perform soft VQ.
Let $C = \{c_1, \dots, c_K\}$ be the set of centroids computed in the clustering stage, where $K$ is the total number of centroids, and let $F$ denote the set of all SIFT feature vectors extracted from a frame. The goal is to construct a histogram $h \in \mathbb{R}^K$, where $h_k$ measures the effective number of feature vectors assigned to cluster $k$, for $k = 1, \dots, K$. For each feature vector $f \in F$, we compute its Euclidean distance $d_k(f) = \|f - c_k\|_2$ to each of the centroids. Then, the relative distance between centroid $c_k$ and feature vector $f$ can be defined as
$$r_k(f) = \frac{d_k(f) - \min_j d_j(f)}{\max_j d_j(f) - \min_j d_j(f)},$$
where the centroid closest to $f$ has relative distance 0 whereas the farthest centroid has relative distance 1. To control the contribution of $f$ to clusters whose corresponding centroids are not the closest to $f$, a parameter $\beta > 0$ is introduced. We then define the exponentially decayed relative distance as $w_k(f) = e^{-\beta r_k(f)}$, so that we essentially recover hard VQ as $\beta$ approaches positive infinity. The contribution of $f$ to $h$ is then normalized to sum to 1, and so every feature vector has the same weight. This procedure is summarized in Algorithm 1 below.
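The soft-VQ procedure can be sketched in numpy as follows; the function name and the value of `beta` are illustrative.

```python
import numpy as np

def soft_vq_histogram(features, centroids, beta=5.0):
    """Soft vector quantization: each feature vector contributes a
    normalized, exponentially decayed weight to every cluster.
    Large beta approaches hard VQ."""
    h = np.zeros(len(centroids))
    for f in features:
        d = np.linalg.norm(centroids - f, axis=1)        # distances to centroids
        r = (d - d.min()) / (d.max() - d.min() + 1e-12)  # relative distances in [0, 1]
        w = np.exp(-beta * r)                            # exponential decay
        h += w / w.sum()                                 # each feature has total weight 1
    return h

feats = np.random.rand(50, 128)   # 50 SIFT-like descriptors
cents = np.random.rand(8, 128)    # 8 centroids
hist = soft_vq_histogram(feats, cents)
```

Because each feature contributes total weight 1, the histogram entries sum to the number of feature vectors, matching the "effective count" interpretation above.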
Note that the idea of VQ is closely related to histogramming: assigning a vector to clusters essentially achieves the same effect as incrementing the counts at the corresponding histogram bins, except that in the latter case bin counts are discrete. For notational convenience, we refer to soft VQ and soft histogramming interchangeably, and the resulting vector is called the “BoVW histogram” in subsequent sections.
Note that conventional BoVW and VQ do not consider the spatial information of keypoints. In other words, the locations of objects within images are not taken into consideration. Previous literature [39] suggests that for a small visual vocabulary, including spatial information can improve classifier performance significantly, while for a large visual vocabulary, the improvements are not substantial.
Support Vector Machine with Pyramid Match Kernel
To evaluate the effect of including spatial information of keypoints, we experiment with the pyramid match kernel [16], which partitions an input image into increasingly fine spatial bins. At level $\ell$, $2^\ell$ cells are placed on each side of an image, so there are $4^\ell$ spatial bins of equal size. No partition occurs at level 0. By setting the parameter $L$, which is the maximum number of levels, we are able to control how much detailed spatial information is included. For example, if $L$ is set to 0, no spatial information of keypoints will be considered.
After VQ, we can represent a feature vector by a histogram of cluster memberships, where each histogram bin measures the similarity between the feature vector and the corresponding centroid. For each spatial bin, we create an aggregated histogram by summing the histograms corresponding to feature vectors falling in this bin. These aggregated histograms are then weighted according to the level at which the spatial bin is located. Because matches of features at finer spatial resolutions are expected to yield more information about the similarity between two images, histograms at finer grids are weighted more heavily. We follow the practice in [16] and give weights 1/4, 1/4, and 1/2 to levels 0, 1, and 2, respectively. In the final step, we concatenate these weighted aggregated histograms from all spatial bins, and an input image is represented by a vector of fixed length. This vector is later input into the SVM.
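A minimal sketch of the pyramid construction and the resulting kernel, assuming per-keypoint soft-assignment histograms and keypoint coordinates are available; the function names are illustrative, and the level weights follow the 1/4, 1/4, 1/2 scheme described above.

```python
import numpy as np

def spatial_pyramid(keypoints_xy, histograms, image_wh,
                    levels=(0, 1, 2), weights=(0.25, 0.25, 0.5)):
    """Concatenate weighted, per-cell aggregated BoVW histograms over a
    spatial pyramid. keypoints_xy: (N, 2) coordinates; histograms: (N, K)
    per-keypoint soft-assignment histograms."""
    w_img, h_img = image_wh
    parts = []
    for lvl, w in zip(levels, weights):
        n = 2 ** lvl                                   # cells per side at this level
        cx = np.minimum((keypoints_xy[:, 0] / w_img * n).astype(int), n - 1)
        cy = np.minimum((keypoints_xy[:, 1] / h_img * n).astype(int), n - 1)
        for i in range(n):
            for j in range(n):
                mask = (cx == i) & (cy == j)
                parts.append(w * histograms[mask].sum(axis=0))  # aggregate cell histogram
    return np.concatenate(parts)

def pyramid_match_kernel(u, v):
    """Histogram intersection between two pyramid vectors."""
    return np.minimum(u, v).sum()

pts = np.random.rand(10, 2) * np.array([640.0, 480.0])  # 10 keypoints in a 640x480 frame
hists = np.random.rand(10, 5)                           # vocabulary of size 5
pyr = spatial_pyramid(pts, hists, (640, 480))
```

With levels 0–2 there are $1 + 4 + 16 = 21$ cells, so the final vector has length $21K$.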
A two-class SVM works by finding the hyperplane that gives the maximum separation between training examples from the two classes. This goal can be achieved by solving an optimization problem, which maximizes the two-class separation while penalizing training examples lying on the wrong sides of the margins. We apply a kernel SVM, which first maps training examples to a higher-dimensional feature space before optimizing for the maximum separation. A kernel function takes two training examples and measures their similarity. In our project, the kernel function is the pyramid match kernel [16], which sums the intersections between two histograms created using the method described above.
2.2 Convolutional Neural Network
In the second classification approach, we use deep neural networks. Deep neural networks are machine learning algorithms that jointly learn a feature representation and a discriminative classifier over a data set [9]. Nonlinear computational nodes called neurons are stacked on top of one another in layers to form complex, richly informative sets of features that have highly discriminative characteristics. Neural networks are trained by changing weights, thresholds, and other parameters, generally through the use of an iterative optimization algorithm like stochastic gradient descent [9, 26]. An overview of the historical development of neural networks and deep learning can be found in [28].
Convolutional neural networks (sometimes referred to as “ConvNets”, “convolutional networks”, or “CNNs”) have their origins in the study of the visual cortex in primates [10]. ConvNets were popularized in [17], although earlier forerunners such as [8] contributed to their development. ConvNets extract information from input data using overlapping convolutions. Each convolution operation consists of “sliding” a feature detector over the input data, which generates an output of similar dimensionality. Each feature detector looks for one specific feature and is made up of a number of trainable weights. Features are data-dependent; for example, image features may include edges, color blobs, or simple shapes. Multiple convolutions are performed in a single convolutional layer, and the output of a convolutional layer is transformed by nonlinear activation functions. Convolutional layers are stacked and interspersed with pooling layers, which subsample their input (e.g., by taking averages over various input sections) and produce an output with lower dimension.
A convolutional layer is made up of several different feature maps, which take the form of tensors in the sense that they are (small) multidimensional arrays of numbers. The feature maps require this definition because the two-dimensional input image requires the first convolutional layer to contain two-dimensional feature maps. Because there are multiple feature maps in the first layer, the output matrices are concatenated to form output tensors—three-dimensional arrays that are convolved with the feature maps in the next convolutional layers. This structure means that convolutional networks pass tensors between their intermediate layers.
At the last pooling or convolutional layer, the produced activation tensor is generally flattened into a single vector and connected to a fully connected layer. This fully connected layer is followed by one or two more fully connected layers and an output layer. In the case of binary classification, we measure the error of the output-layer activation (or “score”) using the hinge loss function, defined as $\ell(y, s) = \max(0,\, 1 - ys)$, where $y \in \{-1, +1\}$ is the true label and $s$ is the predicted score. The combination of convolutional layers, which create high-quality, deep feature representations of images, and fully connected layers, which are excellent classifiers, makes convolutional neural networks very effective at most computer vision tasks.
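To make the convolution-and-pooling pipeline concrete, here is a toy numpy sketch of a single feature detector followed by a nonlinear activation and average pooling; it is illustrative only and unrelated to the VGG16 layers used later.

```python
import numpy as np

def conv2d(image, kernel):
    """'Slide' one feature detector (kernel) over the image: valid
    cross-correlation, as performed inside a convolutional layer."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def avg_pool(x, size=2):
    """Subsample by averaging non-overlapping size x size sections."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]
    return x.reshape(H // size, size, W // size, size).mean(axis=(1, 3))

img = np.random.rand(8, 8)
edge = np.array([[1., -1.]])             # a tiny horizontal-edge detector
act = np.maximum(conv2d(img, edge), 0)   # nonlinear activation (ReLU)
pooled = avg_pool(act)                   # lower-dimensional output
```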
2.3 Pretrained Networks
Unfortunately, deep neural networks (especially convolutional networks) have one drawback: they take large amounts of time and computational power to train. One way to bypass this problem is to reuse a popular, well-known network configuration for which trained weights already exist. These pretrained neural networks are created by neural network researchers and consist of a known architecture (e.g., a number of convolutional layers, specific sizes of feature maps, etc.) and a file containing the value of each weight in the network.
Pretrained networks have been released for ILSVRC, a competition in which participants aim to classify images into one of 1,000 classes. It has been found that the weights and structure of these networks provide excellent starting points for classifying images into a different set of classes. For example, in [22], it is reported that simply removing the output layer of a popular pretrained ConvNet and replacing it with a new output layer trained to detect a different set of classes provides state-of-the-art accuracy on several computer vision problems.
VGG16 Architecture
The pretrained VGG16 network of [29] was conceived and publicly released in 2014. It was a top-performing model in the 2014 ILSVRC and has been used widely in the literature to achieve excellent results on image classification problems. The original training process of the VGG16 network can be found in [29]. The main reason we use the VGG16 convolutional network is its very deep architecture, which allowed it to perform so well in the 2014 ILSVRC competition.
Adapting VGG16
Although VGG16 ends with a 1,000-dimensional output layer, this layer can be removed and replaced with a layer made for a binary classification task, such as detecting whether a scene belongs to one state or the other. To facilitate this, the weights were downloaded from the authors’ website, and the network was implemented using machine learning software libraries. Because the weights of the last fully connected layer are often tuned to the task of the next (output) layer [22], we removed this layer as well as the output layer. We replaced the two layers with a single output layer that uses the hinge loss function. This new layer, once its weights are trained, produces a scalar score for each frame that conveys the positive/negative state label for the frame. The use of a hinge loss function is reported in [32] to produce excellent results on different classification problems. In addition to using hinge loss, we regularize the weights of the output layer using elastic net regularization, which adds a penalty term to the loss function $J(w)$ so that the function minimized becomes $J(w) + \lambda\left(\alpha \|w\|_1 + (1 - \alpha)\|w\|_2^2\right)$, where $w$ is the set of weights, $\|\cdot\|_p$ is the $\ell_p$ norm, $\lambda$ is a penalty importance parameter, and $\alpha \in [0, 1]$.
To train this network, we first froze the weights of the non-modified layers so that they would not be changed. Training then proceeded using mini-batch stochastic gradient descent. Detailed information on training and results is given in Section 4.
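A minimal numpy sketch of this training setup: only a linear output layer is trained on (frozen) features, using a hinge-loss subgradient plus an elastic-net penalty. The hyperparameters and toy data are illustrative; the actual model trains on CNN features with mini-batch SGD.

```python
import numpy as np

def train_output_layer(feats, labels, lam=1e-3, alpha=0.5, lr=0.1, epochs=200):
    """Train a single linear output layer on frozen features with hinge
    loss and an elastic-net penalty (full-batch subgradient descent)."""
    n, d = feats.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        scores = feats @ w + b
        active = labels * scores < 1             # examples violating the margin
        # Hinge subgradient plus elastic-net term lam*(alpha*|w|_1 + (1-alpha)*|w|_2^2).
        gw = -(labels[active, None] * feats[active]).sum(0) / n \
             + lam * (alpha * np.sign(w) + 2 * (1 - alpha) * w)
        gb = -labels[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy separable data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 4)), rng.normal(2, 0.5, (20, 4))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_output_layer(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```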
3 Changepoint Detection
To recap our framework, we sample every $n$th frame of a video and apply our classifier to each frame to distinguish between our two states. This gives us a series of confidence scores. We then seek to find changepoints in this series — points at which the frames switch between our two states of interest. Due to the modular nature of our framework, we were able to explore several approaches to the problem of changepoint detection.
3.1 Changepoint Methods Overview
Given a time series $x_1, x_2, \dots, x_T$, we define a changepoint as a place in the series where the underlying distribution of the $x_t$ changes. That is, in the case of one changepoint at time $\tau$:
$$x_t \sim \begin{cases} F_1, & t \le \tau, \\ F_2, & t > \tau, \end{cases}$$
for some distributions $F_1 \neq F_2$. In the context of this problem, the distributions are over all frames representing our two states of interest. There may be zero, one, or multiple changepoints in a given series of scores.
In this section, we discuss a variety of approaches to changepoint detection. They are:

- Mean squared-error minimization, for which we derive a distribution for its central statistic
- Forecasting methods, which can be easily adapted to online changepoint detection
- Hidden Markov models and maximum likelihood estimation, which provide state labels for each frame
3.2 Mean Squared-Error Minimization (MSE)
We will first outline how this method (inspired by [33]) works for sequences with one changepoint, then show how we extend it to address the possibility of multiple changepoints.
In a single-changepoint sequence, we attempt to optimally describe the sequence using two constant functions. That is, we find the optimal point at which to split the sequence such that the two halves of the sequence cluster closely around their sample means. Formally, for a sequence $x_1, \dots, x_T$ and a candidate changepoint $k$, we compute
$$\mathrm{MSE}(k) = \sum_{t=1}^{k} \left(x_t - \bar{x}_{1:k}\right)^2 + \sum_{t=k+1}^{T} \left(x_t - \bar{x}_{k+1:T}\right)^2,$$
where $\bar{x}_{1:k}$ and $\bar{x}_{k+1:T}$ are the sample means of the two halves. We are finding the total squared error from the two sample means. In the univariate case, because the total sum of squares of the sequence is fixed, minimizing this expression is equivalent to maximizing the between-segment term $\frac{k(T-k)}{T}\left(\bar{x}_{1:k} - \bar{x}_{k+1:T}\right)^2$.
We now wish to create a hypothesis test to determine whether a measurement of MSE at a given $k$ is significantly small enough to represent a changepoint. Let $H_0$ be the hypothesis that the sequence does not have a changepoint. We can reject this null hypothesis, and declare $H_1$ to be true (i.e., a changepoint exists), if the p-value for the statistic at $k$ is below some significance threshold $\alpha$. The p-value calculation is given below.
Under $H_0$, we assume all $x_t$ are independently and identically distributed according to some distribution $F$. By the central limit theorem, we can take the sample means $\bar{x}_{1:k}$ and $\bar{x}_{k+1:T}$ to be normally distributed for large enough sample sizes, i.e., $\bar{x}_{1:k} \sim \mathcal{N}(\mu, \sigma^2/k)$ and $\bar{x}_{k+1:T} \sim \mathcal{N}(\mu, \sigma^2/(T-k))$, where $\mu$ and $\sigma^2$ are the mean and variance of the $x_t$, respectively. Without loss of generality, let us assume $\mu = 0$. Then the squared normal variable $\bar{x}_{1:k}^2$ is from the gamma distribution $\mathrm{Gamma}(1/2,\; 2\sigma^2/k)$, and likewise $\bar{x}_{k+1:T}^2$ is from $\mathrm{Gamma}(1/2,\; 2\sigma^2/(T-k))$. So by the scaling and summation properties of the gamma distribution, we have $k\bar{x}_{1:k}^2/\sigma^2 \sim \mathrm{Gamma}(1/2,\; 2)$ and $(T-k)\bar{x}_{k+1:T}^2/\sigma^2 \sim \mathrm{Gamma}(1/2,\; 2)$. Therefore,
$$G = \frac{k\,\bar{x}_{1:k}^2 + (T-k)\,\bar{x}_{k+1:T}^2}{\sigma^2} \sim \mathrm{Gamma}(1,\; 2).$$
Since $\sigma^2$ is a constant for each sequence, we can now calculate a p-value for the observed statistic using the cumulative distribution function (CDF) of the gamma distribution. The CDF of $\mathrm{Gamma}(a, \theta)$ is
$$F(x;\, a, \theta) = \frac{\gamma(a,\, x/\theta)}{\Gamma(a)},$$
where $\Gamma$ is the gamma function and $\gamma$ is the lower incomplete gamma function. Therefore, $p = 1 - F(G;\, 1, 2)$. Now, we can reject the null hypothesis if $p < \alpha$, for significance level $\alpha$. When testing every point in a sequence, we use a Bonferroni correction, with a new significance level of $\alpha/T$.
To find a single changepoint, we compute the statistic for all $k$ and pick the $k$ with the lowest p-value. If that p-value is below our threshold, that point is the changepoint; otherwise, we declare the sequence to be changepoint-free.
We can then recursively extend this method to find multiple changepoints in a sequence, if they exist. We first find a single changepoint — as described above. If that changepoint is deemed significant, we recursively test the intervals on each side of the changepoint for another changepoint. When an interval is deemed to not have a significant changepoint, the algorithm stops.
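The recursive search can be sketched as follows. The p-value here assumes a Gamma(1, 2) null for the standardized mean statistic, standing in for the derivation above, and the significance level and minimum segment length are illustrative.

```python
import numpy as np
from scipy import stats

def best_split(x):
    """Split index k minimizing the two-segment squared error MSE(k)."""
    T = len(x)
    best_k, best_mse = None, np.inf
    for k in range(2, T - 1):
        mse = ((x[:k] - x[:k].mean()) ** 2).sum() + ((x[k:] - x[k:].mean()) ** 2).sum()
        if mse < best_mse:
            best_k, best_mse = k, mse
    return best_k

def find_changepoints(x, alpha=0.01, min_len=8):
    """Recursive MSE-minimization detection (sketch). The Gamma(1, 2)
    null on the standardized statistic is an assumption."""
    T = len(x)
    if T < 2 * min_len:
        return []
    k = best_split(x)
    sigma2 = x.var() + 1e-12
    g = (k * x[:k].mean() ** 2 + (T - k) * x[k:].mean() ** 2) / sigma2
    p = stats.gamma.sf(g, a=1, scale=2)   # 1 - CDF of Gamma(1, 2)
    if p > alpha / T:                     # Bonferroni-corrected threshold
        return []
    return (find_changepoints(x[:k], alpha, min_len) + [k]
            + [k + c for c in find_changepoints(x[k:], alpha, min_len)])

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.3, 60), rng.normal(3, 0.3, 60)])
cps = find_changepoints(x)
```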
3.3 Forecasting Methods
Forecasting methods allow us to fit a model to a set of data and then predict future observations using this model. To take advantage of the power of forecasting in the changepoint detection setting, we develop what we will call the “future window technique” (inspired by [31]) and combine it with univariate and multivariate modeling methods to detect changepoints in the time series of frames.
To employ the future window technique, we establish an initial model (or “baseline model”) based on a set number of observations at the beginning of a time series — assuming that a change will not occur within the first few observations of the time series. To find potential changes, we use the baseline model to predict the next observation in the series and compare this prediction against a set number of future observations — call this the “future window.” Comparing this prediction to multiple observations in the future allows us to see whether the series deviates from the established model for a significant amount of time, which reduces instances of false positives created by outliers. The number of observations in the window can be changed depending on the user’s desire to minimize either false positives or false negatives. If the differences between the prediction and each observed value in the window are all greater than some threshold — whose determination differs from method to method — then we call the value at the beginning of this window of observations a changepoint. If a changepoint is established, we re-estimate the baseline model using the observations in the future window [31].
The process outlined above repeats for every point in the time series, except for the last few points, where it is impossible to take a full future window into account. This methodology of estimating future observations based on a current model lines up with the framing of the changepoint problem, which assumes that there is a shift in the model after a changepoint, and it enables the handling of cases where there are multiple changepoints and cases where there are none. Furthermore, with minor modifications (which are not discussed in this paper), the future window technique could handle situations wherein the user does not have the entire data set all at once but, instead, receives pieces of data over time.
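A sketch of the future window technique with the simplest baseline model (the mean of the baseline observations, one of the univariate options below); the window sizes and threshold are illustrative.

```python
import numpy as np

def future_window_detect(series, baseline_len=10, window=5, threshold=1.0):
    """Flag a changepoint when every value in the future window deviates
    from the baseline prediction by more than the threshold, then
    re-estimate the baseline from the window."""
    changepoints = []
    model_mean = np.mean(series[:baseline_len])   # initial baseline model
    t = baseline_len
    while t + window <= len(series):
        future = series[t:t + window]
        if np.all(np.abs(future - model_mean) > threshold):
            changepoints.append(t)                # start of the window
            model_mean = np.mean(future)          # re-estimate the baseline
            t += window
        else:
            t += 1
    return changepoints

scores = np.concatenate([np.zeros(30), np.ones(30) * 4])
cps = future_window_detect(scores, threshold=1.0)
```

Requiring all window values to deviate is what suppresses single-frame outliers.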
3.3.1 Univariate Forecasting Methods
For the following univariate methods, we assume that, between changepoints, frames close to each other are temporally related and, thus, the SVM or CNN output for the frames is stationary [30]. Furthermore, we utilize the future window technique in conjunction with each of these univariate models; all of the values in the future window are used to re-estimate the model when a changepoint is found.
We first utilize a one-lag autoregressive model, which accounts for the correlation between values in a time series by predicting the next value in the series on the basis of the previous observation. Our threshold for the future window technique is the standard deviation of the entire time series [35, 7]. Next, we combine the future window technique with a mean model, which computes the mean for a set number of observations and compares the values in the future window against this mean. We again use the standard deviation of the time series as the threshold for the future window technique [21].
Finally, we develop what we will call the “sign-change filter.” Between each pair of potential changepoints identified by a univariate algorithm, this filter computes the average of the CNN or SVM scores and then finds the sign of each average. If the sign of the average does not change at a potential changepoint, we eliminate that changepoint from the final output. This filter significantly increases the above methods’ precision.
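The sign-change filter can be sketched as follows (function name illustrative):

```python
import numpy as np

def sign_change_filter(scores, changepoints):
    """Keep only changepoints where the mean classifier score changes
    sign between the adjacent segments."""
    bounds = [0] + list(changepoints) + [len(scores)]
    seg_signs = [np.sign(np.mean(scores[a:b]))
                 for a, b in zip(bounds[:-1], bounds[1:])]
    return [cp for cp, s0, s1 in zip(changepoints, seg_signs[:-1], seg_signs[1:])
            if s0 != s1]

scores = np.concatenate([-np.ones(20), np.ones(20)])
kept = sign_change_filter(scores, [10, 20])   # only the true change at 20 survives
```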
3.3.2 Multivariate Forecasting Methods
The BoVW histograms provide us with a succinct representation of a frame by counting the number of keypoints (in a frame) associated with each visual word. We can apply methods directly to this time series, which is an unsupervised way of approaching the changepoint detection problem. We utilize histogram comparison methods in conjunction with the future window technique to accomplish this goal. We apply this methodology to the raw BoVW histograms and to condensed representations.
We produce condensed representations by employing an agglomerative clustering algorithm to group BoVW centroids that are in close proximity to each other [11]. For our specific application, we apply the algorithm to the first one hundred centroids, which represent features corresponding to the negative state, and then separately to the second one hundred, which represent features corresponding to the positive state. After constructing the cluster tree, we choose the clusters of visual words such that we simplify the histograms without significantly reducing their informational content, and we achieve this by choosing an inconsistency coefficient cutoff. For each histogram, we use our chosen clusters to aggregate the keypoints into new “bins” [19].
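A sketch of the condensing step using SciPy's hierarchical clustering with an inconsistency-coefficient cutoff; the cutoff value, linkage method, and function names are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def condense_histograms(centroids, histograms, cutoff=1.0):
    """Group nearby BoVW centroids agglomeratively and merge the
    corresponding histogram bins."""
    Z = linkage(centroids, method='average')                  # cluster tree over centroids
    groups = fcluster(Z, t=cutoff, criterion='inconsistent')  # group label per centroid
    n_groups = groups.max()
    condensed = np.zeros((histograms.shape[0], n_groups))
    for bin_idx, g in enumerate(groups):
        condensed[:, g - 1] += histograms[:, bin_idx]         # merge bins within a group
    return condensed

cents = np.random.rand(20, 8)     # 20 centroids in an 8-D feature space
hists = np.random.rand(100, 20)   # 100 frames, 20 original bins
small = condense_histograms(cents, hists)
```

Merging bins preserves each histogram's total mass while reducing its dimensionality.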
To handle these multivariate representations, we utilize the chi-squared goodness-of-fit test and the match distance in conjunction with the future window technique. For each of these histogram comparison methods, we set the first observation in the series as the “baseline model” and, when we find a changepoint, we set the new “baseline model” to the histogram at the beginning of the future window.
The chi-squared goodness-of-fit test computes the squared differences between the bins of two histograms (call one the “observed” and one the “expected”) and, for each bin, divides the squared difference by the number of elements in the “expected” histogram’s bin, as demonstrated below:
$$\chi^2 = \sum_{i=1}^{B} \frac{(O_i - E_i)^2}{E_i},$$
where $O_i$ is the number of elements in the $i$th bin of the “observed” histogram, $E_i$ is the number of elements in the $i$th bin of the “expected” histogram, $B$ is the number of bins, and $B - 1$ is the degrees of freedom. A p-value is computed for the resulting chi-squared value and compared to an alpha level, with the null hypothesis stating that the two histograms are similar and rejection of the null hypothesis indicating that they are not. The expected histogram is the baseline histogram, and the observed histograms are the histograms in the future window. The threshold for the future window technique is the alpha level for the test; if the p-values associated with the chi-squared values of the histograms in the future window are all less than alpha, we declare a changepoint at the beginning of the window [38, 25].
The match distance finds the cumulative sum of each of the two histograms, finds the absolute difference between the two sequences of partial sums, and then sums the resulting sequence. This can be summarized by the equation
$$d_{\mathrm{match}}(H, K) = \sum_{i=1}^{B} \left| \hat{h}_i - \hat{k}_i \right|,$$
where $B$ denotes the number of bins and $\hat{h}_i$ is the cumulative sum of the elements of $H$ up to and including bin $i$ [25]. We find the match distance between the baseline histogram and each of the histograms in the future window. To set our threshold for the future window technique, for each feature/bin, we find the mean of the differences between successive observations in the series; we then sum these means and multiply the result by a constant. Both of these histogram methods were applied to the raw histograms and the condensed histograms.
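Both comparison measures are short computations; a numpy sketch with a small worked example:

```python
import numpy as np

def chi2_stat(obs, exp):
    """Chi-squared goodness-of-fit statistic between two histograms."""
    return ((obs - exp) ** 2 / exp).sum()

def match_distance(h, k):
    """Match distance: L1 distance between the histograms' cumulative sums."""
    return np.abs(np.cumsum(h) - np.cumsum(k)).sum()

h = np.array([3., 1., 0., 2.])
k = np.array([1., 2., 2., 1.])
chi2 = chi2_stat(h, k)       # 4/1 + 1/2 + 4/2 + 1/1 = 7.5
d = match_distance(h, k)     # |3-1| + |4-3| + |4-5| + |6-6| = 4
```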
3.4 Hidden Markov Model
In a hidden Markov model (HMM), the system being modeled is a sequence of discrete latent states, which, in our project, are the ground truths of whether the frames correspond to the positive state. This sequence is modeled using a Markov chain, in which the conditional probability of the future state depends only on the present state. Each type of state has an associated emission probability, according to which an output is assumed to be generated. While each latent state is not directly observable, the associated output is observable. Our goal was to construct the most probable sequence of latent states, given the sequence of associated classifier scores. Let $z_1, \dots, z_T$ be a sequence of latent states, where $z_t = 0$ if frame $t$ corresponds to the negative state and $z_t = 1$ otherwise. Let $s_1, \dots, s_T$ be the associated sequence of classifier scores. The initial distribution is given by $\pi$, so that $\pi_i = P(z_1 = i)$. The transition matrix of latent states is denoted $A$, where $A_{ij} = P(z_{t+1} = j \mid z_t = i)$ and $\sum_j A_{ij} = 1$. We modeled the conditional distributions of the observed variables using Gaussian distributions:
$$P(s_t \mid z_t = i) = \mathcal{N}(s_t;\, \mu_i, \sigma_i^2),$$
where $\{\mu_i, \sigma_i^2\}$ is the set of emission parameters. The structure of the HMM is illustrated in Figure 2.
Estimates of the parameters $\pi$, $A$, and $\{\mu_i, \sigma_i^2\}$ were computed by applying the Expectation-Maximization (EM) algorithm, specifically the Baum-Welch algorithm [3]. After this step, the most likely sequence of latent states was inferred using the Viterbi algorithm [37].
The HMM is an unsupervised method, in which labels are not required for training. In principle, one can estimate the parameters from a sequence of scores and then use these estimates to infer the most probable sequence of states for that same sequence. In this project, however, we are more interested in evaluating how well a trained HMM can generalize to a new video. Our training and testing data sets were designed in the following way.
Videos with at least one exit or entrance are split into five folds. An HMM is trained on four folds, and we apply a Savitzky-Golay filter [27] to the sequences of scores corresponding to videos in the remaining fold. The filtered sequences of scores are input into the trained HMM. We declare that a changepoint occurs if two adjacent latent variables are inferred to have different states. This process is repeated five times, with each fold serving as the testing set exactly once.
In another experiment, the goal is to estimate the precision of the HMM on all videos, including those without an exit or entrance. To test the HMM on videos without actual changepoints, we apply an HMM trained on all videos that contain at least one actual changepoint. The results are presented in Table 3.
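The decoding step (inferring the most likely state sequence) can be sketched as a log-domain Viterbi pass for a two-state Gaussian-emission HMM; the parameter values below are illustrative stand-ins for Baum-Welch estimates.

```python
import numpy as np

def viterbi_gaussian(scores, pi, A, mus, sigmas):
    """Most likely latent state sequence for an HMM with Gaussian emissions."""
    T, S = len(scores), len(pi)
    # Log emission probabilities for each (time, state).
    log_b = -0.5 * ((scores[:, None] - mus) / sigmas) ** 2 \
            - np.log(sigmas * np.sqrt(2 * np.pi))
    log_A = np.log(A)
    delta = np.log(pi) + log_b[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + log_A          # trans[i, j]: best path ending i -> j
        back[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + log_b[t]
    states = np.empty(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):              # backtrack along stored pointers
        states[t] = back[t + 1, states[t + 1]]
    return states

scores = np.array([-1.2, -0.9, -1.1, 0.8, 1.1, 0.9])
states = viterbi_gaussian(scores,
                          pi=np.array([0.5, 0.5]),
                          A=np.array([[0.9, 0.1], [0.1, 0.9]]),
                          mus=np.array([-1.0, 1.0]),
                          sigmas=np.array([0.5, 0.5]))
```

A changepoint is then declared wherever two adjacent inferred states differ.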
3.5 Maximum Likelihood Estimation
Solving problems through maximum likelihood estimation is a common technique in the machine learning literature. The goal in maximum likelihood estimation is to find the values of some parameters that are most likely given the available data. Here, we develop a maximum likelihood formulation of the changepoint detection problem in which the parameters to be found are the true state labels. Let $L = (l_1, \ldots, l_T)$ be the ground-truth labels of a series, with $l_t \in \{0, 1\}$, and let $\hat{L} = (\hat{l}_1, \ldots, \hat{l}_T)$ be the labels from a classifier with accuracy $p$. The log-likelihood of a series of labels given the data is then
$\log P(\hat{L} \mid L) = \sum_{t=1}^{T} \left[ \mathbb{1}\{\hat{l}_t = l_t\} \log p + \mathbb{1}\{\hat{l}_t \neq l_t\} \log(1 - p) \right].$
Using integer programming (IP), we maximize this quantity. The essential addition in the IP formulation is a constraint on the number of allowable changepoints; without it, the algorithm would not be robust to noise. The IP optimizes over values of $L$ as follows:
maximize   $\sum_{t=1}^{T} \left[ \mathbb{1}\{\hat{l}_t = l_t\} \log p + \mathbb{1}\{\hat{l}_t \neq l_t\} \log(1 - p) \right]$
subject to   $\sum_{t=1}^{T-1} |l_{t+1} - l_t| \leq C, \qquad l_t \in \{0, 1\} \text{ for all } t.$
The expression to maximize is the log-likelihood, the first constraint limits the number of changepoints to at most $C$, and the second constraint ensures that there are only two labels, one for each state. The results can be found in Table 2 and Table 3.
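This constrained maximization can also be solved exactly without an IP solver. The following sketch (ours, not the paper's implementation) uses dynamic programming over (frame, state, changepoints used), which is equivalent to the IP for this problem.

```python
import math

def mle_labels(obs, p, max_changes):
    """Most likely binary label sequence given noisy classifier labels.

    obs: observed 0/1 labels from a classifier with accuracy p.
    max_changes: cap on the number of changepoints (label switches).
    """
    lp, lq = math.log(p), math.log(1 - p)
    T = len(obs)
    NEG = float("-inf")
    # dp[s][c]: best log-likelihood so far, ending in state s with c changes.
    dp = [[NEG] * (max_changes + 1) for _ in range(2)]
    parent = [[[None] * (max_changes + 1) for _ in range(2)] for _ in range(T)]
    for s in (0, 1):
        dp[s][0] = lp if obs[0] == s else lq
    for t in range(1, T):
        ndp = [[NEG] * (max_changes + 1) for _ in range(2)]
        for s in (0, 1):
            emit = lp if obs[t] == s else lq
            for c in range(max_changes + 1):
                # Stay in the same state.
                if dp[s][c] > NEG and dp[s][c] + emit > ndp[s][c]:
                    ndp[s][c] = dp[s][c] + emit
                    parent[t][s][c] = (s, c)
                # Switch states, consuming one changepoint.
                if c > 0 and dp[1 - s][c - 1] > NEG and dp[1 - s][c - 1] + emit > ndp[s][c]:
                    ndp[s][c] = dp[1 - s][c - 1] + emit
                    parent[t][s][c] = (1 - s, c - 1)
        dp = ndp
    # Pick the best endpoint and backtrack.
    s, c = max(((s, c) for s in (0, 1) for c in range(max_changes + 1)),
               key=lambda sc: dp[sc[0]][sc[1]])
    labels = [0] * T
    for t in range(T - 1, -1, -1):
        labels[t] = s
        if t > 0:
            s, c = parent[t][s][c]
    return labels
```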
4 Experimental Results
We now present the results of applying our framework to a data set provided by the LAPD. We define changepoints in this data set as the places where an officer exited or entered a vehicle. Our two states of interest are inside and outside a vehicle, and being outside of a car corresponds to the positive state in our framework. These changepoints are important because policepublic interactions often occur when officers are outside of their vehicles.
4.1 Data Set Description
Our data is provided by the LAPD, from their BWV pilot program in Los Angeles' Central Division in 2014-2015. The body-worn videos were recorded using cameras with roughly a 130° field of view, a resolution of 640x480, and a fisheye lens. All videos are from the officer's point of view, as the body cameras are mounted on officers' chests.
There are 691 videos in our data set, with an average length of 9 minutes. Of these, 420 contain at least one changepoint of interest (a vehicle entrance or exit), up to a maximum of 11. Of these videos, 270 begin from the driver's side, 176 take place at night, and in 274 of them the vehicle is moving at some point during the video. In addition, some videos contain occasional camera field-of-view occlusions from the officers' hands, arms, or clothing. The overall effect is that this data set is highly varied, and it presents many of the challenges one might expect from real-world video data: unclear images, rapid camera movement, extreme luminance and contrast differences, and so on.
4.2 Training and Testing Sets Description
Since the SVM is more sensitive to redundancy in the training set, we prepare different training and testing sets for the SVM and the CNN. The SVM is sensitive to redundancy because the learned decision boundary may be shifted in response to the aggregated penalties imposed by repeated examples that lie on the wrong side of the margin. This poses a challenge for our project: since the content of consecutive video frames is often highly correlated, the representations of these frames are expected to be quite similar. We therefore manually select the video frames that go into the data set for training and testing the SVM, to reduce the impact of redundancy in the video data. Out of the 420 videos that contain at least one entrance into or exit from a car, we take 200 and randomly assign them to 10 folds. For each of these selected videos, we select "in-car" and "out-car" frames; the resulting data set has 515 "in-car" frames and 529 "out-car" frames. In each trial, an SVM is trained on nine folds and tested on the remaining fold. This process is repeated ten times, with each fold used as the testing fold exactly once, and the testing accuracies from the ten trials are averaged to give one estimate.
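The video-level fold assignment described above can be sketched as follows; `assign_folds` is a hypothetical helper, not code from our pipeline. Assigning folds at the video level keeps near-duplicate frames from the same video out of both training and testing sets simultaneously.

```python
import random

def assign_folds(video_ids, n_folds=10, seed=0):
    """Randomly assign videos to folds; every frame selected from a video
    inherits that video's fold, so correlated frames never straddle the
    train/test split."""
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    return {vid: i % n_folds for i, vid in enumerate(ids)}
```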
Training for the CNN proceeds by 10-fold cross-validation on the entire data set of 691 videos. First, the videos are split into ten folds. Then, frames are extracted from the videos at one-second intervals, forming a data set of approximately 466,000 frames; no frames are deleted or hand-selected. During training, one fold is held out and the CNN is trained on the other nine. Performance statistics are computed for each of the ten folds and then averaged. Averaging is a valid way to combine performance statistics in our case because the number of frames and the in-car/out-of-car percentages are roughly equal across folds.
4.3 Classification Results
This section presents performance evaluations of our classifiers, the SVM and the CNN. Figure 2(a) plots the classification accuracy of the SVM with the spatial pyramid match kernel and the hard histogram configuration against the number of clusters. The choice of $L$, which determines the total number of pyramid levels, has a significant impact on the classifier's performance. As $L$ increases from 0 to 1, so that the spatial information of each keypoint is taken into account, performance improves greatly. As $L$ increases from 1 to 2, each frame is partitioned into finer cells, and the extra spatial information contributes further improvements in classification accuracy. The results also show that as the size of the vocabulary increases, spatial information becomes less important; this observation is consistent with the results in [39]. Figure 2(b) compares the performance of the SVM with hard and soft histogram configurations. To obtain these results, the pyramid level $L$ is fixed and the soft-assignment smoothing parameter is varied. As the figure shows, the soft VQ technique generally improves classification accuracy, and for suitable values of the smoothing parameter, the SVM with soft histograms outperforms the hard-histogram SVM at every size of visual vocabulary.
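As a sketch of the soft vector quantization idea discussed here (with an assumed Gaussian weighting; the exact soft-assignment scheme may differ from ours):

```python
import numpy as np

def soft_histogram(descriptors, codebook, sigma=1.0):
    """Soft vector quantization: each descriptor votes for every visual word
    with a Gaussian weight in its distance, instead of only its nearest word.

    descriptors: (N, d) local features; codebook: (K, d) visual words.
    Returns a normalized K-bin histogram.
    """
    # Squared distances between every descriptor and every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # each descriptor contributes weight 1
    h = w.sum(axis=0)
    return h / h.sum()
```

The hard histogram is recovered in the limit as `sigma` goes to zero, where all of a descriptor's weight falls on its nearest word.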
For the CNN, the VGG16 convolutional network architecture is modified for generalization and to use the hinge loss function, as described in Section 2.2. After this modification, the weights in all layers except for the last layer (the output layer) are frozen, so that weight updates are not computed for them. For preprocessing, frames are resized to 240x320, and the mean pixel value reported by the VGG16 authors is subtracted from each color channel. The network is then trained via stochastic gradient descent with a minibatch size equal to the size of the training set. Elastic net weight regularization, described in Section 2.2, is used with a penalty coefficient of 0.0003. The learning rate is initialized and scheduled according to an adaptive scheme, decreasing at every epoch, and the network is trained for six epochs. Results are shown in Table 1. Implementation is carried out using the TensorFlow [1], Keras [6], and scikit-learn [23] software libraries.
Table 1: Frame classification performance of the best CNN and best SVM.
Classifier  Accuracy  Precision  Recall

Best Convolutional Neural Network  94%  96%  95% 
Best Support Vector Machine  90%  92%  89% 
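The hinge loss with an elastic net penalty described in this section can be sketched as follows; the split of the penalty into separate `l1` and `l2` coefficients is illustrative, not our exact configuration.

```python
import numpy as np

def hinge_elastic_objective(scores, y, W, l1=0.0003, l2=0.0003):
    """Binary hinge loss plus an elastic net penalty on the weights.

    scores: (N,) raw network outputs; y: (N,) labels in {-1, +1};
    W: weights of the trained (unfrozen) layer. The l1/l2 coefficients
    here are assumptions for illustration, not the paper's exact values.
    """
    hinge = np.maximum(0.0, 1.0 - y * scores).mean()
    penalty = l1 * np.abs(W).sum() + l2 * (W ** 2).sum()
    return hinge + penalty
```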
The convolutional network results show the large improvement in performance gained by using deep feature representations of a frame (e.g., those computed by a ConvNet) as opposed to shallow feature representations (e.g., those computed by the BoVW process). We believe that more sophisticated training methods (such as jointly training a changepoint detection method and the CNN), or unfreezing the weights of the adapted VGG16 network, may produce more accurate scene classifications.
4.4 Changepoint Detection Results
In this section, we discuss the results of our changepoint detection methods. As stated at the beginning of Section 4, we aim to identify points of vehicle entry and exit. For each video in our data set, we apply a classifier, either the CNN or the BoVW-SVM, to every $n$th frame ($n = 30$ and $n = 10$, respectively). The classifiers output scores or class labels in $\{0, 1\}$. We then run each sequence through each of five univariate changepoint detection methods; for each sequence, we identify some number (possibly zero) of changepoints. We also run sequences of multivariate, unsupervised frame representations through multivariate changepoint detection algorithms and, again, identify some number (possibly zero) of changepoints per sequence.
To evaluate the performance of all of our changepoint detection algorithms on our data set, we compare the algorithms' predicted changepoints to the true changepoints in the videos (where officers actually exit or enter their vehicles). We use a ten-second window of error, so a predicted changepoint and a true changepoint are considered equivalent if they are within ten seconds of each other. This accounts for the fact that it may take several seconds to exit or enter a vehicle. We then calculate precision and recall for each method, where recall is the percentage of actual changepoints that are within ten seconds of a predicted changepoint, and precision is the percentage of predicted changepoints that are within ten seconds of an actual changepoint. These are aggregate measurements over all of the videos: we count the total numbers of actual and predicted changepoints across all videos.
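The windowed precision and recall computation can be sketched as follows, with changepoints given as times in seconds; the conventions for empty lists are our assumption.

```python
def precision_recall(predicted, actual, tol=10.0):
    """Precision/recall with a tolerance window (in seconds).

    A predicted changepoint counts as correct if some actual changepoint
    lies within tol seconds of it; recall is the symmetric count for
    actual changepoints.
    """
    tp_pred = sum(any(abs(p - a) <= tol for a in actual) for p in predicted)
    tp_act = sum(any(abs(a - p) <= tol for p in predicted) for a in actual)
    precision = tp_pred / len(predicted) if predicted else 1.0
    recall = tp_act / len(actual) if actual else 1.0
    return precision, recall
```

For the aggregate numbers reported below, the matched and total counts would be summed across all videos before dividing.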
This section is organized as follows. First, we apply our methods on videos that contain at least one actual changepoint using outputs from the CNN. Our algorithms are then tested on the full data set which contains videos without actual change points, and the results are jointly presented. We then discuss the results of running changepoint detection algorithms on SVM outputs. Finally, we present the results of our changepoint detection methods on multivariate, unsupervised representations.
As mentioned in Section 4.2, we use cross-validation to produce scores for all 691 videos using the CNN along with a traditional neural net classifier. Table 2 shows the results of applying the changepoint detection algorithms to the CNN output for the 420 videos that contain at least one exit or entrance. While the five methods discussed in Section 3 give comparable recall and precision, HMM produces the highest recall of 93%, and MSE gives the highest precision of 75%.
We further test our changepoint detection methods on CNN scores for the full data set containing 691 videos in total, 271 of which do not contain an entry into or an exit from a car. As shown in Table 3, recall calculations remain the same as those presented in Table 2, as these 271 videos do not contribute to the total number of actual changepoints. Precision calculations, however, decrease because of false alarms.
For each of these methods, we can adjust some parameters. MSE uses a median filter window size of 30, a $p$-value cutoff of 0.1 with Bonferroni correction, and a maximum recursion depth of 3; it acts on the CNN binary labels. The autoregressive and mean model forecasting methods use the sample standard deviation of the series as the future window threshold, a future window of five, the first five observations to establish the baseline model, and the sign-change filter; they act on the CNN scores. MLE uses a classifier accuracy parameter of 0.9 and a constraint of at most 10 allowable changepoints; it acts on the CNN binary labels. For HMM, the window size and the polynomial order of the Savitzky-Golay filter are set to 15 and 1, respectively; this method acts on the CNN scores.
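The mean model forecasting detector with these parameters can be sketched as follows; this is a simplified reading of the method in Section 3, not our exact implementation.

```python
import statistics

def mean_model_changepoints(series, future_window=5, baseline=5, k=1.0):
    """Mean model forecasting detector (a sketch of the idea).

    Forecast each point as the mean of the current segment's baseline
    observations; when every point in a look-ahead window deviates from the
    forecast by more than k times the series' standard deviation, declare a
    changepoint and restart the baseline there.
    """
    thresh = k * statistics.pstdev(series)
    cps, start = [], 0
    t = start + baseline
    while t + future_window <= len(series):
        mean = statistics.fmean(series[start:start + baseline])
        window = series[t:t + future_window]
        if all(abs(x - mean) > thresh for x in window):
            cps.append(t)
            start = t                  # restart the baseline after the change
            t = start + baseline
        else:
            t += 1
    return cps
```

The autoregressive one-lag variant would replace the baseline mean forecast with a prediction from the previous observation.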
Table 2: Changepoint detection on CNN scores, for the 420 videos containing at least one actual changepoint.
Method  Recall  Precision

Hidden Markov Model  93%  72% 
Mean-Squared Error Minimization  88%  75%
Forecasting Method – Mean Model  88%  70% 
Maximum Likelihood Estimation Method  88%  67% 
Forecasting Method – Autoregressive One Lag  85%  70% 
Table 3: Changepoint detection on CNN scores, for the full data set of 691 videos.
Method  Recall  Precision

Hidden Markov Model  93%  65% 
Mean-Squared Error Minimization  88%  68%
Forecasting Method – Mean Model  88%  61% 
Maximum Likelihood Estimation Method  88%  58% 
Forecasting Method – Autoregressive One Lag  85%  60% 
Table 4 presents the results of applying the changepoint detection methods to the SVM scores. Comparing Table 4 with Table 2, the precision measurements obtained from the SVM scores are significantly lower than those obtained from the CNN scores. Recall measurements, however, are generally comparable, except for MLE, whose recall decreases from 88% to 66%. From this empirical evidence, we conclude that the performance of the classifier has a significant impact on the precision of the changepoint detection results.
Again, there are parameter values which we can adjust. For the SVM output, the autoregressive forecasting method uses the sample standard deviation of the series as the future window threshold, a future window of five, and the first five observations to establish the baseline model; it acts on the SVM scores. The mean model forecasting method uses the sample standard deviation of the series as the future window threshold, a future window of seven, the first ten observations to establish the baseline model, and the simple rounding filter (which rounds changepoint values to the nearest thirty because of the way frames were sampled for the SVM); it acts on the SVM scores. MSE, MLE, and HMM use the same parameters as described above, but they act on SVM scores.
Table 4: Changepoint detection on SVM scores.
Method  Recall  Precision

Mean-Squared Error Minimization  91%  30%
Forecasting Method – Mean Model  96%  18% 
Hidden Markov Model  90%  17% 
Forecasting Method – Autoregressive One Lag  90%  17% 
Maximum Likelihood Estimation Method  66%  34% 
Finally, Table 5 presents the results of changepoint detection using multivariate data, which primarily comes from the BoVW histograms. These multivariate histograms represent the frames in their entirety; unlike the scores and labels from the SVM and CNN, they do not classify the frames into states. Our methods may therefore be detecting changepoints in the video other than exits from and entrances into vehicles. Consequently, these methods may have fairly low precision values: they detect other changes besides car exits and entrances, while our precision measurements are concerned only with the car exit and entrance changepoints. It is also worth noting that our condensed histogram representations yield results similar to the full histogram results recorded in Table 4. The Table 4 results are slightly better than those obtained using the condensed representations, but the comparison shows that the histograms can be simplified without large losses in recall and precision.
As with our univariate methods, our multivariate methods have some parameter values which we can adjust. MSE uses the same parameters as outlined in the univariate results section. The chisquared test uses an alpha level of 0.001, a future window of seven, and the baseline for comparison as the first histogram. The match distance uses a constant of 20 times the threshold discussed in Section 3.3, a future window of 10, the first histogram as the baseline, and the simple rounding filter (which rounds changepoint estimations to the nearest thirty because of the way frames were sampled).
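The chi-squared histogram comparison can be sketched as follows; for simplicity, this sketch thresholds the chi-squared distance directly rather than performing the alpha-level test of Section 3.3, so `alpha_stat` is an illustrative distance threshold.

```python
import numpy as np

def chi2_stat(h1, h2, eps=1e-12):
    """Chi-squared distance between two normalized histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * (((h1 - h2) ** 2) / (h1 + h2 + eps)).sum()

def chi2_changepoints(histograms, alpha_stat=0.5, future_window=3):
    """Flag a changepoint when every histogram in a look-ahead window is far
    (by chi-squared distance) from the current baseline histogram, then
    reset the baseline to the histogram at the flagged frame."""
    cps = []
    baseline = histograms[0]
    t = 1
    while t + future_window <= len(histograms):
        window = histograms[t:t + future_window]
        if all(chi2_stat(h, baseline) > alpha_stat for h in window):
            cps.append(t)
            baseline = histograms[t]
            t += future_window
        else:
            t += 1
    return cps
```

The match distance detector follows the same loop with the chi-squared distance replaced by the thresholded match distance.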
Table 5: Changepoint detection on multivariate BoVW histogram representations.
Method  Recall  Precision

Chi-Squared Test  100%  20%
Match Distance  98%  13% 
Mean-Squared Error Minimization  86%  17%
Above, we show both precision and recall for all of our methods. In tuning parameters, we prioritize recall over precision because, in a law enforcement application, we want to ensure to the best of our ability that no important events are missed. Our methods mostly achieve 85-90% recall (MLE's recall is lower on SVM scores because it has fewer parameters to optimize, so we have less flexibility in managing the precision-recall tradeoff). The methods run on SVM scores yield 15-35% precision, while the methods run on CNN scores yield 58-68% precision. It is interesting to note the large discrepancy in changepoint detection results for our two classifiers, despite a relatively small discrepancy (roughly 5%) in their classification accuracy: a small improvement in classification performance can evidently produce a large increase in the precision of changepoint detection. Finally, for the CNN results in particular, many of the methods yield quite similar recall and precision values. This suggests there are multiple viable approaches to the changepoint detection problem, enabling a user to choose a method based on additional considerations such as algorithmic speed.
5 Conclusion
In this paper, we present a novel framework for changepoint detection in video, using concepts from machine learning, image recognition, and changepoint detection. We outline our methods for classification at the frame level, including CNNs and feature extraction techniques. We then describe methods from four approaches to changepoint detection: mean square error minimization, forecasting, hidden Markov models, and maximum likelihood estimation. We present the performance of these methods on classifier output and on BoVW histogram representations. With our multivariate methods, we discuss the challenges of applying changepoint detection methods to flexible, unsupervised, and multivariate representations.
Testing specifically for identification of vehicle entrances and exits, our methods succeed with 90% recall and nearly 70% precision on a highly complex, realistic data set provided by the LAPD. However, we believe our framework is highly adaptable to different changepoint classes, both within the domain of law enforcement BWV and outside of it. For instance, with an appropriately relabeled data set, we believe our framework would succeed comparably at identification of video segments where an officer is speaking to a member of the public, handcuffing a suspect, or engaging in a foot chase.
With this in mind, the framework presented in this paper represents a promising step toward law enforcement’s longterm goal of automatic video tagging of important segments. This would make the largescale deployment of BWV (one camera to each officer in a police force) much more feasible.
6 Acknowledgments
First, we would like to thank our academic mentor, Dr. Giang Tran, for her guidance and continuous support. Her suggestions have helped us tremendously throughout this research. We would also like to thank Sgt. Javier Macias and Dr. Jeff Brantingham as our industry mentors; they have provided us with important context for this project. This work was completed as part of the 2016 Research in Industrial Projects for Students (RIPS) program at the Institute for Pure and Applied Mathematics (IPAM) and was supported by grants from the LAPD and NSF.
References
 [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, TensorFlow: Largescale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
 [2] R. P. Adams and D. J. MacKay, Bayesian online changepoint detection, arXiv preprint arXiv:0710.3742, (2007).
 [3] L. E. Baum, An equality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes, Inequalities, 3 (1972), pp. 1–8.
 [4] P. Bouthemy, M. Gelgon, and F. Ganansia, A unified approach to shot change detection and camera motion characterization, IEEE Transactions on Circuits and Systems for Video Technology, 9 (1999), pp. 1030–1044.
 [5] J. Chen and A. K. Gupta, On change point detection and estimation, Communications in Statistics - Simulation and Computation, 30 (2001), pp. 665–697.
 [6] F. Chollet, keras. https://github.com/fchollet/keras, 2016.
 [7] J. R. Evans, Statistics, Data Analysis, and Decision Modeling, Prentice Hall: Pearson Education Inc., 5 ed., 2013.

 [8] K. Fukushima, Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position, Biological Cybernetics, 36 (1980), pp. 193–202.
 [9] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Book in preparation for MIT Press, 2016.
 [10] D. H. Hubel and T. N. Wiesel, Receptive fields and functional architecture of monkey striate cortex, The Journal of physiology, 195 (1968), pp. 215–243.
 [11] Y.G. Jiang and C.W. Ngo, Visual word proximity and linguistics for semantic video indexing and nearduplicate retrieval, Computer Vision and Image Understanding, 113 (2009), pp. 405–414.
 [12] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. FeiFei, Largescale video classification with convolutional neural networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
 [13] C. M. Katz, D. E. Choate, J. R. Ready, and L. Nuño, Evaluating the impact of officer worn body cameras in the phoenix police department, Phoenix, AZ: Center for Violence Prevention and Community Safety, Arizona State University, (2014).
 [14] K. Kitani, Bag-of-visual-words, 16-385 Computer Vision lecture notes. http://www.cs.cmu.edu/~16385/lectures/Lecture12.pdf.
 [15] A. Krizhevsky, I. Sutskever, and G. E. Hinton, Imagenet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds., Curran Associates, Inc., 2012, pp. 1097–1105.
 [16] S. Lazebnik, C. Schmid, and J. Ponce, Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories, in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), vol. 2, IEEE, 2006, pp. 2169–2178.
 [17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, Backpropagation applied to handwritten zip code recognition, Neural computation, 1 (1989), pp. 541–551.
 [18] D. G. Lowe, Distinctive image features from scaleinvariant keypoints, International journal of computer vision, 60 (2004), pp. 91–110.
 [19] MathWorks, Hierarchical clustering. http://www.mathworks.com/help/stats/hierarchical-clustering.html, 2016.
 [20] L. Miller and J. Toliver, Implementing a bodyworn camera program: Recommendations and lessons learned, 2014.
 [21] R. Nau, Mean (constant) model. http://people.duke.edu/~rnau/411mean.htm, n.d.
 [22] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, Learning and transferring midlevel image representations using convolutional neural networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
 [23] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Scikitlearn: Machine learning in Python, Journal of Machine Learning Research, 12 (2011), pp. 2825–2830.
 [24] A. Ranganathan, Pliss: Detecting and labeling places using online changepoint detection, in Proceedings of Robotics: Science and Systems, Zaragoza, Spain, June 2010.

 [25] Y. Rubner, C. Tomasi, and L. J. Guibas, The earth mover's distance as a metric for image retrieval, International Journal of Computer Vision, 40 (2000), pp. 99–121.
 [26] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning internal representations by error propagation, tech. rep., DTIC Document, 1985.
 [27] A. Savitzky and M. J. Golay, Smoothing and differentiation of data by simplified least squares procedures., Analytical chemistry, 36 (1964), pp. 1627–1639.
 [28] J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, 61 (2015), pp. 85–117.
 [29] K. Simonyan and A. Zisserman, Very deep convolutional networks for largescale image recognition, CoRR, abs/1409.1556 (2014).
 [30] A. H. Studenmund, Using Econometrics: A Practical Guide, AddisonWesley: Pearson Education Inc., 6 ed., 2011.
 [31] J.i. Takeuchi and K. Yamanishi, A unifying framework for detecting outliers and change points from time series, IEEE transactions on Knowledge and Data Engineering, 18 (2006), pp. 482–492.
 [32] Y. Tang, Deep learning using linear support vector machines, arXiv preprint arXiv:1306.0239, (2013).
 [33] W. Taylor, Changepoint analysis: A powerful new tool for detecting changes. http://www.variation.com/cpa/tech/changepoint.html, 2000.
 [34] G. Tsechpenakis, D. N. Metaxas, C. Neidle, and O. Hadjiliadis, Robust online changepoint detection in video sequences.
 [35] Pennsylvania State University, STAT 501 Regression Methods: Autoregressive models. https://onlinecourses.science.psu.edu/stat501/node/358, 2016.
 [36] V. Viitaniemi and J. Laaksonen, Spatial extensions to bag of visual words, in Proceedings of the ACM International Conference on Image and Video Retrieval, ACM, 2009, p. 37.
 [37] A. Viterbi, Error bounds for convolutional codes and an asymptotically optimum decoding algorithm, IEEE transactions on Information Theory, 13 (1967), pp. 260–269.
 [38] R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye, Probability and Statistics for Engineers and Scientists, Pearson Education Inc., 9 ed., 2012.
 [39] J. Yang, Y.G. Jiang, A. G. Hauptmann, and C.W. Ngo, Evaluating bagofvisualwords representations in scene classification, in Proceedings of the international workshop on Workshop on multimedia information retrieval, ACM, 2007, pp. 197–206.
 [40] I. T. Young, Proof without prejudice: use of the Kolmogorov-Smirnov test for the analysis of histograms from flow systems and other sources, Journal of Histochemistry & Cytochemistry, 25 (1977), pp. 935–941.