Temporal Unknown Incremental Clustering (TUIC) Model for Analysis of Traffic Surveillance Videos

04/18/2018 ∙ by Santhosh Kelathodi Kumaran, et al. ∙ Indian Institute of Technology Bhubaneswar ∙ IIT Roorkee

Optimized scene representation is an important characteristic of a framework for detecting abnormalities in live videos. One of the challenges in detecting abnormalities in live videos is real-time detection of objects in a non-parametric way. Another challenge is to efficiently represent the state of objects temporally across frames. In this paper, a Gibbs sampling based heuristic model referred to as Temporal Unknown Incremental Clustering (TUIC) is proposed to cluster pixels with motion. Pixel motion is first detected using optical flow, and a Bayesian algorithm is then applied to associate pixels belonging to the same cluster in subsequent frames. The algorithm is fast and produces accurate results in Θ(kn) time, where k is the number of clusters and n the number of pixels. Our experimental validation with publicly available datasets reveals that the proposed framework has good potential to open up new opportunities for real-time traffic analysis.




I Introduction

Real-time surveillance poses a big challenge to researchers, since the number of cameras is increasing by leaps and bounds. It is difficult to employ a large number of human operators to monitor such a huge amount of visual data. Also, it is not possible for human observers to detect all abnormal activities, as humans find it difficult to maintain a certain level of alertness for a sustained period. Hence, automated methods for continuous analysis of surveillance videos have to be introduced. While developing such methods, it is a prerequisite to detect abnormal/unusual events in a timely manner so that suitable actions can be taken at the earliest. Moreover, surveillance systems are often equipped with a large number of cameras that produce an enormous amount of data. Therefore, online storing (while recording) of only the event(s)-of-interest or useful segments for future processing can be adopted. This emphasizes the importance of real-time video event detection.

One way to represent video events is to detect patterns of movement of dynamic objects. In order to detect events in real-time, object(s)-in-motion need to be represented in an analysis framework. Then, a suitable algorithm can be developed to classify the events. In our work, we propose a non-parametric model derived from the DPMM and devise a distance-based unsupervised learning scheme to localize moving objects on the fly. Our proposed algorithm has been found to deliver real-time performance on publicly available surveillance videos.

I-A Related Work

There are a few interesting proposals for modeling object motion that are based on optical flow [10, 33]. These frameworks are offline and do not use object trajectories as input. However, a few popular methods use trajectories as the base-level information [19, 26, 2, 28, 24, 14, 31, 22]. Consequently, trajectory learning and classification are two of the central tasks for any video analytics method. There are studies on supervised approaches, such as [19, 26, 27, 9, 6, 2, 23], that are based on labeled datasets. Unsupervised approaches such as [25, 1, 28, 24, 14] use unlabeled datasets to cluster similar trajectories and then use the clustered data to train models for classification. Tracking [18, 13, 35, 8] is also an important task for building a complete traffic analysis framework. A recent work proposed in [34] defines an offline model for tracking using the Dirichlet process that is based on variational inference [17]. An approach often referred to as incremental clustering has been used in [14] and [31]. These methods work in the absence of complete training data by processing the data sequentially. The approach is particularly relevant in surveillance applications, since training data may not always be available.

The learned model representing trajectory patterns can be used for varying purposes irrespective of the underlying method of training. A few of these methods address abnormality detection [31, 22, 15], while others perform classification and abnormality detection together [19, 6]. Trajectory retrieval [14, 3] is another possible application. An important property of such a framework is online classification and abnormality detection with partially observed trajectories. The problem has been addressed in [19, 6, 2]. This is important when timely actions are to be taken in response to an observed event.

(a) Original frame
(b) Optical flow magnitude
(c) Optical flow direction
(d) Labeled scene
Fig. 1: Optical flow and association of pixels with moving objects.

I-B Motivation of the Research

An unsupervised, non-parametric, incremental, real-time framework for event detection can be a good choice for surveillance applications, as it reduces the dependency on human operators. However, the problem is challenging due to the complexity of scene interpretation. It is even more difficult as everything needs to be built from pixel-level information. The work proposed in [14] and [31] can be adopted for online learning. However, their models are complex and require a large number of iterations to cluster pixels into objects.

Fig. 1 depicts optical flow in a frame taken from the VIRAT dataset [7]. Fig. 1 (a) represents the original frame, and Fig. 1 (b) and Fig. 1 (c) represent the magnitude and direction of flow, respectively. We are motivated by the visual clue presented in the figures. It has been observed that the pixels in motion have a distribution similar to a multivariate Gaussian process. Consider two pixels p1 and p2 as marked in Fig. 1 (d). The probability that p1 belongs to “car #1” is expected to be higher than the probability that it belongs to “car #2”, as p1 is closer to “car #1” than to “car #2”. Similarly for p2, the probability that this pixel belongs to “car #2” is expected to be higher than the probability that it belongs to “car #1”. We have used this distance information to derive an inference scheme that is fast and logical to apply to the Dirichlet Process Mixture Model (DPMM) [32], which is based on the Dirichlet Process [12].

The inference process [29, 5] in DPMM takes multiple iterations for the clusters to converge. We have introduced a distance function in the inference process of DPMM to expedite the convergence. The Distance Dependent Chinese Restaurant Process (DDCRP) [4] underlines the usage of distance in the inference process for faster convergence. In DPMM, the number of clusters formed on given data depends on the concentration parameter (α) of the model described in (1-4).

z_i ~ Discrete(π)    (1)
x_i | z_i = k ~ F(θ_k)    (2)
π ~ Dir(α/K, ..., α/K)    (3)
θ_k ~ G_0    (4)

Here, x_i corresponds to the data and z_i corresponds to the latent variable representing cluster labels, taking one of the values from {1, ..., K}, where N is the number of data points and K is the number of clusters. π is a vector of length K. It represents the mixing proportion of data among the clusters, or the probability of z_i taking the value k. θ_k is the parameter of the k-th cluster and F(θ_k) denotes the distribution defined by θ_k. First, we pick z_i from a Discrete distribution given in (1) and then generate data x_i from a distribution parameterized by θ_{z_i} as given in (2), where the parameter π is derived from a Dirichlet distribution as given in (3) and θ_k is derived from a distribution of priors G_0 as represented in (4). The model is graphically [20] presented in Fig. 2 (a).
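The generative process (1-4) can be sketched for a finite mixture as below; the helper names, the 1-D Gaussian components, the uniform prior standing in for G_0, and all numeric values are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)

def dirichlet(alpha, K):
    # pi ~ Dir(alpha/K, ..., alpha/K), drawn via normalized Gamma samples (3)
    g = [random.gammavariate(alpha / K, 1.0) for _ in range(K)]
    s = sum(g)
    return [x / s for x in g]

K, alpha = 3, 1.0
# theta_k ~ G0: here G0 is a hypothetical uniform prior over 1-D Gaussian means (4)
theta = [(random.uniform(-10.0, 10.0), 1.0) for _ in range(K)]
pi = dirichlet(alpha, K)

data, labels = [], []
for _ in range(100):
    z = random.choices(range(K), weights=pi)[0]   # z_i ~ Discrete(pi) (1)
    mu, sd = theta[z]
    data.append(random.gauss(mu, sd))             # x_i ~ F(theta_{z_i}) (2)
    labels.append(z)
```

In the infinite (DPMM) case, K is not fixed in advance; the sketch above only illustrates the finite generative reading of (1-4).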

(a) Conventional Dirichlet Process Mixture Model (DPMM).
(b) Proposed Object Model
(c) Proposed Temporal Unknown Incremental Clustering (TUIC) Model
Fig. 2: Model evolution. Here, the dashed line represents the deterministic relation between the cluster parameters of successive frames, i.e. between θ^(t−1) and θ^t.

Existing works [4, 34, 30, 21, 14] do not emphasize how to arrive at a suitable value of the concentration parameter for a given application. This paper derives a relationship between the concentration parameter and the distance function. Moreover, the method uses the distance information for associating pixels to the same object in successive frames, thus addressing the temporal association of pixels to a cluster. This way, we are able to address both spatial and temporal dependencies of pixels belonging to the same object, making the model an ideal choice for clustering pixels in segmentation and tracking applications.

Our experiments reveal that a single iteration of Gibbs sampling [29] can be sufficient to associate the majority of the pixels that constitute an object-in-motion to a single cluster. The cluster association can be maintained as long as the objects remain in motion. This concept can be extended to feature-based clustering or segmentation, as the distance between pixels having similar characteristics is expected to be small. Thus, they can be grouped into a single cluster. We refer to this as the Temporal Unknown Incremental Clustering (TUIC) model, and the framework can be used to build tracking and surveillance applications.

I-C Research Contributions

This paper presents an incremental and hierarchical way of associating pixels to objects. Since the label of an object is maintained throughout its lifetime within the scene, this can further be extended hierarchically to derive most frequently used paths of the moving objects. Thus, we develop a framework that can be used during online detection of abnormal activities in surveillance videos. The main contributions are summarized as follows:

  • A distance-based method for associating pixels to a cluster considering spatial as well as temporal properties of the moving objects.

  • We propose a method for deriving the concentration parameter (α) in a given context or application.

  • Critical analysis of the parameter β during spatio-temporal segmentation of objects in various contexts, e.g. moving car surveillance, human motion analysis, outdoor surveillance, etc.

  • We propose a Temporal Unknown Incremental Clustering (TUIC) model for deriving an incremental learning framework that can be used for real-time activity detection.

The rest of the paper is organized as follows. In Section II, the proposed methodology is discussed. Section III presents the experimental setup, datasets, parameters, analysis of results, and a few limitations of the proposed method. Section IV concludes our work with a discussion on a few future directions of the present work.

II Proposed Methodology

II-A Background

First, we discuss the terminologies used in the paper. Observation and data are used interchangeably. They represent words in text corpora or pixels in video frames. Similarly, we refer to a cluster or topic to represent a distribution of data. A model is a representation of a real-world phenomenon. A model can be parametric or non-parametric. A parametric model is a family of distributions that can be described using a finite set of parameters. A parametric model has a fixed number of parameters, while for a non-parametric model the number of parameters grows with the training data. A mixture model is a probabilistic model for representing the presence of sub-populations within an overall population. Mixture models can be finite or infinite. A finite mixture model is a probabilistic model representing a distribution of data from a finite number (K) of sub-populations, represented using a finite set of probability distributions. As K → ∞, we get infinite mixture models. The Dirichlet distribution is the generalization of the Beta distribution to multiple outcomes. A Dirichlet Process (DP) [12] is a distribution over probability distributions and is used in Bayesian non-parametric models, particularly in infinite mixture models known as Dirichlet Process Mixture Models (DPMM). Latent variables are variables that are not directly observed, but rather inferred from other observed variables. We use the Graphical Model formalism [20] for representing mixture models. A graphical model is a probabilistic model for which a graph expresses the conditional dependence structure between random variables. Random variables are represented by circles. Boxes are plates representing replicates. A graphical model represents the generative model of the data.

II-B Proposed Object Model

In this paper, we use observation or data to represent a pixel belonging to an object-in-motion. Topic or cluster is used to represent the objects. Temporal segments or trajectories represent the tracks of the object(s)-in-motion. Any video frame can be modeled as a distribution of pixels belonging to objects and background.

We have the following hypothesis to apply our proposed model for vehicular traffic analysis:

  • Since vehicles are rigid objects, all pixels belonging to a vehicle go through similar motion.

  • The size of a vehicular object in the image frames does not vary significantly within a short duration (between consecutive frames).

  • We assume that the videos are captured from a top view (or near top view) using a static camera. Under normal circumstances, the vehicular objects present in an image frame are expected to be spatially separated, i.e. they do not overlap.

Here, we illustrate the rationale for building the model from a different perspective. Unlike the mixture of Gaussians presented in Fig. 3, vehicles are rigid-body objects. The above assumptions simplify the problem, as they indicate that each observation strictly belongs to one topic. In addition, the pixels in a rigid body go through similar motion, i.e. they have similar magnitude and direction.

Fig. 3: A Gaussian mixture representing four components. The centers are marked with circles.

Let x_i be the random variable representing the i-th observation, where i ∈ {1, ..., N}. z_i is a discrete latent variable representing the cluster label. It can take values from {1, ..., K}. Therefore, there are N observations in the frame corresponding to K clusters that represent the object(s)-in-motion. We want all the observations belonging to an object-in-motion to be labeled correctly. Our goal is to find z_i for all pixels. z corresponds to a discrete distribution, and each cluster label has a set of observations associated with it. The k-th cluster has a proportion π_k of the observations, satisfying Σ_k π_k = 1. It can be observed that the observations associated with a particular cluster have an unknown distribution parameterized by θ_k = (μ_k, Σ_k), where μ_k and Σ_k represent the mean and covariance of the distribution. Let an observation be represented by x_i = (x, y, u, v), where (x, y) represents the coordinate of the pixel in motion, and u and v represent the x and y components of the motion vector of the pixel. We build the inference scheme from the Bayesian representation of the posterior as given in (5).

P(z_i | x_i) = P(x_i | z_i) P(z_i) / P(x_i)    (5)
We can rewrite it in the current context using (6), where P(x_i | θ_k) denotes the likelihood of x_i with cluster label k and P(z_i = k) represents the prior probability of cluster k. We know that, posterior ∝ likelihood × prior.

P(z_i = k | x_i, θ) ∝ P(x_i | θ_k) P(z_i = k)    (6)

The above equation cannot be used directly to build the inference process, as every parameter except x_i is unknown. However, it gives a clue on how to derive the likelihood and prior required in the inference process. We build the clusters incrementally by considering observations one at a time, since no information is available at the beginning. If we take the first observation, it forms a new cluster. z_{-i} denotes the set of cluster assignments done so far, excluding that for the i-th observation. Now, for a subsequent observation x_i, it is assumed that all observations sampled so far have been assigned a cluster label. Using this information, the cluster label (z_i) for x_i is found. K denotes the number of clusters formed so far. Thus, initially K = 1, and as the sampling process progresses, it produces the actual number of clusters. θ_{k,-i} denotes the parameter of cluster k excluding the i-th observation. This can be calculated since x and z_{-i} are known. Similarly, N_{k,-i} denotes the number of observations present in cluster k excluding the i-th observation. The above equation is split to find the probability of z_i taking a new cluster label or an existing cluster label, as given in (7) and (8), where P(z_i = k | z_{-i}) and P(z_i = K+1 | z_{-i}) represent the prior probabilities of an existing cluster and a new cluster, respectively.

P(z_i = k | x_i, z_{-i}) ∝ P(x_i | θ_{k,-i}) P(z_i = k | z_{-i})    (7)
P(z_i = K+1 | x_i, z_{-i}) ∝ P(x_i) P(z_i = K+1 | z_{-i})    (8)


Since the prior satisfies the property Σ_k P(z_i = k) = 1, the prior for a new cluster can be written as α/(n−1+α) and for the k-th cluster as N_{k,-i}/(n−1+α), where α is the concentration parameter and n−1 denotes the number of observations handled so far, excluding the i-th observation. Here, α decides the probability of an observation forming a new cluster. Therefore, the problem is reduced to finding the likelihood function. As described earlier for Fig. 1 (d), the probability that pixel p1 belongs to car #1 is higher than that for pixel p2. Similarly, for pixel p2, the probability that it belongs to car #2 is higher than that of belonging to car #1. This gives a visual clue about the property of the likelihood function, i.e. the probability is inversely proportional to the distance (d) of the pixel from the center of the object. Moreover, the probability is close to 0 beyond the periphery of the object. An exponential decay function of the form e^{−d} satisfies the above property. The function also satisfies the condition for x_i to form a new cluster, as the distance to itself is 0. Thus, the likelihood function for a new cluster is e^{0} = 1. By taking α = e^{−β}, we rewrite (7) and (8) as (9) and (10).

P(z_i = k | x_i, z_{-i}) ∝ (N_{k,-i} / (n − 1 + α)) e^{−d_k}    (9)
P(z_i = K+1 | x_i, z_{-i}) ∝ (α / (n − 1 + α)) e^{0} = e^{−β} / (n − 1 + α)    (10)

We can further simplify the equations, as the denominator of the prior is constant at any sampling point. N_{K+1,-i} = 1 is assumed, so as to represent the inference equations in a similar form, i.e. in terms of the number of observations and the exponential decay function. The proportionality symbol is removed by introducing a normalization constant C. Now, a generalized formula is given in (11).

P(z_i = k | x_i, z_{-i}) = C N_{k,-i} e^{−d_k},  k ∈ {1, ..., K+1}, with d_{K+1} = β    (11)

This equation is the key to finding the value of the concentration parameter, as β can be expressed in terms of the distance from the center to a point at the periphery of the object. The method is discussed in Section III.
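The split between joining an existing cluster and opening a new one can be sketched as follows; the function name, the distances, the cluster counts, and the α value are illustrative assumptions:

```python
import math

def assignment_scores(dists, counts, alpha):
    # Existing cluster k scores N_k * exp(-d_k); a new cluster scores
    # alpha = exp(-beta), i.e. one phantom observation at distance beta (11).
    scores = [n * math.exp(-d) for d, n in zip(dists, counts)]
    scores.append(alpha)
    total = sum(scores)
    return [s / total for s in scores]   # normalization plays the role of C

# Illustrative numbers: a pixel 2 px from cluster 0's center, 8 px from cluster 1's.
probs = assignment_scores(dists=[2.0, 8.0], counts=[500, 300], alpha=5.0)
best = max(range(len(probs)), key=probs.__getitem__)   # last index would mean "new cluster"
```

With these numbers the nearby, well-populated cluster dominates, matching the intuition behind the likelihood function.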

Fig. 4: Examples of a few exponential decay functions.

f(d) can be assumed to be of the form e^{−λ d^m}, where λ is a constant and m is the exponent. A few decay functions with varying λ and m are shown in Fig. 4, assuming d to be the Euclidean distance. Now, we can rewrite the above equation using (12).

P(z_i = k | x_i, z_{-i}) = C N_{k,-i} f(d_k)    (12)
It can be observed that, when the squared Mahalanobis distance d_k = (x_i − μ_k)^T Σ_k^{−1} (x_i − μ_k) is used as the distance, the likelihood e^{−d_k} becomes that of a Multivariate Gaussian (up to normalization); the squared Euclidean distance is its special case with identity covariance Σ_k = I. Here, Σ_k is the covariance matrix of the k-th cluster and μ_k the mean of the cluster. Hence, the relation given in (12) can be written as (13) and (14).

P(z_i = k | x_i, z_{-i}) = C N_{k,-i} e^{−(x_i − μ_k)^T Σ_k^{−1} (x_i − μ_k)}    (13)
P(z_i = K+1 | x_i, z_{-i}) = C e^{−β}    (14)
The formulas given in (13) and (14) represent the inference equations of our object model. The object model can be represented as shown in Fig. 2 (b). This forms the basis of our proposed Temporal Unknown Incremental Clustering (TUIC) model described next.
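A minimal sketch of the distance term in (13), assuming a diagonal covariance for simplicity (the model's Σ_k is a full covariance matrix):

```python
import math

def mahalanobis_sq_diag(x, mu, var):
    # Squared Mahalanobis distance with diagonal covariance:
    # sum_j (x_j - mu_j)^2 / var_j; with unit variances this is squared Euclidean.
    return sum((a - b) ** 2 / v for a, b, v in zip(x, mu, var))

d = mahalanobis_sq_diag([3.0, 4.0], [0.0, 0.0], [1.0, 1.0])  # 25.0 here
likelihood = math.exp(-d)   # e^{-d_k}, the cluster likelihood in (13)
```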

II-C Temporal Unknown Incremental Clustering (TUIC) Model

We extend the assumptions about the motion of objects across video frames. We build our model based on the following additional assumptions:

  • The objects do not move substantially between successive frames, hence there will be overlap between pixels belonging to object(s)-in-motion between successive frames.

  • The object motion features do not change significantly between frames t−1 and t, where t represents the time stamp of a frame, i.e. state information does not change significantly between frames.

If a pixel belongs to an object in both the (t−1)-th and t-th frames, the probability that the corresponding observation belongs to the same cluster is expected to be higher than that of it belonging to other clusters. This implies that the cluster parameters are approximately equal between successive frames, i.e. θ_k^t ≈ θ_k^{t−1}. However, they may not be exactly the same. If Gibbs sampling is performed using θ_k^{t−1} as a prior for the t-th frame, not only does the convergence become faster, but the cluster labels are also maintained between consecutive frames. The inference can be done as per (15) and (16) with exactly one iteration of Gibbs sampling.

P(z_i^t = k | x_i^t, z_{-i}) = C N_{k,-i} e^{−(x_i^t − μ_k^{t−1})^T (Σ_k^{t−1})^{−1} (x_i^t − μ_k^{t−1})}    (15)
P(z_i^t = K+1 | x_i^t, z_{-i}) = C e^{−β}    (16)

The rationale behind using only one iteration per frame is that, even if all the observations do not get clustered correctly in the current frame, they essentially are in the subsequent frames, since the features do not change significantly between consecutive frames. Here, z_{-i} is different from the z_{-i} discussed earlier. It represents the set of all cluster assignments except that for x_i^t, such that it includes only the latest assignment between frames t−1 and t for any pixel. θ_{k,-i}^{t−1} is the parameter representing the distribution corresponding to cluster k at time-stamp t−1, computed from the set of observations corresponding to z_{-i}, where μ_k^{t−1} is the mean of the distribution. N_{k,-i} denotes the number of observations in cluster k and C is the normalization constant. Our proposed model is represented in Fig. 2 (c) as a generative model and can be represented using (17-20).

z_i^t ~ Discrete(π^t)    (17)
x_i^t | z_i^t = k ~ F(θ_k^t)    (18)
π^t ~ Dir(α/K, ..., α/K)    (19)
θ_k^t ~ G(θ_k^{t−1})    (20)

Here, x_i^t corresponds to the data at time t and z_i^t corresponds to the latent variable representing cluster labels, taking one of the values from {1, ..., K}. N is the number of data points and K is the number of clusters. π^t is a vector of length K. It represents the mixing proportion of data among the clusters. θ_k^t is the parameter of the k-th cluster and F(θ_k^t) denotes the distribution defined by θ_k^t. First, we pick z_i^t from a Discrete distribution given in (17). The data is then generated from a distribution parameterized by θ_{z_i}^t as given in (18), where the parameter π^t is derived from a Dirichlet distribution as given in (19). θ_k^t is derived from another distribution of priors as represented in (20). It may be observed that the model is different from the original DPMM shown in Fig. 2 (a): unlike the original model, there is a conditional dependence between θ_k^{t−1} and θ_k^t, and between z^{t−1} and z^t.

The inference method for cluster assignment uses Gibbs sampling [29]. The process is described in Algorithm 1. First, optical flow [11] is extracted. A data point is denoted by the 4-tuple (x, y, u, v), where (x, y) represents the coordinates of the pixel and u and v represent the x and y components of the optical flow vector of the pixel. A threshold has been applied on the magnitude of the optical flow to remove pixels without significant optical flow. This has been done purposefully to categorize pixels belonging to the background into a single cluster. However, existing background detection methods can be used to discard pixels that are irrelevant for clustering.

Input: Input video, β
Output: Labelled video

1:Initialize a background cluster c_0, where c_k is a random variable representing a 6-tuple (μ_x, μ_y, μ_u, μ_v, N_k, T_k) corresponding to a cluster; (μ_x, μ_y, μ_u, μ_v) represents the mean values of the respective parameters, N_k the number of observations, and T_k the time duration of the cluster label k;
2:Initialize the observation set O, where o_i represents a 2-tuple (x_i, z_i) containing pixel and label information;
3:Get the next frame;
4:for frame(!Empty) do
5:     Get optical flow and fill O;
6:     for each o_i ∈ O do
7:         Remove x_i from its current cluster;
8:         Find z_i corresponding to MAX [P(z_i = k), P(z_i = K+1)] as given in (15) and (16), respectively;
9:         switch (z_i) do
10:              case K+1:
11:                  K = K + 1;
12:                  Create a new cluster c_K;
13:                  Set z_i in o_i;
14:                  Update cluster parameters (μ, N);
15:              case k:
16:                  Set z_i in o_i;
17:                  Update cluster parameters (μ, N);
18:     end for
19:     Display the frame with cluster labels;
20:     Get the next frame;
21:end for
Algorithm 1 Temporal Unknown Incremental Clustering (TUIC)
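The flow-magnitude thresholding used above to separate background pixels can be sketched as follows; the dictionary representation of the flow field and the threshold value are illustrative assumptions:

```python
import math

def motion_pixels(flow, tau=1.0):
    # Keep only pixels whose optical-flow magnitude exceeds tau; the rest are
    # treated as one background cluster. `flow` maps (x, y) -> (u, v).
    return [(x, y, u, v) for (x, y), (u, v) in flow.items()
            if math.hypot(u, v) > tau]

flow = {(0, 0): (0.1, 0.0), (5, 5): (3.0, 4.0), (6, 5): (2.5, 2.5)}
fg = motion_pixels(flow)   # only the two moving pixels survive
```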

III Results and Discussions

In this section, we describe experiments conducted using the proposed TUIC model and results obtained using various public datasets. We also present comparative analysis with state-of-the-art techniques.

III-A Experimental Setup and Datasets

We have used OpenCV to implement our proposed framework. Experiments have been conducted on three publicly available traffic datasets, namely VIRAT, MIT, and UCF, as mentioned in [7]. Fig. 5 depicts the complete framework of our implementation. Optical flow is calculated from successive video frames using the Farnebäck method [11]. A background separation module has been used to extract the motion pixels, or foreground. We quantize the motion at each foreground pixel into eight directions. Background pixels are static, with no motion in any direction. The motion of the foreground pixels is then used to construct the model. The blocks highlighted in grey are kept for future extensions required to build a complete traffic analysis framework. The incremental tracking module takes as input the trajectories or tracklets generated using the proposed TUIC model, each characterized by the start and end positions of its cluster. They can also be used to find the most frequently used segments of a road. The activity detection module can be used for detecting interesting activities from the learned model.

Fig. 5: Proposed traffic analysis framework.

III-B Empirical Evaluation of β

The TUIC model has one parameter (β) that can influence the results of the inference process. β is referred to as the negative exponent of the concentration parameter of the model (α = e^{−β}), and it decides the size of the object to be traced. In (11), a basis for setting a suitable value of this parameter is presented. If an observation is to form a new cluster, it has to be at a distance greater than that of the object periphery. This implies that the relation given in (21) holds true.

e^{−β} > N_{k,-i} e^{−d}  for any d beyond the object periphery    (21)

Therefore, the value of β can be estimated using (22).

β < d − ln N_{k,-i}    (22)

Now, if we use the maximum distance (d_max) to the periphery of the object, and the number of observations (N) in an object is known, (22) can be rewritten as (23).

β ≈ d_max − ln N    (23)
Initially, we set approximate values of d_max and N by calculating the distance from the center of the object to its periphery, and run the clustering algorithm on a video with a single object. If more than one cluster is formed corresponding to the object, we increase or decrease β to obtain a single cluster corresponding to the object. Finally, we get the actual values of d_max and N. These values can then be used for calculating β. Fig. 6 shows how the distance varies over time. β can be estimated once the values of d_max and N are known. However, even if we fix β, the object size may not be fixed temporally. This is because of the perspective view: as surveillance cameras are often installed to capture long-range views of the scene, they may not always capture the top view.
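Under the boundary condition above, the calibration step can be written as a one-liner; reading (23) as β ≈ d_max − ln N, and the numeric values, are our assumptions:

```python
import math

def estimate_beta(d_max, n_obs):
    # At the periphery the new-cluster score e^{-beta} should balance the
    # existing-cluster score N * e^{-d_max}, giving beta ≈ d_max - ln(N) (23).
    return d_max - math.log(n_obs)

beta = estimate_beta(d_max=40.0, n_obs=800)
alpha = math.exp(-beta)   # corresponding concentration parameter alpha = e^{-beta}
```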

(a) Euclidean Distance v/s time-stamp
(b) Number of pixels v/s time-stamp
Fig. 6: Critical analysis of the distance function to estimate β. It can be observed that the maximum distance to the periphery and the number of observations present in the clusters are linearly related. This justifies the model's use of distance as the measure for clustering.

Our experimental study also reveals that the majority of the objects follow similar trajectories or patterns; thus, traces of d_max over time remain similar, as depicted in Figs. 6 (a) and (b). It may be noted that the distance gradually increases as an object enters the scene and reaches a peak value, which is maintained (or decreases slowly) for some time. It then decreases suddenly as the object moves out of the scene. However, if a top-view recording can be obtained, it is expected that d_max will remain flat for a longer duration. It can also be observed that the number of pixels forming a cluster (object) varies similarly. Therefore, a correlation between d_max and the number of pixels can be established. The spikes in the curves are due to noisy observations getting added to the clusters, which can be removed using appropriate filtering.

III-C Variations in Sampling Orders

Since the observations may have spatial dependency, they are not exchangeable as assumed in the original Dirichlet Process [12]. Therefore, we have carried out a set of experiments with different sampling orders, as listed hereafter, to understand the effect of sample ordering.

  • Linear Sampling: Pixels are sampled column-wise, starting from the first to the last column; within the columns, we proceed row-wise.

  • Random Sampling: Pixels are sampled in a random order.

  • Spiral Sampling: Pixels are sampled in a spiral manner.

Our experiments reveal that, despite variations in sampling order, all of them work fairly well and produce good trajectories with a suitable β. All sampling orders produce split clusters at some point of time due to noise. In all sampling methods, there are issues when objects leave the scene or are in close proximity: some clusters may get merged while approaching the boundary. However, the cluster labels are maintained correctly until the objects exit the scene.
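The spiral order, for instance, can be generated as below; this particular inward-spiral construction is an illustrative implementation, as the paper does not specify one:

```python
def spiral_order(w, h):
    # Visit the pixel coordinates of a w x h grid in an inward spiral:
    # top row, right column, bottom row, left column, then shrink the bounds.
    left, right, top, bottom = 0, w - 1, 0, h - 1
    order = []
    while left <= right and top <= bottom:
        order += [(x, top) for x in range(left, right + 1)]
        order += [(right, y) for y in range(top + 1, bottom + 1)]
        if top < bottom:
            order += [(x, bottom) for x in range(right - 1, left - 1, -1)]
        if left < right:
            order += [(left, y) for y in range(bottom - 1, top, -1)]
        left, top, right, bottom = left + 1, top + 1, right - 1, bottom - 1
    return order

path = spiral_order(3, 3)   # visits all 9 pixels exactly once, ending at the center
```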

III-D Selection of the Distance Function

In Fig. 7, we compare clustering results using three different distance functions. The results reveal that the performance can vary, although some of the decay functions look quite similar, as depicted in Fig. 4. A smaller exponent ensures that the relation between the distance and the likelihood remains close to linear over the extent of an object. According to our observations, the best results are obtained using the exponential decay of the squared distance with a suitable β.

(a) Frame #164
(b)-(d) Clustering results with the three candidate likelihood functions
(e) Value of maximum distance (d_max) for object #2
(f) Number of pixels for object #2
Fig. 7: Clustering using different distance functions. The squared-distance likelihood produces stable clustering for a fixed β value compared to the other likelihood functions, even though the objects are split into more than one cluster due to noise. A careful observation reveals that two of the likelihood functions produce similar curves.

III-E Noise Removal

Optical flow corresponding to slowly moving objects may not be significant when videos are processed at a high frame rate. We have also observed that the proposed model can be used to track small objects such as humans by processing videos at a lower frame rate. However, results can be affected if the cluster representing an object does not remain live for at least three successive frames. Such clusters are referred to as noise. The clusters shown earlier were selected because they lasted for more than three frames. Alternatively, a moving average filter (MAF) can be applied on the optical flow for smoother features between successive frames. Fig. 8 depicts the noise level and its impact on clustering. The role played by noise in clustering can be interpreted from Fig. 9 and Fig. 10. Since MAF with temporal thresholding gives the best results, further experiments were conducted after noise removal using MAF.
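The moving average filter on per-pixel flow can be sketched as follows; the window size and the list-based flow representation are illustrative assumptions:

```python
def moving_average(flows, window=3):
    # Smooth one pixel's optical flow over the last `window` frames;
    # `flows` is a list of per-frame (u, v) values for that pixel.
    out = []
    for i in range(len(flows)):
        chunk = flows[max(0, i - window + 1): i + 1]
        u = sum(v[0] for v in chunk) / len(chunk)
        v = sum(v[1] for v in chunk) / len(chunk)
        out.append((u, v))
    return out

# A noisy one-frame flow spike gets spread out and damped.
smoothed = moving_average([(0.0, 0.0), (6.0, 0.0), (0.0, 3.0)])
```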

(a) Original Noisy Clusters
(b) Frame Skipping
(c) Moving Average Filtering
(d) Moving Average Filtering with Frame Skipping
Fig. 8: Effect of noises in clustering and removal of noises using frame skipping and moving average filter.
Fig. 9: Effect of noise removal on number of valid clusters.
Fig. 10: The effect of MAF on the number of clusters can be seen from the cumulative plot of the #clusters over time.

III-F Experiments on Various Public Datasets

We now present the experimental results obtained using various publicly available datasets. We have applied our proposed clustering in two different contexts, namely road traffic analysis and human motion analysis. In order to establish the relation of β with the clustering process, a set of experiments has been carried out on VIRAT dataset videos, and the results are presented in Fig. 12.

Fig. 11: Variation in the number of clusters with different β for a particular frame.
(a) Original frame
Fig. 12: Impact of β on the performance of the clustering. For smaller values of β, say 5 or 15, the number of clusters per object has been found to be high. As β increases, the number of clusters per object reduces. Optimum clustering has been obtained at an intermediate value of β. More than one object is grouped into a single cluster when β is higher.

It has been observed that with a smaller value of β, more clusters are usually formed for any single moving object. Fig. 12 depicts how the clusters per object vary as β varies. Our experiments reveal that more than one object is merged into a single cluster when a larger value of β is used. An example of this phenomenon is depicted in Fig. 12 (f). The graphical plot shown in Fig. 11 depicts how the number of clusters varies within a frame when β is varied.

(a) Original Frame
(b) Clusters
(c) Trace
(d) Clusters
(e) Trace
Fig. 13: The figures correspond to frame #45 of the video used for β variation. In spite of β being varied, the cluster labels are traced temporally, as can be seen in (d) and (e).

Another observation is that the actual objects are smaller than the clusters. This happens because the optical flow algorithm estimates the flow magnitude over neighborhood pixels; a more accurate optical flow would therefore yield a better approximation of the object. We have maintained a single cluster label throughout the life of each object, as depicted in Fig. 13. If we choose a value of α corresponding to the smallest object, larger objects may be divided into more than one cluster, and post-processing is then needed to merge them. In this way, our proposed model can serve as the basis for object motion analysis; for example, we can determine whether the object in motion is a small, medium, or large vehicle based on the number of clusters connected together.
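As a hedged illustration of the connected-component post-processing mentioned above, the sketch below labels 4-connected blobs in a binary motion mask and classifies a blob by pixel count; the function names and the size thresholds are our own assumptions, not values from the paper.

```python
from collections import deque

import numpy as np

def connected_components(mask):
    """Label 4-connected components of a binary motion mask via BFS.
    Returns (label image, number of components)."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    count = 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and labels[sy, sx] == 0:
                count += 1
                labels[sy, sx] = count
                q = deque([(sy, sx)])
                while q:  # flood-fill this component
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

def vehicle_class(size, small=20, large=60):
    # Hypothetical pixel-count thresholds for small/medium/large vehicles.
    if size < small:
        return "small"
    return "large" if size > large else "medium"

# Two separate motion blobs -> two merged clusters.
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])
labels, n_objects = connected_components(mask)
```

In practice the mask would come from the per-frame TUIC cluster patches, and the thresholds would be calibrated to the camera geometry.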

We have conducted tests on other public datasets, namely the MIT and UCF videos. Results reveal that the inference scheme is able to cluster the objects incrementally with good accuracy; such results are presented in Fig. 14 and Fig. 15, where the traces of moving objects are shown using different colors. Unlike the VIRAT videos, where the objects are fully tracked till the end as they do not occlude, some of the objects in the MIT and UCF videos are represented by more than one cluster. This happens because a fixed value of α was chosen to facilitate clustering of medium-sized vehicles; the corresponding optical flow reveals that the vectors overlap, and the vehicles are of different dimensions. A smaller value of α can therefore be used to facilitate clustering of smaller vehicles. Post-processing (connected components) has been used to generate tracks. These accurate tracks can further be used for motif analysis at traffic junctions without involving complex modeling as adopted in [10]. Our proposed clustering can also be used to cluster trajectories for high-level information retrieval; the clusters containing more tracks form the frequently occurring trajectory patterns (motifs).

(a) Frame marked with motion information
(b) Clusters formed on the frame
(c) Traces of the three objects
Fig. 14: Detection and tracking of three cars in MIT traffic dataset video.
(a) Frame marked with motion path
(b) Clusters formed on the frame
(c) Traces of the three objects
Fig. 15: Detection and tracking of moving objects in a UCF traffic dataset video. The snapshot is of the 50th frame, where two signals have started and the corresponding traffic flows are marked.

III-G Experiments on Crowd Datasets

We have also tested our model on human motion analysis. The proposed framework can track individuals when they are spatially apart in video frames. Results of such experiments on the MIT and UCF crowd datasets are presented in Fig. 16 and Fig. 17. As we consider spatial closeness in the form of a distance function, we are able to cluster individuals moving on the road with reasonably good accuracy. This indicates that the model can be employed on other objects as long as they do not change their shape significantly between successive frames. The model can be used to detect abnormal movements of pedestrians while crossing roads at zebra crossings, or when they come into the path of vehicular traffic. Fig. 16 shows two pedestrians walking on a designated lane; even though they are in close proximity toward the end of their paths, our model was successful in tracking them correctly.

(a) Frame marked with motion information
(b) Clusters formed on the frame
(c) Traces of the five clusters
Fig. 16: MIT pedestrian movement detection and tracking. It can be observed that cluster label 21 corresponds to a group of people crossing the zebra line. The trace for 52 is formed from 21 when the group splits into two clusters.
(a) Frame marked with motion trace of pedestrians
(b) Clusters formed on the frame
(c) Traces of the 4 pedestrians
Fig. 17: Detection and tracking of 4 pedestrians. It can be observed that the model was able to discriminate between objects 3 and 16 even in close proximity while they crossed each other, and the traces are maintained throughout the entire episode.

III-H Overall Comparison

Our model performs clustering and tracking together. The KLT tracker [35] is an algorithm for feature tracking, even though specific methods exist for object tracking. Our model can cluster a varying number of moving objects non-parametrically, unlike K-means [16], which needs the number of objects to be specified and applies a best-fit strategy. Mean-shift clustering [36], which is strictly non-parametric, is well suited for clustering; however, it takes multiple iterations to converge, so tracking using mean-shift needs a special association algorithm to correlate objects between frames.

However, our model does not produce a crisp boundary of the object; rather, it gives the area of object motion. Under normal circumstances, even a human observer watching a traffic scene is not always looking at car details such as the model or the number plate; the observer becomes interested in such details only when something unusual happens. Since our proposed model provides the patch of the object, finer details can be obtained with adequate post-processing. In terms of computational overhead, no clustering algorithm can run in less than O(n) time, since every pixel must be examined at least once; hence our algorithm is well suited to real-time applications. The model is strictly hierarchical in the sense that we build pixels into clusters and clusters into trajectories. These trajectories can further be used for finding the most frequently used patterns, as done by the complex VLTAMM model proposed in [10]; our model can find the frequently used paths incrementally without needing the whole video [10]. A summary of comparisons with state-of-the-art algorithms is presented in Table I, where T denotes the number of iterations specified.
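The pixels-to-clusters-to-trajectories hierarchy can be illustrated with a small sketch that accumulates per-frame cluster centroids into tracks, relying on TUIC's persistent labels; the data layout here is an assumption for illustration, not the paper's internal representation.

```python
def build_trajectories(frames):
    """Clusters -> trajectories: `frames` is one {label: (cx, cy)} dict
    per frame, where labels persist across frames (as TUIC maintains
    them), so a trajectory is simply each label's centroid sequence."""
    tracks = {}
    for per_frame in frames:
        for label, centroid in per_frame.items():
            tracks.setdefault(label, []).append(centroid)
    return tracks

frames = [{1: (0, 0), 2: (5, 5)},
          {1: (1, 0), 2: (5, 6)},
          {1: (2, 1)}]          # object 2 has left the scene
tracks = build_trajectories(frames)
```

Because labels are stable over an object's lifetime, trajectory construction is a single incremental pass; frequently occurring tracks can then be grouped into motifs.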

Table I: Comparison with state-of-the-art algorithms on the following criteria: unsupervised operation, non-parametric operation, variable-sized/shaped clustering, temporal clustering, embedded tracking, online classification, abnormality detection, incremental operation, and computational complexity. Mean-shift runs in O(Tn²) and K-means in O(Tkn), where T is the number of iterations specified.

Tracking performance has been compared against the KLT, mean-shift, TLD [18], and KCF [13] algorithms, and the results are shown in Fig. 18. It has been observed that mean-shift loses tracks when a bigger region-of-interest (ROI) is given for tracking; it even loses the track when the object reaches the frame boundary. KLT is able to track individual points accurately as it follows the feature points till the end. Though TLD and KCF are able to track the objects, they require ROI initialization for successful tracking. On the other hand, our algorithm automatically detects the ROI in each frame and obtains the temporal association. TLD tries to match objects even after they exit the scene, whereas our model tracks an object only as long as it remains within the scene and is not fully occluded.

(a) KLT
(b) Meanshift
(c) TLD
(d) KCF
(e) TUIC
Fig. 18: (a) The KLT tracker, being a feature tracker, is affected by occlusion: three feature points get stuck at the traffic light and a new point is wrongly added to the car. (b) As mean-shift tracking does not include feature tracking, a heuristic is needed to associate objects between consecutive frames; after a region in the video was chosen for tracking, it was found that tracking stops toward the boundary. (c) TLD finds matching regions even after the object exits the scene. (d) KCF tracks the object throughout its presence in the scene. (e) Our proposed TUIC model also tracks the object throughout its lifetime, as can be verified from the figure, without needing any initialization.

III-I Analysis of Computational Complexity

Our algorithm initializes all observations to the background cluster. During clustering, each observation is unassigned exactly once and checked against each of the alive clusters to find its probability of association with one of them. Thus, if a frame has n pixels with motion constituting k objects, the worst-case complexity of the clustering is O(kn). Under normal circumstances k is much smaller than n, hence the complexity can be approximated by O(n). It has been observed that, on videos with frame dimension 120 × 213 (approximately 25,000 observations or more), and for the numbers of clusters observed in our experiments, the algorithm takes well under 40 ms per frame when tested on a machine with an i5 processor and 4 GB of memory. Therefore, we can process all 25 frames within one second for a video recorded at 25 fps, and the proposed model can be used for real-time applications.
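The assignment sweep described above can be sketched as follows. This is a simplified, deterministic stand-in for the Gibbs-based probabilistic association (the function names and the hard α threshold are our assumptions), shown only to make the O(kn) cost visible: each of the n pixels is compared against each of the k alive cluster centers.

```python
import numpy as np

def assign_pixels(pixels, alpha):
    """One sweep of cluster assignment: every motion pixel (n total) is
    compared with every alive cluster center (k total), giving O(kn).
    A pixel farther than alpha from all centers seeds a new cluster."""
    centers, assignments = [], []
    for p in pixels:                                      # n iterations
        p = np.asarray(p, dtype=float)
        dists = [np.linalg.norm(p - c) for c in centers]  # k checks each
        if dists and min(dists) <= alpha:
            assignments.append(int(np.argmin(dists)))
        else:
            centers.append(p)                             # open a new cluster
            assignments.append(len(centers) - 1)
    return assignments, centers

assignments, centers = assign_pixels([(0, 0), (1, 0), (10, 10)], alpha=2.0)
```

With k bounded by the handful of objects visible per frame, the sweep is effectively linear in the number of motion pixels.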

III-J Limitations

Even though post-processing can join connected clusters to handle objects of varying sizes, the method without post-processing is primarily suited to tracking objects of similar size. The model can track objects under partial occlusion, but it is not designed to handle full occlusion; this, however, provides the basis for modeling the lifetime of a cluster. Another issue is that we use the Euclidean distance to measure how far an observation is from a cluster center, whereas many real-life objects such as vehicles follow an elliptical shape distribution. Taking the maximum Euclidean distance may therefore group pixels belonging to different vehicles moving in a similar direction into a single cluster. If we take a smaller α corresponding to the width of the smallest vehicle, the issue can be resolved, as the vehicles can then be separated using connected component analysis.
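One hedged way to address the elliptical-shape issue is to replace the Euclidean distance with a Mahalanobis distance whose covariance encodes a vehicle's elongation; this is our own sketch of a possible extension, not part of the proposed model.

```python
import numpy as np

def elliptical_distance(p, center, cov):
    """Mahalanobis distance; an anisotropic covariance models the
    elongated footprint of a vehicle better than a circular
    (Euclidean) neighborhood."""
    d = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Vehicle elongated along x: standard deviation 5 along x, 2 across.
cov = np.array([[25.0, 0.0], [0.0, 4.0]])
along = elliptical_distance((10, 0), (0, 0), cov)    # 10 / 5
across = elliptical_distance((0, 10), (0, 0), cov)   # 10 / 2
```

A pixel 10 units away along the vehicle's axis then scores as closer than one 10 units away across it, so lanes of parallel traffic are less likely to merge into one cluster.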

III-K Summary

It has been found that the algorithm, when applied to the vehicle datasets, was able to label the clusters as well as track them across frames with a single iteration of Gibbs sampling. In addition, experiments carried out on other objects (humans) show that the proposed model is able to cluster and track them successfully, thus forming a suitable model for traffic analysis. Performance tests of the clustering algorithm reveal that the model can be used for real-time object tracking. The method described for finding the concentration parameter gives a different perspective on the well-known Dirichlet Process [12].

IV Conclusion

This paper introduces an object model derived from DPMM with a new perspective using a distance measure. The model is extended temporally to consider both spatial and temporal aspects of moving objects. An incremental approach builds objects from pixels in a hierarchical way, without needing a prior or the number of clusters to be specified. The model has been validated on a wide range of video datasets. It is able to cluster pixels corresponding to objects and can thus be used to track objects as long as they remain in motion, even under partial occlusion. Our model can be applied to videos for building a real-time traffic analysis framework, as it learns the segments hierarchically and non-parametrically.

We foresee room for improvement at several levels. Firstly, our model assumes the videos to be shot from a top view, whereas most videos are not shot this way. Secondly, we cannot assume that objects have fixed dimensions in real-life scenarios; in such cases, the concentration parameter needs to be learned automatically for each object. Lastly, since we use optical flow for clustering, further processing may be needed for better object identification, as optical flow does not give crisp object boundaries. The method can in turn be extended to hierarchically find the most frequently traveled segments of a road.


  • [1] V. Bastani, L. Marcenaro, and C. Regazzoni. Unsupervised trajectory pattern classification using hierarchical dirichlet process mixture hidden markov model. In 2014 IEEE International Workshop on Machine Learning for Signal Processing, pages 1–6, Sept 2014.
  • [2] V. Bastani, L. Marcenaro, and C. Regazzoni. A particle filter based sequential trajectory classifier for behavior analysis in video surveillance. In 2015 IEEE International Conference on Image Processing, pages 3690–3694, Sept 2015.
  • [3] V. Bastani, L. Marcenaro, and C. S. Regazzoni. Online nonparametric bayesian activity mining and analysis from surveillance video. IEEE Transactions on Image Processing, 25(5):2089–2102, May 2016.
  • [4] D. M. Blei and P. I. Frazier. Distance dependent chinese restaurant processes. J. Mach. Learn. Res., 12:2461–2488, November 2011.
  • [5] D. M. Blei and M. I. Jordan. Variational inference for dirichlet process mixtures. Bayesian Anal., 1(1):121–143, 03 2006.
  • [6] F. Castaldo, F. A. N. Palmieri, V. Bastani, L. Marcenaro, and C. Regazzoni. Abnormal vessel behavior detection in port areas based on dynamic bayesian networks. In 2014 17th International Conference on Information Fusion, pages 1–7, July 2014.
  • [7] J. M. Chaquet, E. J. Carmona, and A. Fernández-Caballero. A survey of video datasets for human action and activity recognition. Comput. Vis. Image Underst., 117(6):633–659, June 2013.
  • [8] D. Comaniciu, V. Ramesh, and P. Meer. Real-time tracking of non-rigid objects using mean shift. In Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, volume 2, pages 142–149. IEEE, 2000.
  • [9] A. Dore and C. Regazzoni. Interaction analysis with a bayesian trajectory model. IEEE Intelligent Systems, 25(3):32–40, May 2010.
  • [10] R. Emonet, J. Varadarajan, and J. M. Odobez. Temporal analysis of motif mixtures using dirichlet processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(1):140–156, Jan 2014.
  • [11] G. Farnebäck. Two-Frame Motion Estimation Based on Polynomial Expansion, pages 363–370. Springer Berlin Heidelberg, Berlin, Heidelberg, 2003.
  • [12] T. S. Ferguson. A bayesian analysis of some nonparametric problems. Ann. Statist., 1(2):209–230, 03 1973.
  • [13] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):583–596, March 2015.
  • [14] W. Hu, X. Li, G. Tian, S. Maybank, and Z. Zhang. An incremental dpmm-based method for trajectory clustering, modeling, and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(5):1051–1065, May 2013.
  • [15] F. Jiang, Y. Wu, and A. K. Katsaggelos. A dynamic hierarchical clustering method for trajectory-based unusual video event detection. IEEE Transactions on Image Processing, 18(4):907–913, April 2009.
  • [16] X. Jin and J. Han. K-Means Clustering, pages 563–564. Springer US, Boston, MA, 2010.
  • [17] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Mach. Learn., 37(2):183–233, November 1999.
  • [18] Z. Kalal, K. Mikolajczyk, and J. Matas. Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7):1409–1422, July 2012.
  • [19] K. Kim, D. Lee, and I. Essa. Gaussian process regression flow for analysis of motion trajectories. In 2011 International Conference on Computer Vision, pages 1164–1171, Nov 2011.
  • [20] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning. The MIT Press, 2009.
  • [21] D. Kuettel, M. D. Breitenstein, L. Van Gool, and V. Ferrari. What’s going on? discovering spatio-temporal dependencies in dynamic scenes. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1951–1958, June 2010.
  • [22] R. Laxhammar and G. Falkman. Online learning and sequential anomaly detection in trajectories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(6):1158–1173, June 2014.
  • [23] L. Marcenaro, L. Marchesotti, and C.S. Regazzoni. Self-organizing shape description for tracking and classifying multiple interacting objects. Image and Vision Computing, 24(11):1179 – 1191, 2006.
  • [24] B. T. Morris and M. M. Trivedi. Trajectory learning for activity understanding: Unsupervised, multilevel, and long-term adaptive approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(11):2287–2301, Nov 2011.
  • [25] B. T. Morris and M. M. Trivedi. Understanding vehicular traffic behavior from video: a survey of unsupervised approaches. Journal of Electronic Imaging, 22(4):041113–041113, 2013.
  • [26] J. C. Nascimento, M. A. T. Figueiredo, and J. S. Marques. Trajectory classification using switched dynamical hidden markov models. IEEE Transactions on Image Processing, 19(5):1338–1348, May 2010.
  • [27] J. C. Nascimento, M. A. T. Figueiredo, and J. S. Marques. Activity recognition using a mixture of vector fields. IEEE Transactions on Image Processing, 22(5):1712–1725, May 2013.
  • [28] T. Nawaz, A. Cavallaro, and B. Rinner. Trajectory clustering for motion pattern extraction in aerial videos. In 2014 IEEE International Conference on Image Processing, pages 1016–1020, Oct 2014.
  • [29] R. M. Neal. Markov chain sampling methods for dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, 2000.
  • [30] W. Neiswanger, F. Wood, and E. P. Xing. The dependent dirichlet process mixture of objects for detection-free tracking and object modeling. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014, pages 660–668, 2014.
  • [31] C. Piciarelli, C. Micheloni, and G. L. Foresti. Trajectory-based anomalous event detection. IEEE Transactions on Circuits and Systems for Video Technology, 18(11):1544–1554, Nov 2008.
  • [32] C. E. Rasmussen. The infinite gaussian mixture model. In S. A. Solla, T. K. Leen, and K. Müller, editors, Advances in Neural Information Processing Systems 12, pages 554–560. MIT Press, 2000.
  • [33] I. Saleemi, L. Hartung, and M. Shah. Scene understanding by statistical modeling of motion patterns. In Computer Vision and Pattern Recognition, 2010 IEEE Conference on, pages 2069–2076, June 2010.
  • [34] X. Sun, N. H. C. Yung, and E. Y. Lam. Unsupervised tracking with the doubly stochastic dirichlet process mixture model. IEEE Transactions on Intelligent Transportation Systems, 17(9):2594–2599, Sept 2016.
  • [35] C. Tomasi and T. Kanade. Detection and tracking of point features. School of Computer Science, Carnegie Mellon Univ. Pittsburgh, 1991.
  • [36] C. Yizong. Mean shift, mode seeking, and clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):790–799, Aug 1995.