Understanding Crowd Flow Movements Using Active-Langevin Model

03/12/2020 ∙ by Shreetam Behera, et al. ∙ Indian Institute of Technology Bhubaneswar ∙ IIT Roorkee

Crowd flow describes the elementary group behavior of crowds. Understanding the dynamics behind these movements can help identify various abnormalities in crowds. However, developing a crowd model describing these flows is a challenging task. In this paper, a physics-based model is proposed to describe the movements in dense crowds. The crowd model is based on the active Langevin equation, where the motion points are assumed to behave like active colloidal particles in fluids. The model is further augmented with computer-vision techniques to segment both linear and non-linear motion flows in a dense crowd. The evaluation of the active Langevin equation-based crowd segmentation has been done on publicly available crowd videos and on our own videos. The proposed method is able to segment the flow with lower optical flow error and better accuracy in comparison to existing state-of-the-art methods.


I Introduction

In nature, collective behavior is one of the fundamental characteristics of different living organisms, from bacteria to humans. Every collective movement of living organisms exhibits typical behavioral patterns indicating specific activities. For example, birds flock together to drive out individuals of other species. Similarly, ants swarm together to drive larger pieces of food to their nests. In humans, collective motion can be seen in social events like rallies, parades, sporting events, fairs, and festivals. Understanding this kind of collective behavior can explain the cause of untoward incidents like stampedes or other incidents that often cause loss of life and property. Researchers across various domains have shown interest in understanding group behavior in humans [2], making this an interdisciplinary field of research. In this paper, it is shown how the amalgamation of physics-based methods and computer vision techniques can be used to segment motion flows in densely crowded videos. The motion flows provide important cues about crowd behavior that can be used for building systems to prevent crowd disasters.

I-A Related Work

Crowd motion flows can be very instrumental in describing group behavior in humans. Several studies have explained crowd motion behavior on the basis of physics- and biology-inspired models [1, 2]. However, the majority of such works primarily focus on physics-inspired models for crowd behavior analysis. These methods claim that physics-based models can capture most of the dynamics of crowd motion. In physics-inspired models, the dominant flow in the crowd can be considered analogous to a fluid. In another approach, a sparse human crowd can be considered similar to the motion of gases [2]. Therefore, it is believed that the theory of fluid dynamics, statistical thermodynamics, the concept of Brownian motion, and many other physical concepts can only be employed after certain relaxations of the root models. In [3], Hughes pointed out the similarities between physics and the actual crowd. He described the crowd as a component of fluids from the physics perspective. However, the concept is complex, as interactions between individuals are far more complicated than fluid particle interactions. Similarly, Vicsek et al. [4] have developed force-based models to describe collective motion in crowds. The collective motion is described in terms of velocity, orientation, correlation functions, and fluid dynamics. Helbing et al. [5] have developed a physical model that uses density and pressure quantities to mark turbulent flows and stop-and-go phenomena in crowds. In [6], the authors categorized crowd models as microscopic, mesoscopic, and macroscopic. The macroscopic model considers the crowd as a single unit comprising all individuals. In the microscopic model, a person is considered the fundamental unit of the crowd. The mesoscopic model is a combination of the macroscopic and microscopic models. However, the difference between the former two models is not crisp. In [7], Johansson et al. have discussed different crowd dynamics that lead to various crowd safety issues.
The authors in [8] have used a simple set of rules for interactions between neighboring particles in order to explain collective behavior, with a special focus on group intelligence in human crowds. In [9], the authors have proposed a Particle Swarm Optimization (PSO) model to simulate crowds. The authors in [10] presented a physical modeling framework that describes the intelligent, non-local, and anisotropic behavior of pedestrians. In [11], the Lagrangian (moving) coordinate system has been used for simulation and modeling of crowd flows. The authors in [12] have developed a multi-class continuum model based on the social force model to simulate bilinear crowd flows. The authors in [13] have developed a dynamic variant of the Vicsek model to study collective motion in panicked human crowds.

The existing literature [1, 2, 14] suggests that computer vision-based crowd behavior analysis is an active area of research. Automating the process using computer vision approaches results in better information fusion, thus leading to better accuracy and, most importantly, less error because of limited human interference [2, 14]. It has been reported that physics-based models can describe crowds well and that the coalescence of these models with computer vision techniques can solve several crowd-related problems.

Ali et al. in [15] have considered the human crowd as a fluid and developed a Lagrangian fluid dynamics framework for segmenting crowd flows in videos. The framework is also able to find the instabilities of the crowd flows. However, the model developed is too complex and cannot handle dynamic background issues. A social force model-based method has been proposed in [16] to detect crowd anomalies at the pixel and block levels. The authors in [17] have developed an algorithm to detect sink modes and flow regions with similar physical motion patterns in the crowd. Mehran et al. [18] have analyzed the motion flows by combining a social force graph technique and streaklines in the crowd. A scene-structure-based force model is proposed in [19] to detect individuals in high-density crowds by analyzing static, dynamic, and boundary floor fields in videos. In [20], the crowd is represented as fluid flow using a Lagrangian system, and streaklines are used in combination with potential functions for segmentation as well as abnormal behavior detection. The authors in [21] have analyzed crowd behavior on the basis of the bilinear interaction of the curl and divergence of the motion flows. A spatio-temporal driving force-based group segmentation scheme has been proposed in [22]. However, the model is not view-invariant, and its parameters need to be adjusted when the view changes. An adaptive human motion analysis and prediction method for understanding motion patterns has been proposed in [23]. The method explained in [24] identifies multiple crowd behaviors by performing stability analysis for dynamical systems, thus avoiding object detection, tracking, and training. However, their method fails to capture the randomness in the crowd. The authors in [25] have represented the crowd flow as a spatio-temporal viscous fluid field and proposed a method based on appearance and driven-factor perspectives to recognize crowd behavior at a large scale.
A density-independent hydrodynamics model (DIHM) has been proposed in [26] to detect coherent regions in crowded scenes, with the ability to handle crowd density varying over time. However, the method does not segment well at a finer level. In [27], a texture-based method is used to represent crowd motion flows and backgrounds with varying textures; the textures of the flow regions are used for people counting. The method proposed in [28] detects coherent regions in a crowded scene using a thermal diffusion model and time-series clustering. However, the method is not robust, as coherent regions are lost when the motion and non-motion regions merge over time. The authors in [29] have proposed a region-growing segmentation scheme based on the translational domain for segmenting crowd flows. However, the method fails if the translational flow related to the crowd regions is not local. The authors in [30] have developed a real-time agent-based model to understand crowd behavior on the basis of group dynamics and agent-based personality traits. However, the performance degrades when the number of agents increases. Zhou et al. [31] have represented collective crowd behavioral patterns as a mixture model of dynamic pedestrian-agents. Since this is a microscopic model, it fails to handle varying crowd density. In [32], the authors segment the collective regions in a crowd and measure the order of collectiveness of such regions. Fradi et al. [33] have developed local descriptors to obtain semantic information and interactive sparse crowd behaviors in crowded scenarios. However, it is not clear how the method handles dense crowds. The technique proposed in [34] is an agent-based model that monitors and predicts evacuation routes in emergency situations. Though the results are good, the technique is computationally intensive. In [35], motion trajectories have been analyzed using curl and divergence properties to identify different crowd movements. Lim et al. [36] have developed a method to detect salient regions and instabilities in crowds by considering the crowd as a dynamic system. In [37], the authors have proposed a Langevin-based force model to segment crowd flows in densely crowded scenarios. However, that method segments only linear crowd flow regions, and the force model is based on a passive system of particles. The authors in [38] and [39] have used Convolutional Neural Networks (CNNs) to perform large-scale crowd analysis. However, a large volume of labeled data is required for training, whose preparation is a cumbersome task. A sparse representation-based scheme is proposed in [40] for detecting anomalies in crowded scenes. Chaker et al. in [41] have modeled the crowd using the social force model and used an unsupervised approach for crowd anomaly detection. The authors in [42] have considered the crowd motion flow as a Conditional Random Field to segment crowd motion flows in videos. However, the method is not robust enough to handle intersecting flows. In [43], a dynamic mixture model of textures and the expectation-maximization (EM) algorithm are used to segment motion in traffic and crowd videos.

I-B Motivation and Contributions

As discussed in the previous section, the majority of the existing crowd analysis frameworks fail to describe the random movements in a crowd. The authors in [13] have described the crowd model in terms of a modified Vicsek model. However, the force components of the model are not clearly explained with respect to crowd dynamics. The authors in [37] have described how the randomness of the crowd can be considered similar to the Brownian motion of particles in a fluid using a Langevin model; however, that model captures only linear flows. It has been developed on the basis of a passive system of particles, where the particle's drift motion and confinement take place due to the random forces. The model does not explain the nature of the flows when the particles are part of an active system, i.e., when they have self-propelling energy to propagate in the fluid. Moreover, there is no literature available that describes crowd motion using the active Langevin model, even though it has been used in protein dynamics [44] and slurry dynamics [45]. This is a motivation for the present work: to develop an active Langevin force model for crowd analysis. In [45] and [46], the Vicsek model has been used in slurry dynamics to understand the transitions in confined dense active-particle systems. This has motivated us to develop the present method by combining the active Langevin force with other force components to obtain a model that can segment linear and non-linear motions in crowds. Along this line, the following research contributions have been made:

  • Formulation of a force model based on the active Langevin equation to understand flows in dense crowd scenarios by assuming motion points to be analogous to self-driving particles in colloidal solutions.

  • The above force model is then used with a vision-based algorithm to segment linear and non-linear flows in dense crowd videos.

The rest of the paper is organized as follows. The underlying principle of the Langevin equation is explained in Section II. In Section III, the proposed method is discussed. The results are presented in Section IV. Section V concludes the paper and discusses possible future directions in this research area.

II Preliminaries of the Langevin Equation

The random motion of a small (micron-sized) particle immersed in a fluid is known as Brownian motion. Early studies of this phenomenon were based on pollen grains, dust particles, and various other colloidal-sized objects [47, 48]. Later on, the theory of Brownian motion was applied successfully to other phenomena [44]. The fundamental equation based on Newtonian mechanics that describes Brownian motion successfully is known as the Langevin equation. This equation comprises frictional forces and random forces, which are related to each other by the fluctuation-dissipation theorem. Considering the motion of a spherical particle of mass $m$, radius $a$, and velocity $v$ in a fluid medium with viscosity $\eta$, (1) describes Newton's equation of motion for the particle:

$$m \frac{dv}{dt} = F(t), \qquad (1)$$

where $F(t)$ represents the overall instantaneous force experienced by the particle at time instant $t$.

The origin of this force is the interaction of the Brownian particle with the surrounding particles present in the medium. It is really hard to get an exact expression for $F(t)$. However, $F(t)$ is primarily dominated by the frictional force $-\gamma v$, which is proportional to the velocity of the Brownian particle. According to Stokes' law, the friction coefficient $\gamma$ can be computed as presented in (2):

$$\gamma = 6 \pi \eta a. \qquad (2)$$

By substituting $F(t)$ in (1) with the frictional force, Newton's equation of motion can now be expressed as represented in (3),

$$m \frac{dv}{dt} = -\gamma v, \qquad (3)$$

whose solution can be expressed as in (4). Accordingly, the velocity of the Brownian particle should decay to zero over longer time intervals. However, at thermal equilibrium at room temperature $T$, the mean-squared velocity of the Brownian particle is $\langle v^2 \rangle_{eq} = k_B T / m$. This indicates that (3) needs to be modified. The randomness of the trajectory of an individual particle indicates the existence of an additional random or fluctuating force $\xi(t)$. Thus, the equation of motion is modified and described as in (5):

$$v(t) = v(0)\, e^{-\gamma t / m}, \qquad (4)$$
$$m \frac{dv}{dt} = -\gamma v + \xi(t). \qquad (5)$$

The friction and the noise both arise due to the interaction of the Brownian particle with its environment. The noise can be considered a fluctuating force whose basic nature is given by its first and second moments, as represented in (6),

$$\langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\, \xi(t') \rangle = 2B\, \delta(t - t'), \qquad (6)$$

where $B$ is a measure of the strength of the fluctuating force. There is no correlation between two distinct impacts occurring in two distinct intervals, which is indicated by the delta function in time $\delta(t - t')$. With the above properties, (5) can be solved for the mean-squared velocity, as presented in (7),

$$\langle v^2(t) \rangle = v^2(0)\, e^{-2\gamma t / m} + \frac{B}{\gamma m}\left(1 - e^{-2\gamma t / m}\right). \qquad (7)$$

Over longer time intervals, the exponential terms in (7) drop out, and $\langle v^2(t) \rangle$ converges to $B/(\gamma m)$. This ensures its equilibrium value to be $k_B T / m$, such that

$$B = \gamma k_B T. \qquad (8)$$

The above equation is known as the fluctuation-dissipation theorem, which establishes the relationship between the strength of the random force ($B$) and the magnitude of the frictional force ($\gamma$). It represents the trade-off between the friction ($-\gamma v$), which tries to push the system to a completely "dead" state, and the fluctuating or noise force $\xi(t)$, which strives to keep the system "alive". This condition is necessary to maintain the thermal equilibrium state over longer time intervals. So far, the above discussion has been limited to a free, non-interacting Brownian particle. For confined Brownian particles (e.g., in a harmonic potential $U_H$) and interacting Brownian particles (e.g., with a Lennard-Jones potential $U_{LJ}$), (5) can be modified further as expressed in (9),

$$m \frac{dv}{dt} = -\gamma v + \xi(t) + F_c, \qquad (9)$$

where $F_c = -\nabla U$ is the conservative force that can be expressed in terms of the potentials mentioned above ($U_H$ or $U_{LJ}$).

The equation mentioned in (9) describes a passive Brownian system that should be in equilibrium over longer time intervals, as the component forces try to balance each other out, producing a unique stationary state given by the Maxwell-Boltzmann distribution. All such undirected motions are considered passive motion, because the Brownian particle does not participate actively in the motion. On the other hand, active motion of Brownian particles depends on the supply of energy (the term "active" implies that the individual particles or units move by acquiring energy from the environment). In biological systems, this kind of active, self-driven Brownian motion can be observed at different scales, ranging from cells [6] or simple micro-organisms to higher-scale organisms like birds or fish. Finally, human crowd movement can be described as active Brownian motion. Such motion applies to confined systems of particles (like human traffic flow in two dimensions) that perform collective motion under far-from-equilibrium conditions. Now, one major question arises: how does the known picture of passive motion in (9) need to be modified to incorporate the self or internal "activity" of the particles? Here, the main assumption is that there is an additional inflow of energy causing the active motion, which can be represented practically by negative dissipation in the direction of motion. Thus, it can be modeled by a negative friction coefficient, i.e., the friction $\gamma(x, v)$ is represented as a function of position and velocity. Usually, such systems are far from equilibrium. The negative friction force does not obey the fluctuation-dissipation relationship. The system is considered homogeneous in space, implying $\gamma(x, v) = \gamma(v)$. This is considered a frictional force applied to the component of the motion in the direction of the particle-connecting vectors, which helps the particles move together in the same direction. Thus, (9) is modified and represented as given in (10),

$$m \frac{dv}{dt} = -\gamma(v)\, v + \xi(t) + F, \qquad (10)$$

where $F = F_{con} + F_{int}$ is the combination of the confinement force $F_{con}$ and the particle interaction force $F_{int}$, respectively.
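The passive dynamics in (5) and the fluctuation-dissipation relation (8) can be checked with a short numerical experiment. The sketch below (illustrative parameters only, not taken from the paper) integrates the Langevin equation with the Euler-Maruyama scheme and compares the stationary mean-squared velocity against the predicted $B/(\gamma m)$:

```python
import numpy as np

def simulate_langevin(m=1.0, gamma=2.0, B=4.0, dt=1e-3, steps=200_000, seed=0):
    """Euler-Maruyama integration of the passive Langevin equation
    m dv/dt = -gamma * v + xi(t), with <xi(t) xi(t')> = 2 B delta(t - t').
    Illustrative parameters only; returns the sampled velocities."""
    rng = np.random.default_rng(seed)
    v = 0.0
    vs = np.empty(steps)
    for k in range(steps):
        # The integrated white noise over a step dt has variance 2*B*dt.
        noise = rng.normal(0.0, np.sqrt(2.0 * B * dt))
        v += (-gamma * v * dt + noise) / m
        vs[k] = v
    return vs

vs = simulate_langevin()
# Discard the transient; the stationary mean-squared velocity should
# approach B / (gamma * m) = 4 / (2 * 1) = 2, per relation (8).
msv = np.mean(vs[len(vs) // 2:] ** 2)
print(msv)
```

The sampled mean-squared velocity lands close to the theoretical value of 2.0, illustrating how the noise strength and friction jointly fix the equilibrium state.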

III Proposed Crowd Flow Segmentation

This section describes the proposed algorithm, which aims at segmenting flows in densely crowded scenarios.

III-A Keypoint Extraction

The proposed algorithm works on a temporal window of size, say, $w$ frames. Over the first two frames, the absolute difference is computed. This step retains all motion regions in the scene. Next, the Features from Accelerated Segment Test (FAST) detector [49] is applied to detect important keypoints in the crowded scene. Applying the FAST detector to the difference image has two advantages. Firstly, it retains only the moving points. Secondly, computations are performed only on the keypoints, reducing the computational time. These keypoints are then fed to the Lucas-Kanade optical flow process [50] for tracking in the subsequent frames within the window $w$. The magnitudes and orientations of the keypoints are calculated using (11) and (12), respectively,

$$M_i = \sqrt{u_i^2 + v_i^2}, \qquad (11)$$
$$\theta_i = \tan^{-1}\!\left(\frac{v_i}{u_i}\right), \qquad (12)$$

where $(u_i, v_i)$ is the optical flow vector of the $i^{th}$ keypoint. The detailed implementation is explained in Algorithm 1.

Input: $f_1, f_2$ = first two frames of a temporal window $w$.
Output: $K$ = set of keypoints, with $(M, \theta)$ as the features of each keypoint in $K$.

1: Compute the absolute difference image $D = |f_2 - f_1|$.
2: Compute FAST keypoints ($K$) on the image $D$.
3: Calculate $(u, v)$ for each keypoint using the Lucas-Kanade optical flow method.
4: Calculate $(M, \theta)$ using (11) and (12).
5: Compute $Q$ by quantizing $\theta$ into $\eta$ bins.
Algorithm 1 Keypoint extraction
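A minimal NumPy sketch of Algorithm 1 is given below. The actual pipeline uses the FAST detector [49] and Lucas-Kanade tracking [50] (available in OpenCV as cv2.FastFeatureDetector_create and cv2.calcOpticalFlowPyrLK); here a plain frame-difference threshold stands in for FAST, and the flow field is supplied directly, so the sketch only illustrates the data flow of steps 1-5:

```python
import numpy as np

def extract_keypoints(f1, f2, flow, n_bins=8, diff_thresh=10):
    """Simplified stand-in for Algorithm 1 (NumPy only)."""
    # Step 1: the absolute difference image keeps only motion regions.
    diff = np.abs(f2.astype(np.int32) - f1.astype(np.int32))
    # Step 2 stand-in: threshold instead of the FAST detector.
    ys, xs = np.nonzero(diff > diff_thresh)
    # Step 3 stand-in: read (u, v) from a supplied flow field.
    u, v = flow[ys, xs, 0], flow[ys, xs, 1]
    # Steps 4-5: magnitude (11), orientation (12), quantized into n_bins.
    mag = np.hypot(u, v)
    theta = np.mod(np.arctan2(v, u), 2 * np.pi)
    bins = np.minimum((theta / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    return np.column_stack([xs, ys]), mag, bins

# Toy example: a bright 2x2 blob moves one pixel to the right.
f1 = np.zeros((8, 8), np.uint8); f1[3:5, 2:4] = 200
f2 = np.zeros((8, 8), np.uint8); f2[3:5, 3:5] = 200
flow = np.zeros((8, 8, 2), np.float32); flow[..., 0] = 1.0  # u = 1, v = 0
pts, mag, bins = extract_keypoints(f1, f2, flow)
```

Only the pixels that changed between the two frames survive the difference step, which is exactly why computing features on the difference image keeps the processing restricted to moving regions.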

III-B Active Langevin Force Model

Initially, the formulation of the active Langevin model is discussed. The detected keypoints, as discussed in the previous section, are considered to be in motion and constitute the overall flow in a crowd. The motion keypoints can be considered self-propelling particles, as in active systems, moving with a certain drift energy. Similar to [37], the inertial force consists of three different forces, as given in (13):

$$m \frac{dv}{dt} = -\gamma_v v + F_A + F_R. \qquad (13)$$

The first term on the right-hand side of (13) represents the viscous force, similar to the friction force in (9). The second force ($F_A$) represents the combination of the interaction force resulting from the interactions among the particles and the drift force responsible for the self-driving of the particle, as mentioned in (15). The third term ($F_R$) is a random force resulting from random noise and disturbances. Now, the reformulated Langevin force equation for a particle in $D$ dimensions can be presented as in (14),

$$m \frac{dv_d}{dt} = -\gamma_v v_d + F_{A,d} + F_{R,d}, \qquad d = 1, \ldots, D, \qquad (14)$$

where $m$ is the mass of the particle, $v_d$ represents the velocity of the particle in direction $d$, $\gamma_v$ represents the viscosity coefficient, and $F_R$ represents the random force. The term $F_A$ further consists of the interaction force and the drift force, as represented in (15),

$$F_A = F_{int} + F_{drift}, \qquad (15)$$

where $F_{int}$ represents the force due to the interaction potential and $F_{drift}$ represents the drift force experienced by the particle.

III-C Flow Segmentation Method

The crowd movements are considered translational movements. Thus, the confinement force in (10) is assumed to be zero. Equation (14) is further solved with respect to the change in time ($\Delta t$) to compute the corresponding velocity and position of the particle in the next time frame:

$$v(t + \Delta t) = v(t) + \frac{\Delta t}{m}\left(-\gamma_v v(t) + F_A + F_R\right). \qquad (16)$$

The above equation in (16) represents the predicted velocity of the particle in the next time frame. Similarly, the position of the particle is computed as mentioned in (17),

$$x(t + \Delta t) = x(t) + v(t + \Delta t)\, \Delta t, \qquad (17)$$

where $\Delta t$ is the increment in time. In the above equations, the mass of each particle is set to unity for consistency, and since the operations are performed on consecutive frames, $\Delta t$ is taken as unity. The forces mentioned in (13) can be computed as follows:

  • Estimation of Viscous Force: The viscous force can be calculated as the product of the particle velocity and the viscosity estimated from the particle and its neighbors. The viscosity is calculated as mentioned below in (18),

    $$\gamma_v = \frac{1}{N} \sum_{j=1}^{N} d_{ij}, \qquad d_{ij} = \lVert x_i - x_j \rVert, \qquad (18)$$

    where $d_{ij}$ represents the distance between the two particles, $x_i$ represents the position of the $i^{th}$ particle, $x_j$ the position of the $j^{th}$ neighbor of the particle, and $N$ represents the total number of neighbors surrounding the considered particle. Thus, the viscous force is calculated as in (19),

    $$F_v = \gamma_v v_i. \qquad (19)$$

  • Estimation of Active Force: As mentioned in (15), this force has two parts, namely the particle interaction force and the drift force. The interaction force can be considered as the average interaction of the particle with its neighbors, as presented in (20),

    $$F_{int,i} = C\,(\bar{v}_i - v_i) = -C\, \tilde{v}_i, \qquad (20)$$

    where $C$ represents the interaction coefficient, known as the coordination coefficient, that arises due to the interactions of the particle and its neighbors as presented in (21), $\bar{v}_i$ is the average particle velocity, and $\tilde{v}_i$ is the relative velocity,

    $$C = \frac{1}{N} \sum_{j=1}^{N} w(d_{ij}), \qquad (21)$$

    where $\bar{v}_i$ is represented as in (22), and $\tilde{v}_i$ is expressed as in (23),

    $$\bar{v}_i = \frac{1}{N} \sum_{j:\, d_{ij} \le R} w(d_{ij})\, v_j, \qquad (22)$$
    $$\tilde{v}_i = v_i - \bar{v}_i. \qquad (23)$$

    In the aforementioned formulation, $R$ represents the radius up to which the potential influence is experienced, $N$ represents the number of neighbors around the particle, and $w(\cdot)$ is the Gaussian weight function described in (24),

    $$w(d_{ij}) = \exp\!\left(-\frac{d_{ij}^2}{2R^2}\right). \qquad (24)$$

    The drift force can now be calculated as given in (25),

    $$F_{drift,i} = \alpha\, v_i, \qquad (25)$$

    where $\alpha$ is the self-propelling coefficient. The sum of $F_{int}$ and $F_{drift}$ constitutes the active force $F_A$.

  • Estimation of Random Force: This force is taken as the force generated randomly at any point of time due to disturbances.
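The three force terms can be sketched as follows. Since the exact forms of (18)-(25) are only partially recoverable from the text, the formulas below are plausible assumptions (mean neighbor distance for the viscosity, Gaussian-weighted velocity averaging for the coordination force, drift along each particle's own velocity), not a verbatim implementation of the paper:

```python
import numpy as np

def active_forces(pos, vel, alpha=0.5, R=5.0):
    """Per-particle force terms under the assumed readings of (18)-(25)."""
    n = len(pos)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # pairwise distances
    neighbor = (d <= R) & ~np.eye(n, dtype=bool)                    # exclude self
    w = np.exp(-d**2 / (2 * R**2)) * neighbor                       # Gaussian weights (24)
    N = neighbor.sum(1).clip(min=1)
    gamma_v = (d * neighbor).sum(1) / N                             # mean neighbor distance (18)
    v_bar = (w @ vel) / N[:, None]                                  # weighted average velocity (22)
    C = w.sum(1) / N                                                # coordination coefficient (21)
    f_visc = -gamma_v[:, None] * vel                                # opposes motion, (19) in (13)
    f_int = C[:, None] * (v_bar - vel)                              # align with neighbors (20)
    f_drift = alpha * vel                                           # self-propulsion (25)
    return f_visc, f_int, f_drift

def step(pos, vel, dt=1.0):
    """One explicit update of (16)-(17) with unit mass and no random force."""
    f_visc, f_int, f_drift = active_forces(pos, vel)
    vel = vel + dt * (f_visc + f_int + f_drift)
    return pos + vel * dt, vel

# Two neighboring particles: one moving along +x, one at rest.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[1.0, 0.0], [0.0, 0.0]])
new_pos, new_vel = step(pos, vel)
```

After a single update step, the resting particle acquires a positive $x$-velocity through the coordination term, which is the qualitative behavior the active force is meant to produce: nearby particles are dragged into a common direction of motion.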

The active Langevin model is applied to each and every keypoint in order to obtain its velocity and position along the x- and y-axes, respectively, for the next frame in the window $w$. The keypoints obtained for the current frame are then used to compute the keypoints in the next frame within the window, and the process continues until the last frame of the window. The flow segmentation process is explained in Algorithm 2 and illustrated in Figure 1, respectively.

Figure 1: Block diagram representing the proposed crowd flow segmentation using the active Langevin model. Inside the red dotted box, the keypoint extraction scheme is shown. For a temporal window $w$, the first two frames are used for keypoint extraction. These keypoints are then used to segment the crowd motion flows in the remaining frames of the window using the active Langevin model.

Input: $V$ = video sequence with $n$ frames, $w$ = size of the temporal window, $m = n/w$, $\eta$ = number of quantization bins.
Output: $S$ = motion flow segmented maps, where $S = \{S_1, \ldots, S_m\}$.

1: Initialize $S = \emptyset$.
2: for $i$ = 1 to $m$ do
3:       Select the window $W_i$ of $w$ frames $\{f_1, \ldots, f_w\}$.
4:       Extract keypoints using Algorithm 1.
5:       for $j$ = 3 to $w$ do
6:             Using (16) and (17), estimate the new velocities and positions of the particles present in frame $f_j$.
Algorithm 2 Crowd flow segmentation using the active Langevin model
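The windowed control flow of Algorithm 2 can be expressed compactly. In the sketch below, `extract` and `propagate` are caller-supplied stand-ins for Algorithm 1 and for the per-frame update of (16)-(17):

```python
def segment_video(frames, w=4, extract=None, propagate=None):
    """Windowed driver mirroring Algorithm 2: split the video into windows
    of w frames, extract keypoints once per window from its first two
    frames, then advance the state through frames 3..w with the force
    model. `extract(f1, f2)` and `propagate(state, frame)` are stand-ins."""
    maps = []
    for start in range(0, len(frames) - w + 1, w):
        window = frames[start:start + w]
        state = extract(window[0], window[1])   # Algorithm 1 on frames 1-2
        for j in range(2, w):                   # frames 3..w of the window
            state = propagate(state, window[j])  # update via (16)-(17)
        maps.append(state)
    return maps

# Dummy stand-ins: 8 "frames", windows of 4; each window's final state
# counts how many propagation steps it received (w - 2 = 2 per window).
maps = segment_video(list(range(8)), w=4,
                     extract=lambda a, b: 0,
                     propagate=lambda s, f: s + 1)
```

This structure makes the computational saving explicit: the expensive extraction step runs once per window, while the cheaper force-model propagation covers the remaining frames.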

IV Results and Discussions

This section discusses the datasets used for evaluation of the proposed scheme, followed by experiments related to the parameter estimation of the force model.

IV-A Datasets

In this work, two datasets have been used for the evaluation of the proposed method. One of the datasets is publicly available [15], and the second is our own dataset containing video recordings of the Rath Yatra (Cart Festival) that takes place every year in Puri, in the Indian state of Odisha. From these datasets, a few videos with varying crowd densities have been selected for experimentation. The details of the videos from the two datasets are presented in Table I.

Dataset          | Type of Motion                               | Significant Crowd Behavior
Marathon-I [15]  | Linear, unidirectional crowd movements       | People running in one direction
Marathon-III [15]| Non-linear, multidirectional crowd movements | People running in an elliptical path
Fair [15]        | Bilinear, mixing crowd movements             | People moving in two different directions
Rath Yatra-I     | Linear, mixing crowd movements               | People pulling the cart in one direction
Table I: Videos from the two datasets used for evaluation of the proposed method

IV-B Parameter Estimation

The equations (16) and (17) described earlier have parameters such as the viscosity coefficient ($\gamma_v$), the coordination coefficient ($C$), and the self-propelling coefficient ($\alpha$). The viscosity coefficient and the coordination coefficient are calculated during the segmentation process itself. However, the self-propelling coefficient needs to be given as an input. Therefore, an experiment has been conducted to find the optimal value of $\alpha$ with respect to the average optical flow error generated during the process. The experiment has been carried out on various videos with different movements. Videos with linear, non-linear, and crowd-mixing movements have been considered. For each value of $\alpha$, the normalized average optical flow error per frame is obtained. For each video, the $\alpha$ with minimum error is chosen. In order to obtain a uniform value of $\alpha$ for all videos, the average of all chosen minima is considered as the final value of $\alpha$. The graphs associated with this experiment are illustrated in Figure 2, and the minimum values of $\alpha$ are presented in Table II. The average value has been found to be 0.5, which has been kept fixed for all other videos.
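The selection procedure described above amounts to a small grid search. In the sketch below, `avg_flow_error` is a hypothetical callable standing in for a full segmentation run, and the toy error surfaces are shaped to reproduce the per-video minima reported in Table II:

```python
def select_alpha(avg_flow_error, videos, alphas):
    """For each video, pick the alpha with the minimum normalized average
    optical flow error, then average the per-video minimizers to obtain a
    single uniform alpha. `avg_flow_error(video, alpha)` is a hypothetical
    stand-in for running the segmenter and measuring its flow error."""
    best = {v: min(alphas, key=lambda a: avg_flow_error(v, a)) for v in videos}
    final = sum(best.values()) / len(best)
    return best, final

# Toy quadratic error surfaces with minima at the values from Table II.
toy = {"Fair": 0.6, "Marathon-I": 0.4, "Marathon-III": 0.5}
err = lambda v, a: (a - toy[v]) ** 2
alphas = [round(0.1 * k, 1) for k in range(1, 11)]  # 0.1, 0.2, ..., 1.0
best, final = select_alpha(err, list(toy), alphas)
```

With these toy surfaces, the per-video minimizers are 0.6, 0.4, and 0.5, and the averaged final value is 0.5, matching the averaging step used in the paper.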

Video        | $\alpha$
Fair         | 0.6
Marathon-I   | 0.4
Marathon-III | 0.5
Average      | 0.5
Table II: Minimum values of $\alpha$ for different videos obtained from the graphs displayed in Figure 2
Figure 2: Graph showing how the average optical flow error varies with respect to the self-propelling coefficient ($\alpha$).

IV-C Segmentation Results

In this section, the segmentation results obtained using the proposed flow segmentation method, along with comparisons against recent physics-based models for crowd segmentation, are discussed. The obtained segmentation maps are compared with the ground truths, which have been prepared by manually marking the significant flow regions in the frames of each video. The comparisons are done with the recent physics-based models for collective motion in crowds [13], [46], along with the hydrodynamics-based model proposed in [26]. The Intersection over Union (IoU), also known as the Jaccard coefficient, has been used for evaluation of the segmented maps with respect to the ground truth maps, as represented in (26),

$$IoU(S, G) = \frac{|S \cap G|}{|S \cup G|}, \qquad (26)$$

where $S$ is the segmented image and $G$ is the ground truth image.
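Equation (26) on binary masks is straightforward to compute; a small sketch:

```python
import numpy as np

def iou(seg, gt):
    """Intersection over Union (Jaccard coefficient) of two binary masks,
    as in (26)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    union = np.logical_or(seg, gt).sum()
    return np.logical_and(seg, gt).sum() / union if union else 1.0

seg = np.zeros((4, 4), int); seg[:2, :] = 1   # 8 predicted pixels (rows 0-1)
gt  = np.zeros((4, 4), int); gt[1:3, :] = 1   # 8 ground-truth pixels (rows 1-2)
print(iou(seg, gt))  # 4 overlapping / 12 in the union = 0.333...
```

Because the measure penalizes both missed flow regions and false positives, it captures the over-segmentation and false-positive issues discussed for the baselines below.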

In the Marathon-I video, all the people are running in one direction, indicating that it comprises a unidirectional flow. The force models described in [13] and [46] have been implemented for comparison. It has been observed that the existing models fail to accurately compute the flow vectors in terms of position and velocity. This can be observed in Figures 3(g)-3(i) and 3(j)-3(l). On the contrary, the proposed segmentation scheme is able to compute the positions and velocities of the motion particles, and thus segments the flow with better accuracy. The proposed method also outperforms the method proposed in [26], where a hydrodynamics-based force model is used for segmentation. In the outputs generated by the hydrodynamics-based model in Figures 3(m)-3(o), there are significant numbers of false positives, leading to the poor accuracy seen in the graphs shown in Figure 7(a).

Figure 3: (a)-(c) Original frames (16-18) of the Marathon-I video, (d)-(f) ground truth frames, (g)-(i) segmented outputs obtained using the method proposed in [46], (j)-(l) segmentation outputs of method [13], (m)-(o) segmentation outputs of method [26], and (p)-(r) segmentation outputs of the proposed method, respectively. (Best viewed in color)

The Marathon-III video has elliptical motion. However, the flow comprises four directions, indicated by different colors, as seen in the ground truth images in Figures 4(d)-4(f). The proposed method is able to segment these multi-directional flows with an accuracy better than that of the force models described in [46] and [13]. The hydrodynamics model [26] segments the multi-directional flows; however, some over-segmentation can be observed in Figures 4(m)-4(o).

Figure 4: (a)-(c) Original frames (61-63) of the Marathon-III video, (d)-(f) ground truth frames, (g)-(i) segmented outputs obtained using the method proposed in [46], (j)-(l) segmentation outputs of method [13], (m)-(o) segmentation outputs of method [26], and (p)-(r) segmentation outputs of the proposed method, respectively. (Best viewed in color)

The Fair video is a crowd mixing video with two dominant flows moving in opposite directions. The proposed method is able to segment these flows. However, the force models proposed in [13] and [46] segment them as a unidirectional flow. The hydrodynamics-based model segments these flows with more false-positives.

Figure 5: (a)-(c) Original frames (41-43) of the Fair video, (d)-(f) ground truth frames, (g)-(i) segmented outputs obtained using the method proposed in [46], (j)-(l) segmentation outputs of method [13], (m)-(o) segmentation outputs of method [26], and (p)-(r) segmentation outputs of the proposed method, respectively. (Best viewed in color)

In the Rath Yatra video, both crowd mixing and the cart-pulling event can be observed. Cart pulling is the dominant flow movement in the video. The proposed method is able to segment this dominant flow (in red) as well as the other flows (in blue). The force models in [13] and [46] fail to segment these flows; moreover, their estimated directions are not consistent. The hydrodynamics-based model [26] fails to segment the dominant flows properly. The average frame accuracy for this video using the proposed method has been found to be 78.48%.

Figure 6: (a)-(c) Original recorded frames (31-33) of the Rath Yatra-I video, (d)-(f) ground-truth frames, (g)-(i) segmentation outputs of the method in [46], (j)-(l) segmentation outputs of the method in [13], (m)-(o) segmentation outputs of the method in [26], and (p)-(r) segmentation outputs of the proposed method, respectively. (Best viewed in color)

The average accuracies of all methods are summarized in Table III. The frame-wise accuracy plots of all methods for the various videos are shown in Figure 7.

Dataset        Proposed Method   [46]     [13]     [26]
Marathon-I     82.89             66.59    69.03    67.71
Marathon-III   93.11             85.54    86.61    90.37
Fair           90.56             86.03    86.23    74.46
Rath Yatra     78.48             68.58    69.42    77.12
Table III: Comparison of the proposed method with state-of-the-art methods in terms of accuracy (in %)
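As a rough illustration of how such frame-wise accuracies can be computed, the sketch below scores each predicted segmentation label map against its ground truth as the fraction of matching pixels and averages over frames. The metric definition and the toy label maps are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def frame_accuracy(pred, gt):
    """Fraction of pixels whose segment label matches the ground truth."""
    return float(np.mean(np.asarray(pred) == np.asarray(gt)))

def average_accuracy(pred_frames, gt_frames):
    """Mean frame-wise accuracy over a video, in percent (as in Table III)."""
    scores = [frame_accuracy(p, g) for p, g in zip(pred_frames, gt_frames)]
    return 100.0 * float(np.mean(scores))

# Toy example: two 4x4 label maps per "video"
gt = [np.ones((4, 4), dtype=int), np.zeros((4, 4), dtype=int)]
pred = [np.ones((4, 4), dtype=int), np.zeros((4, 4), dtype=int)]
pred[1][0, 0] = 1  # one mislabelled pixel in the second frame
print(average_accuracy(pred, gt))  # 96.875
```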
Figure 7: (a)-(d) Frame-wise accuracy plots of the various videos for the proposed method, [46], [13], and [26], respectively. (Best viewed in color)

The proposed force model and the force models in [13] and [46] have also been compared against an optical-flow baseline. The positions and velocities of the particles predicted by each model are compared with the positions and velocities obtained from optical flow computed between consecutive frames, by computing the average optical-flow error between them and plotting it for every frame of the video. The errors of all models on all videos are plotted in Figure 8. It may be observed that the average optical-flow error per frame of the proposed method is lower than that of the other physics-based models.
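The comparison above can be sketched as an average endpoint error between the model-predicted particle velocities and the reference optical-flow velocities. The paper does not spell out the exact error definition, so the per-particle Euclidean form below is an assumption.

```python
import numpy as np

def avg_flow_error(pred_vel, ref_vel):
    """Average Euclidean (endpoint) error between model-predicted particle
    velocities and reference optical-flow velocities, for one frame."""
    diff = np.asarray(pred_vel) - np.asarray(ref_vel)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# Toy example: 3 particles, one (vx, vy) vector each
ref = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pred = ref + np.array([[0.3, 0.4], [0.0, 0.0], [0.0, 0.0]])
print(avg_flow_error(pred, ref))  # 0.5 / 3 ~= 0.1667
```

Averaging this quantity over all frames of a video gives one bar of the per-video comparison.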

Figure 8: Bar graph of the frame-wise average optical-flow error of the various videos for the proposed method, [46], and [13], respectively. (Best viewed in color)

V Conclusion and Future Scope

In this work, an approach based on the active Langevin equation has been used to understand motion flows in crowd videos. The active Langevin equation models the motion particles in a crowd as colloidal particles moving in a fluid. The segmentation scheme based on this model segments both linear and non-linear (curved) motion with notable accuracy. The windowing scheme significantly reduces the number of computations: optical flow is calculated only for two consecutive frames of each window, and for the remaining frames the proposed force model computes the flow positions and velocities to obtain the temporal segmentation. In the future, the proposed model can be augmented with machine-learning approaches for identifying and predicting abnormal regions in crowded scenes.
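The windowing idea can be sketched as follows, assuming a simple Euler discretization of an active Langevin-style update; the parameters gamma, v0, alpha, and sigma, along with their values, are illustrative stand-ins and not the paper's: optical flow seeds the particle states once per window, and the force model propagates them over the remaining frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(pos, vel, dt=1.0, gamma=0.1, v0=1.0, alpha=0.5, sigma=0.05):
    """One Euler step of an active Langevin-style update: viscous damping,
    a self-propulsion term relaxing the speed toward v0, and small noise.
    All parameter values here are illustrative, not the paper's."""
    speed = np.linalg.norm(vel, axis=-1, keepdims=True)
    heading = vel / np.maximum(speed, 1e-9)           # unit direction of motion
    accel = -gamma * vel + alpha * (v0 - speed) * heading
    vel = vel + dt * accel + sigma * np.sqrt(dt) * rng.standard_normal(vel.shape)
    pos = pos + dt * vel
    return pos, vel

# Seed particle states from a single optical-flow computation (e.g. between
# the first two frames of a window), then propagate without further flow calls.
pos = np.array([[0.0, 0.0], [5.0, 5.0]])
vel = np.array([[1.0, 0.0], [0.0, 1.0]])
for _ in range(5):  # remaining frames of the window
    pos, vel = langevin_step(pos, vel)
print(pos.shape, vel.shape)
```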

Acknowledgment

The authors are grateful to the Science and Engineering Research Board (SERB), Department of Science and Technology, Government of India, for funding this research work through grant YSS/2014/000046.

References

  • [1] V. J. Kok, M. K. Lim, and C. S. Chan, “Crowd behavior analysis: A review where physics meets biology,” Neurocomputing, vol. 177, pp. 342–362, 2016.
  • [2] X. Zhang, Q. Yu, and H. Yu, “Physics inspired methods for crowd video surveillance and analysis: a survey,” IEEE Access, 2018.
  • [3] R. L. Hughes, “The flow of human crowds,” Annual review of fluid mechanics, vol. 35, no. 1, pp. 169–182, 2003.
  • [4] T. Vicsek and A. Zafeiris, “Collective motion,” Physics reports, vol. 517, no. 3-4, pp. 71–140, 2012.
  • [5] D. Helbing, A. Johansson, and H. Z. Al-Abideen, “Dynamics of crowd disasters: An empirical study,” Physical review E, vol. 75, no. 4, p. 046109, 2007.
  • [6] V. Alexiadis, K. Jeannotte, and A. Chandra, “Traffic analysis toolbox volume i: Traffic analysis tools primer,” Tech. Rep., 2004.
  • [7] A. Johansson, D. Helbing, H. Z. Al-Abideen, and S. Al-Bosta, “From crowd dynamics to crowd safety: a video-based analysis,” Advances in Complex Systems, vol. 11, no. 04, pp. 497–527, 2008.
  • [8] L. Fisher, The perfect swarm: The science of complexity in everyday life.   Basic Books, 2009.
  • [9] Y.-y. Lin and Y.-p. Chen, “Crowd control with swarm intelligence,” in 2007 IEEE Congress on Evolutionary Computation.   IEEE, 2007, pp. 3321–3328.
  • [10] L. Bruno, A. Tosin, P. Tricerri, and F. Venuti, “Non-local first-order modelling of crowd dynamics: A multidimensional framework with applications,” Applied Mathematical Modelling, vol. 35, no. 1, pp. 426–445, 2011.
  • [11] F. van Wageningen-Kessels, L. Leclercq, W. Daamen, and S. P. Hoogendoorn, “The lagrangian coordinate system and what it means for two-dimensional crowd flow models,” Physica A: Statistical Mechanics and its Applications, vol. 443, pp. 272–285, 2016.
  • [12] S. P. Hoogendoorn, F. L. van Wageningen-Kessels, W. Daamen, and D. C. Duives, “Continuum modelling of pedestrian flows: From microscopic principles to self-organised macroscopic phenomena,” Physica A: Statistical Mechanics and its Applications, vol. 416, pp. 684–694, 2014.
  • [13] A. Kulkarni, S. P. Thampi, and M. V. Panchagnula, “Sparse game changers restore collective motion in panicked human crowds,” Physical review letters, vol. 122, no. 4, p. 048002, 2019.
  • [14] J. C. S. J. Junior, S. R. Musse, and C. R. Jung, “Crowd analysis using computer vision techniques,” IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 66–77, 2010.
  • [15] S. Ali and M. Shah, “A lagrangian particle dynamics approach for crowd flow segmentation and stability analysis,” in IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–6.
  • [16] Q.-G. Ji, R. Chi, and Z.-M. Lu, “Anomaly detection and localisation in the crowd scenes using a block-based social force model,” IET Image Processing, vol. 12, no. 1, pp. 133–137, 2017.
  • [17] S. D. Khan, S. Bandini, S. Basalamah, and G. Vizzari, “Analyzing crowd behavior in naturalistic conditions: Identifying sources and sinks and characterizing main flows,” Neurocomputing, vol. 177, pp. 543–563, 2016.
  • [18] R. Mehran, B. E. Moore, and M. Shah, “A streakline representation of flow in crowded scenes,” in European Conference on Computer Vision.   Springer, 2010, pp. 439–452.
  • [19] S. Ali and M. Shah, “Floor fields for tracking in high density crowd scenes,” in European Conference on Computer Vision.   Springer, 2008, pp. 1–14.
  • [20] X. Wang, X. Yang, X. He, Q. Teng, and M. Gao, “A high accuracy flow segmentation method in crowded scenes based on streakline,” Optik-International Journal for Light and Electron Optics, vol. 125, no. 3, pp. 924–929, 2014.
  • [21] S. Wu, H. Su, H. Yang, S. Zheng, Y. Fan, and Q. Zhou, “Bilinear dynamics for crowd video analysis,” Journal of Visual Communication and Image Representation, vol. 48, pp. 461–470, 2017.
  • [22] R. Li and R. Chellappa, “Group motion segmentation using a spatio-temporal driving force model,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on.   IEEE, 2010, pp. 2038–2045.
  • [23] Z. Chen, L. Wang, and N. H. Yung, “Adaptive human motion analysis and prediction,” Pattern Recognition, vol. 44, no. 12, pp. 2902–2914, 2011.
  • [24] B. Solmaz, B. E. Moore, and M. Shah, “Identifying behaviors in crowd scenes using stability analysis for dynamical systems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 2064–2070, 2012.
  • [25] H. Su, H. Yang, S. Zheng, Y. Fan, and S. Wei, “The large-scale crowd behavior perception based on spatio-temporal viscous fluid field,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 10, pp. 1575–1589, 2013.
  • [26] H. Ullah, M. Uzair, M. Ullah, A. Khan, A. Ahmad, and W. Khan, “Density independent hydrodynamics model for crowd coherency detection,” Neurocomputing, vol. 242, pp. 28–39, 2017.
  • [27] X. Zhang, H. He, S. Cao, and H. Liu, “Flow field texture representation-based motion segmentation for crowd counting,” Machine Vision and Applications, vol. 26, no. 7-8, pp. 871–883, 2015.
  • [28] W. Lin, Y. Mi, W. Wang, J. Wu, J. Wang, and T. Mei, “A diffusion and clustering-based approach for finding coherent motions and understanding crowd scenes,” IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1674–1687, 2016.
  • [29] S. Wu, Z. Yu, and H.-S. Wong, “Crowd flow segmentation using a novel region growing scheme,” in Pacific-Rim Conference on Multimedia.   Springer, 2009, pp. 898–907.
  • [30] V. Kountouriotis, S. C. Thomopoulos, and Y. Papelis, “An agent-based crowd behaviour model for real time crowd behaviour simulation,” Pattern Recognition Letters, vol. 44, pp. 30–38, 2014.
  • [31] B. Zhou, X. Tang, and X. Wang, “Learning collective crowd behaviors with dynamic pedestrian-agents,” International Journal of Computer Vision, vol. 111, no. 1, pp. 50–68, 2015.
  • [32] B. Zhou, X. Tang, H. Zhang, and X. Wang, “Measuring crowd collectiveness,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1586–1599, Aug 2014.
  • [33] H. Fradi, B. Luvison, and Q. C. Pham, “Crowd behavior analysis using local mid-level visual descriptors,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 3, pp. 589–602, 2017.
  • [34] B. Basak and S. Gupta, “Developing an agent-based model for pilgrim evacuation using visual intelligence: A case study of ratha yatra at puri,” Computers, Environment and Urban Systems, vol. 64, pp. 118–131, 2017.
  • [35] S. Wu, H. Yang, S. Zheng, H. Su, Y. Fan, and M.-H. Yang, “Crowd behavior analysis via curl and divergence of motion trajectories,” International Journal of Computer Vision, vol. 123, no. 3, pp. 499–519, 2017.
  • [36] M. K. Lim, C. S. Chan, D. Monekosso, and P. Remagnino, “Detection of salient regions in crowded scenes,” Electronics Letters, vol. 50, no. 5, pp. 363–365, 2014.
  • [37] S. Behera, D. Prosad Dogra, M. K. Bandyopadhyay, and P. Pratim Roy, “Estimation of linear motion in dense crowd videos using langevin model,” arXiv preprint arXiv:1904.07233, 2019.
  • [38] L. Cao, X. Zhang, W. Ren, and K. Huang, “Large scale crowd analysis based on convolutional neural network,” Pattern Recognition, vol. 48, no. 10, pp. 3016–3024, 2015.
  • [39] S. Zhou, W. Shen, D. Zeng, M. Fang, Y. Wei, and Z. Zhang, “Spatial–temporal convolutional neural networks for anomaly detection and localization in crowded scenes,” Signal Processing: Image Communication, vol. 47, pp. 358–368, 2016.
  • [40] Y. Yuan, J. Wan, and Q. Wang, “Congested scene classification via efficient unsupervised feature learning and density estimation,” Pattern Recognition, vol. 56, pp. 159–169, 2016.
  • [41] R. Chaker, Z. Al Aghbari, and I. N. Junejo, “Social network model for crowd anomaly detection and localization,” Pattern Recognition, vol. 61, pp. 266–281, 2017.
  • [42] S. S. Kruthiventi and R. V. Babu, “Crowd flow segmentation in compressed domain using crf,” in International Conference on Image Processing (ICIP).   IEEE, 2015, pp. 3417–3421.
  • [43] A. B. Chan and N. Vasconcelos, “Modeling, clustering, and segmenting video with mixtures of dynamic textures,” IEEE transactions on pattern analysis and machine intelligence, vol. 30, no. 5, pp. 909–926, 2008.
  • [44] F. Schweitzer, Brownian agents and active particles: collective dynamics in the natural and social sciences.   Springer, 2007.
  • [45] P. D. Bonkinpillewar, A. Kulkarni, M. V. Panchagnula, and S. Vedantam, “A novel coupled fluid–particle dem for simulating dense granular slurry dynamics,” Granular Matter, vol. 17, no. 4, pp. 511–521, Aug 2015. [Online]. Available: https://doi.org/10.1007/s10035-015-0572-2
  • [46] P. S. Mahapatra, A. Kulkarni, S. Mathew, M. V. Panchagnula, and S. Vedantam, “Transitions between multiple dynamical states in a confined dense active-particle system,” Physical Review E, vol. 95, no. 6, p. 062610, 2017.
  • [47] W. T. Coffey and Y. P. Kalmykov, The Langevin equation: with applications to stochastic problems in physics, chemistry and electrical engineering.   World Scientific, 2004.
  • [48] P. Langevin, “Sur la théorie du mouvement brownien,” CR Acad. Sci. Paris, vol. 146, pp. 530–533, 1908.
  • [49] E. Rosten, R. Porter, and T. Drummond, “Faster and better: A machine learning approach to corner detection,” IEEE transactions on pattern analysis and machine intelligence, vol. 32, no. 1, pp. 105–119, 2010.
  • [50] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), 1981, pp. 674–679.