Automated Lane Detection in Crowds using Proximity Graphs

07/06/2017 ∙ by Stijn Heldens, et al. ∙ Vrije Universiteit Amsterdam ∙ University of Twente

Studying the behavior of crowds is vital for understanding and predicting human interactions in public areas. Research has shown that, under certain conditions, large groups of people can form collective behavior patterns: local interactions between individuals result in global movement patterns. To detect these patterns in a crowd, we assume each person carries an on-body device that acts as a local proximity sensor, e.g., a smartphone or Bluetooth badge, and we represent the texture of the crowd as a proximity graph. Our goal is to extract information about crowds from these proximity graphs. In this work, we focus on one particular type of pattern: lane formation. We present a formal definition of a lane, propose a simple probabilistic model that simulates lanes moving through a stationary crowd, and present an automated lane-detection method. Our preliminary results show that our method is able to detect lanes of different shapes and sizes. We see our work as an initial step towards rich pattern recognition using proximity graphs.


1. Introduction

Figure 1. Long exposure shot at busy train station reveals lane formation. Photo by David Iliff, CC-BY-SA 3.0 license.

Different studies (see the survey by Castellano et al. (Castellano et al., 2009)) have shown that, while the behavior of individuals in public areas is often erratic and unpredictable, the behavior of large crowds as a whole is predictable and can be modeled. Crowd simulation models are plentiful; examples are models based on fluid dynamics (Helbing, 1998; Guo and Huang, 2008), cellular automata (Sarmady et al., 2011), or dynamical systems (Helbing and Molnar, 1995).

Helbing et al. observed that crowds have a tendency to form collective movement patterns (Helbing et al., 2001). These patterns are not globally planned or externally organized, but emerge naturally from the local interactions between individuals. Examples are circulation of flow at intersections, clogging at bottlenecks, and formation of lanes in crowded areas. Automated detection of these patterns is crucial for understanding, analyzing, and predicting the behavior of crowds in large open areas. One can think of a large number of applications (Zhan et al., 2008): for example, improving safety at sports matches, music concerts, or public demonstrations; providing guidelines for urban planners to improve the design of public spaces; or automating the detection of anomalies.

Previous attempts at automated detection of these patterns utilize surveillance cameras combined with image processing techniques (see references in Section 7). These techniques are, however, inherently limited to the perspective of one camera. In our work, instead of employing cameras, we assume each person is wearing a device that acts as a local proximity sensor: each sensor can detect other sensors nearby. These devices can be implemented using readily available hardware such as smartphones or electronic badges (Martella et al., 2014).

Each detection between two devices corresponds to an edge in a graph that changes over time, a so-called proximity graph (Martella et al., 2014). A proximity graph characterizes the texture of a crowd and describes how individuals navigate through a space. While a single camera can only cover a small area, proximity graphs provide a holistic view of large areas.

Extracting global movement patterns from proximity graphs is challenging since each device provides only local information. The fundamental problem that we tackle is how to uncover global patterns based on local detections.

In this work, we focus on one particular type of pattern: the formation of lanes. Lanes often appear in crowds when groups of people traverse a densely crowded space, for example, in a narrow shopping street or at a busy train station (Figure 1). We propose an automated lane-detection method based on proximity graphs. Our method combines techniques from graph embedding with a density-based clustering algorithm to identify lanes. To evaluate our method, we present a model that simulates lanes moving through a stationary crowd. Preliminary results show that our method is able to detect lanes of different sizes and shapes. Overall, our work can be seen as the first step towards rich motion pattern recognition using proximity graphs.

The remaining sections are structured as follows: Section 2 presents background information, Section 3 describes our lane-detection method, Section 4 proposes the simulation model, Sections 5 and 6 present results, Section 7 describes related work, and Section 8 is dedicated to conclusions and future work.

2. Lanes and Crowds

Although many definitions exist, a crowd can generally be described as “a large group of individuals gathered together in the same physical area for some duration of time”. Crowds often appear at busy public locations, such as train stations, airport terminals, football stadiums, theaters, city squares, or shopping malls. The behavior of a crowd is the result of the interactions between individuals. According to Helbing and Molnar (Helbing and Molnar, 1995), these interactions are local: each individual influences only the people nearby. Describing a crowd using only local information is thus a natural representation.

Martella et al. (Martella et al., 2014) proposed the idea of representing the texture of crowds using proximity graphs. Formally, a proximity graph is a form of spatio-temporal graph where nodes represent individuals. Time is discretized into fixed-size timesteps, and two nodes are connected by an edge at a timestep if the two individuals have been within physical proximity of each other during that time, i.e., their distance has been less than some predetermined threshold. Note that proximity graphs do not store any absolute localization data; they only describe the local “view” of each individual. Furthermore, we assume edges do not store any information on the physical distance between nodes. Previous work (Martella et al., 2014) focused on methods for extracting proximity graphs from real-world noisy data obtained using proximity sensors.
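As an illustration, one timestep of such a proximity graph could be derived from ground-truth positions as follows. This is a minimal sketch: the function name and dictionary-based representation are ours, and a real deployment would instead build the edges from sensor detections rather than known coordinates.

```python
def proximity_snapshot(positions, radius):
    """One timestep of a proximity graph.

    positions: dict mapping node id -> (x, y) at one timestep.
    radius: the predetermined proximity threshold.
    Returns the set of undirected edges (u, v) with u < v for all
    pairs of individuals within `radius` of each other.
    """
    ids = sorted(positions)
    edges = set()
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            dx = positions[u][0] - positions[v][0]
            dy = positions[u][1] - positions[v][1]
            if dx * dx + dy * dy <= radius * radius:
                edges.add((u, v))
    return edges

# A proximity graph is then the sequence of such snapshots over all timesteps.
```

Note that the snapshot stores only who is near whom, matching the paper's assumption that no absolute localization data is kept.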

Figure 2. Example of lane in a crowd from top-down view.

Figure 2 shows an artificial example of the proximity graph of a crowd. Points represent individuals, and arrows indicate their direction and speed of movement. This particular example shows non-random behavior: a lane has emerged, with nodes flowing from the bottom-left corner to the right-hand edge. According to Helbing et al. (Helbing et al., 2001), the formation of lanes in crowds is a natural occurrence. Individuals moving towards a target navigate the environment according to their own personal preferences. However, while moving through a dense crowd, they often need to step aside to prevent collisions with others. To minimize these interactions, it is beneficial for walkers to follow behind someone moving in the same direction. The result of this local behavior is stable “highway”-like lanes through the crowd.

One quick glance is sufficient to recognize that the highlighted nodes in Figure 2 have formed a lane. However, this observation is informal and relies on the intuition of the observer. Based on this intuition, we can define three criteria for a lane.

  • Members of a lane move in a similar direction and at a similar speed. However, since each individual only influences its local neighborhood, each lane member needs a movement vector similar only to those of the surrounding members. The lane as a whole can curve and its speed need not be uniform, but these changes are gradual and do not happen abruptly. A lane moves similarly to how a river flows through a landscape.

  • A lane must be connected. In other words, if one were to create a link between each pair of lane members that are in proximity of each other, the result must be one connected component. A lane never consists of multiple disjoint segments.

  • A lane is defined by its border, not by its contents. We identify lanes due to the abrupt transition between the movement inside and outside the lane. However, this border might not always be well-defined and can be ambiguous. This happens, for example, when someone leaves or joins the lane, thus blurring the line between the lane and the crowd.

3. Algorithm for Lane Detection

Our goal is to design an algorithm that extracts lanes from proximity graphs. First, we discuss the challenges of designing such an algorithm. Second, we present our lane-detection solution.

3.1. Challenges

The input of our lane-detection algorithm is a proximity graph with nodes and edges, where an edge (u, v, t) indicates that nodes u and v were close to each other at timestep t. The output should be, for each timestep t, the lanes detected at that moment in time. An important decision is how to deal with nodes that are not part of a lane, such as isolated nodes or stationary crowds. We have chosen to assign each of these groups to its own cluster. This turns lane detection into an unsupervised classification problem where the goal is to partition the nodes into “coherent” clusters for every moment in time, each cluster consisting of people showing similar behavior.

3.1.1. Analysis of Proximity Graph

Our initial attempts at lane detection were built on the following premise: choose a time window w, aggregate the data of every w consecutive timesteps into a single dataset, and partition the resulting graph. However, this showed poor results, since graph-partitioning algorithms rely heavily on the presence of high density within each cluster. Proximity graphs are spatial in nature, resulting in low intra-cluster density. In our experience, off-the-shelf graph-partitioning and community-detection algorithms tend to split long lanes into several separate clusters.

The fundamental problem is that the definition of a lane is deeply rooted in the notions of “distance” and “velocity”, which are difficult to formalize for proximity graphs. To define these terms more explicitly, we embed the nodes into a two-dimensional space. An important observation is that the locations determined by such an embedding do not need to be highly accurate. For lane detection, only the local neighborhood of each node is relevant, so it is sufficient if the position of each node is accurate relative to the nodes surrounding it.

We found that techniques from graph drawing are suitable for computing the embedding. Since proximity graphs change over time, the embedding is recomputed for each timestep, adapting the previous embedding to the new topology. This adaptation produces movement of the nodes over time, and the nodes can then be clustered based on their position and velocity.

3.1.2. Selection of Clustering Algorithm

Choosing the right clustering method is non-trivial, since lane detection presents a trade-off between two problems: transitivity and ambiguity.

On the one hand, lanes can have arbitrary shapes and are often elongated. This means many nodes within a lane are only indirectly connected to each other. If there is a strong relation between nodes u and v and between nodes v and w, then u, v, and w all belong to the same cluster, even if the relation between u and w is weak. Transitivity must be taken into account.

On the other hand, real-world crowds often act chaotically, and clustering based on individual links between nodes is sensitive to noise. For example, consider the scenario where a person leaves the lane and joins the stationary crowd. During this transition, this person will have a strong relation with both the lane and the crowd. The clustering method should correctly interpret these ambiguous links: a single “bridge” between two clusters should not be sufficient evidence that the clusters should be merged.

Centroid-based clustering methods, such as k-means (Hartigan and Wong, 1979), Mean shift (Comaniciu and Meer, 2002), or EM (Moon, 1996), are not suitable for lane detection due to transitivity: lanes lack a “center”, making detection of elongated lanes impossible. Hierarchical methods, such as SLINK (Sibson, 1973), are unfit due to ambiguity, since a single “noisy” link can cause a lane to be undetectable.

The class of clustering methods that respects both aspects of lane detection is density-based clustering. These algorithms are built on the idea that clusters correspond to dense groups of points separated by sparse regions. They detect clusters of arbitrary shape, thus incorporating transitivity, and they deal well with noise, since a few outliers do not yield sufficiently high density. We obtained high-quality results using DBSCAN (Ester et al., 1996).

3.2. Algorithm Description

Our lane-detection method consists of two stages: graph embedding and density-based clustering.

Graph embedding. For the first stage, we embed the nodes into two-dimensional space using the traditional force-directed algorithm by Fruchterman and Reingold (Fruchterman and Reingold, 1991). Force-directed graph embedding is a well-studied topic and many algorithms exist (Kobourov, 2012), but all follow a similar approach. Forces are assigned among pairs of vertices: attractive forces between pairs connected by an edge and repulsive forces between the remaining pairs. The behavior of the system is simulated until an equilibrium state is reached.

In our method, nodes are initially placed randomly and forces are simulated until equilibrium is reached. For subsequent timesteps, we use the resulting positions from the previous run as initial positions for the next run. This allows for incremental updates of the nodes’ positions and results in movement of the nodes over time. Computational cost is low, since only a few iterations are needed per timestep to reach equilibrium.
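A simplified, self-contained sketch of this embedding stage is given below. It is our own minimal variant of the Fruchterman-Reingold scheme with assumed force constants and step sizes; the paper's actual implementation may differ. Passing the previous timestep's layout as the initial positions gives the incremental warm start described above.

```python
import numpy as np

def fruchterman_reingold(edges, pos, iterations=50, k=1.0, step=0.1):
    """Simplified force-directed layout in the style of Fruchterman-Reingold.

    edges: list of (u, v) node-index pairs.
    pos: (n, 2) array of initial positions; pass the previous timestep's
         layout here to update the embedding incrementally.
    """
    pos = np.asarray(pos, dtype=float).copy()
    n = len(pos)
    for _ in range(iterations):
        disp = np.zeros_like(pos)
        # Repulsive force k^2 / d between every pair of nodes.
        for i in range(n):
            delta = pos[i] - pos
            dist = np.maximum(np.hypot(delta[:, 0], delta[:, 1]), 1e-9)
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        # Attractive force d^2 / k along every edge.
        for u, v in edges:
            delta = pos[u] - pos[v]
            dist = max(np.hypot(delta[0], delta[1]), 1e-9)
            force = delta / dist * (dist * dist / k)
            disp[u] -= force
            disp[v] += force
        # Move each node along its net force, at most `step` per iteration.
        length = np.maximum(np.hypot(disp[:, 0], disp[:, 1]), 1e-9)
        pos += disp / length[:, None] * np.minimum(length, step)[:, None]
    return pos
```

Because only the relative positions of neighboring nodes matter for lane detection, this kind of approximate layout suffices even though it is only accurate up to scale and rotation.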

Density-based clustering. Next, we cluster the nodes using DBSCAN (Ester et al., 1996), since it has proven to provide high-quality results (Ester et al., 1998) and scales to large datasets (Zhou et al., 2000). DBSCAN takes two parameters: a radius ε and the minimum number of points minPts that should lie within this radius. More specifically, let d_t(u, v) measure the “similarity” between nodes u and v at time t. Clearly, d_t can be defined in many different ways; we discuss several options in Section 5. The ε-neighborhood of a node u at time t is the set of all nodes v such that d_t(u, v) ≤ ε.

A node is referred to as a core node if the size of its ε-neighborhood is at least minPts. Intuitively, core nodes are data points “near the core” of a cluster, since they have many neighbors in their proximity. Non-core nodes are found at the “border” of a cluster.

DBSCAN starts at an arbitrary node u. If u is not a core node, it is labeled as noise and the procedure repeats at the next unlabeled node. If u is a core node, a new cluster C is created containing u. The cluster is then iteratively expanded by repeatedly adding every unlabeled node that is within distance ε of any core node already in C. Once the cluster is complete, the entire procedure is repeated for the next unlabeled node.

There are different ways to handle noise points afterwards; we have chosen to assign each noise node to its own singleton cluster. Note that DBSCAN is not deterministic: non-core nodes can be assigned to different clusters depending on the order in which nodes are processed. We randomize the processing order for each run.
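The clustering stage can be sketched as follows: a minimal DBSCAN over a precomputed similarity matrix, including the singleton-cluster noise handling described above. The parameter names eps and min_pts mirror ε and minPts; this is our own illustrative implementation, not the authors' code.

```python
import numpy as np

def dbscan_lanes(dist, eps, min_pts):
    """DBSCAN over a precomputed distance matrix, assigning every
    remaining noise point to its own singleton cluster.

    dist: (n, n) symmetric matrix of similarity scores d_t(u, v).
    Returns an array of integer cluster labels.
    """
    n = len(dist)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_pts for nb in neighbors]
    labels = np.full(n, -1)
    cluster = 0
    for start in range(n):
        if labels[start] != -1 or not core[start]:
            continue
        # Grow a new cluster from this unlabeled core node.
        labels[start] = cluster
        frontier = [start]
        while frontier:
            u = frontier.pop()
            for v in neighbors[u]:
                if labels[v] == -1:
                    labels[v] = cluster
                    if core[v]:
                        frontier.append(v)  # only core nodes expand further
        cluster += 1
    # Assign each remaining noise node to its own singleton cluster.
    for i in range(n):
        if labels[i] == -1:
            labels[i] = cluster
            cluster += 1
    return labels
```

Iterating over the nodes in a randomized order, as the paper does, changes only which cluster each border node ends up in.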

4. Simulation Model

To evaluate the quality of our lane-detection algorithm, we require a simulation model that accurately models lanes in a crowd. Many models for crowd simulation have been proposed, most notably the social force model (Helbing and Molnar, 1995) and its many variations (see the survey by Castellano et al. (Castellano et al., 2009)). However, while these models simulate realistic crowd dynamics, the behavior that emerges is not controlled. For example, the social force model (Helbing and Molnar, 1995) shows lane formation, but these lanes form organically and are not planned. To evaluate the accuracy of our lane-detection method, our simulations need lanes to form according to some given ground truth. To the best of our knowledge, no such model currently exists.

Figure 3. Example of lane going through the crowd.

We propose a simple probabilistic model that exhibits controlled lane formation. Our model is based on random walks on the two-dimensional grid, i.e., each walker has integer coordinates. Initially, walkers are randomly placed in certain areas. Time passes in discrete steps. During each timestep, each walker can take one step in one of the four cardinal directions (north, east, south, west) according to predefined behavior. There are two types of behavior: random walkers and lane walkers.

Random walkers model a nearly stationary crowd. We define a rectangular region in which random walkers are initially placed at random locations (see Figure 3). This region can be seen as a top-down view of a public area (e.g., city square, train station, airport terminal). During each timestep, each random walker behaves according to the following rules, where p is the probability of taking a random step:

  • If outside the region, take one step back.

  • Otherwise:

    • With probability 1 − p: stay at the current location.

    • With probability p: take one random step.
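The random-walker rule can be sketched as follows. This is a hypothetical implementation: p is our name for the step probability (elided in this copy), and the region is represented by its corner coordinates.

```python
import random

STEPS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

def random_walker_step(pos, region, p, rng):
    """One timestep of a random walker on the integer grid.

    pos: current (x, y) position; region: ((xmin, ymin), (xmax, ymax));
    p: probability of taking a random step; rng: a random.Random instance.
    """
    (xmin, ymin), (xmax, ymax) = region
    x, y = pos
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        # Outside the region: take one step back toward it.
        return (x + (x < xmin) - (x > xmax), y + (y < ymin) - (y > ymax))
    if rng.random() < p:
        dx, dy = rng.choice(STEPS)  # take one random cardinal step
        return (x + dx, y + dy)
    return pos  # stay put with probability 1 - p
```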

Lane walkers model the lane going through the crowd. For each lane, we define a path consisting of a series of line segments (see Figure 3 for an example). Lane walkers are initially placed at the start of the path and follow the path until they reach the end. During each timestep, every lane walker adheres to the following rules, where q is the probability of following the lane:

  • With probability q, follow the lane. Find the point c on the path closest to the position of the walker.

    • If the distance to c exceeds d, take one step in the direction of c.

    • Otherwise, take one step in the direction tangent to the line segment containing c, i.e., follow the direction of the lane.

  • With probability 1 − q:

    • With probability 1 − p: stay at the current location.

    • With probability p: take one random step.

Figure 4 illustrates a walker following the lane. The parameter d controls the maximum width of the lane. If the distance to c exceeds d, the lane walker has wandered too far off from the lane and must move closer, thus limiting the maximum width to 2d. Otherwise, the lane walker must follow the direction of the lane. To keep walkers aligned on the grid, a tangent direction is followed by taking one horizontal step with probability proportional to its horizontal component and one vertical step with probability proportional to its vertical component, so that the average movement vector equals the tangent direction.

If a walker a wants to move to a new location that is already occupied by another walker b, then a is allowed to “push” b by forcing it to move to one of the three remaining locations adjacent to b. This pushing mechanism models people stepping aside for others and is necessary to prevent bottlenecks where lane walkers are blocked by stationary random walkers. If all three adjacent locations are already occupied, the move by a fails and a remains at its current location. In other words, “pushing” is not transmissible: walkers that get pushed cannot themselves push other walkers.
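The pushing mechanism can be sketched as follows. This is our own minimal implementation: the grid-occupancy dictionary and function name are assumptions, not the authors' code.

```python
import random

STEPS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # the four cardinal directions

def move_with_push(occupied, src, dst, rng):
    """Move the walker at cell `src` to the adjacent cell `dst`, pushing any
    occupant of `dst` to a free cell adjacent to `dst`. Pushed walkers cannot
    push others, so pushing is not transmissible. Returns True on success.

    occupied: dict mapping grid cell -> walker id; rng: random.Random.
    """
    if dst not in occupied:
        occupied[dst] = occupied.pop(src)
        return True
    # dst is occupied: try to push its occupant to one of the three
    # remaining cells adjacent to dst.
    options = [(dst[0] + dx, dst[1] + dy) for dx, dy in STEPS]
    options = [c for c in options if c != src and c not in occupied]
    if not options:
        return False  # all adjacent cells taken; the move fails
    occupied[rng.choice(options)] = occupied.pop(dst)
    occupied[dst] = occupied.pop(src)
    return True
```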

Figure 4. Two scenarios of a lane walker following a path.

The parameters p and q control the difficulty of detecting the lane. For p = 0, the random walkers are completely stationary and only the lane walkers move, while for p = 1 the random walkers act chaotically. For q = 1, the lane walkers move at maximum speed, while for q = 0 the lane walkers show the same behavior as random walkers. Changing the value of p or q thus has an impact on the difficulty of lane detection.

5. Experimental Setup

We evaluated the lane-detection algorithm described in Section 3 on data generated with the model from Section 4. Below we describe the three similarity functions and the three scenarios we consider.

5.1. Similarity Scores

As discussed in Section 3, we are required to define a function d_t(u, v) that measures the similarity between two nodes u and v at time t. Low scores indicate a strong relation (i.e., nodes belonging to the same lane), while high scores indicate a weak relation. We explore three possible options for this function. The parameter w is the size of the window; it determines how far we look “back in time”.

Score function A: Calculate the maximum physical distance between two nodes over the last w timesteps. The intuition is that two nodes belong to the same lane if they are physically close to each other for a long period of time. Let pos_t(u) be the position of node u at time t. The score function is then defined as:

d_t(u, v) = max over 0 ≤ i < w of ‖pos_{t−i}(u) − pos_{t−i}(v)‖

Score function B: One issue with option A is that one might need a very large window size to detect the lane, since two nodes can be physically close for a long period of time while not belonging to the same lane (for example, with a horseshoe-shaped lane). Alternatively, define the velocity vector of node u as the average displacement per timestep over the last w timesteps:

vel_t(u) = (pos_t(u) − pos_{t−w}(u)) / w

Given the current position and velocity of a node, we can predict its expected position T timesteps into the future as pos_t(u) + T · vel_t(u); score function B is the distance between the expected positions of the two nodes.

Score function C: Instead of comparing the expected future positions of two nodes, we can also compare only the expected displacement. The intuition is that if two nodes are close to each other and show similar displacement, they most likely belong to the same lane. A simple way to formalize this is as follows:

d_t(u, v) = ‖pos_t(u) − pos_t(v)‖ + T · ‖vel_t(u) − vel_t(v)‖   (1)
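The three score functions could be implemented as follows. This is a sketch under our reading of the definitions: trajectories are arrays with one (x, y) row per timestep, and t_ahead plays the role of the look-ahead parameter T.

```python
import numpy as np

def score_a(traj_u, traj_v, w):
    """Score A: maximum distance between u and v over the last w timesteps.
    traj_u, traj_v: arrays of shape (T, 2), one position per timestep."""
    return np.linalg.norm(traj_u[-w:] - traj_v[-w:], axis=1).max()

def velocity(traj, w):
    """Average displacement per timestep over the last w timesteps."""
    return (traj[-1] - traj[-1 - w]) / w

def score_b(traj_u, traj_v, w, t_ahead):
    """Score B: distance between positions extrapolated t_ahead steps ahead."""
    future_u = traj_u[-1] + t_ahead * velocity(traj_u, w)
    future_v = traj_v[-1] + t_ahead * velocity(traj_v, w)
    return np.linalg.norm(future_u - future_v)

def score_c(traj_u, traj_v, w, t_ahead):
    """Score C: current distance plus weighted difference in displacement."""
    return (np.linalg.norm(traj_u[-1] - traj_v[-1])
            + t_ahead * np.linalg.norm(velocity(traj_u, w) - velocity(traj_v, w)))
```

Any of these can serve as the precomputed distance matrix fed to DBSCAN.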
Figure 5. Three different lane scenarios used: (1) one straight lane, (2) curved lane, (3) two parallel straight lanes.

5.2. Scenarios

For evaluation, we consider a scenario where random walkers are placed in a square region (see Figure 5). Lane walkers are placed in regions north of this square and walk south. The density of both regions is equal, to ensure that the number of random walkers and lane walkers is equal. Unless noted otherwise, the region dimensions and walker density are kept fixed; we experiment further with these parameters in Section 6.

Our algorithm is performed during each timestep, starting at time 0 and ending either once the last walker exits the region or after a fixed number of timesteps. For every timestep of the simulation, our algorithm yields a partition of the population into cohesive clusters. The ground-truth clustering of the model consists of the set of random walkers and the set of lane walkers; for the scenario with two lanes, there are three ground-truth clusters. We use the normalized mutual information (NMI) score (Vinh et al., 2010) to measure the correlation between the two partitions. The range is between 0 (no correlation) and 1 (perfect clustering). The reported NMI is the average over the entire simulation.
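The NMI comparison between detected clusters and the ground truth can be sketched as follows, using the arithmetic-mean normalization (one of several variants discussed by Vinh et al.):

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the same
    items, with arithmetic-mean normalization: NMI = I(A;B) / ((H(A)+H(B))/2).
    """
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # Mutual information from the joint and marginal label distributions.
    mi = sum(c / n * log(n * c / (ca[a] * cb[b]))
             for (a, b), c in joint.items())
    ha = -sum(c / n * log(c / n) for c in ca.values())  # entropy of A
    hb = -sum(c / n * log(c / n) for c in cb.values())  # entropy of B
    if ha == 0 and hb == 0:
        return 1.0  # both partitions are trivial and identical
    return mi / ((ha + hb) / 2)
```

Averaging this score over all timesteps of a run gives the reported per-scenario number.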

6. Empirical Evaluation

In this section we present the preliminary results of our method. In Section 6.1, we evaluate the three proposed similarity functions and tune the parameters of the algorithm for a simple scenario. In Section 6.2, we consider a variety of scenarios with lanes of different widths and shapes. In Section 6.3, we evaluate the resilience of our method by varying the parameters of the simulation model.

In Sections 6.1, 6.2, and 6.3, the graph-embedding phase of the algorithm is omitted, i.e., the coordinates from the simulation are used directly for clustering. This allows us to evaluate DBSCAN in isolation. Finally, in Section 6.4, we revisit the problem of graph embedding.

6.1. Method Tuning

Figure 6. One straight lane, different window sizes.
Figure 7. One straight lane, different values of T.
Figure 8. One straight lane, different values of minPts.

In this section, we focus on tuning the parameters of the algorithm. Four parameters are of interest: the DBSCAN parameters ε and minPts, the window size w, and the look-ahead T. Unless noted otherwise, minPts, w, and T are kept at fixed default values. We only consider the simple scenario of one straight lane.

The crucial parameter is ε. For all three similarity functions, this parameter can be interpreted as the maximum physical distance that is allowed between two nodes over some period of time. If ε is too small, then DBSCAN is too rigid and many tiny clusters appear. If ε is too large, then DBSCAN is too tolerant and all nodes collapse into a single cluster.

First, we consider how the window size w affects the results. Figure 6 shows the results for different window sizes. We see that the lower bound of ε is approximately 10, regardless of the chosen similarity function. This can be explained by the width of the lane: if ε is less than the lane width, the random walkers on opposite sides of the lane are no longer connected, since they are too far apart, causing the random walkers to be split into two groups.

The upper bound of ε depends heavily on the chosen similarity function. For function A, the upper bound scales linearly with w. This is expected, since a larger w implies that we look further back in time and thus the maximum distance between lane and non-lane walkers increases. For functions B and C, the upper bound also scales with w but converges to a constant. This happens because the window size is used to calculate the velocity of nodes: for large values of w, the velocity converges to zero for random walkers and to the lane speed for lane walkers.

Next, we evaluate how T affects the results for functions B and C; see Figure 7. In both cases, the lower and upper bound of ε scale linearly with the value of T. This can be explained since, for large values of T, the similarity score is dominated by the term T · ‖vel_t(u) − vel_t(v)‖. For larger T, the similarity score between neighboring lane walkers increases, which results in a wider valid range of ε.

Now we turn our attention to how minPts affects quality. Figure 8 shows results for different values of minPts. The figure shows that the algorithm is not sensitive to this parameter: the upper and lower bounds of ε are minimally affected by its value. This parameter determines the robustness against outliers, but one straight lane contains little ambiguity.

From Figures 6, 7, and 8, we conclude that function C performs the best. The valid range of ε for this function scales linearly when increasing T, but is barely affected when varying w or minPts. The window size w has a small minimum workable value, but larger values make the algorithm less sensitive to the exact choice of ε. For the remainder of this work, we focus solely on function C.

(a) Lane width of 20
(b) Lane width of 30
Figure 9. Lanes that are too wide cause a split in the stationary crowd. Each point corresponds to a walker. Different colors indicate different clusters.

6.2. Different Types of Lanes

In this section, we experiment with various types of lanes to test how well our algorithm performs in different scenarios. Figure 5 shows the three cases which we discuss.

First, we consider the scenario where the width of the lane varies (i.e., the lane covers 5-50% of the region of interest). Figure 10 shows the results. For the default values of w and T, the lane can only be detected if its width is below approximately 20 units. For wider lanes, the walkers in the stationary crowd on opposite sides of the lane are no longer considered to belong to the same cluster, since they are too far apart. Figure 9 demonstrates this problem.

By increasing the value of T or w, wider lanes can be detected. Figure 10 shows that, in both cases, lanes having a width of up to 30 units can be discovered. In both cases, increasing the width of the lane also increases the lower bound of ε for which the lane is detected.

Figure 10. Baseline scenario for different lane widths.
Figure 11. Two parallel lanes for variable interlane distance.
Figure 12. Sinusoidal lane for different amplitudes.

Next, we consider the scenario of two parallel lanes, moving in the same direction, both of width 10, at a fixed interlane distance. Figure 11 shows the results for different interlane distances. We see that, for the default parameters, quality deteriorates once the lanes come too close together: they can then no longer be separated into two distinct clusters.

The figure shows that doubling the value of T increases the minimum separation distance. By increasing T, more weight is put on velocity when measuring similarity, making it more difficult to distinguish the two lanes, which move at the same velocity. Doubling the value of w does not increase the minimum separation distance, but it does increase the upper bound of ε.

Finally, to test how well our method deals with curved lanes, we consider a scenario with a single sinusoidal lane. Figure 12 shows the results for sinusoidal lanes of increasing amplitude. Note that the largest amplitude we consider is an extreme case relative to the height of the region.

(a) Amplitude of 10.
(b) Amplitude of 20.
Figure 13. Visualization of the sinusoidal lane. Each point corresponds to a walker. Different colors indicate different clusters.

The results show that lanes with amplitudes up to a certain threshold can be detected. For larger amplitudes, the algorithm tends to split the lane into multiple straight segments; Figure 13 shows an example of this phenomenon. Increasing the value of T does not change the accuracy of our method. Increasing the window size w, however, significantly increases the accuracy and allows detection of lanes having an amplitude of up to 30 units. The explanation is that a larger window size smooths out the sharp turning angles of the wave.

6.3. Resilience

Figure 14. One straight lane for varying p or q.

To test the resilience of our method, we vary p and q; see Figure 14.

The value of p determines the probability that a random walker takes a random step during a timestep. If the value of p is high, detecting the lane becomes more difficult, since the random walkers behave erratically. Figure 14 confirms this intuition: the lane can only be detected when p stays below a certain threshold.

The value of q determines the probability that a lane walker follows the lane during a timestep. If the value of q is too low, detecting the lane becomes difficult because the velocity of the lane is too low. The results show that the lane can only be detected if q is sufficiently large. Increasing the value of q does not change the lower bound of ε, but its upper bound scales linearly.

For both scenarios, the detectability of the lane can be improved by using a larger window size w. A larger window implies that velocity is determined over a longer period of time and thus contains less noise.

6.4. Graph Embedding

(a) At timestep 0.
(b) At timestep 300.
Figure 15. Results of proximity graph embedding.
Figure 16. Accuracy with/without embedding at timestep 200. Note the different scales on the horizontal axes.

Up until this point, the graph-embedding phase has been omitted, i.e., the absolute coordinates of the nodes were passed directly to the clustering phase. To evaluate the complete algorithm, we now first generate a proximity graph and embed the nodes into two-dimensional space using force-directed embedding (see Section 3) before clustering the nodes.

In practice, we find that the absolute coordinates and the coordinates found by the embedding are approximately equivalent, up to scale and rotation. For example, Figure 15 shows an embedding of one straight lane at different moments. The proximity graph was created using a detection radius of 25 units. Force-directed embedding works well for our method since the data is spatial by nature.

Since the graph embedding shows excellent results, its effect on the accuracy of DBSCAN is minor. Figure 16 compares the obtained NMI with and without embedding for one straight lane at timestep 200. Both curves are nearly identical. Note that the difference in the range of ε is the result of the graph embedding not preserving scale.

7. Related Work

Utilizing proximity graphs to analyze the behavior of people has proven to be a promising area of research. Martella et al. showed how proximity graphs can be used to mine the behavior of museum visitors (Martella et al., 2016), track people in a six-story building using only a handful of anchor points (Martella et al., 2017), and capture the social interactions at an IT conference (Martella et al., 2014). However, further research on proximity graphs has been scarce.

In computer vision, the analysis of crowd behavior is an active field of research. Most work focuses on automated analysis of surveillance camera footage. We discuss some of the recent major contributions in this section and refer to the survey by Li et al. (Li et al., 2015) for a comprehensive overview of research on crowd analysis from the area of computer vision.

One particular topic from computer vision related to lane detection is crowd behavior analysis (Li et al., 2015). These algorithms classify the behavior of people in crowds.

For example, Rodriguez et al. (Rodriguez et al., 2011) proposed a data-driven crowd analysis approach. The algorithm works by first learning common crowd motion patterns from a large database of crowd videos. To analyze a new video, the frames are split into blocks, which are matched to learned patches from the database. By labeling the learned patches, one can classify the behavior in different regions of the video. The authors argue that, while the number of possible videos is infinite, the space of recognizable crowd patterns might not be all that large.

Benabbas et al. (Benabbas et al., 2010) presented a method that can detect six crowd-related events in videos, including walking, running, splitting, dispersion, and evacuation. The method works by tracking objects of interest using optical flow techniques. Next, the camera view is divided into fixed-size blocks. For each block, the most dominant movement vectors are determined; the number of vectors per block is a user-defined parameter. Blocks are clustered using a region-based segmentation algorithm. Finally, each cluster is classified into one of the six events based on the average movement vector within the cluster.
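As an illustration of the per-block step, the sketch below bins a dense motion field into fixed-size blocks and extracts the dominant vectors per block from a direction histogram. The function and parameter names are ours, and the original method's exact dominance criterion may differ.

```python
import numpy as np

def dominant_vectors(flow, block, n_vec=1):
    """Pick the n_vec most dominant motion vectors per fixed-size block.

    flow: (H, W, 2) per-pixel motion field (e.g. from optical flow);
    block: block side length; n_vec plays the role of the user-defined
    per-block vector count. Dominance is approximated here by binning
    directions into 8 sectors and returning the mean vector of the
    most populated bins.
    """
    H, W, _ = flow.shape
    out = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            v = flow[by:by + block, bx:bx + block].reshape(-1, 2)
            ang = np.arctan2(v[:, 1], v[:, 0])
            bins = np.floor((ang + np.pi) / (2 * np.pi / 8)).astype(int) % 8
            counts = np.bincount(bins, minlength=8)
            top = np.argsort(counts)[::-1][:n_vec]
            out[(by, bx)] = [v[bins == b].mean(axis=0)
                             for b in top if counts[b] > 0]
    return out

# Uniform rightward motion: every block's dominant vector is (1, 0).
flow = np.tile([1.0, 0.0], (8, 8, 1))
blocks = dominant_vectors(flow, 4)
```

Clustering these per-block vectors, as the original method does, then groups blocks with coherent motion into event regions.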

Solmaz et al. (Solmaz et al., 2012) showed how five types of behavior can be extracted from video: bottlenecks, fountainheads, lanes, arches, and blocking. Their method moves particles according to the optical flow of the video. Each region is then classified using the Jacobian matrix of the linear approximation of the trajectories within the region. The eigenvalues of this matrix determine which of the five behavior types the region belongs to.
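The eigenvalue test can be made concrete with a toy sketch. The labels below are our own shorthand; the paper by Solmaz et al. defines the exact mapping from eigenvalues to the five behaviors.

```python
import numpy as np

def classify_critical_point(J):
    """Coarse classification of a 2x2 Jacobian of a linearized flow.

    Illustrative labels only: complex eigenvalues indicate circulation,
    all-negative real parts indicate convergence, all-positive indicate
    divergence, and mixed signs indicate shear-like (lane-like) motion.
    """
    eig = np.linalg.eigvals(np.asarray(J, dtype=float))
    if np.iscomplexobj(eig) and np.any(np.abs(eig.imag) > 1e-9):
        return "circulating (arch/ring-like)"
    if np.all(eig.real < 0):
        return "converging (bottleneck-like)"
    if np.all(eig.real > 0):
        return "diverging (fountainhead-like)"
    return "shear/lane-like (mixed signs)"

# Particles converging from all sides toward a point:
label = classify_critical_point([[-1.0, 0.0], [0.0, -2.0]])
```

For instance, a pure rotation Jacobian `[[0, -1], [1, 0]]` has imaginary eigenvalues and is classified as circulating, while opposite-sign eigenvalues correspond to the shear pattern characteristic of lanes.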

Another topic related to lane detection which has received much attention in computer vision is crowd motion segmentation (Li et al., 2015). These algorithms segment the video into motion patterns, i.e., spatial regions with a high degree of similarity in terms of speed and direction.

For instance, Ali et al. (Ali and Shah, 2007) used techniques from computational fluid dynamics for motion segmentation. A flow field is generated from the frames of a moving crowd. From the flow field, a finite-time Lyapunov exponent field is constructed, which reveals the Lagrangian Coherent Structures (LCS) in the underlying flow. The LCS highlight the boundaries of flow segments and are used for segmentation.
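A toy version of the FTLE computation: advect seed points through a velocity field, finite-difference the resulting flow map, and take the logarithmic growth rate of the largest stretch. This is a textbook sketch under simplifying assumptions (forward-Euler integration, analytic velocity function), not the pipeline of Ali et al.

```python
import numpy as np

def ftle(velocity, x0, y0, T=1.0, steps=50, d=1e-3):
    """Finite-time Lyapunov exponent at a grid of seed points (sketch).

    velocity(p) -> 2-vector; ridges of the resulting FTLE field mark the
    Lagrangian Coherent Structures used as segment boundaries.
    """
    def advect(p):
        p = np.array(p, dtype=float)
        dt = T / steps
        for _ in range(steps):
            p = p + dt * np.asarray(velocity(p))   # forward Euler
        return p

    out = np.zeros((len(y0), len(x0)))
    for i, y in enumerate(y0):
        for j, x in enumerate(x0):
            # Flow-map gradient from finite differences of nearby seeds.
            px = (advect((x + d, y)) - advect((x - d, y))) / (2 * d)
            py = (advect((x, y + d)) - advect((x, y - d))) / (2 * d)
            F = np.column_stack([px, py])
            s = np.linalg.eigvalsh(F.T @ F).max()  # max squared stretch
            out[i, j] = np.log(np.sqrt(s)) / abs(T)
    return out

# Uniform flow stretches nothing, so its FTLE field is zero everywhere.
field = ftle(lambda p: [1.0, 0.0], x0=[0.0, 1.0], y0=[0.0, 1.0])
```

A shear flow such as `velocity = (y, 0)`, by contrast, produces a uniformly positive FTLE; in real crowd footage, the high-FTLE ridges separate regions of coherent motion.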

Kang & Wang (Kang and Wang, 2014) demonstrated how neural networks can be used for crowd segmentation. First, they show how fully convolutional neural networks can segment individuals in single static frames from videos of crowds. Next, they extend this method by integrating motion cues, which capture movement and help separate stationary and moving crowds, and structure cues, such as walls and floors. The results show tight segmentation contours around individuals.

Gong et al. (Gong et al., 2012) presented a method based on manifold learning and tracklets. A tracklet is a short fragment of an object's trajectory, obtained by tracking the object for a short amount of time. The tracklet points are mapped to points in a three-dimensional space, where two dimensions correspond to the image space and the third represents the motion direction in degrees. In this 3D space, the points form manifold structures, each corresponding to a motion pattern. The authors propose a robust manifold grouping algorithm based on Tensor Voting to extract the manifolds.

The use of proximity graphs shows two clear advantages over utilizing cameras. First, proximity graphs can provide a holistic view over large areas. They can be used to monitor the behavior of crowds within a single building, a small neighborhood, or even an entire city. Cameras are inherently limited to one perspective, and there seems to be little research on how to “join” the image analysis from multiple cameras. Second, many computer vision techniques take a coarse-grained approach and classify regions within the image, meaning any information about individuals is lost. Our approach classifies the nodes of the proximity graph, thus retaining this fine-grained information.

Overall, we believe our method is the first lane detection algorithm designed for proximity graphs.

8. Conclusions & Future Work

In this work, we presented a method to detect lanes in proximity graphs. Our method combines graph embedding with density-based clustering. For evaluation, we explored three different score functions to measure similarity between nodes. The best performance was obtained by measuring similarity as the maximum over two terms: the difference in position (distance) and the difference in velocity. The results show that our method can detect different types of lanes (thick lanes, parallel lanes, and curved lanes). Graph embedding performs excellently, although its computational cost is high. For DBSCAN, exact tuning of the parameters is important; most notably, DBSCAN is sensitive to the value of its neighborhood radius.
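The best-performing score function (the maximum over position distance and velocity difference) can be paired with DBSCAN as in the sketch below. The DBSCAN implementation is deliberately minimal, and the weight `w` and the parameter values are our own illustrations, not those tuned in the paper.

```python
import numpy as np

def lane_score(a, b, w=1.0):
    """Dissimilarity in the spirit of the paper's best score function:
    the maximum of position distance and (scaled) velocity difference.
    a, b = (x, y, vx, vy); the weight w is an illustrative knob."""
    d_pos = np.linalg.norm(a[:2] - b[:2])
    d_vel = np.linalg.norm(a[2:] - b[2:])
    return max(d_pos, w * d_vel)

def dbscan(points, eps, min_pts, metric):
    """Minimal DBSCAN; returns integer labels, -1 = noise."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.array([[metric(points[i], points[j]) for j in range(n)]
                     for i in range(n)])
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue
        # Grow a new cluster from this core point.
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

# Two parallel lanes moving in opposite directions separate cleanly,
# even though each lane is spatially contiguous within itself.
pts = np.array([[i, 0, 1, 0] for i in range(10)]
               + [[i, 10, -1, 0] for i in range(10)], dtype=float)
labels = dbscan(pts, eps=1.5, min_pts=2, metric=lane_score)
```

Taking the maximum of the two terms means two nodes are only grouped when they are close in *both* position and velocity, which is what separates adjacent opposing lanes.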

For future work, we are exploring methods to automatically determine the best parameters for DBSCAN. Furthermore, we are looking into more complex scenarios. For example, opposing lanes, lanes crossing at an intersection, and lanes moving through a narrow doorway. We are also extending our simulation model to support more situations, such as people joining/leaving the lane or a lane dissolving into the crowd. Finally, we are working on obtaining real-world measurements to evaluate our method on non-synthetic datasets.

Overall, we view our work as a first step towards rich pattern recognition in proximity graphs. One can think of many types of crowd behavior identification, such as detection of congestion, social cliques, evacuations, and anomalies. Our goal is to utilize proximity graphs as a tool to enable these types of analysis.

References

  • Ali and Shah (2007) Saad Ali and Mubarak Shah. 2007. A lagrangian particle dynamics approach for crowd flow segmentation and stability analysis. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on. IEEE, 1–6.
  • Benabbas et al. (2010) Yassine Benabbas, Nacim Ihaddadene, and Chaabane Djeraba. 2010. Motion pattern extraction and event detection for automatic visual surveillance. EURASIP Journal on Image and Video Processing (2010).
  • Castellano et al. (2009) Claudio Castellano, Santo Fortunato, and Vittorio Loreto. 2009. Statistical physics of social dynamics. Reviews of modern physics (2009).
  • Comaniciu and Meer (2002) Dorin Comaniciu and Peter Meer. 2002. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on pattern analysis and machine intelligence 24, 5 (2002), 603–619.
  • Ester et al. (1998) Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. 1998. Clustering for mining in large spatial databases. KI (1998).
  • Ester et al. (1996) Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, and others. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD.
  • Fruchterman and Reingold (1991) Thomas MJ Fruchterman and Edward M Reingold. 1991. Graph drawing by force-directed placement. Software: Practice and experience (1991).
  • Gong et al. (2012) Dian Gong, Xuemei Zhao, and Gerard Medioni. 2012. Robust Multiple Manifolds Structure Learning. In Proceedings of the 29th International Conference on Machine Learning (ICML-12). 321–328.
  • Guo and Huang (2008) RY Guo and Hai-Jun Huang. 2008. A mobile lattice gas model for simulating pedestrian evacuation. Physica A: Statistical Mechanics and its Applications (2008).
  • Hartigan and Wong (1979) John A Hartigan and Manchek A Wong. 1979. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics) 28, 1 (1979), 100–108.
  • Helbing (1998) Dirk Helbing. 1998. A fluid dynamic model for the movement of pedestrians. arXiv preprint cond-mat/9805213 (1998).
  • Helbing and Molnar (1995) Dirk Helbing and Peter Molnar. 1995. Social force model for pedestrian dynamics. Physical review E (1995).
  • Helbing et al. (2001) Dirk Helbing, Peter Molnar, Illes J Farkas, and Kai Bolay. 2001. Self-organizing pedestrian movement. Environment and planning B: planning and design (2001).
  • Kang and Wang (2014) Kai Kang and Xiaogang Wang. 2014. Fully convolutional neural networks for crowd segmentation. arXiv preprint arXiv:1411.4464 (2014).
  • Kobourov (2012) Stephen G Kobourov. 2012. Spring embedders and force directed graph drawing algorithms. arXiv preprint arXiv:1201.3011 (2012).
  • Li et al. (2015) Teng Li, Huan Chang, Meng Wang, Bingbing Ni, Richang Hong, and Shuicheng Yan. 2015. Crowded scene analysis: A survey. IEEE Transactions on Circuits and Systems for Video Technology (2015).
  • Martella et al. (2017) Claudio Martella, Marco Cattani, and Maarten van Steen. 2017. Exploiting Density to Track Human Behavior in Crowded Environments. IEEE Communications Magazine 55, 2 (2017), 48–54.
  • Martella et al. (2014) Claudio Martella, Matthew Dobson, Aart van Halteren, and Maarten van Steen. 2014. From proximity sensing to spatio-temporal social graphs. In Pervasive Computing and Communications (PerCom). IEEE.
  • Martella et al. (2016) Claudio Martella, Armando Miraglia, Jeana Frost, Marco Cattani, and Maarten van Steen. 2016. Visualizing, clustering, and predicting the behavior of museum visitors. Pervasive and Mobile Computing (2016).
  • Martella et al. (2014) Claudio Martella, Maarten van Steen, Aart Halteren, Claudine Conrado, and Jie Li. 2014. Crowd textures as proximity graphs. IEEE Communications Magazine (2014).
  • Moon (1996) Todd K Moon. 1996. The expectation-maximization algorithm. IEEE Signal processing magazine 13, 6 (1996), 47–60.
  • Rodriguez et al. (2011) Mikel Rodriguez, Josef Sivic, Ivan Laptev, and Jean-Yves Audibert. 2011. Data-driven crowd analysis in videos. In Computer vision (ICCV), 2011 IEEE international conference on. IEEE, 1235–1242.
  • Sarmady et al. (2011) Siamak Sarmady, Fazilah Haron, and Abdullah Zawawi Talib. 2011. A cellular automata model for circular movements of pedestrians during Tawaf. Simulation Modelling Practice and Theory 19, 3 (2011), 969–985.
  • Sibson (1973) Robin Sibson. 1973. SLINK: an optimally efficient algorithm for the single-link cluster method. The computer journal (1973).
  • Solmaz et al. (2012) Berkan Solmaz, Brian E. Moore, and Mubarak Shah. 2012. Identifying Behaviors in Crowd Scenes Using Stability Analysis for Dynamical Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence (2012).
  • Vinh et al. (2010) Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research (2010).
  • Zhan et al. (2008) Beibei Zhan, Dorothy N Monekosso, Paolo Remagnino, Sergio A Velastin, and Li-Qun Xu. 2008. Crowd analysis: a survey. Machine Vision and Applications (2008).
  • Zhou et al. (2000) Aoying Zhou, Shuigeng Zhou, Jing Cao, Ye Fan, and Yunfa Hu. 2000. Approaches for scaling DBSCAN algorithm to large spatial databases. Journal of computer science and technology (2000), 509–526.