Are You in the Line? RSSI-based Queue Detection in Crowds

08/02/2018
by   Fang-Jing Wu, et al.
NEC Corp.

Crowd behaviour analytics focuses on behavioural characteristics of groups of people instead of individuals' activities. This work considers human queuing behaviour, a specific crowd behaviour of groups. We design a plug-and-play system solution to the queue detection problem based on Wi-Fi/Bluetooth Low Energy (BLE) received signal strength indicators (RSSIs) captured by multiple signal sniffers. The goal of this work is to determine whether a device is in the queue based only on RSSIs. The key idea is to extract features not only from each individual device's data but also from the mobility similarity between data of multiple devices and the mobility correlation observed by multiple sniffers. Thus, we propose single-device feature extraction, cross-device feature extraction, and cross-sniffer feature extraction for model training and classification. We systematically conduct experiments with simulated queue movements to study the detection accuracy. Finally, we compare our signal-based approach against a camera-based face detection approach in a real-world social event with a real human queue. The experimental results indicate that our approach reaches a minimum accuracy of 77% and significantly outperforms camera-based face detection, because people block each other's visibility whereas wireless signals can be detected without blocking.



I. Introduction

Recently, crowd behaviour analytics has attracted much attention and has boosted many promising applications such as crowd detection and estimation for public safety [1][2], social activity analytics [3], and space syntax analytics for exploring new business opportunities [4]. These applications exploit Internet-of-Things (IoT) sensing technology with ambient sensors, Wi-Fi and Bluetooth Low Energy (BLE) sniffers, and built-in sensors of smartphones to capture human behaviour. Meanwhile, many research efforts have paid attention to individual human activity analytics such as transportation activity detection [5] and daily activity recognition [6]. However, compared to individual human activity detection, crowd behaviour analytics focuses more on the behavioural characteristics of targeted groups of people than on individual activities.

Fig. 1: An overview of queuing behaviour detection system.

This work focuses on human queuing behaviour ("queuing behaviour" for short), which is a unique type of crowd behaviour created by groups of people rather than by each individual. In reality, humans queue up for a specific purpose, such as buying tickets, taking lifts in ski resorts, or waiting for taxis, in which the starting point of the queue is known. Their movements, observed by other people, have unique patterns such as movements towards the same direction, small-scale and slow movements, and periodic movements. On the other hand, since Wi-Fi/BLE-enabled smart devices (e.g., smartphones and wearable devices) have become more popular, the RSSIs in wireless packets from these devices carried by crowds provide insightful clues for capturing these unique crowd mobility patterns. Thus, we consider wireless signal sniffers as crowd mobility observers which capture broadcast packets (e.g., Wi-Fi probe request packets and BLE advertising packets) from mobile devices. This paper uses BLE technology to implement a proof-of-concept prototype due to privacy concerns, where opt-in data collection is adopted (for experimental purposes, we collected data only from a specific set of BLE beacons carried by participants). The system consists of multiple signal sniffers that capture queuing behaviour in crowds and extract feature patterns of queuing behaviour based only on RSSIs. Fig. 1 shows an overview of our system, which contains data sources, gateways, data analytics, and applications. These layers and the system are explained in detail in Section III. In this work, we deploy three sniffers to observe RSSI changes of crowds' devices (in general, more sniffers can provide richer information for detecting queuing behaviour, and they can be deployed strategically). One sniffer is deployed at the starting point of a potential queue, such as a service counter, and the other two are deployed at the left-hand side and the right-hand side along the queue. The goal of our system is to determine whether a person is in the queue based on the changes of RSSIs.

However, our system uses only RSSI information to extract queuing behaviour patterns, which raises the following technical challenges.

  • Noisy and fluctuating RSSIs: The RSSIs may vary dramatically over time, even when people stay static and close to the sniffers.

  • Heterogeneity of antenna sensitivity: The sensitivities of devices' antennas differ from each other. Specifically, the RSSIs of the same device observed by different sniffers may be very different from each other. Similarly, when multiple devices are carried by the same person, the RSSIs of these devices observed by the same sniffer show different variations.

  • Cross-entity feature extraction: Since queuing behaviour is created by groups of people, extracting feature patterns between multiple devices based on multiple observers (e.g., sniffers) is a new challenge.

To address the above technical issues, the key idea of our work is to extract feature patterns of crowd behaviour not only from each individual device's data but also from cross-device data and cross-sniffer data. Specifically, we consider three types of feature extraction: (1) single-device feature extraction, (2) cross-device feature extraction, and (3) cross-sniffer feature extraction. The first type of features captures each individual device's RSSI variation when the device's owner moves along the queue. The second type captures the mobility similarity between multiple devices when these devices' owners move together along the queue. The third type captures the mobility correlation observed by multiple sniffers when a particular device's owner moves along the queue. Based on the three types of features extracted from individuals and heterogeneous entities, we use well-known classifiers implemented in Weka [7] to verify the merit of these features. Note that our work mainly focuses on designing a holistic and plug-and-play solution to the queue detection problem using Internet-of-Things (IoT) sensing technology rather than on designing classification algorithms. The proposed proof-of-concept prototype system can be extended to Wi-Fi-based sniffing technology, where Wi-Fi probe request packets are captured for queue detection. We systematically conduct experiments with simulated queue movements to investigate how different parameters and setups affect detection accuracy. Finally, we compare our signal-based approach against a camera-based face detection approach in a real-world team-building social event, where people queued up for cakes, biscuits, and drinks. The experimental results indicate that our approach reaches a minimum accuracy of 77% and significantly outperforms the camera-based face detection approach, since wireless signals are not blocked by the crowd itself as in the case of camera-based detection.

II. Related Work

Recently, several research studies have paid attention to queue detection [8][9][10][11][12][13][14]. In [8], an acoustic sensor network is deployed along road segments to monitor traffic queues. The work in [9] designs a system to detect queuing activities of taxis and passengers based on their GPS information. Research efforts in [10][11] consider video-based approaches to detect vehicle queues. In [13], a mobile application is designed to capture human behaviour and detect queuing behaviour, where built-in sensors (e.g., accelerometers, compasses, and Bluetooth) are exploited for capturing human mobility information. The work in [14] proposes an RSSI-based approach to detect three statuses of queuing behaviour, namely the waiting period, the service period, and the leaving period, based on a single Wi-Fi sniffer's observations.

However, sensor-based approaches require well-deployed infrastructure, while GPS-based approaches work only in outdoor environments. Mobile sensing approaches require mobile applications to be pre-installed on people's smartphones. Video-based approaches may compromise personal privacy, and they are easily affected by lighting conditions and by crowds blocking the visible area. The RSSI-based approach in [14] focuses on RSSI features of each individual device, while we consider not only single-device features but also cross-device and cross-sniffer features, since the unique behavioural characteristics of human queues are created by groups of people as opposed to individuals.

III. Queuing Behaviour Detection

III-A. Queue Detection Problem

Assume that the starting point of a queue (e.g., a service counter) is known. We consider three signal sniffers, denoted $s_0$, $s_1$, and $s_2$, where BLE sniffing technology is used in this work due to privacy concerns. One sniffer, $s_0$, is deployed at the starting point, and the other two sniffers, $s_1$ and $s_2$, are symmetrically deployed along a straight line from the starting point towards the direction of the human queue. Suppose that each visitor carries a BLE transmitter in a designated environment (e.g., exhibitions). For a given time window $w$, there are three time series data streams captured by the three deployed sniffers, denoted by $D_0(w)$, $D_1(w)$, and $D_2(w)$, where each $D_i(w)$ is a sequence of BLE advertising packets. Each BLE advertising packet contains an RSSI and the device ID.

Definition 1

Given the three time series data streams $D_0(w)$, $D_1(w)$, and $D_2(w)$ captured by $s_0$, $s_1$, and $s_2$ during a time window $w$, the queue detection problem is to determine, for each device captured by the three sniffers, whether the device is in the queue that starts from $s_0$.

III-B. System Design

Fig. 1 shows our system design for queuing behaviour detection, which is composed of four layers: data sources, gateways, data analytics, and applications. BLE sniffers serve as data sources: they capture all types of BLE packets from users' devices and report them to the gateway layer. The crowd mobility sensing gateway registers the data types of interest and keeps track of new data arrivals, where BLE advertising packets are of interest in our case. Meanwhile, the crowd mobility sensing gateway updates the crowd mobility database whenever new BLE advertising packets are received. In addition to the crowd mobility database, there are three data analytical components: (1) data preprocessing, (2) feature extraction, and (3) queue behaviour model training and classification. The data preprocessing component performs data aggregation and low-pass filtering to eliminate noise from the collected RSSIs. The feature extraction component identifies mobility patterns from three aspects: single-device features, cross-device features, and cross-sniffer features. Single-device features are extracted from each individual device's RSSIs. Cross-device features are extracted from multiple user devices' RSSIs based on their mobility similarity. Cross-sniffer features are extracted from the observations of multiple sniffers based on the correlation of those observations. The queue behaviour model training and classification component conducts off-line training for detecting the status of a device, where the trained classifier makes a binary decision on the device status (i.e., "in-queue" or "not-in-queue"). Finally, the application layer is a knowledge consumer which requires the results of queue detection for improving user experience or the services provided by operators.
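To make the data flow through the analytics components concrete, the following minimal Python sketch chains the three steps; the record format and the helper functions passed in as arguments are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def run_pipeline(packets, preprocess, extract_features, classify):
    """Sketch of the data-analytics layer: preprocess -> extract features -> classify.

    packets:          iterable of (sniffer_id, device_id, timestamp, rssi) tuples
                      pulled from the crowd mobility database for one time window
    preprocess:       maps a raw [(timestamp, rssi), ...] list to a cleaned one
    extract_features: maps (device_id, cleaned streams dict) to a feature vector
    classify:         maps a feature vector to "in-queue" or "not-in-queue"
    """
    # Group raw RSSI readings per (sniffer, device) pair.
    streams = defaultdict(list)
    for sniffer_id, device_id, timestamp, rssi in packets:
        streams[(sniffer_id, device_id)].append((timestamp, rssi))

    # Data preprocessing (Section IV-B): aggregation and low-pass filtering.
    cleaned = {key: preprocess(sorted(series)) for key, series in streams.items()}

    # Feature extraction (Section IV-C) and per-device binary classification.
    devices = {device_id for (_, device_id) in cleaned}
    return {device_id: classify(extract_features(device_id, cleaned))
            for device_id in devices}
```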

IV. Queuing Behaviour Data Analytics

In this section, we first describe the methodology for collecting labeled data and explain the proposed algorithms for feature extraction. Then, we describe the means of queuing behaviour classification.

IV-A. Labeled Data Collection

Fig. 2 (a) shows our deployment for collecting labeled data, where human mobility behaviour can be categorized into three types. The first type of mobility behaviour is collected from in-queue devices, which move forward periodically. The second type is collected from not-in-queue devices that take random walks. The third type is collected from not-in-queue devices that stay static at certain locations. We launched such data collection campaigns in an indoor office environment to collect labeled data.

Fig. 2: (a) Deployment for collecting labelled data. (b) Data preprocessing results.

IV-B. Data Preprocessing

Since RSSIs are noisy, we perform data aggregation and low-pass filtering to clean up noise in the collected raw data. First, in the data aggregation phase, we aggregate the RSSI data stream of each device every $T$ seconds. Then, we apply a dynamic exponential smoothing filter (DESF) to the aggregated RSSI data streams. We implement the DESF as follows. The $i$-th output sample is $y_i = x_i$ if $i = 1$; otherwise, $y_i = \alpha x_i + (1 - \alpha) y_{i-1}$. Here, $x_i$ is the $i$-th input sample and $\alpha$ is a predefined parameter. Fig. 2 (b) shows the data before and after preprocessing, where the green curve is the raw data and the red curve is the processed data after aggregation and low-pass filtering.
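As a concrete illustration, here is a minimal Python sketch of the two preprocessing steps, assuming mean aggregation per interval and the exponential-smoothing form reconstructed above; the interval and the weight `alpha` are placeholder values, not the paper's settings.

```python
import statistics

def aggregate(samples, interval=1.0):
    """Average raw (timestamp, rssi) samples of one device into fixed intervals.

    `interval` corresponds to the aggregation period T; the value used here is
    a placeholder, not the paper's setting.
    """
    buckets = {}
    for ts, rssi in samples:
        buckets.setdefault(int(ts // interval), []).append(rssi)
    return [(bucket * interval, statistics.mean(values))
            for bucket, values in sorted(buckets.items())]

def smooth(aggregated, alpha=0.3):
    """Exponential smoothing as reconstructed above: the first output equals the
    first input; each later output mixes the new sample with the previous output.
    `alpha` is an assumed value for the predefined parameter.
    """
    output = []
    for ts, rssi in aggregated:
        if not output:
            output.append((ts, rssi))
        else:
            prev = output[-1][1]
            output.append((ts, alpha * rssi + (1 - alpha) * prev))
    return output
```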

IV-C. Feature Extraction

We extract three types of features from the preprocessed data: (1) single-device features, (2) cross-device features, and (3) cross-sniffer features. The first type captures the mobility characteristics extracted from each individual device's RSSIs. The second type captures the mobility similarity between crowds' devices. The third type captures the mobility correlation observed by multiple observers (i.e., sniffers). Our work extracts nine features in total from these three aspects. Below, we first make observations from the collected ground-truth data to identify significant features of in-queue devices and then define the mechanisms that extract these features for the off-line training in the next step. The details of the proposed feature extraction mechanisms are explained in the following subsections.

Fig. 3: Single-device features: (a) Feature 1: positive accumulated slopes, (b) Feature 2: approaching-counter patterns, and (c) Feature 3: near-counter RSSIs.

C.1 Single-device feature extraction

Let $P_i^d(w)$ denote the BLE packets from device $d$ captured by sniffer $s_i$ during time window $w$. For a given time window $w$ and an observed device $d$, we define seven feature extraction functions that compute the seven single-device features based only on RSSIs, as follows.

Feature 1: Positive accumulated slopes: Fig. 3 (a) shows the accumulated RSSI slopes of an in-queue device. As can be seen, the accumulated RSSI slopes are all positive as the device moves closer to the starting point. Thus, we define the accumulated slope for a given time window $w$ as $f_1(d, w) = \sum_{j} \frac{r_{j+1} - r_j}{t_{j+1} - t_j}$, where $r_j$ and $r_{j+1}$ are the RSSIs at consecutive timestamps $t_j$ and $t_{j+1}$, respectively.
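A minimal Python sketch of this feature, assuming the summed-slope form above; the (timestamp, rssi) input format follows the preprocessing sketch.

```python
def accumulated_slope(samples):
    """Feature 1 sketch: sum of the RSSI slopes between consecutive samples.

    `samples` is a time-ordered list of (timestamp, rssi) pairs of one device
    within the current time window; a positive total suggests the device is
    moving towards the sniffer at the starting point.
    """
    total = 0.0
    for (t_prev, r_prev), (t_next, r_next) in zip(samples, samples[1:]):
        if t_next > t_prev:  # skip duplicate timestamps to avoid division by zero
            total += (r_next - r_prev) / (t_next - t_prev)
    return total
```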

Feature 2: Approaching-counter patterns: In Fig. 3 (b), when the in-queue device is already very close to the starting point, the RSSIs increase dramatically even though the in-queue device makes only a small movement towards the starting point. Based on this experimental observation and signal propagation theory, the mapping from distance to RSSI change is not linear. We therefore define a binary function to extract this feature pattern from RSSIs: $f_2(d, w) = 1$ if $\max_j r_j - \min_j r_j > \delta$ during $w$, and $f_2(d, w) = 0$ otherwise, where $r_j$ are the RSSIs captured during $w$.

Here, $\delta$ is a pre-defined threshold. When there are dramatic changes in the RSSIs during $w$, the output of the binary function is 1, since the device is approaching the starting point. Otherwise, the output is 0.

Feature 3: Near-counter RSSIs: Fig. 3 (c) shows how the RSSIs change as in-queue devices move closer to the starting point. As can be seen, when devices stay near the starting point, their RSSIs are higher than those of devices far away from the starting point. We thus define a binary function to extract this feature from RSSIs: $f_3(d, w) = 1$ if all RSSIs $r_j$ captured during the time window $w$ satisfy $r_j > \rho$, and $f_3(d, w) = 0$ otherwise.

Here, $\rho$ is a pre-defined threshold. When all captured RSSIs during the time window $w$ exceed $\rho$, the output value of the binary function is 1, since the device is considered very close to the starting point. Otherwise, the output value is 0.
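The two binary features could be sketched as follows in Python; reading "dramatic change" as the RSSI swing within the window is a reconstruction, and the thresholds `delta` and `rho` are placeholder values.

```python
def approaching_counter(samples, delta=10.0):
    """Feature 2 sketch: 1 if the RSSI swing within the window exceeds `delta` dBm."""
    rssis = [rssi for _, rssi in samples]
    return 1 if rssis and (max(rssis) - min(rssis) > delta) else 0

def near_counter(samples, rho=-60.0):
    """Feature 3 sketch: 1 if every RSSI within the window is above `rho` dBm."""
    rssis = [rssi for _, rssi in samples]
    return 1 if rssis and all(rssi > rho for rssi in rssis) else 0
```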

Feature 4: RSSI stability observed by $s_0$: The RSSI variances of in-queue devices are smaller than those of devices taking random walks, since in-queue devices move slowly along the queue towards $s_0$. To track RSSI changes, we backtrack $k$ time windows to extract this feature pattern. For each given time window $w$, we backtrack $k$ time windows together with the current observations during $w$ and compute the RSSI variance as $f_4(d, w) = \mathrm{Var}\big(P_0^d(w-k), \ldots, P_0^d(w)\big)$.

Here, $\mathrm{Var}(\cdot)$ is the variance of the RSSIs in the given sets of BLE packets captured during these historical time windows.
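A minimal sketch of the stability features, assuming the variance is taken over the RSSIs pooled across the current and backtracked windows; one function serves Features 4 to 6, applied once per sniffer.

```python
import statistics

def rssi_stability(windows):
    """Features 4-6 sketch: variance of the RSSIs pooled over the current time
    window plus the k backtracked windows, as observed by a single sniffer.

    `windows` is a list of per-window RSSI lists (oldest first); returns 0.0
    when there are not enough samples to compute a variance.
    """
    pooled = [rssi for window in windows for rssi in window]
    return statistics.pvariance(pooled) if len(pooled) > 1 else 0.0
```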

Feature 5: RSSI stability observed by $s_1$: Similarly, we extract the RSSI stability observed by sniffer $s_1$ as $f_5(d, w) = \mathrm{Var}\big(P_1^d(w-k), \ldots, P_1^d(w)\big)$. Here, the number of backtracking time windows $k$ is the same as in Feature 4.

Feature 6: RSSI stability observed by $s_2$: Similarly, we extract the RSSI stability observed by sniffer $s_2$ as $f_6(d, w) = \mathrm{Var}\big(P_2^d(w-k), \ldots, P_2^d(w)\big)$.

Feature 7: Longer stay duration observed by $s_0$: In-queue devices have longer stay durations than not-in-queue devices that wander on random walks. We thus define the following function to extract this feature from the packets captured by sniffer $s_0$: $f_7(d, w) = t_{\text{last}} - t_{\text{first}}$, where $t_{\text{last}}$ is the timestamp at which the latest advertising packet from $d$ is received by $s_0$ and $t_{\text{first}}$ is the timestamp at which the first advertising packet is received by $s_0$.
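A one-line sketch of the stay-duration feature, under the same (timestamp, rssi) record assumption as the earlier snippets.

```python
def stay_duration(timestamps):
    """Feature 7 sketch: time between the first and the latest advertising packet
    of one device seen by a given sniffer; 0 if fewer than two packets were seen."""
    return max(timestamps) - min(timestamps) if len(timestamps) > 1 else 0.0
```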

Fig. 4: Cross-device feature extraction.

C.2 Cross-device feature extraction

Since queuing behaviour is created by crowds rather than by an individual, the similar mobility behaviour of people in a crowd creates RSSI patterns that are similar to each other. Below, we first identify this kind of cross-device feature based on experimental observations and then define a feature extraction function that speeds up its computation.

Feature 8: Mobility similarity: Fig. 4 visualizes the correlation matrix between all pairs of in-queue devices' RSSIs captured by sniffer $s_0$. As can be seen, there are high correlations between the RSSIs of in-queue devices when they queue up for a certain purpose. Although the sensitivities of the devices' antennas differ from each other, as shown in Fig. 3 (c), their RSSI patterns are similar to one another. Devices far away from the sniffer have higher correlation coefficients with others because of the sufficient number of RSSI data samples. However, computing the movement similarity between all pairs of devices incurs high computational complexity in the real world, since there are many combinations of devices. Therefore, we design the following low-complexity feature extraction function to speed up the computation.

  1. Sort devices based on the RSSI stability observed by $s_0$ (i.e., $f_4$). The idea is to cross-check the devices with more stable RSSIs first, because they are more likely to be in the queue.

  2. Let $P_0^{d_i}(w)$ and $P_0^{d_j}(w)$ denote the BLE packets captured by $s_0$ during time window $w$ for devices $d_i$ and $d_j$, respectively. The movement similarity function between $d_i$ and $d_j$ is defined as $\mathrm{corr}\big(P_0^{d_i}, P_0^{d_j}\big)$, which computes the correlation coefficient between the two sequences of RSSIs in the BLE packets $P_0^{d_i}$ and $P_0^{d_j}$. We backtrack $k$ time windows to compute the correlation coefficient between $d_i$ and $d_j$.

  3. For a given $d_i$ and the sorted list from step 1), we compute the movement similarity between $d_i$ and the devices in the sorted list one by one until at least $m$ devices have correlation coefficients greater than $\gamma$. In this case, $f_8(d_i, w) = 1$. Otherwise, $f_8(d_i, w) = 0$. Here, $m$ and $\gamma$ are predefined thresholds.

Note that the computation of this feature can be terminated early once we find $m$ devices that meet the above condition. In this case, we speed up the computation instead of evaluating all pairs of combinations.
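The early-terminating check might be sketched as follows; the thresholds `m` and `gamma` are placeholders, and `corr` is assumed to be supplied by the caller.

```python
def mobility_similarity(target, sorted_devices, corr, m=2, gamma=0.7):
    """Feature 8 sketch with early termination.

    `target` is the device under test, `sorted_devices` is the list of other
    devices already sorted by RSSI stability (most stable first), and
    `corr(a, b)` returns the correlation coefficient of their backtracked RSSI
    sequences. `m` and `gamma` are placeholders for the predefined thresholds.
    """
    matched = 0
    for other in sorted_devices:
        if other == target:
            continue
        if corr(target, other) > gamma:
            matched += 1
            if matched >= m:  # early termination: no need to check remaining devices
                return 1
    return 0
```

One way to implement `corr` in recent Python versions is `statistics.correlation` (Python 3.10+), a Pearson correlation over the two devices' aligned, backtracked RSSI sequences.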

Fig. 5: The workflow of the queue behavior model training and classification.
Fig. 6: Implementation of sniffers and controllable experimental setup with a simulated queue.

C.3 Cross-sniffer feature extraction

When an in-queue device makes a sequence of movements along the queue, there is a high correlation between the observations captured by the sniffers on both sides of the queue. Note that this feature exploits the consensus between sniffers, while the cross-device feature focuses on the mobility similarity between devices. Thus, we use cross-sniffer consensus to extract this feature as follows.

Feature 9: Mobility correlation: For a device $d$, the mobility correlation observed by sniffers $s_1$ and $s_2$ during time window $w$ is defined as $f_9(d, w) = \mathrm{corr}\big(P_1^d, P_2^d\big)$.

Here, we backtrack $k$ time windows to compute the correlation coefficient of device $d$'s RSSIs captured by $s_1$ and $s_2$.
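A minimal sketch of the cross-sniffer feature, assuming the two sniffers' RSSI sequences have already been aligned onto a common time base.

```python
from statistics import StatisticsError, correlation  # Python 3.10+, Pearson correlation

def mobility_correlation(rssi_side_left, rssi_side_right):
    """Feature 9 sketch: correlation of one device's RSSIs as observed by the two
    side sniffers over the current plus backtracked time windows.

    Both inputs are assumed to be aligned, equal-length sequences; in practice
    the two sniffers' streams would first be resampled onto a common time base.
    """
    if len(rssi_side_left) < 2 or len(rssi_side_left) != len(rssi_side_right):
        return 0.0
    try:
        return correlation(rssi_side_left, rssi_side_right)
    except StatisticsError:  # constant sequences have no defined correlation
        return 0.0
```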

Fig. 7: Detection accuracy vs. the number of backtracking time windows.
Fig. 8: Detection accuracy vs. the size of time window.

IV-D. Queue Behavior Classification

Fig. 5 shows the workflow of the designed queue behaviour model training and classification. After we extract the above features, they are used for off-line model training. Here, we consider three existing classifiers, J48Tree, RandomForest, and Naive Bayes, since all three work well for numerical and categorical data and incur low computation overhead, which can support real-time detection tasks in the future. After the model training phase, for each piece of testing data, the classifier determines whether the device corresponding to the testing data is in a queue.
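For readers who prefer Python, an analogous training and evaluation setup with scikit-learn might look as follows; the paper itself uses Weka, `DecisionTreeClassifier` (CART) is only a stand-in for Weka's C4.5-based J48, and the 10-fold cross-validation is an assumed evaluation protocol.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def evaluate(feature_vectors, labels):
    """Compare the three classifier families via 10-fold cross-validation.

    `feature_vectors` holds one nine-dimensional feature vector per device and
    time window (Section IV-C); `labels` are 1 for in-queue and 0 otherwise.
    """
    models = {
        "J48-like decision tree": DecisionTreeClassifier(),
        "RandomForest": RandomForestClassifier(n_estimators=100),
        "Naive Bayes": GaussianNB(),
    }
    return {name: cross_val_score(model, feature_vectors, labels, cv=10).mean()
            for name, model in models.items()}
```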

V. Evaluation

V-A. Implementation and Experimental Setup

We implement a packet sniffing program in Python for Raspberry Pi platforms. We use BLE sensing technology as an example for the prototype of our system; in practice, Wi-Fi sensing technology can be an alternative mechanism to capture human mobility by monitoring probe requests from smartphones. Fig. 6 (a)-(b) shows the software and hardware components of our sniffers. The packet monitor captures all types of BLE packets. The packet filter and decoder parses only BLE advertising packets and discards other types of BLE packets. The data reporter pushes new packet arrivals to the crowd mobility sensing gateway and then to the crowd mobility database for the queuing behaviour data analytics described in Section IV. The designed software components run as background processes on a Raspberry Pi 3, which has a built-in BLE module. The default parameters of our system are as follows. We aggregate data every $T$ seconds and use the smoothing parameter $\alpha$ for the low-pass filter in the data preprocessing component. Our system uses a fixed time window size of 60 seconds. The number of backtracking time windows for extracting RSSI stability (Features 4, 5, and 6) is $k = 8$, which corresponds to backtracking over 480 seconds. A pre-defined threshold $\delta$ (in dBm) is used for extracting the approaching-counter patterns (Feature 2), and a threshold $\rho$ (in dBm) for extracting near-counter RSSIs (Feature 3). Finally, we fix $m$ and $\gamma$ for extracting mobility similarity (Feature 8).
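The defaults that survive in the text above can be gathered into a small configuration object; every value marked as a placeholder below is an illustrative guess, not a number reported by the authors.

```python
from dataclasses import dataclass

@dataclass
class QueueDetectionConfig:
    """Default parameters of the prototype; values marked 'placeholder' are
    illustrative guesses, not numbers reported by the authors."""
    time_window_s: int = 60           # fixed time window size (Section V-A)
    backtrack_windows: int = 8        # 8 windows x 60 s = 480 s of backtracking
    aggregation_interval_s: float = 1.0   # placeholder for T
    smoothing_alpha: float = 0.3          # placeholder for the DESF parameter
    approaching_delta_dbm: float = 10.0   # placeholder for the Feature 2 threshold
    near_counter_rho_dbm: float = -60.0   # placeholder for the Feature 3 threshold
    similarity_min_devices: int = 2       # placeholder for m (Feature 8)
    similarity_gamma: float = 0.7         # placeholder for gamma (Feature 8)
```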

We conduct two types of experiments: (a) controllable experiments with simulated queuing behaviour and (b) real-world experiments with real human queuing behaviour in a team-building social event. In the controllable experiments, we compare the performance of different classifiers. In the real-world experiments, we compare our signal-based approach against a camera-based face detection approach using OpenCV [15].

First, we conduct controllable experiments with a simulated queue to verify our system; Fig. 6 (c) shows the experimental setup. Three sniffers are deployed in a large conference room. We place 7 BLE beacons on a straight strip of paper to simulate a queue and pull the strip from the starting point of the queue every 120 seconds to simulate human movements along the queue. We use three classifiers, J48Tree, RandomForest, and Naive Bayes, based on the Weka implementation [7], to evaluate the performance of queuing behaviour classification. We vary the number of backtracking time windows, the size of the time window, and the number of sniffers to study how these parameters affect the performance of our system.

Then, moving to a more realistic environment, the same setup is used in a real-world team-building party, where 11 people gather in an indoor conference room. At the beginning of the party, people queue up for cakes, biscuits, and drinks, casually talking with each other while waiting in the queue. We also deploy a camera in front of the cakes and record videos during the social event.

Fig. 9: Accuracy using different numbers of sniffers.
Fig. 10: Experimental results with a real human queue.

V-B. Experimental Results in a Simulated Queue

First, we vary the number of backtracking time windows in the feature extraction component from 2 to 12. As shown in Fig. 7, RandomForest provides the best accuracy compared to J48Tree and Naive Bayes. The accuracy of all classifiers increases gradually as the number of backtracking time windows grows and then becomes stable. This is because more backtracking time windows bring in more mobility information for computing RSSI variances, mobility similarity, and mobility correlation.

Then, we vary the size of the time window from 60 to 360 seconds when extracting features. Fig. 8 shows the accuracy of queuing behaviour detection. The accuracy decreases as the size of the time window increases from 60 to 240 seconds and then increases gradually as it grows from 240 to 360 seconds. A longer time window provides sufficient RSSIs for computing RSSI variances, mobility similarity, and mobility correlation; however, a time window that is too long may incur larger RSSI variances because crowds keep moving. In that case, RSSI variances no longer provide enough insightful information to differentiate the in-queue and not-in-queue statuses.

Next, we compare the detection accuracy when using only 1 sniffer and when using 3 sniffers. Fig. 9 shows the experimental results. As can be seen, the multi-sniffer approach improves the detection accuracy. However, the accuracy of the Naive Bayes approach is not improved, since some of the features used for model training are binary, whereas Naive Bayes generally works well with numerical data from which it can estimate distributions over continuous values.

V-C. Experimental Results with a Real Human Queue

Finally, we compare our signal-based approach against the camera-based approach in a real-world team-building social event. During the event, we record the ground truth through human observation. The recorded video contains 21,989 image frames. We use the OpenCV face detection library to count the people in the queue in each frame. Fig. 10 (a) shows the experimental results. The camera-based face detection approach provides lower accuracy, since most of the people in the queue are blocked by those closer to the starting point. As shown in Fig. 10 (b), the number of detected faces does not change much even when the queue becomes shorter. Camera-based approaches are commonly used for crowd mobility monitoring; however, they have limitations for detecting queuing behaviour because of blocked visibility. We conclude that signal-based approaches are more suitable for queuing behaviour detection, especially for long queues, as they do not suffer from blocking issues.

VI. Conclusion

This paper exploits RSSIs captured by multiple signal sniffers to classify whether people are in a queue. We propose three types of feature patterns, extracted from each individual device, from cross-device mobility similarity, and from cross-sniffer mobility correlation, for classification model training. Our approach can be applied to both Wi-Fi and BLE sensing systems. The experimental results indicate that our approach reaches a minimum accuracy of 77% and significantly outperforms the camera-based face detection approach, since wireless signals are not blocked by crowds.

VII. Acknowledgment

This work has been partially funded by the European Union’s Horizon 2020 research and innovation programme within the project “Worldwide Interoperability for SEmantics IoT” under grant agreement Number 723156.

References

  • [1] H. Li, E. C. Chan, X. Guo, J. Xiao, K. Wu, and L. M. Ni, “Wi-counter: Smartphone-based people counter using crowdsourced wi-fi signal data,” Human-Machine Systems, IEEE Transactions on, vol. 45, no. 4, pp. 442–452, 2015.
  • [2] K. Li, C. Yuen, and S. Kanhere, “Senseflow: An experimental study of people tracking,” in Proceedings of the 6th ACM Workshop on Real World Wireless Sensor Networks, ser. RealWSN ’15.   New York, NY, USA: ACM, 2015, pp. 31–34. [Online]. Available: http://doi.acm.org/10.1145/2820990.2820994
  • [3] D. Santani, J.-I. Biel, F. Labhart, J. Truong, S. Landolt, E. Kuntsche, and D. Gatica-Perez, “The night is young: Urban crowdsourcing of nightlife patterns,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp ’16.   New York, NY, USA: ACM, 2016, pp. 427–438. [Online]. Available: http://doi.acm.org/10.1145/2971648.2971713
  • [4] A. Mashhadi, U. G. Acer, A. Boran, P. M. Scholl, C. Forlivesi, G. Vanderhulst, and F. Kawsar, “Exploring space syntax on entrepreneurial opportunities with wi-fi analytics,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp ’16.   New York, NY, USA: ACM, 2016, pp. 658–669. [Online]. Available: http://doi.acm.org/10.1145/2971648.2971745
  • [5] F.-J. Wu and H. B. Lim, “UrbanMobilitySense: A user-centric participatory sensing system for transportation activity surveys,” IEEE Sensors Journal, vol. 14, no. 12, pp. 4165–4174, 2014.
  • [6] D. Riboni, T. Sztyler, G. Civitarese, and H. Stuckenschmidt, “Unsupervised recognition of interleaved activities of daily living through ontological and probabilistic reasoning,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp ’16.   New York, NY, USA: ACM, 2016, pp. 1–12. [Online]. Available: http://doi.acm.org/10.1145/2971648.2971691
  • [7] “Weka,” http://www.cs.waikato.ac.nz/ml/weka/.
  • [8] B. Barbagli, L. Bencini, I. Magrini, G. Manes, and A. Manes, “A traffic monitoring and queue detection system based on an acoustic sensor network,” International Journal on Advances in Networks and Services, vol. 4, no. 1-2, pp. 27–37, 2011.
  • [9] Y. Lu, S. Xiang, and W. Wu, “Taxi queue, passenger queue or no queue?: A queue detection and analysis system using taxi state transition,” in International Conference on Extending Database Technology (EDBT), 2015, pp. 23–27.
  • [10] M. Zanin, S. Messelodi, and C. Modena, “An efficient vehicle queue detection system based on image processing,” in International Conference on Image Analysis and Processing, 2003, pp. 232–237.
  • [11] R. K. Satzoda, S. Suchitra, T. Srikanthan, and J. Y. Chia, “Vision-based vehicle queue detection at traffic junctions,” in IEEE Conference on Industrial Electronics and Applications (ICIEA), 2012, pp. 90–95.
  • [12] Q. Li, Q. Han, X. Cheng, and L. Sun, “QueueSense: Collaborative recognition of queuing on mobile phones,” in International Conference on Sensing, Communication, and Networking (SECON), 2014, pp. 230–238.
  • [13] Q. Li, Q. Han, and L. Sun, “Collaborative recognition of queuing behavior on mobile phones,” IEEE Trans. Mobile Computing, vol. 15, no. 1, pp. 60–73, 2016.
  • [14] Y. Wang, J. Yang, Y. Chen, H. Liu, M. Gruteser, and R. P. Martin, “Tracking human queues using single-point signal monitoring,” in ACM Int’l Conf. on Mobile Systems, Applications, and Services, 2014, pp. 42–54.
  • [15] OpenCV, “Opencv,” http://opencv.org/.