Characterizing driving behavior using automatic visual analysis

03/13/2015 ∙ by Mrinal Haloi, et al. ∙ IIIT Bangalore, ERNET India

In this work, we present a rash-driving detection algorithm using a single wide-angle camera sensor, particularly useful in the Indian context. To our knowledge, the rash-driving problem has not been addressed using image processing techniques (existing works use other sensors such as accelerometers). The car image processing literature, though rich and mature, does not address the rash-driving problem. In this work-in-progress paper, we present the need to address this problem, our approach, and our future plans to build a rash-driving detector.


1 Introduction

India is one of the most accident-prone countries: according to the NCRB report, 135,000 people died in road accidents in 2013, with property damage worth $20 billion [1]. Accidents and unusual traffic congestion often arise from the careless and impatient nature of drivers. In most cases drivers do not follow lane and traffic rules, leading to congestion and accidents. Taking effective measures on the traffic situation [5, 2] and driver behaviour [15] can prevent accidents and congestion.

A developing country like India needs an effective traffic monitoring and management system. Towards this, we propose a visual-analysis-based driving-behaviour monitoring system. The visual analysis covers acceleration, lane-changing, and distance-keeping behaviour (both from nearby cars and from pedestrians). To enhance traffic safety and make roads free of accidents and congestion, cab companies, both government and private, can adopt our system. Public transportation departments can install it in buses and other vehicles, and at traffic junctions, for monitoring.

In developed countries such as the U.S.A., with the gradual emergence of autonomous driving research, efforts are underway to build smart driving systems that can drive more safely than humans, without fatigue, and can be programmed to follow traffic rules. Even for these autonomous cars, modelling self-driving behaviour by considering the distances of surrounding cars and detecting pedestrians is very relevant.

In this work, we use a single wide-angle camera sensor to capture the surrounding environment for visual analysis of other drivers' behaviour and detection of nearby obstacles. From this data, informative features, namely fast sideways and forward acceleration, wrong-direction driving, frequent lane changing, and getting close to other cars and pedestrians, are computed using visual analysis techniques. From this collection of features, rash driving behaviour can be detected. In Figure 1 we depict different possible scenarios for rash-driving detection using cameras mounted on different infrastructures.

Figure 1: Rash driving Scenarios

The solution to the problem of rash-driving detection using visual analysis is a novel contribution (as compared to [7]). It is also a socially relevant problem in the Indian context. So far, we have defined and extracted the relevant visual features on publicly available datasets. We have collected a small sample of data in the city of Bangalore for some initial experiments. In the future, we plan to record more videos by tying up with professional drivers to collect a new dataset, to test and advance this initial approach. We also plan to work with government agencies interested in sharing data from traffic junctions in Bangalore.

2 Related Work

The related literature can be classified into three categories. First, works on image processing and computer vision using single or multiple cameras facing the road. Second, driver behaviour understanding using a camera facing the driver. Finally, a limited literature on rash driving, albeit not using image processing.

In the first category, we have works on advanced driver assistance systems, traffic safety, autonomous vehicle navigation, and driver behaviour modelling using multiple cameras, LIDAR, RADAR, and other sensors. These works focus on image processing and learning-based methods for lane detection, road segmentation, traffic sign detection and recognition, and 3D modelling of the road environment (e.g. [11, 2, 5, 17]). Parallax flow computation was used by Baehring et al. for detecting overtaking and close cut-in vehicles [2]. For detecting and avoiding collisions, Hong et al. used RADAR, LIDAR, a camera, and an omnidirectional camera respectively [5, 4]: objects in the LIDAR data were classified as static or dynamic and tracked using an extended Kalman filter, while the omnidirectional camera provided a wide view of the surrounding situation. For forward collision detection, Srinivasa et al. used forward-looking camera and radar data [17].

Regarding the second and third categories, the literature is fairly limited. In some works, driver inattentiveness was modelled using fatigue detection, drowsiness, eye tracking, cell-phone usage, etc. Ji et al. [13] presented a tracking method for eye, gaze, and face pose, and Hu et al. [12] used an SVM-based method for driver drowsiness detection. Trivedi et al. modelled driver behaviour using head movements for detecting driver gaze and distraction, targeting advanced driver safety [19, 20]. Using accelerometer and orientation sensor data [7], a rash-driving warning system was developed as a mobile application.

3 Characterizing rash driving

Rash drivers generally tend to accelerate quickly, both sideways and in the forward direction. They change lanes frequently and get dangerously close to other vehicles and people. In this section we describe our rash-driving estimation algorithm, as visualized in Fig. 2. From the video we take two consecutive frames and extract features from them. These features act as the input to the rash-driving algorithm, which is based on thresholding the feature values. If rash driving is detected, we extract the number plate of the car; otherwise we run the algorithm on the next pair of consecutive frames.

Figure 2: Our rash driving detection algorithm
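
As a work-in-progress sketch, the loop of Figure 2 might look like the following Python skeleton. The feature names and threshold values here are illustrative placeholders, not our final feature set:

```python
def extract_features(prev_frame, frame):
    """Placeholder for the visual features of Sections 3.1-3.4
    (acceleration cues, lane-change count, distance to obstacles)."""
    return {"lateral_flow": 0.0, "lane_changes": 0, "min_gap_m": 99.0}

def is_rash(features, thresholds):
    """Rule-based decision: flag if any feature crosses its threshold."""
    return (features["lateral_flow"] > thresholds["lateral_flow"]
            or features["lane_changes"] > thresholds["lane_changes"]
            or features["min_gap_m"] < thresholds["min_gap_m"])

def process(frames, thresholds):
    """Scan consecutive frame pairs; on detection, the flagged frame
    would be handed off to number-plate extraction (not shown)."""
    flagged = []
    for prev, cur in zip(frames, frames[1:]):
        if is_rash(extract_features(prev, cur), thresholds):
            flagged.append(cur)
    return flagged
```

In a learned variant, `is_rash` would be replaced by a probabilistic model trained on annotated data, as discussed at the end of this section.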

3.1 Fast side-ways and forward Acceleration

Rapid acceleration also contributes to rash driving. By computing optical flow we can estimate horizontal and vertical flow changes in the road environment. Frequent changes in horizontal flow in the regions of detected cars result from rash lane changing, while vertical flow changes give knowledge about the relative velocity change of the test car with respect to surrounding cars. From the optical flow of the surrounding region we predict the rash behaviour of other cars. The exact procedure for computing the discrete flow is described below.

3.1.1 Discrete Flow Computation

Optical flow is a measure of pixel velocity between two frames of a video. Below we present the objective function for optical flow computation [18]:

E(u, v) = Σ_{i,j} { ρ_D(I_1(i, j) − I_2(i + u_{i,j}, j + v_{i,j})) + λ[ρ_S(u_{i,j} − u_{i+1,j}) + ρ_S(u_{i,j} − u_{i,j+1}) + ρ_S(v_{i,j} − v_{i+1,j}) + ρ_S(v_{i,j} − v_{i,j+1})] }

where u and v are respectively the horizontal and vertical components of the flow field between frames I_1 and I_2, λ is a regularization parameter based on the expected smoothness of the flow field, and ρ_D and ρ_S are the data and spatial penalty functions. The values of u and v are calculated by minimizing E(u, v).

Figure 3: Optical flow characteristics
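
For illustration, with quadratic penalty functions this objective reduces to the classical Horn–Schunck formulation, which can be minimized by simple fixed-point iterations. The following NumPy sketch is a toy implementation only (our actual flow computation follows Sun et al. [18]):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.1, n_iter=100):
    """Horn-Schunck fixed-point iteration: quadratic data term on the
    linearized I1(i,j) - I2(i+u, j+v), plus quadratic smoothness on
    neighboring flow differences, weighted by alpha^2."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Ix = np.gradient(I1, axis=1)   # horizontal image gradient
    Iy = np.gradient(I1, axis=0)   # vertical image gradient
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def neighbor_avg(f):
        # 4-neighbor average; np.roll wraps at borders (a simplification)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```

The horizontal component u captures sideways motion (lane changing), while v relates to relative forward motion of surrounding cars.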

3.2 Wrong direction driving

In Indian conditions, vehicles coming in the wrong direction are another frequent case of rash driving, or rather nuisance. Wrong-direction driving is easily estimated by observing anomalies in the optical flow within lanes.
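
A minimal sketch of this anomaly check, assuming per-lane flow vectors and a known expected lane direction (both hypothetical inputs here):

```python
import numpy as np

def wrong_direction(flow_vectors, lane_direction, frac_thresh=0.5):
    """Flag a lane region as wrong-direction driving if most flow
    vectors point against the lane's expected travel direction
    (negative projection onto the lane direction)."""
    d = np.asarray(lane_direction, dtype=float)
    d /= np.linalg.norm(d)
    proj = np.asarray(flow_vectors, dtype=float) @ d
    return np.mean(proj < 0) > frac_thresh
```

The fraction threshold guards against isolated noisy flow vectors triggering a false alarm.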

3.3 Frequent lane change detection

Another characteristic of rash drivers is frequent lane changing. We have used a robust, illuminant-invariant lane detection system in our work, using inverse perspective projection [3] and cubic interpolation with RANSAC curve fitting [10]. We have assumed a parabolic road model. From the fitted road lanes, the relative position of other vehicles with respect to the lanes can be estimated. Our lane detection algorithm also computes the departure angle of the test car from the current lane.

Figure 4: Camera setup and Road model

We have used the Lab color space to separate the color and illuminant parts of images for better detection of lane lines, using 2nd- and 4th-order steerable filters. In Fig. 4 we present the camera setup and the assumed road model.

Figure 5: Lane detection algorithm result
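
The RANSAC curve-fitting step on the parabolic road model can be sketched as follows (illustrative only; the candidate lane pixels are assumed to come from the inverse perspective mapping and steerable-filter stages above):

```python
import numpy as np

def ransac_parabola(points, n_iter=200, inlier_tol=0.5, seed=0):
    """Fit a parabolic lane model x = a*y^2 + b*y + c to candidate lane
    pixels with RANSAC: repeatedly fit 3 random points exactly, keep the
    model with the most inliers, then refit on all inliers (least squares).

    points: array of (x, y) pixel coordinates."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(pts), size=3, replace=False)
        coeffs = np.polyfit(y[idx], x[idx], 2)          # exact 3-point fit
        residual = np.abs(np.polyval(coeffs, y) - x)
        inliers = residual < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set
    return np.polyfit(y[best_inliers], x[best_inliers], 2)
```

Fitting x as a function of the image row y matches the near-vertical appearance of lane lines after perspective correction.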

3.4 Driving-close-to-vehicles and people-in-front behaviour

Not maintaining a proper distance from nearby cars or pedestrians is also a facet of rash driving. We have used a HOG-feature-based deformable part model for detecting and locating other cars and pedestrians with respect to the lane lines. For detecting and locating objects in the image we use a pyramid-based template matching method, where we train car and person models using a deformable part model [9, 8] based on HOG features [6]. This method can detect cars and people very efficiently, even under occlusion. The latent-SVM-trained models are shown in Fig. 6, and detected cars and people are shown in Fig. 7 (reference for the images: [11]).

Figure 6: Root car and person models and their parts [9, 8]
Figure 7: Car and person detected using the above deformable model
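
For intuition, the HOG feature underlying the detector [6] can be sketched as orientation histograms of gradient magnitudes over small cells; block normalization and the part-based matching of [9, 8] are omitted in this toy version:

```python
import numpy as np

def hog_cell_histograms(img, cell=8, n_bins=9):
    """Minimal HOG sketch: per-pixel gradient magnitude and unsigned
    orientation (0-180 degrees, as in Dalal-Triggs), accumulated into
    orientation histograms over cell x cell blocks. No block
    normalization; for illustration only."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, n_bins))
    for i in range(ch * cell):
        for j in range(cw * cell):
            hist[i // cell, j // cell, bins[i, j]] += mag[i, j]
    return hist
```

The full detector concatenates normalized block histograms and scores them at multiple pyramid scales with the latent-SVM part model.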

3.4.1 Pinhole Camera Model

To determine the distance of an obstacle from the test car we use the pinhole camera model, which gives good accuracy for objects in front of the test car. From error analysis we set different offsets for approximately measuring distance.

If a 3D point P = (u, v, w)ᵀ has pinhole-projected image point (x, y), their relation [16] is given by

x = [φ_x(ω_11 u + ω_12 v + ω_13 w + τ_x) + γ(ω_21 u + ω_22 v + ω_23 w + τ_y)] / (ω_31 u + ω_32 v + ω_33 w + τ_z) + δ_x

y = φ_y(ω_21 u + ω_22 v + ω_23 w + τ_y) / (ω_31 u + ω_32 v + ω_33 w + τ_z) + δ_y

where the intrinsic matrix is

Λ = [ φ_x  γ    δ_x ]
    [ 0    φ_y  δ_y ]
    [ 0    0    1   ]

with focal-length parameters φ_x, φ_y, skew γ, and principal point (δ_x, δ_y). The rotation matrix of the camera is Ω = [ω_mn] (3 × 3), and the translation vector is τ = (τ_x, τ_y, τ_z)ᵀ.
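
As a concrete illustration, the following sketch projects a 3D point with the pinhole relation above and, for points on a flat road plane, recovers depth from the known camera height. Both the flat-road assumption and the intrinsic values in the test are made up for illustration:

```python
import numpy as np

def project(P, Lam, Omega, tau):
    """Pinhole projection: camera coordinates p = Omega @ P + tau,
    then image coordinates via the intrinsic matrix Lam with a
    perspective divide (zero skew assumed in Lam if Lam[0,1] == 0)."""
    p = Omega @ np.asarray(P, dtype=float) + np.asarray(tau, dtype=float)
    q = Lam @ p
    return q[:2] / q[2]

def ground_distance(y_pixel, phi_y, delta_y, cam_height):
    """Monocular range for a point on a flat road plane, camera at
    height H looking along the road: inverting the y equation gives
    depth w = phi_y * H / (y - delta_y)."""
    return phi_y * cam_height / (y_pixel - delta_y)
```

This kind of inversion is what lets a single calibrated camera approximate the distance to a detected car or pedestrian whose base touches the road.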

Finally, employing all the features described, we make an estimate of rash driving. For now, our proposed system is rule-based. In the future, we will collect samples with and without rash driving, using professional drivers. Using a generative machine learning approach, we can build a probabilistic model to predict rash driving. We are also considering recording data with naive volunteers and manually annotating the parts of the data where rash-driving tendencies are seen, so as to validate the model.

4 Conclusions

In this paper we have described a visual analysis method to characterize driving behavior, with a specific focus on rash driving. Our algorithm is based on calibrated single-camera images. The methods are general enough to work on cameras placed on cars as well as on infrastructure. In the future, we plan to integrate this module with an automatic number-plate detection and recognition module (as in [14]) for a traffic monitoring application. Though our work is ongoing and preliminary, we believe such a system can have a good societal impact. As described in Section 1, we will record a rash-driving dataset in Indian conditions and test our methods. We will also conduct a requirements study with government agencies.

References

  • [1] http://www.americanbazaaronline.com/2013/08/21/road-to-hell-every-3-7-minutes-death-swoops-in/. 2013.
  • [2] D. Baehring et al. Detection of close cutin and overtaking vehicles for driver assistance based on planar parallax. IEEE Intell. Veh. Symp., pages 261–266, June 2005.
  • [3] M. Bertozzi et al. Real-time lane and obstacle detection on the GOLD system. Proceedings of the IEEE Intelligent Vehicles Symposium, pages 213–218, 1996.
  • [4] H. Cheng et al. Enhancing a driver's situation awareness using a global view map. IEEE International Conference on Multimedia and Expo, pages 1019–1022, July 2007.
  • [5] H. Cheng et al. Interactive road situation analysis for driver assistance and safety warning systems: Framework and algorithms. IEEE Transactions on intelligent transportation systems, 8, March 2007.
  • [6] N. Dalal et al. Histograms of oriented gradients for human detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 886–893, 2005.
  • [7] P. Dhar et al. Unsafe driving detection system using smartphone as sensor platform. International Journal of Enhanced Research in Management & Computer Applications, 3:65–70, March 2014.
  • [8] P. Felzenszwalb et al. Discriminatively trained deformable part models, release 5. CVPR, 2008.
  • [9] P. Felzenszwalb et al. Object detection with discriminatively trained part based models. PAMI, 2009.
  • [10] M. Fischler et al. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981.
  • [11] S. Houben et al. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. International Joint Conference on Neural Networks, 2013.
  • [12] S. Hu et al. Driver drowsiness detection with eyelid related parameters by support vector machine. Expert Systems with Applications, Elsevier.
  • [13] Q. Ji et al. Real-time eye, gaze, and face pose tracking for monitoring driver vigilance. Real-Time Imaging, Elsevier.
  • [14] C. Nikolaos et al. A license plate-recognition algorithm for intelligent transportation system applications. IEEE Transactions on Intelligent Transportation Systems, 7, 2006.
  • [15] S. Park et al. Driver activity analysis for intelligent vehicles: Issues and development framework. In Proc. of IEEE Intelligent Vehicles, June 2005.
  • [16] S. Prince. Computer Vision: Models Learning and Inference. Cambridge University Press, 2012.
  • [17] N. Srinivasa et al. A fusion system for real-time forward collision warning in automobiles. IEEE Intell. Transp. Syst., 1:457–462, 2003.
  • [18] D. Sun et al. A quantitative analysis of current practices in optical flow estimation and the principles behind them. International Journal of Computer Vision, pages 115–137, 2014.
  • [19] A. Tawari et al. Continuous head movement estimator (cohmet) for driver assistance: Issues, algorithms and on-road evaluations. IEEE Transactions on Intelligent Transportation Systems, 2014.
  • [20] A. Tawari et al. Looking-in and looking-out vision for urban intelligent assistance: Estimation of driver attention and dynamic surround for safe merging and braking. IEEE Intelligent Vehicles Symposium, 2014.