An Improved Algorithm for Eye Corner Detection

by Anirban Dasgupta, et al.

In this paper, a modified algorithm for the detection of nasal and temporal eye corners is presented. The algorithm is a modification of the Santos and Proença method. In the first step, we detect the face and the eyes using classifiers based on Haar-like features. We then segment the sclera from the detected eye region and, from the segmented sclera, extract an approximate eyelid contour. Eye corner candidates are obtained using the Harris and Stephens corner detector. Finally, we introduce a post-pruning of the eye corner candidates to locate the eye corners. The algorithm has been tested on the Yale and JAFFE databases as well as on our own database.



I Introduction

Eye corners are the regions where the upper and lower eyelids meet [1]. The two corners are technically termed the temporal and nasal canthi, as shown in Fig. 5. In applications where eye movements are analyzed, eye corners serve as reference points [2], because they are more stable than the iris: their shape and orientation are not affected by gaze direction or the state of eye closure.

Eye corner detection has been a challenging problem in computer vision owing to the following issues:

  • An eye corner in an image is not necessarily a single pixel

  • Nasal eye corners can be occluded by the nose

  • Illumination effects and shadows may remove information related to the corner

The above issues make eye corner detection a more challenging task than classical corner detection in computer vision. The important works addressing eye corner detection are summarized in Table I.
This work attempts to improve the Santos and Proença method [3] for eye corner detection, thereby addressing some of the above issues. The significant contributions of this work are as follows:

  • An online framework for eye corner detection

  • Improvement of the Santos and Proença method for webcam-quality images

  • Post pruning of eye corner candidates

Author                  Work
Xia et al. [2]          Variance projection functions
Zhu et al. [1]          Contour extraction and ellipse
Xu et al. [4]           Semantic feature extraction
Xu et al. [5]           Improved local projection functions and circle integral
Santos and Proença [3]  Eye contour approximation on sclera-segmented eye image
TABLE I: Earlier Works
Fig. 1: Proposed Overall Framework
Fig. 2: The overall scheme
Fig. 3: Face and Eye Detection
Fig. 4: Eye corner Detection

II Santos and Proença Method for Eye Corner Detection

In the Santos and Proença method [3], an eye image is used as input to the detector. First, a noise-free binary iris segmentation mask is obtained from the eye image. The next step is segmentation of the sclera region: since the sclera is the most unsaturated region in the eye image, an HSV transformation yields the lowest magnitudes in the saturation plane. With the iris and sclera segmented out, the next stage is the approximation of the eyelid contour. This is achieved by morphological dilation of the iris segmentation mask with a horizontal structuring element, which expands the iris region horizontally. Finally, point-by-point multiplication between the dilated mask and the enhanced data provides a good approximation of the eyelid contour. The subsequent step generates a set of candidate eye corner positions using the Harris and Stephens corner detector.
This method works well for images of adequate resolution and clarity. However, for inferior-quality images, such as those obtained using a standard webcam, its performance is limited.

Fig. 5: Temporal and Nasal Canthus

III Proposed Algorithm

Our proposed scheme is a real-time camera-based system, depicted as a schematic in Fig. 1. For image acquisition, the scheme uses a standard webcam. The algorithm has been implemented in real time using a Sony PS3 Eye webcam at 30 fps. First, face detection is carried out in a given frame, followed by eye detection. Eye detection is confined to a Region of Interest (ROI) in the detected face area; the ROI is selected based on our previous work [6]. This step finds each eye separately. The algorithm has been made robust using preprocessing techniques for illumination correction.

III-A Image Enhancement

Preprocessing of the image is necessary for filtering out noise while preserving relevant information. The filtering is particularly essential because the eye corner is sometimes not clearly visible under poor illumination conditions. The preprocessing begins by equalizing the histogram of the image, for which we use the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique [7]. In this method, the image is partitioned into small blocks (tiles), and each block undergoes histogram equalization, confining the equalization to a local region. An issue with this method is the amplification of any noise present; this is avoided by contrast limiting. After the CLAHE operation, bilinear interpolation is applied to remove artifacts at tile borders.
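To make the tiling and clip-limit idea concrete, here is a deliberately simplified NumPy sketch of tile-wise equalization with contrast limiting. The block size and clip limit are placeholder values (the paper's tile size is not reproduced here), and the bilinear blending between tiles is omitted; a practical implementation would use an off-the-shelf routine such as OpenCV's cv2.createCLAHE.

```python
import numpy as np

def equalize_tile(tile, clip_limit=40, nbins=256):
    """Histogram-equalize one tile, clipping bins to limit contrast."""
    hist, _ = np.histogram(tile, bins=nbins, range=(0, 256))
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess // nbins  # redistribute clipped counts
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
    return cdf[tile].astype(np.uint8)  # map each pixel through the clipped CDF

def simple_clahe(img, tile=8):
    """Tile-wise equalization; img sides must be multiples of `tile`."""
    out = np.empty_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            out[y:y + tile, x:x + tile] = equalize_tile(img[y:y + tile, x:x + tile])
    return out
```

Clipping each histogram bin before computing the CDF caps the slope of the intensity mapping, which is what prevents noise amplification in near-uniform tiles.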

III-B Face and Eye Detection

Face and eye detection forms the first step in the algorithm for the localization of the eye corners. A classifier based on Haar-like features was selected for this stage, with parameters chosen based on earlier work [8]. Once the face is detected, an ROI is selected from the facial region; the ROI selection scheme has been reported in previous work [9]. The search for eyes is thus confined to a reduced area, which reduces computation and improves speed.

III-C Eye Corner Detection

The corner detection operates on the detected eye region.

III-C1 Sclera Segmentation

As proposed in [3], the sclera is segmented out by converting the RGB image to HSV and then thresholding the saturation plane. For grayscale images, the color space conversion is not required, and the thresholding can be applied directly. The idea rests on the fact that the sclera is the most unsaturated region in the eye image. Certain noisy pixels remain because of blood vessels in the sclera; this is addressed by morphological opening of the sclera region with an elliptical mask.
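A minimal NumPy sketch of this step follows. The saturation threshold below is our own illustrative value, not one given in the paper, and a 3×3 square structuring element stands in for the elliptical mask.

```python
import numpy as np

def saturation(rgb):
    """HSV saturation plane: S = 1 - min(R,G,B) / max(R,G,B)."""
    rgb = rgb.astype(np.float64)
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    return np.where(mx > 0, 1.0 - mn / np.maximum(mx, 1e-9), 0.0)

def erode(mask):
    """3x3 binary erosion via shifted logical ANDs."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation via shifted logical ORs."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def segment_sclera(rgb, sat_thresh=0.15):
    """Low-saturation pixels, cleaned by a morphological opening."""
    mask = saturation(rgb) < sat_thresh
    return dilate(erode(mask))  # opening removes small noisy specks
```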

III-C2 Eyelid Contour Approximation

With this morphological post-processing, the sclera region is segmented out. The boundary of the mask is then overlaid on the eye image to obtain the eyelid contour approximation.
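The boundary extraction can be sketched as the set difference between the mask and its 3×3 erosion (a NumPy illustration; overlaying the result on the eye image is omitted):

```python
import numpy as np

def mask_boundary(mask):
    """One-pixel-wide boundary: mask minus its 3x3 erosion."""
    p = np.pad(mask, 1, constant_values=False)
    eroded = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            eroded &= p[dy:dy + h, dx:dx + w]
    return mask & ~eroded
```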

III-C3 Eye Corner Candidate Selection

The corner score is obtained from the sum of squared differences (SSD) between the corresponding pixels of two patches in the eyelid contour image. For a shift $(u,v)$ and a window function $w(x,y)$, the score is

$$E(u,v) = \sum_{x,y} w(x,y)\,\left[I(x+u, y+v) - I(x,y)\right]^2$$

A circular window $w$ is used to make the response isotropic. With a Taylor series expansion and proper approximation, we have

$$E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

The Harris matrix $M$ is obtained as

$$M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

The differential of the corner score is considered for finding the corner candidates: points where both eigenvalues of $M$ are large yield a large, positive corner response.
III-C4 Post-Pruning

Since the actual eye corners lie at the extreme ends of the eyelid contour, the candidate pair separated by the largest distance is selected as the eye corners. In case of a tie, the mean of the equidistant candidates is taken as the corner estimate.
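The farthest-pair rule can be sketched as follows (a NumPy illustration; the function name and input format are ours, and the tie-handling by averaging is omitted for brevity):

```python
import numpy as np

def prune_corners(candidates):
    """Return the pair of candidates with the largest mutual distance.

    `candidates` is an (N, 2) array-like of (x, y) corner candidates
    lying on the approximated eyelid contour.
    """
    c = np.asarray(candidates, dtype=np.float64)
    # Pairwise Euclidean distance matrix, shape (N, N)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    return c[i], c[j]
```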

Fig. 6: Detection Results for images taken from (a) Yale Database (b) JAFFE Database

IV Results

A face database of 30 subjects was prepared using a standard Sony PS3 web camera for testing the algorithm. Some sample images from the database are shown in Fig. 7. The algorithm has also been tested on the Yale Face Database [10] and the JAFFE Database [11]. A sample set of 200 images in total was randomly selected from the databases. The percentage mean-squared pixel error in eye corner localization is provided in Table II for the different databases; the error is computed with respect to manually marked end-points. The processing rate of the algorithm was found to be 16.2 fps with online testing and 20.4 fps on offline databases. The speed may be improved by GPU-based implementations and parallel schemes.

Fig. 7: Sample images from our created database
Database             % Squared pixel error
JAFFE database       4.5
Yale Face database   6.5
IITKGP database      8.9
TABLE II: Percentage Error in eye corner localization

V Conclusion

In this paper, we propose an algorithm that uses a standard web camera to localize the eye corners effectively. It is an improvement over the Santos and Proença method; the significant modification lies in the post-pruning of the eye corner candidates. The method achieves less than 10% localization error on all three tested databases. Future work may test the algorithm on infrared and near-infrared images [12] and make appropriate modifications so that its applications can be extended to areas where night vision is preferable.


The authors would like to acknowledge the subjects for participation in the experiment.


  • [1] J. Zhu and J. Yang, “Subpixel eye gaze tracking,” in Proc. Fifth IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, 2002, pp. 124–129.
  • [2] X. Haiying and Y. Guoping, “A novel method for eye corner detection based on weighted variance projection function,” in Proc. 2nd International Congress on Image and Signal Processing (CISP ’09). IEEE, 2009, pp. 1–4.
  • [3] G. Santos and H. Proença, “A robust eye-corner detection method for real-world data,” in Proc. International Joint Conference on Biometrics (IJCB). IEEE, 2011, pp. 1–7.
  • [4] C. Xu, Y. Zheng, and Z. Wang, “Semantic feature extraction for accurate eye corner detection,” in Proc. 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008, pp. 1–4.
  • [5] G. Xu, Y. Wang, J. Li, and X. Zhou, “Real time detection of eye corners and iris center from images acquired by usual camera,” in Proc. Second International Conference on Intelligent Networks and Intelligent Systems (ICINIS ’09). IEEE, 2009, pp. 401–404.
  • [6] A. Dasgupta, A. George, S. Happy, A. Routray, and T. Shanker, “An on-board vision based system for drowsiness detection in automotive drivers,” International Journal of Advances in Engineering Sciences and Applied Mathematics, vol. 5, no. 2–3, pp. 94–103, 2013.
  • [7] M. Singvi, A. Dasgupta, and A. Routray, “A real time algorithm for detection of spectacles leading to eye detection,” in Proc. 4th International Conference on Intelligent Human Computer Interaction (IHCI). IEEE, 2012, pp. 1–6.
  • [8] S. Gupta, A. Dasgupta, and A. Routray, “Analysis of training parameters for classifiers based on Haar-like features to detect human faces,” in Proc. International Conference on Image Information Processing (ICIIP). IEEE, 2011, pp. 1–4.
  • [9] A. Dasgupta, A. George, S. Happy, and A. Routray, “A vision-based system for monitoring the loss of attention in automotive drivers,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1825–1838, 2013.
  • [10] K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005.
  • [11] M. J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, and J. Budynek, “The Japanese Female Facial Expression (JAFFE) database,” 1998.
  • [12] S. Happy, A. Dasgupta, A. George, and A. Routray, “A video database of human faces under near infra-red illumination for human computer interaction applications,” in Proc. 4th International Conference on Intelligent Human Computer Interaction (IHCI). IEEE, 2012, pp. 1–4.