Towards Real-time Eyeblink Detection in The Wild: Dataset, Theory and Practices

02/21/2019 ∙ by Guilei Hu, et al.

Effective and real-time eyeblink detection has wide-ranging applications, such as deception detection, driver fatigue detection, face anti-spoofing, etc. Although numerous efforts have already been made, most of them focus on addressing the eyeblink detection problem under constrained indoor conditions with a relatively consistent subject and environment setup. Nevertheless, towards practical applications, eyeblink detection in the wild is more desirable and of greater challenge. However, to our knowledge this has not been well studied before. In this paper, we shed light on this research topic. A labelled eyeblink in the wild dataset (i.e., HUST-LEBW) of 673 eyeblink video samples (i.e., 381 positives, and 292 negatives) is first established by us. These samples are captured from unconstrained movies, with dramatic variation in human attribute, human pose, illumination condition, imaging configuration, etc. Then, we formulate the eyeblink detection task as a spatial-temporal pattern recognition problem. After locating and tracking the human eye using the SeetaFace engine and KCF tracker respectively, a modified LSTM model able to capture multi-scale temporal information is proposed to execute eyeblink verification. A feature extraction approach that reveals appearance and motion characteristics simultaneously is also proposed. The experiments on HUST-LEBW reveal the superiority and efficiency of our approach. They also verify that the existing eyeblink detection methods cannot achieve satisfactory performance in the wild.


I Introduction

Eyeblink detection is of essential research value for the application areas of deception detection [1], driver fatigue detection [2], face anti-spoofing [3], dry eye syndrome recovery [4], etc. During the past decades, numerous efforts [5, 6, 7, 8, 9, 10, 11, 12] have already been devoted to this. Nevertheless, most of them are proposed without considering the case of eyeblink in the wild. Meanwhile, the existing eyeblink detection datasets [3, 13, 14, 15] are generally captured under constrained indoor conditions with a relatively consistent subject and environment setup. However, towards some practical application scenarios, eyeblink detection in the wild is actually more preferred. For instance, during the phase of deception detection the eyeblink visual data may be surreptitiously collected using hidden cameras under unconstrained indoor or outdoor conditions [1]. In this case, an effective and real-time in-the-wild eyeblink detection approach is essentially required to ensure the performance. Unfortunately, to our knowledge this research problem has not been well studied before. As a consequence, our main research motivation is to facilitate this research task in terms of dataset, theory and practices.

Figure 1: The essential challenges towards eyeblink detection in the wild. The shown snapshots within HUST-LEBW dataset are captured from the unconstrained movies.

To this end, we first establish a challenging labelled eyeblink in the wild dataset termed HUST-LEBW. It consists of 673 eyeblink video clip samples (i.e., 381 positives, and 292 negatives) captured from unconstrained movies to reveal the characteristics of “in the wild”. Each positive sample covers one whole eyeblink process that corresponds to the eye status sequence of “eye open → eye close → eye open”. To our knowledge, HUST-LEBW is the first eyeblink in the wild dataset that involves spatial-temporal sequence information. Fig. 1 shows some snapshots of the eyeblink samples within HUST-LEBW. It can be observed that dramatic variation in human attribute, human pose, illumination, imaging viewpoint, and imaging distance exists. For instance, from the human attribute perspective, the subjects involved in HUST-LEBW are of different ages, genders, races and skin colors. Meanwhile, the subjects may or may not wear glasses. This actually imposes great challenges to accurate eyeblink detection, both for eye localization and eyeblink verification.

Dataset | Video clip amount | Resolution | Person No. | Person race | Person age | Person sex | Person sight | Scene | Illumination | Imaging view | Imaging distance
ZJU [3] | 80 (10877 frames) | 320×240 | 20 | Asian | young, middle-aged | female, male | frontal, upward | indoor | good | frontal | fixed, stable
Eyeblink8 [13] | 8 (70992 frames) | 640×480 | 4 | Caucasian | young, middle-aged | female, male | frontal | indoor | good | frontal | fixed
Talking face [14] | 4 (5000 frames) | 720×576 | 1 | Caucasian | middle-aged | male | frontal | indoor | good | frontal | fixed, stable
Silesian5 [15] | 5 (10877 frames) | 640×480 | 5 | unknown | unknown | unknown | unknown | indoor | good | frontal | fixed, stable
HUST-LEBW | 673 (8749 frames) | 1280×720, 1456×600 | 172 | Asian, Caucasian, Melanoderm | child, young, middle-aged, elderly | female, male | variational | indoor, outdoor | variational | variational | variational
Table I: The attribute comparison among the proposed HUST-LEBW dataset and the existing eyeblink detection related datasets.

Next, we propose to formulate the eyeblink detection in the wild task as a binary spatial-temporal pattern recognition problem. In particular, a data-driven real-time eyeblink detection approach that involves 2 stages of eye localization and eyeblink verification is proposed by us. During the spatial eye localization phase, the eye region is first detected using the off-the-shelf SeetaFace face parsing engine [16], and then tracked by the KCF tracker [17] to ensure a high running speed. Then, towards eyeblink verification, a Long Short-Term Memory (LSTM) neural network is employed to model the temporal sequential procedure of eyeblink. Since eyeblinks may happen with different time durations, we modify the architecture of LSTM to take multi-scale temporal information of eyeblink into consideration.

Meanwhile, a feature extraction approach able to capture the appearance and motion information of eyeblink simultaneously is also proposed by us. In particular, a uniform Local Binary Pattern (LBP) [18] visual descriptor is extracted to reveal the appearance property of the eye region. And, the feature difference between the uniform LBPs of 2 consecutive frames is used to encode the motion property of eyeblink. The appearance and motion features are concatenated as the input of LSTM.

Extensive experiments are then carried out on HUST-LEBW. The comparison with the state-of-the-art approaches demonstrates the superiority of our method on eyeblink detection in the wild, as well as its real-time running capacity. We also notice that the overall performance of the existing methods (including ours) on HUST-LEBW is actually not satisfactory enough. This indeed verifies the great challenge of eyeblink detection in the wild.

The main contributions of this paper include:

HUST-LEBW: the first eyeblink detection dataset that involves temporal sequential information towards “in the wild” cases. It involves 673 video samples (i.e., 381 positives, and 292 negatives);

A modified LSTM architecture able to capture multi-scale temporal information is proposed to model the eyeblink detection task as a spatial-temporal pattern recognition problem;

A uniform LBP-based eyeblink feature extraction method is proposed. It captures the appearance and motion information simultaneously.

HUST-LEBW will be released online upon acceptance to facilitate the related research.

The remainder of this paper is organized as follows. Sec. II discusses the related work. The established HUST-LEBW dataset is introduced in Sec. III. Then, the proposed eyeblink detection in the wild method is illustrated in Sec. IV. The essential implementation details of the proposed eyeblink detection method are given in Sec. V. Experiments and discussions are conducted in Sec. VI. Sec. VII concludes the whole paper.

Figure 2: Some eyeblink sample frames from the existing ZJU [3], Eyeblink8 [13], and Talking face [14] datasets.

II Related Work

In this section, we will introduce and discuss the related work towards eyeblink detection in the wild in terms of dataset, eyeblink verification and eye localization respectively.

Eyeblink detection dataset. Although numerous efforts have already been made to address the eyeblink detection problem, the available public datasets are still not abundant. ZJU [3], Eyeblink8 [13], Talking face [14] and Silesian5 [15] are the representative ones with spatial-temporal video information. Nevertheless, all of the 4 datasets above generally target the constrained indoor case, as shown in Fig. 2. The involved samples are captured from a limited number of volunteers, with relatively consistent scene, subject, illumination and imaging setups. As a consequence, they cannot reveal the “in the wild” characteristics faced by some challenging application scenarios. And, the reported performance on these datasets is somewhat saturated (e.g., the detection rates reported on ZJU and Silesian5 are close to saturation). To facilitate the research on eyeblink detection in the wild, a more challenging dataset is indeed required. Accordingly, we propose to construct the HUST-LEBW dataset by collecting samples from unconstrained movies to essentially involve richer “in the wild” eyeblink information. Compared to ZJU, Eyeblink8, Talking face and Silesian5, the samples in HUST-LEBW are of much higher diversity in scene, subject, illumination and imaging conditions. The detailed comparison among them is listed in Table I to verify this, in the attributes of “person number”, “person race”, “person age”, “person sex”, “person sight”, “scene”, “illumination”, “imaging view”, and “imaging distance” respectively. Meanwhile, video clip amount and resolution are also listed. Hence, the severe attribute variation within HUST-LEBW will impose great challenges to accurate eyeblink detection.

Eyeblink verification. Towards the existing eyeblink verification approaches, we will introduce them from the perspectives of pattern recognition model and feature extraction method respectively. First, aiming to solve a binary pattern recognition problem, the existing eyeblink verification methods can be categorized into the heuristic and data-driven paradigms. Specifically, the heuristic way executes eyeblink verification mainly according to pre-defined decision rules. For instance, after the human face has been detected in advance, a variance map of the sequential images is extracted to reveal the motion information in [12]. Eyeblink verification is then carried out by executing a thresholding operation on it, in the spirit of computing the salient motion pixel ratio. Template matching is first executed to estimate the eye state in [9]. By observing the correlation coefficient change over time, eyeblink is identified when the correlation coefficient falls below a pre-defined threshold. KLT trackers are placed over the eye region to extract the motion information of eyeblink in [13]. Eyeblink is consequently determined using a state machine with numerous pre-defined threshold parameters. After acquiring the “open” and “close” status of the eye using SVM, eyeblink is then confirmed according to the temporal contextual relationship between the resulting eye statuses in [19]. With continuous eye tracking, eyeblink is recognized by observing whether the eyes are covered by eyelids in [20]. Actually, the effectiveness of most of these approaches highly relies on the adaptability of the pre-defined thresholds for decision making. As a consequence, they tend to be sensitive to subject and environment variation. To enhance the generalization capacity, some other researchers resort to the data-driven manner. Incorporated with discriminative measures on eye status, a Conditional Random Field (CRF) is employed to model the eyeblink procedure for verification in [3]. By extracting the EAR feature to characterize the eye opening degree using eye landmarks, SVM is finally used to verify the occurrence of eyeblink in [21]. Actually, compared to the heuristic manner, the data-driven approach is relatively seldom studied. Our proposition falls into the data-driven paradigm: we use an LSTM framework with strong sequential information processing capacity to model the spatial-temporal procedure of eyeblink.

Besides the pattern recognition model, another essential issue for eyeblink verification is feature extraction. Generally speaking, appearance features (e.g., EAR [21], LBP [22], Haar [23], or HOG [24]) or motion features (e.g., KLT tracker motion [21] or pixel-wise frame difference between 2 consecutive frames [9]) are extracted to this end. Nevertheless, few approaches take appearance and motion information into consideration simultaneously. To address this, we propose to use uniform LBP as the appearance feature and its difference between 2 consecutive frames as the motion feature to jointly characterize eyeblink.

Eye localization. Accurate eye localization is the key step for eyeblink detection within the spatial domain. Some existing approaches [6, 8, 5] resort to using color or spectral characteristics to locate the eye. Another way is to use motion information [25] to detect and track the eye. Nevertheless, their performance is not promising. Most of the state-of-the-art methods [9, 26, 21, 27, 28] resort to detecting facial landmarks to this end in the way of face parsing. To achieve the balance between effectiveness and efficiency, we choose to use the SeetaFace engine [16] for eye detection first, and then track the eye using KCF [17] for high efficiency.

Figure 3: The main construction pipeline of HUST-LEBW dataset.
Idx | Name | Filming location | Style | Premiere time
1 | A clockwork orange | UK & USA | Crime & thriller | 1971-12-19
2 | The last emperor | CN | Drama | 1987-10-23
3 | Farewell my concubine | CN | Drama & love | 1993-01-01
4 | Chungking express | CN | Art | 1994-07-14
5 | Léon | FR & USA | Action | 1994-09-14
6 | Ashes of time | CN | Emotional ethics | 1994-09-17
7 | The matrix | USA | Science fiction | 1999-04-30
8 | Dragon buster | CN | Costume | 2002-12-01
9 | The matrix reloaded | USA | Science fiction | 2003-05-15
10 | Pirates of the Caribbean | USA | Adventure & magic | 2003-07-09
11 | Kill Bill 1 | CN & USA | Action | 2003-10-10
12 | The lord of the rings 3 | USA & NZ | Fantasy & action | 2003-12-01
13 | Blood diamond | USA & DE | Adventure | 2006-02-06
14 | Memories of matsuko | JP | Drama & music | 2006-05-29
15 | The bourne ultimatum | USA | Action & suspense | 2007-08-03
16 | Game of thrones | USA | War & fantasy | 2011-04-17
17 | A Chinese fairy tale | CN | Fantasy & love | 2011-04-19
18 | Black mirror | UK | Science & thriller | 2011-12-04
19 | Mad max 4 | USA | Action | 2015-05-15
20 | Contratiempo | ES | Crime & suspense | 2017-01-06
Table II: The main attribute information of the 20 different movies used for HUST-LEBW construction.

III HUST-LEBW: A Labelled Dataset for Eyeblink Detection in The Wild

As shown in Fig. 1, eyeblink detection in the wild suffers from some essential challenges caused by variation in human attribute, human pose, illumination, imaging view and distance, etc. Nevertheless, the existing eyeblink detection datasets (e.g., ZJU [3], Talking face [14], Eyeblink8 [13], and Silesian5 [15]) cannot reveal the “in the wild” characteristics well, as indicated in Table I and Fig. 2. To address this, we propose to build a labelled dataset for eyeblink detection in the wild (termed HUST-LEBW) to shed light on this research field not well studied before. The essential difference between HUST-LEBW and the existing eyeblink detection datasets is that we choose to collect eyeblink video clips from unconstrained movies instead of from a limited number of volunteers under indoor scene conditions. After capturing the eyeblink video clips from the movies, for each frame the face region, point-wise eye location, and eye region are annotated as shown in Fig. 3. Next, we will illustrate the construction procedure and characteristics of HUST-LEBW in detail.

III-A Movie data source

Figure 4: The eyeblink video clip samples that correspond to the indoor and outdoor cases in the HUST-LEBW dataset. Each eyeblink sample covers the whole eye status sequence of “eye open → eye close → eye open”.
Figure 5: The eye appearance variation among the 172 different persons within HUST-LEBW dataset.

To reveal the “in the wild” characteristics, the eyeblink samples in HUST-LEBW are collected from 20 different commercial movies. Their main attribute information (i.e., name, filming location, style and premiere time) is listed in Table II. It can be observed that the attributes of these movies are of high diversity. Essentially, this helps to ensure rich “in the wild” variation among the captured eyeblink samples in terms of human attribute, human pose, scene / illumination condition, and imaging configuration, as discussed in Table I. For instance, the employed 20 movies were shot in 8 countries across Asia, America, and Europe, with variational indoor and outdoor filming locations. Thus, compared to the fixed indoor shooting conditions of the existing eyeblink detection datasets [3, 14, 13, 15], acquiring eyeblink samples from these movies involves much stronger scene variation and challenge. Meanwhile, the discrepancy in movie style and premiere time also helps to promote human attribute variation, which is closer to practical applications. For example, the person races in HUST-LEBW include Asian, Caucasian and Melanoderm simultaneously. This actually cannot be met by the other datasets.

III-B Capturing eyeblink in the wild samples

Figure 6: The eye appearance variation that corresponds to the change on illumination and imaging distance within HUST-LEBW dataset.
Figure 7: The statistical result of temporal duration that corresponds to the 381 raw captured eyeblink video clips within HUST-LEBW dataset.

From the 20 selected movies above, we then capture the eyeblink in the wild samples in the form of video clips that cover the whole eye status sequence of “eye open → eye close → eye open”, as shown in Fig. 4. Finally, we acquire 381 eyeblink video clips as the positive samples. Meanwhile, 292 non-eyeblink samples are collected as the negative ones. As a consequence, the yielded HUST-LEBW dataset consists of 673 samples in all (i.e., 381 positives, and 292 negatives).

Due to the high divergence of the employed movie data sources, the captured eyeblink in the wild samples reveal dramatic variation in human attribute, human pose, scene condition, imaging view, and imaging distance, as illustrated in Fig. 1 and Table II. These “in the wild” factors essentially impose great challenges to effective eyeblink detection. For example, 172 persons with variational human attributes and poses are involved in the HUST-LEBW dataset. Their eye appearance shows striking discrepancy, as shown in Fig. 5. Meanwhile, even within the same eyeblink sample the eye appearance may also vary dramatically due to the change in illumination and imaging distance, as shown in Fig. 6. When considering the variation of human attribute, human pose, scene and imaging condition simultaneously, accurately locating human eyes and characterizing the eye status for eyeblink detection in the wild is indeed not an easy task.

Since some existing eyeblink detection approaches (e.g., [21]) and our proposed LSTM-based manner require the input eyeblink video clips to be of the same length, we choose to polish the raw captured eyeblink samples to a fixed temporal size. To this end, statistics on the temporal duration of the raw eyeblink samples are computed, as shown in Fig. 7. It can be observed that the eyeblink temporal duration (in frames) generally follows a Gaussian distribution with a mean value (μ) of 6.18 and a standard deviation (σ) of 1.54. To alleviate the outlier effect caused by human labelling bias, we set the fixed temporal duration of an eyeblink sample to 10 frames according to the Pauta criterion (i.e., the 3σ criterion, μ + 3σ ≈ 10.8) [29], also as revealed in Fig. 7. In particular, during the eyeblink sample polishing phase we place the fully-closed eye frame around the middle of the eyeblink sample. Then, if the raw eyeblink sample is shorter than 10 frames, the first and last frames are copied uniformly and iteratively for extension. Oppositely, if the raw eyeblink sample is longer than 10 frames, the excess frames are cut from the left and right ends uniformly. Meanwhile, since some eyeblink detection approaches (e.g., [21]) require the input sample to be 13 frames long, we also extend or cut the raw eyeblink samples to 13 frames to make the HUST-LEBW dataset adaptable to them.
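A minimal sketch of this polishing step is given below, assuming `frames` is the list of frame images of one raw clip and `closed_idx` is the (manually known) index of the fully-closed-eye frame; both variable names are ours, not the authors'.

```python
def polish_clip(frames, closed_idx, target_len=10):
    """Pad or trim a raw eyeblink clip to a fixed length of `target_len` frames."""
    frames = list(frames)
    # Pad: alternately copy the first and last frame until the target length is reached.
    while len(frames) < target_len:
        frames.insert(0, frames[0])
        closed_idx += 1
        if len(frames) < target_len:
            frames.append(frames[-1])
    # Trim: drop excess frames from whichever end is farther from the closed-eye frame,
    # so that the fully-closed-eye frame stays near the middle of the clip.
    while len(frames) > target_len:
        if closed_idx > (len(frames) - 1) / 2.0:
            frames.pop(0)
            closed_idx -= 1
        else:
            frames.pop()
    return frames
```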

III-C Eyeblink sample annotation work

(a) Face localization
(b) Eye localization
(c) Local left and right eye image extraction
Figure 8: The examples of eyeblink sample annotation work on face localization, eye localization, and local eye image extraction.

After acquiring the 673 eyeblink and non-eyeblink samples, we then execute annotation work on localizing the face, localizing the eyes and extracting local eye images in each frame, for performance evaluation in practice. Next, we will introduce the annotation work in detail.

Face localization. For each of the 8749 sample frames, we first use the SeetaFace face parsing engine [16] to localize the human face in terms of a bounding box. Then, manual refinement is executed to ensure that the face bounding box covers both the right and left eyes when they appear.

Eye localization. After face localization, we then manually localize the eye center at the point level, frame by frame. If only one eye is visible, the coordinate of the invisible eye is labelled with a dedicated invalid value.

Local eye image extraction. Using the acquired face bounding box and eye center position information, the local eye images are consequently extracted as follows. For one person, if both the left and right eyes are visible with labelled centers, the height and width of the local eye image are calculated as

(1)

and

(2)

where $\boldsymbol{E}_l$ and $\boldsymbol{E}_r$ indicate the positions of the left and right eye centers, and $d_M(\boldsymbol{E}_l, \boldsymbol{E}_r)$ represents the Manhattan distance [30] between them. Meanwhile, if only one eye is visible, the height and width are determined using the face size information, following the principle proposed in [31]. That is, the height and width of the local eye image are set as a fixed proportion of the face width. Some examples of eyeblink sample annotation are shown in Fig. 8.
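As an illustration only, a small sketch of this computation follows; the exact scaling constants of Eqns. 1 and 2 are not reproduced in this text, so `h_scale` and `w_scale` below are hypothetical placeholders rather than the authors' values.

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two 2-D points (x, y)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def eye_crop_size(left_center, right_center, h_scale=0.5, w_scale=1.0):
    """Height and width of the local eye image, proportional to the Manhattan
    distance between the labelled eye centers (scale factors are hypothetical)."""
    d = manhattan(left_center, right_center)
    return h_scale * d, w_scale * d
```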

It is worth noting that, to ensure the eyeblink sample annotation result is applicable to all the methods in the experiments, we only localize the eyes and extract the local eye images that are visible for all 13 frames. As a consequence, we finally acquire 667 right eye samples and 644 left eye samples.

III-D Dataset split

After the HUST-LEBW dataset has been built, we then split it into the training and test set respectively. In particular, the training set consists of 448 samples. Among them, 254 samples are positives with 253 labelled right eyes and 243 labelled left eyes; 190 samples are negatives with 190 labelled right eyes and 181 labelled left eyes.

The test set consists of 225 samples. Among them, 127 samples are positives with 126 labelled right eyes and 122 labelled left eyes; 98 samples are negatives with 98 labelled right eyes and 98 labelled left eyes.

Figure 9: The main technical pipeline of the proposed eyeblink in the wild detection approach.

IV Eyeblink in The Wild Detection Method: A Real-time Spatial-temporal Manner

As aforementioned, we formulate the eyeblink detection task as a binary spatial-temporal pattern recognition problem. To solve it, eye localization is first executed in the spatial domain. Then, an appearance and motion feature based on uniform LBP is extracted per frame from the corresponding local eye images to characterize eyeblink. A multi-scale (MS) LSTM network able to handle multi-scale temporal information is consequently proposed to process the time series eyeblink characterization features and address eyeblink verification. The main technical pipeline of the proposed eyeblink in the wild detection method is shown in Fig. 9. Next, we will illustrate it in detail.

Figure 10: The main structure of LSTM unit.
Figure 11: The main structure of the proposed MS-LSTM model.
Figure 12: The visual comparison between eyeblink and non-eyeblink samples from the same person.
(a) Left eye
(b) Right eye
Figure 13: The feature distributions of the eyeblink and non-eyeblink samples within HUST-LEBW dataset, corresponding to the left and right eye respectively. They are drawn using t-SNE [32].

IV-A Eyeblink verification using multi-scale LSTM

Eyeblink can be regarded as a human activity on the face, consisting of the time series eye status of “eye open → eye close → eye open”. Thus, eyeblink verification is essentially a binary time series pattern recognition problem that distinguishes eyeblink and non-eyeblink samples. The Long Short-Term Memory (LSTM) network [33] has been demonstrated to be one of the most successful deep learning models for dealing with sequential data. It has already been applied to human body activity recognition [34] with promising performance. Inspired by this, we propose to apply LSTM to eyeblink verification.

LSTM is derived from the Recurrent Neural Network (RNN) [35] to model the long-term dependency within time series data. As shown in Fig. 10, an LSTM unit consists of a memory cell ($c$), an input gate ($i$), a forget gate ($f$), and an output gate ($o$). The gates $i$, $f$, and $o$ work collaboratively to prevent the memory contents from being perturbed by irrelevant inputs and outputs, ensuring long-term memory storage in $c$ by controlling the information flow into and out of the LSTM unit. Meanwhile, the gradient vanishing and exploding problems met by RNN can also be alleviated in LSTM accordingly [33]. However, intuitively applying the original LSTM model to eyeblink verification is not optimal. The insight is that eyeblinks actually happen with different temporal durations, as revealed in Fig. 7, although they have been manually fixed to the same size within the HUST-LEBW dataset. Essentially, the raw LSTM model cannot deal with the multiple temporal scale case within time series data well [36]. To alleviate this, a multi-scale LSTM (MS-LSTM) model is proposed by us from 2 perspectives as follows.
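For reference, one standard formulation of the LSTM unit update [33] is given below; the notation ($x_t$ for the frame feature at time step $t$, $h_t$ for the hidden state, $\sigma$ for the sigmoid function, $\odot$ for element-wise multiplication) is ours and is only meant to make the gate interactions above concrete:

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i),\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f),\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o),\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,\\
h_t &= o_t \odot \tanh(c_t).
\end{aligned}
$$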

First, instead of only using the output (i.e., the hidden state variable $h$ in Fig. 11) of the last LSTM unit as the input feature of the softmax layer, as is done for human body activity recognition [37, 38], we choose to employ the outputs of the last $S$ LSTM units jointly by concatenation, to involve richer multiple temporal scale information for eyeblink characterization.

Secondly, inspired by the conclusion drawn in [36] that the stacked RNN architecture can help to alleviate the multiple temporal scale problem, we transfer this idea to the LSTM case by building stacked LSTM layers within MS-LSTM. Similar to stacked RNN [36], within the proposed MS-LSTM model the output of the previous LSTM layer is employed as the input of the next LSTM layer in a parallel manner. Overall, the main structure of the proposed MS-LSTM model is shown in Fig. 11 (within MS-LSTM, the temporal scale number $S$ and the number of stacked LSTM layers are both empirically set to 2).
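As a concrete illustration, a minimal TensorFlow/Keras sketch of this structure follows. It reflects our reading of Fig. 11 rather than the authors' code: the per-frame feature dimension FEAT_DIM, the hidden size of 64, and the clip length are illustrative assumptions, while the 2 stacked LSTM layers and the concatenation of the hidden states of the last S = 2 steps follow the description above. The plain softmax head shown here is replaced by the A-Softmax loss discussed below.

```python
import tensorflow as tf

FEAT_DIM = 118   # assumed per-frame feature length (appearance + motion), illustrative
SEQ_LEN = 9      # a 10-frame clip yields 9 feature vectors (every frame except the first)
S = 2            # temporal scale number: hidden states of the last S steps are concatenated
HIDDEN = 64      # illustrative hidden size

inputs = tf.keras.Input(shape=(SEQ_LEN, FEAT_DIM))
x = tf.keras.layers.LSTM(HIDDEN, return_sequences=True)(inputs)  # stacked LSTM layer 1
x = tf.keras.layers.LSTM(HIDDEN, return_sequences=True)(x)       # stacked LSTM layer 2
# Multi-scale temporal feature: concatenate the hidden states of the last S time steps.
multi_scale = tf.keras.layers.Lambda(
    lambda h: tf.reshape(h[:, -S:, :], (-1, S * HIDDEN)))(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(multi_scale)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```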

After the multiple temporal scale feature has been acquired within MS-LSTM, a softmax layer finally plays the role of decision making to judge the type of the input sample (eyeblink or non-eyeblink), as shown in Fig. 9. However, we argue that the original softmax loss [39] is not discriminative enough for eyeblink verification, since it is essentially a fine-grained visual recognition problem. To reveal this, we show one eyeblink sample and one non-eyeblink sample from the same person in Fig. 12. It can be observed that most of the frames within these 2 samples look similar except for the eye close part. This phenomenon may lead to the fact that the eyeblink and non-eyeblink samples are not easy to distinguish in feature space. To further verify this, we exhibit the distribution of the eyeblink and non-eyeblink samples within the HUST-LEBW dataset in Fig. 13, using the appearance and motion feature illustrated in Sec. IV-B. We can see that, both in the left and right eye cases, the eyeblink and non-eyeblink samples distribute with serious overlap, which is difficult to discriminate well. To enhance the discriminative power towards eyeblink verification, we propose to use the angular softmax (A-Softmax) loss [39], which has shown promising performance for face verification. The intuition is that face verification can also be regarded as a fine-grained visual recognition problem. Next, we will briefly introduce the key idea of the A-Softmax loss.

For the binary pattern recognition problem of eyeblink verification, the decision boundary of the original softmax loss is defined as

$(\boldsymbol{W}_1-\boldsymbol{W}_2)\boldsymbol{x}+b_1-b_2=0,$   (3)

where $\boldsymbol{x}$ indicates the input feature vector; $\boldsymbol{W}_i$ and $b_i$ represent the weights and bias of class $i$. With the constraints of $\lVert\boldsymbol{W}_1\rVert=\lVert\boldsymbol{W}_2\rVert=1$ and $b_1=b_2=0$, the decision boundary becomes

$\lVert\boldsymbol{x}\rVert(\cos\theta_1-\cos\theta_2)=0,$   (4)

where $\theta_i$ is the angle between $\boldsymbol{W}_i$ and $\boldsymbol{x}$. As a consequence, the new 2-class decision boundary is only related to $\theta_1$ and $\theta_2$. Actually, the modified softmax loss in Eqn. 4 enables the neural network to learn an angle-based decision boundary. However, it cannot ensure strong discriminative power and generalization capacity. To alleviate this, the A-Softmax loss introduces an integer $m$ to control the angular margin between the 2 classes. Accordingly, the decision boundaries for the 2 classes are defined as

$\lVert\boldsymbol{x}\rVert(\cos m\theta_1-\cos\theta_2)=0$   (5)

and

$\lVert\boldsymbol{x}\rVert(\cos\theta_1-\cos m\theta_2)=0,$   (6)

respectively. In summary, the essential idea of the A-Softmax loss is to project the samples from the Euclidean feature space to an angular feature space and guarantee an angular margin between the 2 classes, as shown in Fig. 14. In this way, the discriminative power and generalization capacity can be enhanced towards the fine-grained eyeblink verification task. The detailed definition of the A-Softmax loss can be found in [39].
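For completeness, we reproduce the form of the A-Softmax loss from [39] below (this is taken from that reference, not a contribution of this paper), where $N$ is the number of training samples, $y_i$ the label of sample $i$, and $\theta_{j,i}$ the angle between $\boldsymbol{W}_j$ and $\boldsymbol{x}_i$:

$$
L_{ang}=\frac{1}{N}\sum_{i}-\log\frac{e^{\lVert\boldsymbol{x}_i\rVert\,\psi(\theta_{y_i,i})}}{e^{\lVert\boldsymbol{x}_i\rVert\,\psi(\theta_{y_i,i})}+\sum_{j\neq y_i}e^{\lVert\boldsymbol{x}_i\rVert\cos\theta_{j,i}}},\qquad
\psi(\theta)=(-1)^{k}\cos(m\theta)-2k,\ \theta\in\Big[\tfrac{k\pi}{m},\tfrac{(k+1)\pi}{m}\Big],\ k\in\{0,\dots,m-1\}.
$$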

(a) Original softmax loss
(b) A-softmax loss
Figure 14: The visual comparison between the original softmax loss and the A-softmax loss.

IV-B Low-level appearance and motion feature extraction for eyeblink characterization

As aforementioned, eyeblink can be regarded as a human facial activity. Inspired by the two-stream (i.e., appearance stream and motion stream) human body activity recognition paradigm [40], we propose to extract low-level appearance and motion features simultaneously per frame as the input of MS-LSTM for eyeblink characterization. Concerning the real-time running issue, we choose to achieve this goal based on the lightweight uniform LBP visual descriptor [18], instead of using a high-cost deep Convolutional Neural Network (CNN) [41] and optical flow [42] as in [40]. Another main reason for using uniform LBP is that it is rotation-insensitive [43], which is beneficial for eyeblink verification in the wild. As shown in Fig. 5, the eyeblink in the wild samples exhibit different rotation angles due to the variational human poses and imaging views shown in Fig. 1.

Specifically, for each eyeblink frame, a uniform LBP is extracted from the local eye image as the appearance feature. Besides the appearance feature, we also propose to calculate the difference between the uniform LBPs of 2 consecutive frames as the motion feature, to reveal the eye status evolution during the eyeblink phase. Intuitively, the appearance and motion features are of the same dimensionality. They are concatenated as the input of MS-LSTM for spatial-temporal eyeblink characterization, corresponding to each frame except the first one.
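A minimal sketch of this feature extraction is given below, using scikit-image's uniform LBP on grayscale local eye crops; the neighbourhood (P, R) = (8, 1) and the resulting histogram length are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1   # assumed LBP neighbourhood: 8 samples on a circle of radius 1

def lbp_hist(eye_img):
    """Uniform LBP histogram of one local eye image (appearance feature)."""
    codes = local_binary_pattern(eye_img, P, R, method="uniform")   # labels in [0, P+1]
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def blink_features(eye_imgs):
    """Per-frame appearance + motion features for one clip (frames 2..T)."""
    hists = [lbp_hist(img) for img in eye_imgs]
    feats = []
    for t in range(1, len(hists)):
        motion = hists[t] - hists[t - 1]        # difference of consecutive LBP histograms
        feats.append(np.concatenate([hists[t], motion]))
    return np.stack(feats)                       # shape: (T - 1, 2 * (P + 2))
```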

IV-C Local eye image extraction

As illustrated in Fig. 8 and mentioned in Sec. IV-B, the appearance and motion features are extracted from the local eye images for eyeblink characterization. Thus, effective and efficient local eye image extraction is crucial for real-time eyeblink detection. To this end, for one eyeblink sample we localize the center positions of the left and right eyes using the off-the-shelf SeetaFace face parsing engine [16] on the first frame. Then the local eye images are extracted using these eye center positions according to Eqns. 1 and 2, in the same way as in Sec. III-C. Regarding the remaining frames, the local eye images are acquired by directly tracking the local eye regions yielded in the previous frame using the KCF tracker [17], due to its high running efficiency. The main technical pipeline for local eye image extraction is shown in Fig. 15.
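A hypothetical sketch of this detect-once-then-track scheme (for a single eye) follows. `seetaface_eye_center` is a stand-in for the SeetaFace call and is not a real API; the KCF tracker is illustrated with OpenCV's TrackerKCF from opencv-contrib-python (the factory name can differ between OpenCV builds, e.g. cv2.legacy.TrackerKCF_create).

```python
import cv2

def track_local_eye_images(frames, seetaface_eye_center, crop_h, crop_w):
    """Return one local eye crop per frame: detect on the first frame, track afterwards."""
    cx, cy = seetaface_eye_center(frames[0])            # initial eye centre (first frame only)
    box = (int(cx - crop_w / 2), int(cy - crop_h / 2), int(crop_w), int(crop_h))
    tracker = cv2.TrackerKCF_create()
    tracker.init(frames[0], box)
    crops = []
    for i, frame in enumerate(frames):
        if i > 0:
            ok, new_box = tracker.update(frame)         # KCF tracking on the remaining frames
            if ok:
                box = tuple(int(v) for v in new_box)
        x, y, w, h = box
        x, y = max(x, 0), max(y, 0)                     # clamp to the image for the crop
        crops.append(frame[y:y + h, x:x + w])
    return crops
```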

Figure 15: The main technical pipeline for local eye image extraction.

V Implementation details

In this section, the essential implementation details of the proposed eyeblink detection in the wild approach are given as follows.

MS-LSTM is implemented based on the open source machine learning library TensorFlow [44];

During the training phase of MS-LSTM, ADAM [45] is used as the optimizer with a declining learning rate as shown in Table III. The parameters $\beta_1$ and $\beta_2$ in ADAM are set to 0.5 and 0.9 respectively;

SeetaFace face parsing engine is implemented using the public code with C/C++ programming language at https://github.com/seetaface/SeetaFaceEngine;

KCF tracker is implemented using the public code with C/C++ programming language at https://github.com/vojirt/kcftracker.

Uniform LBP is implemented by ourselves using C/C++ programming language.

     Learning step Learning rate
     1-100 0.01
     101-3000 0.001
     3001-30000 0.0001
     30001-50000 0.00001
Table III: The declining learning rate that corresponds to the learning step during MS-LSTM training.
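A minimal sketch of this optimizer configuration in TensorFlow/Keras is given below; the step boundaries and rates follow Table III and the $\beta_1$/$\beta_2$ values follow the text, but whether this matches the authors' exact training script is an assumption on our part.

```python
import tensorflow as tf

# Piecewise-constant learning rate following Table III (steps 1-100, 101-3000, ...).
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100, 3000, 30000],
    values=[0.01, 0.001, 0.0001, 0.00001])

# ADAM with beta_1 = 0.5 and beta_2 = 0.9, as stated above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.5, beta_2=0.9)
```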

VI Experiments

To reveal the essential challenges of eyeblink detection in the wild and verify the effectiveness of our proposed eyeblink detection approach, we first compare the performance between our method and the other state-of-the-art eyeblink detection manners [21, 8, 10, 12, 13] on the proposed HUST-LEBW dataset in Sec. VI-A. Since the codes of the approaches employed for comparison are not publicly available and could not be acquired from the authors, we tried our best to implement them by ourselves.

Then, to demonstrate the superiority of the proposed MS-LSTM based eyeblink verification approach, we compare it with the other state-of-the-art region-level eyeblink verification methods [10, 12, 13] in Sec. VI-B. To remove the impact of eye localization for a fair comparison, this test is executed under the assumption that the local eye region has already been successfully extracted, by directly using the manual annotation result as depicted in Sec. III-C. Since the approaches in [21, 8] cannot take the local eye image as input, they are not taken into consideration in this experimental part.

Consequently, the performance comparison between our eye localization method and the other existing approaches [21, 9, 13, 46, 8] is carried out in Sec. VI-C. Here, 3 face parsing approaches (i.e., SeetaFace [16], Intraface [21], and MTCNN [28]) are also compared from the perspectives of effectiveness and efficiency, to justify why we choose SeetaFace to initially locate the eye center.

The real-time running capacity of our eyeblink detection approach is demonstrated in Sec. VI-D. Then, the ablation studies on MS-LSTM, the A-softmax loss function, and the low-level eyeblink feature extraction within our method are executed in Sec. VI-E, Sec. VI-F and Sec. VI-G respectively, to reveal the effectiveness of our propositions. The failure cases are given in Sec. VI-H.

The experiments run on a laptop with an Intel(R) Core(TM) i7-7700HQ CPU @ 2.8GHz (only using one core) and 8 GB RAM, under the Windows 10 operating system. During the training phase of MS-LSTM, a GPU is used for speed acceleration. For online testing, no GPU is used.

Method | Eye idx | FR | Recall | Precision | F1 score
Soukupová [21] | Left | 0.5820 | 0.3607 | 0.6471 | 0.4632
Soukupová [21] | Right | 0.6825 | 0.3016 | 0.5758 | 0.3958
Tabrizi [8] | Left & right | 0.7381 | 0.0714 | 0.4500 | 0.1233
Chau [10] | Left | 0.9590 | 0.0164 | 1.0000 | 0.0323
Chau [10] | Right | 0.9524 | 0.0000 | 0.0000 | 0.0000
Morris (ver.) [12] | Left | 0.9590 | 0.0164 | 0.6667 | 0.0320
Morris (ver.) [12] | Right | 0.9603 | 0.0159 | 1.0000 | 0.0313
Morris (hor.) [12] | Left | 0.9590 | 0.0410 | 0.7143 | 0.0775
Morris (hor.) [12] | Right | 0.9603 | 0.0238 | 0.7500 | 0.0462
Morris (flow) [12] | Left | 0.9590 | 0.0164 | 0.6667 | 0.0320
Morris (flow) [12] | Right | 0.9603 | 0.0159 | 0.5000 | 0.0308
Drutarovsky [13] | Left | 0.7787 | 0.0574 | 0.4118 | 0.1007
Drutarovsky [13] | Right | 0.7857 | 0.0317 | 0.3077 | 0.0576
Our method | Left | 0.3197 | 0.5410 | 0.8919 | 0.6735
Our method | Right | 0.3413 | 0.4444 | 0.7671 | 0.5628
Table IV: Performance comparison among the different eyeblink detection methods on the HUST-LEBW dataset. The best performance for each evaluation criterion is shown in boldface. In Tabrizi's method [8], eyeblink detection is executed for the left and right eyes jointly.

VI-A Performance comparison among the different eyeblink detection methods

To evaluate the performance of the different eyeblink detection methods on the HUST-LEBW dataset, the criteria of Recall, Precision and F1 score are used as below:

$Recall = \frac{TP}{TP + FN},$   (7)

$Precision = \frac{TP}{TP + FP},$   (8)

$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall},$   (9)

where $TP$ indicates the number of eyeblink samples recognized correctly; $FN$ and $FP$ denote the number of eyeblink and non-eyeblink samples recognized incorrectly, respectively (it is worth noting that the eyeblink samples with wrong eye localization results are regarded as $FN$s).

Meanwhile, for eyeblink detection in the wild, the failure of eye localization essentially weakens the performance. To reveal the impact of this issue, the failure rate ($FR$) of eye localization over the eyeblink samples is given as

$FR = \frac{N_{nd} + N_{nl}}{N_{all}},$   (10)

where $N_{nd}$ indicates the number of eyeblink samples for which the eyes cannot be detected at all; $N_{nl}$ denotes the number of eyeblink samples for which the eyes cannot be localized correctly within all the frames; and $N_{all}$ represents the total number of eyeblink samples. The criterion for judging whether the eye has been correctly localized is given as

$e = \frac{d_M(\boldsymbol{E}_{d}, \boldsymbol{E}_{g})}{d_M(\boldsymbol{E}_{l}, \boldsymbol{E}_{r})},$   (11)

where $d_M(\cdot,\cdot)$ is the Manhattan distance function; $\boldsymbol{E}_{l}$ and $\boldsymbol{E}_{r}$ indicate the ground-truth positions of the left and right eye centers; $\boldsymbol{E}_{d}$ denotes the position of the detected eye center and $\boldsymbol{E}_{g}$ represents its ground-truth position. If $e$ exceeds a pre-defined threshold, we declare that the eye center has not been correctly localized. According to the evaluation criteria above, the comparison among the different eyeblink detection approaches on the HUST-LEBW dataset is listed in Table IV. It can be observed that:
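A small code sketch restating these criteria follows; it assumes the per-sample counts (TP, FP, FN and the two eye-localization failure counts) have already been tallied, and is only meant to mirror Eqns. 7-10.

```python
def recall(tp, fn):
    """Eqn. 7: fraction of eyeblink samples recognized correctly."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(tp, fp):
    """Eqn. 8: fraction of predicted eyeblinks that are true eyeblinks."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def f1_score(tp, fp, fn):
    """Eqn. 9: harmonic mean of Precision and Recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if (p + r) else 0.0

def failure_rate(n_not_detected, n_mislocalized, n_all):
    """Eqn. 10: eye-localization failure rate over all eyeblink samples."""
    return (n_not_detected + n_mislocalized) / n_all
```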

Actually, all the eyeblink detection approaches under test (including ours) fail to achieve satisfactory performance. In summary, their F1 scores cannot exceed 0.7 (0.6735 at most). This phenomenon reveals the fact that eyeblink detection in the wild is not a trivial but indeed a challenging visual recognition task that has not been well solved yet;

Despite the still unsatisfactory results, the proposed eyeblink detection approach essentially outperforms the other methods significantly on 3 of the 4 evaluation criteria (all besides Precision), both on the left and right eye, from the perspectives of eye localization and eyeblink verification. In particular, the performance gap between our manner and the others on F1 score is at least 0.167. This actually demonstrates the superiority of our proposition towards eyeblink detection in the wild. In some cases, the methods of Chau [10] and Morris (ver.) [12] can yield higher Precision than ours. Unfortunately, they suffer from low Recall, mainly due to their high FR;

The challenges of eyeblink detection in the wild derive from the procedures of eye localization and eyeblink verification simultaneously. In particular, all the methods suffer from a high FR (over 0.3) on eye localization. Meanwhile, although our approach performs best, its Recall and F1 score are still relatively low.

Method | Eye idx | Recall | Precision | F1 score
Chau [10] | Left | 0.1721 | 1.0000 | 0.2937
Chau [10] | Right | 0.2302 | 0.9656 | 0.3718
Morris (ver.) [12] | Left | 0.5246 | 0.4741 | 0.4981
Morris (ver.) [12] | Right | 0.5635 | 0.5064 | 0.5334
Morris (hor.) [12] | Left | 0.6393 | 0.5342 | 0.5821
Morris (hor.) [12] | Right | 0.5476 | 0.5107 | 0.5285
Morris (flow) [12] | Left | 0.4918 | 0.4918 | 0.4918
Morris (flow) [12] | Right | 0.4286 | 0.4741 | 0.4502
Drutarovsky [13] | Left | 0.1190 | 0.4757 | 0.1904
Drutarovsky [13] | Right | 0.0952 | 0.2860 | 0.1428
Our method | Left | 0.7805 | 0.7385 | 0.7589
Our method | Right | 0.8333 | 0.7778 | 0.8046
Table V: Performance comparison among the different eyeblink verification methods on the HUST-LEBW dataset. The best performance for each evaluation criterion is shown in boldface.

VI-B Performance comparison among the different eyeblink verification methods

Since the result of eyeblink detection is jointly determined by eye localization and eyeblink verification, to solely verify the superiority of our MS-LSTM based eyeblink verification approach, the different methods are compared under the assumption that the local eye region has already been manually extracted in advance. Accordingly, the performance comparison among the applicable approaches is listed in Table V. We can see that:

After removing the impact of eye localization, the proposed MS-LSTM based eyeblink verification approach still remarkably outperforms the other methods on F1 score by large margins (0.1768 at least), both on the left and right eye. This indeed demonstrates the superiority of our proposition over the other manners;

Even when the local eye region has been manually extracted in advance, the performance of the involved approaches is still not promising enough. In particular, the highest F1 score is only 0.8046. This actually verifies the fact that eyeblink detection can be regarded as a fine-grained spatial-temporal visual pattern recognition problem with essential challenges, as also revealed in Fig. 13;

Our approach is inferior to Chau’s method [10] on Precision. Nevertheless, its Recall and F1 score are much lower than ours.

Figure 16: The performance comparison among the different eye localization approaches used by the existing eyeblink detection manners.

VI-C Performance comparison among the different eye localization methods

Eye localization is a vital step for most of the eyeblink detection methods, and it essentially affects the final performance a lot. Since the existing eyeblink detection approaches generally suffer from a high failure rate (FR) on eye localization, as revealed in Table IV, we choose to compare our eye localization approach with the others (i.e., Intraface [21], OpencvFace+TM [9], OpencvFace+KLT [13], Skin [46] and Yuzhi [8]) mainly according to FR. The criterion for judging whether the eye has been localized correctly is the same as in Sec. VI-A, according to the normalized error $e$ in Eqn. 11. The experiments are executed on all the sample frames within the HUST-LEBW dataset. The performance comparison among the different approaches is shown in Fig. 16. In particular, for a compact comparison, the average result of the left and right eye is reported. Obviously, our eye localization approach that uses the SeetaFace face parsing engine [16] and KCF tracker [17] is consistently and remarkably better than the other manners under the different thresholds.

On the other hand, within our approach the SeetaFace face parsing engine plays the essential role of initially localizing the eye center before tracking. To solely verify its superiority, we compare it with 2 other state-of-the-art face parsing approaches (i.e., Intraface [21] and MTCNN [28]) from the perspectives of effectiveness and efficiency simultaneously. In particular, the performance comparison on effectiveness among the 3 face parsing methods is shown in Fig. 17. We can see that, in most cases, SeetaFace is better than Intraface but inferior to MTCNN. Nevertheless, towards real-time eyeblink detection applications, running efficiency should also be taken into consideration. We compare the average time consumption of these 3 approaches in Table VI. It can be observed that SeetaFace has the highest running efficiency (i.e., 33.20 ms per frame). Compared to MTCNN, it runs faster by about one order of magnitude. Concerning the tradeoff between effectiveness and efficiency for real-time application, we choose SeetaFace as our initial eye localizer.

Figure 17: The performance comparison among 3 state-of-the-art face parsing methods for eye localization.
Method Time consumption
SeetaFace [16] 33.20
Intraface [21] 85.89
MTCNN [28] 503.07
Table VI: Average time consumption (ms) per frame among the different face parsing approaches for eye localization.

VI-D Real-time online running capacity verification

In this subsection, we verify that our proposed eyeblink detection method has real-time online running capacity on a normal laptop with an Intel(R) Core(TM) i7-7700HQ CPU @ 2.8GHz (only using one core). The average online running time per frame of the main procedures within our method is listed in Table VII. It can be observed that the main time consumption is incurred by the SeetaFace engine for initial eye localization, with 33.20 ms. However, it is executed only on the first frame of an eyeblink sample. And the procedures of eye tracking, eyeblink feature extraction, and eyeblink verification are extremely fast, with a total time consumption of only 7.87 ms. In summary, the initial eye localization procedure can run at a speed of over 29 FPS. When turning to the eye tracking phase, the proposed eyeblink detection method can run at a speed of over 127 FPS. Overall, our approach meets the real-time running requirement (i.e., a speed of over 25 FPS).

Procedure Time consumption
Initial eye localization (SeetaFace) 33.20
Eye tracking (KCF) 6.06
Eyeblink feature extraction (uniform LBP) 0.32
Eyeblink verification (MS-LSTM) 1.49
Table VII: The average online running time consumption (ms) per frame of the main procedures within the proposed eyeblink detection approach.

VI-E Ablation study 1: MS-LSTM

MS-LSTM is proposed by us to address the problem of eyeblink verification. From the network structure perspective, it holds 2 main modifications compared with the original LSTM model to alleviate the multiple temporal scale problem within eyeblink: one is to stack multiple LSTM layers, and the other is to involve the multiple temporal scale feature. Here, we verify the effectiveness of these 2 modifications respectively. The experiments are executed under the assumption that the local eye region has already been manually extracted in advance, which is the same as in Sec. VI-B.

Eye idx | Layer number | Recall | Precision | F1 score
Left | 1 | 0.6098 | 0.8929 | 0.7246
Left | 2 | 0.7805 | 0.7385 | 0.7589
Left | 3 | 0.6992 | 0.8350 | 0.7611
Left | 4 | 0.7073 | 0.8056 | 0.7532
Right | 1 | 0.7619 | 0.7934 | 0.7773
Right | 2 | 0.8333 | 0.7778 | 0.8046
Right | 3 | 0.7857 | 0.7984 | 0.7920
Right | 4 | 0.7629 | 0.8276 | 0.7934
Average | 1 | 0.6859 | 0.8432 | 0.7510
Average | 2 | 0.8069 | 0.7582 | 0.7818
Average | 3 | 0.7425 | 0.8167 | 0.7766
Average | 4 | 0.7351 | 0.8166 | 0.7733
Table VIII: Performance comparison among MS-LSTMs with different numbers of stacked LSTM layers.

Stack multiple LSTM layers. The number of the stacked LSTM layers is set from 1 to 4. The performance comparison among them is listed in Table VIII. It can be seen that:

Compared to the original LSTM model with only 1 layer, increasing the layer number consistently improves the performance on Recall and F1 score in all the test cases. However, it may weaken Precision. Overall, stacking multiple LSTM layers is an effective way to enhance the eyeblink verification result comprehensively.

Setting the layer number to 2 achieves the best average performance on Recall and F1 score. Accordingly, the layer number within the proposed MS-LSTM model is empirically set to 2 for eyeblink verification.

Eye idx | Scale number | Recall | Precision | F1 score
Left | 1 | 0.8455 | 0.7123 | 0.7732
Left | 2 | 0.7805 | 0.7385 | 0.7589
Left | 3 | 0.6585 | 0.9000 | 0.7606
Left | 4 | 0.6016 | 0.8916 | 0.7184
Left | 5 | 0.7480 | 0.7667 | 0.7572
Right | 1 | 0.5952 | 0.8824 | 0.7109
Right | 2 | 0.8333 | 0.7778 | 0.8046
Right | 3 | 0.7460 | 0.7833 | 0.7642
Right | 4 | 0.7619 | 0.7742 | 0.7680
Right | 5 | 0.7302 | 0.7863 | 0.7572
Average | 1 | 0.7204 | 0.7974 | 0.7421
Average | 2 | 0.8069 | 0.7582 | 0.7818
Average | 3 | 0.7023 | 0.8417 | 0.7624
Average | 4 | 0.6818 | 0.8329 | 0.7432
Average | 5 | 0.7391 | 0.7765 | 0.7572
Table IX: Performance comparison among MS-LSTMs with different temporal scale numbers.
Table IX: Performance comparison among MS-LSTMs with the different temporal scale numbers.

Multiple temporal scale feature. The temporal scale number is set from 1 to 5. The performance comparison among them is listed in Table IX. We can see that:

Involving the multiple temporal scale feature essentially improves the performance of eyeblink verification, especially in terms of the average Recall, Precision and F1 score. This actually demonstrates the effectiveness of our proposition on extracting the multiple temporal scale feature for eyeblink characterization within the MS-LSTM model;

Setting the temporal scale number to 2 achieves the best average performance on Recall and F1 score. Accordingly, the temporal scale number of the proposed MS-LSTM model is empirically set to 2 for eyeblink verification.

VI-F Ablation study 2: A-softmax loss function

As revealed in Fig. 13, eyeblink verification can be regarded as a fine-grained binary spatial-temporal pattern recognition problem. To ensure the classification margin between the eyeblink and non-eyeblink classes, the A-softmax loss function is used within the MS-LSTM model. To verify its superiority, we compare it with the original softmax loss function. The experiments are executed under the assumption that the local eye region has already been manually extracted in advance, which is the same as in Sec. VI-B. The performance comparison between these 2 loss functions is listed in Table X. It is impressive that the A-softmax loss function consistently outperforms the original softmax loss function in all test cases, especially on the average Recall and F1 score. This indeed demonstrates the effectiveness of our proposition of applying the A-softmax loss function to eyeblink verification.

VI-G Ablation study 3: low-level eyeblink feature extraction

To effectively characterize eyeblink, we propose to extract low-level appearance and motion features simultaneously, using uniform LBP, as the input of MS-LSTM. To justify the superiority of our low-level eyeblink feature extraction method, we conduct experiments in 2 folds. First, uniform LBP is compared with 2 other well-established visual descriptors (i.e., HOG [24] and Haar [23]). Meanwhile, the effectiveness of the mechanism of extracting appearance and motion features simultaneously for eyeblink characterization is also verified. The experiments are executed under the assumption that the local eye region has already been manually extracted in advance, which is the same as in Sec. VI-B. The comprehensive performance comparison is listed in Table XI. It can be observed that:

Among the 3 visual descriptors under test, uniform LBP achieves the best result on the average F1 score. Its performance on the average Recall and Precision is also comparable to the best one. Overall, uniform LBP is the optimal choice for eyeblink detection;

For all the 3 visual descriptors, the mechanism of extracting appearance and motion features simultaneously essentially improves the performance in most cases, compared to using only one type of feature.

In addition, the running time comparison of the 3 visual descriptors is listed in Table XII. We can see that, uniform LBP is of the fastest running speed.

The experimental results above indeed demonstrate the effectiveness of our proposed low-level eyeblink feature extraction approach.

Eye idx | Loss function | Recall | Precision | F1 score
Left | Softmax | 0.7497 | 0.7304 | 0.7394
Left | A-softmax | 0.7805 | 0.7385 | 0.7589
Right | Softmax | 0.6726 | 0.7581 | 0.7128
Right | A-softmax | 0.8333 | 0.7778 | 0.8046
Average | Softmax | 0.7112 | 0.7443 | 0.7261
Average | A-softmax | 0.8069 | 0.7582 | 0.7818
Table X: Performance comparison between the softmax and A-softmax loss functions for eyeblink verification.
Descriptor | Mechanism | Left eye (Recall / Precision / F1 score) | Right eye (Recall / Precision / F1 score) | Average (Recall / Precision / F1 score)
Uniform LBP [18] | App. | 0.7398 / 0.7459 / 0.7429 | 0.7857 / 0.7444 / 0.7645 | 0.7628 / 0.7452 / 0.7537
Uniform LBP [18] | Motion | 0.7925 / 0.5250 / 0.6316 | 0.6667 / 0.6389 / 0.6525 | 0.7296 / 0.5820 / 0.6421
Uniform LBP [18] | App.+motion | 0.7805 / 0.7385 / 0.7589 | 0.8333 / 0.7778 / 0.8046 | 0.8069 / 0.7582 / 0.7818
HOG [24] | App. | 0.6911 / 0.5944 / 0.6391 | 0.8175 / 0.6242 / 0.7079 | 0.7543 / 0.6093 / 0.6735
HOG [24] | Motion | 0.7698 / 0.5834 / 0.6644 | 0.8182 / 0.5934 / 0.6879 | 0.7940 / 0.5884 / 0.6762
HOG [24] | App.+motion | 0.7398 / 0.7054 / 0.7222 | 0.8016 / 0.8347 / 0.8178 | 0.7707 / 0.7701 / 0.7700
Haar [23] | App. | 0.8115 / 0.5824 / 0.6781 | 0.6667 / 0.6512 / 0.6588 | 0.7391 / 0.6168 / 0.6685
Haar [23] | Motion | 0.6395 / 0.6763 / 0.6573 | 0.6561 / 0.7007 / 0.6776 | 0.6478 / 0.6885 / 0.6675
Haar [23] | App.+motion | 0.8130 / 0.5848 / 0.6803 | 0.8413 / 0.6463 / 0.7310 | 0.8272 / 0.6156 / 0.7057
Table XI: The performance comparison among the different visual descriptors under the appearance-motion eyeblink feature extraction mechanism. In particular, “app.” indicates the appearance feature and “motion” denotes the motion feature for eyeblink characterization.
Descriptor Time consumption
Uniform LBP [18] 0.322
HOG [24] 0.344
Haar [23] 0.650
Table XII: Average time consumption (ms) per frame among the different visual descriptors for eyeblink characterization.
(a) Face detection failure
(b) Initial eye localization failure
(c) Eye tracking failure
(d) False positive non-eyeblink sample
(e) False negative eyeblink sample
Figure 18: The failure cases of our proposed approach towards eyeblink detection in the wild. The overlaid markers indicate the localized positions of the right and left eyes respectively.

VI-H Failure cases of eyeblink detection in the wild

From Sec. VI-A to Sec. VI-G, quantitative performance evaluation has been executed on our proposed eyeblink detection approach to demonstrate its effectiveness and superiority. Here, we conduct a qualitative analysis to show the defects of our proposition in in-the-wild application scenarios. Accordingly, intuitive failure case examples are given in Fig. 18 from different perspectives, aiming to reveal some insights towards eyeblink detection in the wild and indicate future research avenues. We can see that accurate face detection, eye localization and eye tracking still remain challenging visual tasks under the unconstrained “in the wild” conditions, although numerous efforts have already been made. The challenges actually derive from the dramatic variation in human attribute, human pose, illumination, and scene conditions. From Fig. 18(c), the fast movement of humans is also a critical issue that impairs eye tracking. Meanwhile, makeup on the eye may also confuse the classifier during the eyeblink verification phase, as shown in Fig. 18(d). What is more challenging is that, within some eyeblink samples, the eyes are not fully closed as shown in Fig. 18(e), which may be caused by the relatively low frame rate of the camera. These issues require us to extract more discriminative spatial-temporal features for eyeblink characterization.

VII Conclusions

In this work, we shed light on the research field of eyeblink detection in the wild, which has not been well studied before. Some essential practical and theoretical contributions have been made by us. First, a labelled dataset for eyeblink detection in the wild (HUST-LEBW) is built, and it will be released online upon acceptance. Secondly, the MS-LSTM model is proposed to address the fine-grained spatial-temporal pattern recognition problem within eyeblink detection. Thirdly, an effective and efficient eyeblink feature extraction approach able to capture appearance and motion information simultaneously is proposed. Meanwhile, our eyeblink detection method can run in real time on a normal laptop without using GPU or parallel computing. The extensive experiments verify the challenges of eyeblink detection in the wild, and demonstrate the superiority of the proposed approach.

However, the performance of our method is still not satisfactory enough, which is far from practical application. In future research, we intend to resort to deep learning technology (e.g., CNN) to facilitate eye localization and eyeblink verification. Additionally, we also plan to extend the HUST-LEBW dataset to meet the data-hungry requirement of deep learning.

Acknowledgment

This work is jointly supported by the National Key R&D Program of China (No. 2018YFB1004600), National Natural Science Foundation of China (Grant No. 61876211, 61702182, and 61602193), the International Science & Technology Cooperation Program of Hubei Province, China (Grant No. 2017AHB051), the HUST Interdisciplinary Innovation Team Foundation (Grant No. 2016JCTD120), Hunan Provincial Natural Science Foundation of China (Grant 2018JJ3254). Joey Tianyi Zhou is supported by Programmatic Grant No. A1687b0033 from the Singapore government’s Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain).

References

  • [1] B. S. Perelman, “Detecting deception via eyeblink frequency modulation,” Peerj, vol. 2, no. 2, p. e260, 2014.
  • [2] L. M. Bergasa, J. Nuevo, M. A. Sotelo, R. Barea, and M. E. Lopez, “Real-time system for monitoring driver vigilance,” IEEE Trans. on Intelligent Transportation Systems, vol. 7, no. 1, pp. 63–77, 2006.
  • [3] G. Pan, L. Sun, Z. Wu, and S. Lao, “Eyeblink-based anti-spoofing in face recognition from a generic webcamera,” in Proc. IEEE International Conference on Computer Vision (ICCV), 2007, pp. 1–8.
  • [4] M. Rosenfield, “Computer vision syndrome: a review of ocular causes and potential treatments.” Ophthalmic & Physiological Optics, vol. 31, no. 5, pp. 502–515, 2011.
  • [5] Q. Ji and X. Yang, “Real-time eye, gaze, and face pose tracking for monitoring driver vigilance,” Real-Time Imaging, vol. 8, no. 5, pp. 357–377, 2014.
  • [6] J. D. Wu and T. R. Chen, “Development of a drowsiness warning system based on the fuzzy logic images analysis,” Expert Systems with Applications, vol. 34, no. 2, pp. 1556–1561, 2008.
  • [7] W. Dong, P. Qu, and J. Han, “Driver fatigue detection based on fuzzy fusion,” in Proc. Chinese Control and Decision Conference (CCDC), 2008, pp. 2640–2643.
  • [8] P. R. Tabrizi and R. A. Zoroofi, “Open/closed eye analysis for drowsiness detection,” in Proc. Image Processing Theory, Tools and Applications Workshop (IPTAW).   IEEE, 2008, pp. 1–7.
  • [9] A. Królak and P. Strumiłło, “Eye-blink detection system for human–computer interaction,” Universal Access in the Information Society, vol. 11, no. 4, pp. 409–419, 2012.
  • [10] M. Chau and M. Betke, “Real time eye tracking and blink detection with usb cameras,” Cas Computer Science Technical Reports, 2005.
  • [11] T. Hong and H. Qin, “Drivers drowsiness detection in embedded system,” in Proc. IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2008, pp. 1–5.
  • [12] T. Morris, P. Blenkhorn, and F. Zaidi, “Blink detection for real-time eye tracking,” Journal of Network & Computer Applications, vol. 25, no. 2, pp. 129–143, 2002.
  • [13] T. Drutarovsky and A. Fogelton, “Eye blink detection using variance of motion vectors,” in Proc. European Conference on Computer Vision Workshop (ECCVW).   Springer, 2014, pp. 436–448.
  • [14] “Talking face video,” http://www-prima.inrialpes.fr/FGnet/data/01-TalkingFace/talking_face.html, Face&Gesture Recognition Working Group, IST-2000-26434.
  • [15] K. Radlak, M. Bozek, and B. Smolka, “Silesian deception database: Presentation and analysis,” in Proc. ACM Multimodal Deception Detection Workshop (MDDW), 2015, pp. 29–35.
  • [16] M. Kan, M. Kan, S. Shan, S. Shan, and X. Chen, “Funnel-structured cascade for multi-view face detection with alignment-awareness,” Neurocomputing, vol. 221, no. C, pp. 138–145, 2017.
  • [17] J. F. Henriques, C. Rui, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. on Pattern Analysis & Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2014.
  • [18] T. Ahonen, A. Hadid, and M. Pietikainen, “Face description with local binary patterns: Application to face recognition,” IEEE Trans. on Pattern Analysis & Machine Intelligence, no. 12, pp. 2037–2041, 2006.
  • [19] W. O. Lee, E. C. Lee, and R. P. Kang, “Blink detection robust to various facial poses,” Journal of Neuroscience Methods, vol. 193, no. 2, p. 356, 2010.
  • [20] D. Torricelli, M. Goffredo, S. Conforto, and M. Schmid, “An adaptive blink detector to initialize and update a view-basedremote eye gaze tracking system in a natural scenario,” Pattern Recognition Letters, vol. 30, no. 12, pp. 1144–1150, 2009.
  • [21] T. Soukupová and J. Cech, “Real-time eye blink detection using facial landmarks,” in Proc. Computer Vision Winter Workshop (CVWW), 2016.
  • [22] R. Sun and Z. Ma, “Robust and efficient eye location and its state detection,” in Proc. International Symposium on Advances in Computation and Intelligence (ISACI), 2009, pp. 318–326.
  • [23] Z. Liu and H. Ai, “Automatic eye state recognition and closed-eye photo correction,” in Porc. International Conference on Pattern Recognition (ICPR), 2012, pp. 1–4.
  • [24] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005, pp. 886–893.
  • [25] H. Tan and Y. J. Zhang, “Detecting eye blink states by tracking iris and eyelids,” Pattern Recognition Letters, vol. 27, no. 6, pp. 667–675, 2006.
  • [26] G. Bradski and A. Kaehler, “Opencv,” Dr. Dobb’s Journal of Software Tools, vol. 3, 2000.
  • [27] X. Yin and X. Liu, “Multi-task convolutional neural network for face recognition,” CoRR, vol. abs/1702.04710, 2017. [Online]. Available: http://arxiv.org/abs/1702.04710
  • [28] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask cascaded convolutional networks,” IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499–1503, 2016.
  • [29] H. Cao, K. Zhou, X. Chen, and X. Zhang, “Early chatter detection in end milling based on multi-feature fusion and 3 criterion,” The International Journal of Advanced Manufacturing Technology, vol. 92, no. 9-12, pp. 4387–4397, 2017.
  • [30] S. M. Stigler, The history of statistics: The measurement of uncertainty before 1900.   Harvard University Press, 1986.
  • [31] Ö. Oguz, “The proportion of the face in younger adults using the thumb rule of leonardo da vinci,” Surgical and Radiologic Anatomy, vol. 18, no. 2, pp. 111–114, 1996.
  • [32] L. V. Der Maaten and G. E. Hinton, “Visualizing data using t-sne,” Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.
  • [33] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [34] J. Liu, G. Wang, P. Hu, L.-Y. Duan, and A. C. Kot, “Global context-aware attention lstm networks for 3d action recognition,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 7, 2017, p. 43.
  • [35] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. of the National Academy of Sciences (NAS), vol. 79, no. 8, pp. 2554–2558, 1982.
  • [36] M. Hermans and B. Schrauwen, “Training and analysing deep recurrent neural networks,” in Proc. Advances in Neural Information Processing Systems (NIPS), 2013, pp. 190–198.
  • [37] W. Zhu, C. Lan, J. Xing, W. Zeng, Y. Li, L. Shen, and X. Xie, “Co-occurrence feature learning for skeleton based action recognition using regularized deep lstm networks,” Proc. National Conference on Artificial Intelligence (AAAI), pp. 3697–3703, 2016.
  • [38] S. Zhang, X. Liu, and J. Xiao, “On geometric features for skeleton-based action recognition using multilayer lstm networks,” in Proc. IEEE Winter Conference on Applications of Computer Vision (WACV), 2017, pp. 148–157.
  • [39] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “Sphereface: Deep hypersphere embedding for face recognition,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2017, p. 1.
  • [40] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in Proc. Advances in Neural Information Processing Systems (NIPS), 2014, pp. 568–576.
  • [41] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
  • [42] T. Brox and J. Malik, “Large displacement optical flow: descriptor matching in variational motion estimation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 500–513, 2011.
  • [43] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  • [44] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., “Tensorflow: a system for large-scale machine learning.” in OSDI, vol. 16, 2016, pp. 265–283.
  • [45] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [46] Z. Tian and H. Qin, “Real-time driver’s eye state detection,” in Proc. IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2005, pp. 285–289.