Deceptive behavior is frequently displayed in daily life, yet recognizing such behavior or lies is not an easy task for humans: on average, people classify lies and truthful statements only slightly better than chance . Reliable methods for deception detection are therefore an important need, specifically in high-stakes situations such as court cases and suspect/witness interrogations. However, the ubiquitous polygraph, the most commonly known lie detection mechanism, has been shown to be unreliable .
Invasive approaches such as PET (positron emission tomography) and fMRI (functional magnetic resonance imaging) based methods perform better, but they are neither fully reliable nor practical in many situations where compactness or portability is required. Besides, the invasive nature of such mechanisms leaves them easy for skilled deceivers to trick . Hence, accurate deception detection requires non-invasive and covert methods. The difficulty of non-invasive deception detection lies in the weakness of external cues, since a large volume of work indicates that lies are barely evident in behaviour .
Recent developments in computer vision, along with the availability of deceptive behavior videos, have increased research interest in deceit detection from visual patterns. The driving mechanism behind this ambition is the (subconscious) leakage of behavioral cues to deception. These cues are often weak, very fast, or subjective, making them hard for humans to interpret. Recent studies on automated deception detection exploit different behavioral modalities such as facial actions/expressions, head pose/movement, gaze, hand gestures, and even vocal features [23, 2]. In contrast, our work focuses solely on facial cues (including head pose/movement), yet achieves better accuracy.
High-level visual features used in the literature, such as facial action units, are prone to errors under challenging environmental conditions (e.g. illumination, viewpoint, occlusion), and can therefore introduce a significant amount of noise into the analysis. In this paper, to cope with such issues, we propose to exploit face reconstruction to obtain an effective low-level representation for more reliable deceit detection. Face reconstruction aims at decomposing a face image into its components, such as 3D facial geometry, expression, skin reflectance, head pose, and illumination parameters, which are expected to carry important information for deceit detection. While illumination parameters may seem unrelated to this task, in combination with geometry they reveal subtle changes in expression-related skin deformations. Furthermore, in our method, the prediction of these parameters is constrained by an image formation model that relies on joint parametric modeling of facial cues, head pose, and illumination, which minimizes the possible negative influence of varying environmental conditions. Once such components are extracted, they are fed to a Recurrent Neural Network to model the temporal characteristics of deceptive and honest behavior in videos.
Although such a decomposition has been a backbone for many face-related computer vision tasks (e.g. face recognition, emotional expression recognition, head pose estimation), this work is the first to exploit face reconstruction for deceit detection. Furthermore, we propose a fully unsupervised end-to-end deep architecture for face reconstruction (including 3D facial geometry, expression, reflectance, head pose and illumination) from videos. Our results show that the proposed method improves the state of the art for deception detection, as well as outperforming the use of manually annotated facial attributes (e.g. facial actions/expressions, gaze, and head movement) for this task.
2 Related Work
2.1 Deception Detection
At the basis of deception detection through nonverbal cues stands the leakage hypothesis, which states that –if the stakes of a lie are high enough– involuntary, subconscious cues of deceit will emerge from a liar . Observable cues can be divided into physiological, body-language, and facial cues. One of the problems with intangible constructs such as deceit is that these cues range from highly objective measurements (vocal pitch) to highly subjective ones (facial pleasantness). Hence, this section aims to provide an overview of objective, non-verbal cues relevant to the scope of using visual features for deception detection.
Concerning facial cues, a multitude of signals have been identified to correlate with deceit, such as lip pressing , smiling, pupil dilation, and facial rigidity . However, the studies often find contradictory results . In addition, performance is highly dependent on the data used for training and validation, with some datasets being noticeably easier than others . The circumstances under which the lies were elicited are also influential: multiple studies indicate that deceptive cues increase in magnitude with increased cognitive load . Hence, the final application and the training data should have comparable cognitive load during data recording.
Micro-expressions are another viable source of information , even though other studies have shown that only a small fraction of people exhibit micro-expressions when lying . Facial action units (AUs) have also been found informative for deceit detection .
One of the most recent methods on automated deceit detection is proposed by Morales et al. . This method fuses information from audio-visual modalities, where visual features in the form of 408 cues, including gaze, orientation and FACS information, are extracted using OpenFace 
and later fused with verbal and acoustic features. Fusion occurs through concatenation of statistical functional vectors, after which random forests and decision trees are used for deception classification. Differently, Perez-Rosas et al.  present a baseline method for their Real-Life Trial dataset, which models manually coded visual features such as expression, head movement, and hand gestures together with speech transcriptions using random forests and decision trees.
2.2 Monocular Face Reconstruction
Decomposition of image components requires inverting the complex real-world image formation process. Reconstruction by inverting image formation is an ill-posed problem, because an infinite number of combinations can produce the same 2D image . In general, face reconstruction methods fall into two groups: iterative [4, 27, 14, 28] and deep learning based. Iterative approaches optimize parameters by minimizing the error between the projected (reconstructed) face and the original image in an iterative (analysis-by-synthesis) manner . The energy functions are mostly non-convex; a good fit can only be obtained with an initialization close to the global optimum, which is only possible with some level of control during image capture. Since these approaches are also computationally expensive, they are not preferred in this paper.
Deep learning based methods for reconstructing a face from a single monocular image typically either use data augmentation techniques to regress predictions toward the ground truth [18, 15] or apply a similar analysis-by-synthesis approach to train the neural network using a physically plausible image formation model [26, 15]. These methods produce sufficient reconstruction quality for certain tasks; however, they sacrifice detail in order to remain tractable on challenging, unconstrained images. Since such methods cannot prevent expression information from leaking into the 3D facial geometry, information is likely lost while capturing expression. To reliably capture facial movements, separating the 3D facial geometry and expression components is quite important.
Some earlier works address video-based reconstruction ; however, they are based on iterative approaches. Convolutional Neural Network (CNN) architectures are rarely explored for video-based, dense, real-time face reconstruction. In this paper, we present a novel identity-aware, dense, real-time face reconstruction CNN pipeline that receives RGB videos as input. Unlike previous monocular reconstruction methods, our method extracts identity-related parameters (i.e. 3D facial geometry and reflectance) for the full video sequence, and temporally dependent parameters (i.e. expression and illumination) for every single frame. The proposed method prevents leakage of expression parameters into the 3D facial geometry through temporal constraints, which improves the precision of facial expression capture. Furthermore, using a Recurrent Neural Network (RNN), we temporally constrain the expression so as to preserve its consistency throughout the full video.
3.1 Network architecture
A Convolutional Neural Network is used to predict intrinsic inverse rendering parameters (a code vector) from a set of RGB face images, from which a reconstructed image can be recovered:

$x = (\alpha, \beta, \delta, \gamma, R, t)$,

where $\alpha$, $\beta$ and $\delta$ are parameters corresponding to 3D face geometry, reflectance and expression; $\gamma$ represents scene illumination parameters; and $R$ and $t$ represent rotation and translation parameters.
Figure 1 shows an overview of our face reconstruction architecture. Our model consists of two AlexNet  backbones with shared weights: one (Identity CNN) extracts features related to person identity (3D facial geometry and reflectance) from a collection of images, and another (Framewise CNN) extracts frame-dependent facial features from a particular frame. For our purpose we use all layers of AlexNet except the last FC8 layer. These features are fused using recurrent units with 100 hidden units and fully connected layers without non-linearity to predict a single set of identity parameters given a set of cropped faces, expression parameters conditioned on the previous temporal state, and illumination, rotation and translation parameters.
The Identity CNN is followed by a recurrent unit of 100 hidden units and a fully connected layer without non-linearity to produce the identity parameters. Features from this recurrent unit are concatenated with the Framewise CNN features. This representation is followed by another recurrent unit of 100 hidden units and a fully connected layer to produce the blend shape (expression) parameters, and by fully connected layers without recurrent units for the other parameters.
3.2 Physics-based image formation
3D facial geometry and reflectance. We parametrize 3D face geometry using a multi-linear PCA model , separately for the neutral face and the facial expression (Eq. 2). The 3D face geometry is represented as a point cloud in Euclidean space:

$S(\alpha, \delta) = \bar{S}_{id} + E_{id}\,\alpha + \bar{S}_{exp} + E_{exp}\,\delta$, (2)

where we denote $\bar{S}_{id}$, $\bar{S}_{exp}$ as the average neutral face and average expression geometries, and $E_{id}$, $E_{exp}$ as their principal components sorted by standard deviation, respectively. Face reflectance is modelled using a separate PCA model:

$A(\beta) = \bar{A} + E_{alb}\,\beta$,

where $\bar{A}$ is the average face reflectance and $E_{alb}$ are its principal components sorted by standard deviation.
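The linear face model above can be sketched in a few lines of numpy; the vertex count is made up for illustration, and the symbols $\alpha$ (identity) and $\delta$ (expression) follow the notation assumed above.

```python
import numpy as np

# Illustrative sizes: the paper takes 80 identity and 64 expression
# components of the Basel Face Model 2017; the vertex count here is made up.
N_VERTS, N_ID, N_EXP = 1000, 80, 64

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_VERTS * 3,))   # average neutral face geometry
mean_exp   = rng.normal(size=(N_VERTS * 3,))   # average expression offset
E_id  = rng.normal(size=(N_VERTS * 3, N_ID))   # identity principal components
E_exp = rng.normal(size=(N_VERTS * 3, N_EXP))  # expression principal components

def reconstruct_geometry(alpha, delta):
    """S(alpha, delta) = mean_id + E_id @ alpha + mean_exp + E_exp @ delta,
    returned as an (N_VERTS, 3) point cloud."""
    s = mean_shape + E_id @ alpha + mean_exp + E_exp @ delta
    return s.reshape(-1, 3)

S = reconstruct_geometry(np.zeros(N_ID), np.zeros(N_EXP))
print(S.shape)  # (1000, 3)
```

With all coefficients at zero, the model reproduces the mean face, which is the statistical prior the regularizer below pulls toward.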
Face transformation. We model face movement in the scene using a 6DoF rigid transformation. The rotation is represented in exponential coordinates, and the translation is parameterized separately along the x, y, and z directions.
Illumination model. Illumination changes are modelled using the first 3 bands of the spherical harmonics basis functions, assuming the face is a Lambertian surface . The intensity of the i-th vertex is defined as the product of the vertex reflectance and a shading component:

$c_i = a_i \odot \sum_{b=1}^{9} \gamma_b\, H_b(n_i)$,

where $n_i$ is the normal of the i-th vertex, $a_i$ its reflectance, and $H_b$ the b-th spherical harmonics basis function. We define the illumination parameters $\gamma$ separately for each of the R, G, B channels and therefore have 27 parameters in total. Vertex normals are estimated based on 1-ring triangle neighbours; the triangle topology is known from the face morphable model.
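A minimal sketch of the per-vertex Lambertian shading under 3 SH bands; the basis below is the common unnormalized real-SH parameterization (constant factors folded into the coefficients), which is an assumption, not the paper's exact implementation.

```python
import numpy as np

def sh_basis(n):
    """First 3 bands (9 functions) of real spherical harmonics at a unit
    normal n = (x, y, z); normalization constants are folded into gamma."""
    x, y, z = n
    return np.array([
        1.0,                                   # band 0
        y, z, x,                               # band 1
        x * y, y * z, 3 * z**2 - 1, x * z, x**2 - y**2,  # band 2
    ])

def shade_vertex(albedo_rgb, normal, gamma):
    """Per-vertex color: reflectance times SH shading, with one 9-vector of
    coefficients per color channel (27 illumination parameters in total)."""
    H = sh_basis(normal)      # (9,)
    shading = gamma @ H       # gamma: (3, 9) -> per-channel scalar shading
    return albedo_rgb * shading

# Ambient-only light (only the constant band is active) returns the albedo.
gamma = np.tile(np.eye(1, 9).ravel(), (3, 1))
c = shade_vertex(np.array([0.8, 0.6, 0.5]), np.array([0.0, 0.0, 1.0]), gamma)
print(c)
```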
Projection model. The obtained 3D point cloud is mapped onto the 2D image plane by applying a rigid transformation followed by a perspective transformation, modelled as the product of projection and viewport matrices. The 2D coordinates and depth are obtained by division by the homogeneous coordinate $w$. The focal length is assumed fixed, and the principal point is assumed to lie at the center of the projection screen.
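The projection step can be sketched as follows; the focal length and image size are illustrative, and the viewport is reduced to the centered-principal-point offset described above.

```python
import numpy as np

def project(points, R, t, f, width, height):
    """Rigid transform followed by perspective projection; the principal
    point is assumed at the image center and the focal length f is fixed."""
    cam = points @ R.T + t                      # (N, 3) camera-space points
    x = f * cam[:, 0] / cam[:, 2] + width / 2   # divide by depth (homogeneous w)
    y = f * cam[:, 1] / cam[:, 2] + height / 2
    return np.stack([x, y], axis=1), cam[:, 2]  # 2D coordinates and depth

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
uv, depth = project(pts, np.eye(3), np.zeros(3), f=500.0, width=224, height=224)
print(uv)  # [[112. 112.], [212. 112.]]
```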
3.3 Training loss
Landmark loss. The difference between landmark projections from the predicted 3D face model and the ground-truth landmarks is used. We use landmarks covering the eyebrow, eye corner, nose, mouth and chin regions:

$E_{lan} = \sum_j \left\| \Pi\!\left(R\,S_{v_j} + t\right) - l_j \right\|_2^2$,

where $v_j$ is the annotated vertex index of the j-th landmark on the 3D model and $l_j$ the corresponding ground-truth 2D landmark.
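A minimal sketch of the landmark term, assuming the model vertices have already been projected to 2D; the squared Euclidean distances are averaged here rather than summed, which is an inessential choice.

```python
import numpy as np

def landmark_loss(projected_2d, landmark_vertex_ids, gt_landmarks):
    """Mean squared distance between the projections of annotated model
    vertices and the ground-truth 2D landmarks."""
    pred = projected_2d[landmark_vertex_ids]          # (L, 2) predicted landmarks
    return np.mean(np.sum((pred - gt_landmarks) ** 2, axis=1))

proj = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])  # projected vertices
ids = np.array([0, 2])                                       # annotated vertex ids
gt = np.array([[10.0, 10.0], [33.0, 34.0]])                  # ground-truth 2D points
print(landmark_loss(proj, ids, gt))  # (0 + 25) / 2 = 12.5
```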
Vertex-wise photometric loss. We define the photometric loss as the difference between each vertex color and its corresponding color in the original image; to obtain the color at a (non-integer) image location we perform bilinear interpolation. Vertices contribute to the photometric loss only when facing the camera, selected based on their normal direction; $|V|$ denotes the number of such vertices:

$E_{photo} = \frac{1}{|V|} \sum_{i \in V} \left\| c_i - I(p_i) \right\|_2^2$.
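The bilinear lookup and the visibility-filtered average can be sketched as below; the camera-facing convention (positive normal z) is an assumption for illustration.

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Bilinearly interpolate an (H, W, 3) image at a continuous location."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return (image[y0, x0] * (1 - dx) * (1 - dy)
            + image[y0, x0 + 1] * dx * (1 - dy)
            + image[y0 + 1, x0] * (1 - dx) * dy
            + image[y0 + 1, x0 + 1] * dx * dy)

def photometric_loss(vertex_colors, vertex_xy, normals, image):
    """Average squared color difference over camera-facing vertices."""
    visible = normals[:, 2] > 0   # assumed camera-facing convention
    diffs = [np.sum((vertex_colors[i] - bilinear_sample(image, *vertex_xy[i])) ** 2)
             for i in np.flatnonzero(visible)]
    return np.mean(diffs)

# A constant image matching the vertex colors yields (numerically) zero loss.
img = np.full((4, 4, 3), 0.5)
loss = photometric_loss(np.full((2, 3), 0.5),
                        np.array([[1.5, 1.5], [2.2, 1.3]]),
                        np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]),
                        img)
print(loss)  # ~0
```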
Statistical regularization. We use Tikhonov regularization  to enforce the predicted parameters to stay within a plausible range.
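For standard-deviation-normalized PCA coefficients, this regularizer reduces to a weighted sum of squares; the weight value below is illustrative, not the paper's.

```python
import numpy as np

def tikhonov(params, weight):
    """Weighted sum of squared, std-normalized model coefficients; keeps
    predictions near the statistical prior of the morphable model."""
    return weight * np.sum(np.square(params))

print(tikhonov(np.array([1.0, -2.0, 0.5]), 0.1))  # 0.1 * (1 + 4 + 0.25) = 0.525
```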
3.4 Modeling Deceptive Behavior
Once the facial representation is obtained, we classify videos as deceptive or honest. We employ a recurrent neural network (RNN) to capture the temporal relations between the facial representation vectors of the frames of each video, and use the cross-entropy loss

$L = -\sum_i \sum_{j \in \{0,1\}} y_{ij} \log p_{ij}$,

where $y_{ij}$ indicates whether video $i$ has label $j$, and $p_{ij}$ is the predicted probability of video $i$ for class $j$. Deceptive and honest behaviors correspond to labels 1 and 0, respectively.
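For two classes this loss reduces to binary cross-entropy over the per-video deception probability, which can be sketched as:

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """L = -mean_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ];
    y = 1 for deceptive videos, y = 0 for honest ones."""
    p = np.clip(p_pred, eps, 1 - eps)   # guard against log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Three videos: deceptive, honest, deceptive, with predicted probabilities.
loss = binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.6]))
print(round(loss, 4))
```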
4 Implementation details
We train our 2D-to-3D face reconstruction network for 200K iterations on the 300VW  and CelebA  datasets, using a batch size of 5 and the Adam optimizer  with a fixed learning rate; the weights of the landmark, photometric, and regularization loss terms are set empirically.
300VW contains video sequences with 68 annotated landmarks for each frame. We crop faces based on a bounding box around the ground-truth landmarks with a 10% expansion. We process CelebA using dlib  for face detection and FAN  for landmark detection. In total, we collected 94K images from 49 videos of 300VW and 200K images from CelebA.
For each video sequence of 300VW, we randomly select 3 cropped faces in random order as input for the Identity CNN, and randomly sample a sequence of 3 cropped faces with a random step size between 1 and 5 frames as input for the Framewise CNN. For CelebA, we treat each image as a 1-frame video sequence. Images are randomly flipped to augment the dataset. We train the model alternating between CelebA and 300VW batches.
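The two sampling schemes above can be sketched as follows; function names and the 100-frame video length are illustrative.

```python
import random

def sample_framewise_clip(num_frames, clip_len=3, min_step=1, max_step=5):
    """Sample clip_len frame indices with a random start and a random stride
    between min_step and max_step, as in the Framewise-CNN input."""
    step = random.randint(min_step, max_step)
    start = random.randint(0, num_frames - 1 - step * (clip_len - 1))
    return [start + i * step for i in range(clip_len)]

def sample_identity_frames(num_frames, k=3):
    """Pick k distinct frames in random order for the Identity-CNN input."""
    return random.sample(range(num_frames), k)

random.seed(0)
print(sample_framewise_clip(100), sample_identity_frames(100))
```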
The AlexNet backbones are initialized with a model pretrained on ImageNet. We add an offset to the 0-th band SH coefficient and to the z-translation, so that the initial 3D face model has plausible illumination and is centered on the screen.
The Basel Face Model 2017  is used for 3D face geometry, reflectance and expression. We take the first 80 principal components for geometry and reflectance and 64 for expression. We implement the gradients in compact form based on the work of Gallego et al. . Our implementation is written in TensorFlow. We ran our experiments on an NVIDIA GTX 1080.
In this study, we employ the Real-Life Trial dataset , which contains 121 publicly available videos from real-life high-stakes scenarios. See Fig. 2 for visual samples from the dataset. It has 61 deceptive and 60 truthful trial clips of 21 female and 35 male subjects whose ages vary between 16 and 60; the average video duration is about 28 seconds. When constructing the dataset, Perez-Rosas et al.  enforce some visual constraints, such as that the defendant or witness and his or her face should be clearly visible during most of the footage, as well as some vocal requirements that are not relevant in our context. We discard 40 of these videos due to technical problems: 1) failure of facial landmark detection; 2) some videos do not display the target subject and instead show, e.g., the courtroom while playing the subject's voice. Thereby, a subset of 81 videos (39 deceptive, 42 truthful) from this dataset is used, constructed from 28 male and 53 female subjects.
In this section, we explain the conducted experiments in detail. Throughout the experiments, lie is considered the positive class and truth the negative class when calculating accuracy, precision and recall.
6.1 Comparison with monocular 2D-to-3D methods
Our pipeline constrains the prediction of the identity parameters (geometry and reflectance) by making use of multiple randomly selected frames. In this experiment, we evaluate how sensitive the proposed face reconstruction is to the choice of frames for identity estimation. We compare our network, given a random single monocular image, against the work of Tewari et al., which does not perform any conditioning on identity. The evaluation is performed on the 300VW validation split, which contains 3 video subsets of different scene complexity: set 1 contains 31 videos; set 2 contains 19 videos with difficult illumination conditions; and set 3 contains 14 videos with occlusion, extreme pose, and extreme expressions. Results are reported in Table 1. They show that our method produces more consistent predictions for albedo and shape parameters in comparison to Tewari et al., indicating that the proposed method avoids leakage between expression and geometry and, consequently, predicts more precise expression and geometry.
| | Set 1 | Set 2 | Set 3 |
| Tewari et al.  ($\alpha$ std) | 0.240 | 0.191 | 0.299 |
| Tewari et al.  ($\beta$ std) | 0.299 | 0.159 | 0.174 |
| Ours ($\alpha$ std) | 0.051 | 0.052 | 0.064 |
| Ours ($\beta$ std) | 0.065 | 0.052 | 0.082 |
6.2 Face Reconstruction Visual Results
We show visual results of our reconstruction pipeline in Figure 3. Our method successfully recovers intrinsic properties of a face, such as shading, normals and color intensity, while preserving facial identity over video frames.
6.3 Gender Effect
In this experiment, we investigate the effect of gender on our results. The dataset does not provide gender annotation, so we annotated it ourselves and grouped the samples by gender. The results are summarized in Table 2. The precision and recall values for female subjects may suggest that feature extraction for females is more challenging and shows higher variation. However, this can also be related to the number of samples, as we have almost twice as many female subjects as male subjects.
| Method | Features | Accuracy | Precision | Recall |
| Morales et al.  (DT)* | OpenFace features | 0.55 | 0.54 | 0.50 |
| Morales et al.  (RF)* | OpenFace features | 0.50 | 0.45 | 0.25 |
| Perez-Rosas et al.  (DT)* | Hand-labeled features | 0.66 | 0.67 | 0.55 |
| Perez-Rosas et al.  (RF)* | Hand-labeled features | 0.67 | 0.70 | 0.55 |

*: only facial features are used
6.4 Comparison to Other Models
First, we start our comparisons by reproducing the baseline models. The model of Morales et al.  is tested with decision tree (DT) and random forest (RF) classifiers with default parameters, as mentioned in their paper. Morales et al. use OpenFace
to extract facial features in the default output (i.e. basics, gaze, pose, 2D and 3D facial landmark locations, rigid and non-rigid shape parameters, and action units) and apply statistical functionals (i.e. maximum, minimum, mean, median, standard deviation, variance, kurtosis, skewness, and the 25%, 50%, and 75% percentiles) to create one feature vector per video.
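The functional pooling that collapses per-frame tracks into one video-level vector can be sketched as follows; the feature dimensionality is illustrative, and skewness/kurtosis are computed with the plain moment formulas.

```python
import numpy as np

def functionals(series):
    """Collapse one per-frame feature track into the statistical functionals
    used for the video-level vector (11 functionals, as listed above)."""
    x = np.asarray(series, dtype=float)
    m, s = x.mean(), x.std()
    return np.array([
        x.max(), x.min(), m, np.median(x), s, x.var(),
        ((x - m) ** 3).mean() / s**3,        # skewness
        ((x - m) ** 4).mean() / s**4 - 3.0,  # excess kurtosis
        np.percentile(x, 25), np.percentile(x, 50), np.percentile(x, 75),
    ])

# 120 frames of 4 hypothetical per-frame cues -> one 44-dim video vector.
video_features = np.random.default_rng(1).normal(size=(120, 4))
video_vector = np.concatenate([functionals(video_features[:, j]) for j in range(4)])
print(video_vector.shape)  # (44,)
```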
The model of Perez-Rosas et al. , which is the basis for Morales et al. , is also implemented with decision tree (DT) and random forest (RF) classifiers with default parameters, as mentioned in their paper. They use manually coded features (i.e. smile, laughter, scowl, gaze, lip movements, openness and closedness of eyes and mouth, eyebrow positions such as frowning and raising, head movements, and hand trajectory and movements). Thus, the accuracy results of their work indeed reflect the performance of human annotators. Note that, since our system focuses only on facial features, we excluded hand-related features from their experimental setup to obtain comparable results.
Morales et al.  report 71.07% and 73.55% accuracy for their visual model with DT and RF classifiers, respectively. However, these figures were obtained by applying leave-one-out cross-validation, which causes subject overlap between the test and training sets. Therefore, in our experiments for both baselines, we applied leave-one-person-out (LOPO) cross-validation (where the videos of a single subject are separated as the test set, five of the remaining videos are randomly sampled as the validation set, and the rest are used as the training set for each test fold) to reproduce their corrected accuracy results, given in Table 3. Note that, in order to have a balanced training set, we randomly downsampled the majority class so that each class has an equal number of instances. In Table 3, each DT and RF result is the average of 20 iterations.
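The LOPO protocol with majority-class downsampling can be sketched as below; the validation-set sampling is omitted for brevity, and all names and the toy data are illustrative.

```python
import numpy as np

def lopo_splits(subject_ids, labels, rng):
    """Leave-one-person-out folds: all videos of one subject form the test
    set; the majority class of the remaining videos is randomly downsampled
    so that the training set is balanced."""
    subject_ids = np.asarray(subject_ids)
    labels = np.asarray(labels)
    for subj in np.unique(subject_ids):
        test = np.flatnonzero(subject_ids == subj)
        rest = np.flatnonzero(subject_ids != subj)
        pos, neg = rest[labels[rest] == 1], rest[labels[rest] == 0]
        n = min(len(pos), len(neg))          # size of the minority class
        train = np.concatenate([rng.choice(pos, n, replace=False),
                                rng.choice(neg, n, replace=False)])
        yield train, test

subjects = [0, 0, 1, 1, 2, 2, 2]  # toy subject id per video
labels   = [1, 0, 1, 1, 0, 0, 1]  # 1 = deceptive, 0 = truthful
for train, test in lopo_splits(subjects, labels, np.random.default_rng(0)):
    assert set(train).isdisjoint(test)  # no subject overlap across splits
```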
As our third baseline, we experiment with the convolutional neural network for time series classification (Time-CNN) model , as shown in Table 3. This method reveals time-series patterns through 1D convolutions over the temporal vector of each individual feature dimension. This baseline underlines the strength of our proposed deception modeling RNN, since it uses the same features (i.e. the face reconstruction components (FRC) extracted by our proposed CNN face reconstruction network).
The last row of Table 3, the recurrent neural network (RNN) model, shows the performance of our proposed deception detection method.
The results are summarized in Table 3. They show that the proposed RNN method improves the state of the art by 4%. In addition, we improve by 6% over the method that uses manually annotated features . Behavioral cues to deception may leak in ways that are not obvious and thus not necessarily annotated by a human observer; this may indicate that the proposed method captures even subtle behaviors for deceit detection.
We have presented a novel method for deception detection based on reliable low- and high-level facial features obtained using a 2D-to-3D face reconstruction technique. To reconstruct faces (including 3D facial geometry, expression, reflectance, head pose and illumination) from videos, we propose a fully unsupervised end-to-end deep architecture. We show that our method produces consistent identity predictions, in contrast to other deep learning methods that use only one monocular frame at test time. Our pipeline uses recurrent neural networks to learn temporal behavior, runs in real time, and shows state-of-the-art accuracy on the challenging Real-Life Trial (RLT) dataset. Our results show that the proposed method (with an accuracy of 72.8%) improves the state of the art, as well as outperforming the use of manually coded facial attributes (67.6%) in deception detection.
-  M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
-  M. Abouelenien, V. Pérez-Rosas, R. Mihalcea, and M. Burzo. Deception detection using a multimodal approach. In Proceedings of the 16th International Conference on Multimodal Interaction, pages 58–65. ACM, 2014.
-  T. Baltrušaitis, P. Robinson, and L.-P. Morency. Openface: an open source facial behavior analysis toolkit. In Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pages 1–10. IEEE, 2016.
-  V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194. ACM Press/Addison-Wesley Publishing Co., 1999.
-  C. F. Bond Jr and B. M. DePaulo. Accuracy of deception judgments. Personality and social psychology Review, 10(3):214–234, 2006.
-  H. Bouma, G. Burghouts, R. den Hollander, S. Van Der Zee, J. Baan, J.-M. ten Hove, S. van Diepen, P. van den Haak, and J. van Rest. Measuring cues for stand-off deception detection based on full-body nonverbal features in body-worn cameras. In Optics and Photonics for Counterterrorism, Crime Fighting, and Defence XII, volume 9995, page 99950N. International Society for Optics and Photonics, 2016.
-  A. Bulat and G. Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In International Conference on Computer Vision, 2017.
-  J. K. Burgoon, N. Magnenat-Thalmann, M. Pantic, and A. Vinciarelli. Social signal processing. Cambridge University Press, 2017.
-  G. G. Chrysos, E. Antonakos, P. Snape, A. Asthana, and S. Zafeiriou. A comprehensive performance evaluation of deformable face tracking ”in-the-wild”. International Journal of Computer Vision, 126(2-4):198–232, 2018.
-  N. M. L. DesJardins and S. D. Hodges. Reading between the lies: Empathic accuracy and deception detection. Social Psychological and Personality Science, 6(7):781–787, 2015.
-  C. Ding, D. Zhou, X. He, and H. Zha. R1-PCA: Rotational invariant L1-norm principal component analysis for robust subspace factorization. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 281–288, New York, NY, USA, 2006. ACM.
-  K. Fiedler, J. Schmid, and T. Stahl. What is the current truth about polygraph lie detection? Basic and Applied Social Psychology, 24(4):313–324, 2002.
-  G. Gallego and A. J. Yezzi. A compact formula for the derivative of a 3-d rotation in exponential coordinates. CoRR, abs/1312.0788, 2013.
-  P. Garrido, L. Valgaerts, C. Wu, and C. Theobalt. Reconstructing detailed dynamic face geometry from monocular video. ACM Trans. Graph., 32(6):158–1, 2013.
-  K. Genova, F. Cole, A. Maschinot, A. Sarna, D. Vlasic, and W. T. Freeman. Unsupervised training for 3D morphable model regression. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  T. Gerig, A. Forster, C. Blumer, B. Egger, M. Lüthi, S. Schönborn, and T. Vetter. Morphable face models - an open framework. CoRR, abs/1709.08398, 2017.
-  M. Hartwig and C. F. Bond Jr. Lie detection from multiple cues: A meta-analysis. Applied Cognitive Psychology, 28(5):661–676, 2014.
-  H. Kim, M. Zollhöfer, A. Tewari, J. Thies, C. Richardt, and C. Theobalt. InverseFaceNet: Deep monocular inverse face rendering. In Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
-  D. E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755–1758, 2009.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran Associates Inc.
-  Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), Dec. 2015.
-  M. R. Morales, S. Scherer, and R. Levitan. Openmm: An open-source multimodal feature extraction tool. In Proc. Interspeech 2017, pages 3354–3358, 2017.
-  S. J. Pentland, N. W. Twyman, J. K. Burgoon, J. F. Nunamaker Jr, and C. B. Diller. A video-based screening system for automated risk assessment using nuanced facial features. Journal of Management Information Systems, 34(4):970–993, 2017.
-  V. Pérez-Rosas, M. Abouelenien, R. Mihalcea, and M. Burzo. Deception detection using real-life trial data. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI ’15, pages 59–66, New York, NY, USA, 2015. ACM.
-  A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Pérez, and C. Theobalt. MoFA: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In The IEEE International Conference on Computer Vision (ICCV), 2017.
-  J. Thies, M. Zollhöfer, M. Nießner, L. Valgaerts, M. Stamminger, and C. Theobalt. Real-time expression transfer for facial reenactment. ACM Trans. Graph., 34(6):183–1, 2015.
-  J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2387–2395, 2016.
-  A. Vrij, R. P. Fisher, and H. Blank. A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 22(1):1–21, 2017.
-  P. Wu, H. Liu, C. Xu, Y. Gao, Z. Li, and X. Zhang. How do you smile? towards a comprehensive smile analysis system. Neurocomputing, 235:245–254, 2017.
-  W.-J. Yan and Y.-H. Chen. Measuring dynamic micro-expressions via feature extraction methods. Journal of Computational Science, 25:318–326, 2018.
-  B. Zhao, H. Lu, S. Chen, J. Liu, and D. Wu. Convolutional neural networks for time series classification. Journal of Systems Engineering and Electronics, 28(1):162–169, 2017.