Humans can readily and accurately estimate the 3D shape of an object from a set of 2D landmark points on a single image. Many computer vision approaches have been developed over the years in an attempt to replicate this outstanding ability. As an example, the approaches in [1, 2, 3, 4, 5] provide 3D shape estimates of a set of 2D landmark points on a single image. Other works, such as structure from motion (SfM) and shape from shading, require a sequence of images [6, 7, 8], and although they have been extensively studied, the results are not as accurate as the ones reported in the present paper.
The above-mentioned algorithms learn a set of 3D shape bases, under the assumption that the deformation in a test image can be represented as a linear combination of these bases. This linear assumption limits the applicability of these approaches to highly deformable or articulated shapes, a problem that is only exacerbated when the number of 2D landmark points is small (i.e., non-dense fiducials). To resolve these problems, some previous works, e.g., [2, 3], are specifically designed to recover the 3D shape of a specific object by introducing domain-specific priors into the 3D estimate. However, the addition of these priors typically limits the applicability of the resulting algorithms, e.g., to improve generic object recognition [3, 9] or estimate 3D pose in virtual reality [10, 2]. Another solution is to identify the set of piecewise segments that belong to the same surface and then solve for each section separately [11, 12]. However, these algorithms require multiple images or video sequences to identify consistency and smoothness in the movement.
Estimating the 3D geometry of an object from a single view is an ill-posed problem. Nevertheless, with the available 3D ground-truth of a number of 2D sample images with corresponding 2D landmarks, we can learn the mapping from 2D to 3D landmark points, i.e., we can define a mapping function that, given a set of 2D landmark points, outputs their corresponding 3D coordinates. There are three main challenges in this task: (1) how to define a general algorithm that applies to a wide range of rigid and non-rigid objects; (2) how to define an algorithm that yields good results whether we have a small or a large number of samples; and (3) how to make this process run in real-time.
In order to deal with the aforementioned challenges, we propose a deep-network framework to estimate the function . The proposed model is illustrated in Figure 1.
Unlike most existing methods, our work derives a deep neural network that uses a set of hierarchical nonlinear transformations to recover depth information of a number of known 2D landmark points on a single image. The 3D shape is estimated by combining the input and output of the neural network and is independent of any scaling factor. The algorithm can be efficiently trained regardless of the number of samples and is robust to noise and missing data. This derived deep neural network is efficient, outperforming previous algorithms by a significant margin.
The major contributions of the herein derived algorithm can be summarized as follows.
Our algorithm is extremely general and can be applied to recover the 3D shape of basically any rigid or non-rigid object for which a small or large number of training samples (i.e., 2D and corresponding 3D landmark points) is available. As examples, we provide results on human faces, cars, human bodies and flags.
Our algorithm is not limited by the number of training samples. When the number of training data is very large, we employ mini-batch training to parallelize gradient-based optimization and reduce computational cost. When the training set is small, we used an innovative data augmentation approach that can generate many novel samples of the 2D landmark points of a given set of 3D points using several camera models; this yields new 2D landmark points as viewed from multiple cameras, scales, points of view, etc.
Our proposed multilayer neural network can be trained very quickly. Additionally, at test time, our algorithm runs much faster than real-time on an i7 CPU.
2 Related Work
Estimating the 3D geometry of an object from a single view is an ill-posed problem. There has been substantial work on detecting 2D landmark points [13, 14, 15] from a single image, but recovering 3D shape from these detections has received much less attention. Recently, a number of databases including accurate 3D and 2D landmark points of different objects have allowed the learning of mapping functions between 2D and 3D. Example databases are the Google 3D Warehouse , the Carnegie Mellon Motion Capture set , the Fine-Grained 3D Car database (FG3DCar) , and the PASCAL3D database [17, 18].
Directly related to our work are several approaches that reconstruct 3D shape from a single image [1, 2, 3, 19, 4]. Previous methods usually fit a shape-space model to recover the 3D configuration of the object from the 2D locations of its landmarks in a single image [2, 20, 1, 21]. In , the authors address the human pose recovery problem by presenting an activity-independent method, given 2D locations of anatomical landmarks. Unfortunately, this method is limited to the reconstruction of human bodies. Lin et al.  derived an algorithm to recover 3D models of cars. In the work of , an optimization method for location analysis is proposed to map the discriminant parts of the object into a 3D geometric representation. In , a shape-based model is designed, which represents a 3D shape as a linear combination of shape bases. In , the authors extend this previous work to deal with outliers in the 2D correspondences and rotation asynchronization. More broadly, but also related to our work, reconstructing 3D shape from a sequence of 2D images using structure from motion (SfM) has been extensively investigated. In particular, the work of  can recover the 3D shape of an object from 2D landmark points on a single image using SfM-trained models.
These works assume a low-dimensional embedding of the underlying shape space and represent each 3D shape as a linear combination of shape bases. For example, Principal Component Analysis (PCA) and other linear methods are employed to obtain the low-dimensional shape bases [23, 24]. A major limitation of linear models is that when the true (but unknown) mapping function is nonlinear, the accuracy drops significantly . Therefore, these methods cannot efficiently handle highly deformable or articulated shapes.
Additionally, several algorithms are limited to certain object categories, e.g.,  is designed for reconstructing 3D human pose from 2D image landmarks and  focuses on 3D car modeling. These algorithms thus make prior assumptions or exploit special geometric properties of the object to constrain the solution space, which do not typically generalize well to other object categories.
An additional limitation of the above-cited papers is their inability to handle both large and small datasets. Some existing algorithms require very small training sets but are unable to yield improvements when larger datasets are available. On the other hand, some algorithms require very large training sets and are incapable of yielding reasonable results when the number of samples is small.
Our theoretical and experimental results reported below demonstrate how the herein derived algorithm resolves the limitations of the methods discussed in this section.
3 Proposed Approach
In this section, we describe how to recover the 3D shape of an object given its 2D landmarks on a single image.
Let us denote the 2D landmark points on the image
where are the 2D coordinates of the image landmark point. Our goal is to recover the 3D coordinates of these 2D landmark points,
where are the 3D coordinates of the landmarks of the object.
Assuming a weak-perspective camera model, with calibrated camera matrix , the weak-perspective projection of the object 3D landmark points is given by
This result is, of course, defined up to scale: multiplying the 3D shape by a scalar and the camera matrix by its inverse yields the same 2D projection. This will require that we standardize our variables when deriving our algorithm.
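The scale ambiguity can be illustrated numerically. The sketch below assumes the standard weak-perspective form (scale times the first two rows of a rotation, plus a 2D translation); the function names and this specific parameterization are ours, not from the text:

```python
import numpy as np

def weak_perspective_project(S, s, R, t):
    """Project 3xN 3D landmarks S to 2xN image landmarks:
    U = s * R[:2, :] @ S + t (assumed weak-perspective form)."""
    return s * (R[:2, :] @ S) + t[:, None]

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 15))          # 15 3D landmark points
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q * np.sign(np.linalg.det(Q))         # proper random rotation
t = np.zeros(2)

# Doubling the 3D shape while halving the camera scale yields
# identical 2D projections -- the scale ambiguity in the text.
U1 = weak_perspective_project(S, 1.0, R, t)
U2 = weak_perspective_project(2.0 * S, 0.5, R, t)
assert np.allclose(U1, U2)
```

This is why the 2D and 3D coordinates must be standardized before learning the mapping, as described next.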
3.2 Deep 3D Shape Reconstruction from 2D Landmarks
3.2.1 Proposed Neural Network.
Given a training set with 3D landmark points , we aim to learn the function , that is,
where , and are obtained by standardizing , and as follows,
where , and are the mean values, and , and are the standard deviations of the elements in , and , respectively.
We standardize , and to eliminate the effect of scaling and translation of the 3D object, as noted above. By estimating using a training set, we learn the geometric property of the given object. Herein, we model the function using a multilayer neural network.
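As a concrete sketch, standardizing each coordinate vector amounts to subtracting its mean and dividing by its standard deviation (variable names here are illustrative):

```python
import numpy as np

def standardize(v):
    """Return the zero-mean, unit-variance version of a
    coordinate vector, removing translation and scale."""
    return (v - v.mean()) / v.std()

x = np.array([1.0, 2.0, 3.0, 4.0])   # e.g., the x-coordinates
x_bar = standardize(x)
```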
Figure 1 depicts the overall architecture of our neural network. It contains layers. The layer is defined as,
where is the input vector, is the output vector, and specify the number of input and output nodes, respectively, and and are network parameters, the former a weight matrix and the latter a bias vector. Our neural network uses the hyperbolic tangent activation function, .
Our objective is to minimize the sum of Euclidean distances between the predicted depth location and the ground-truth of our 3D landmark points. Formally,
with the Euclidean metric.
We utilize the RMSProp algorithm to optimize our model parameters. In a multilayer neural network, the appropriate learning rates can vary widely during learning and between different parameters. RMSProp adaptively adjusts the learning rate of each parameter separately, dividing the gradient by a running average of its recent magnitude, which improves convergence to a solution.
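A minimal RMSProp step, in its standard formulation, looks as follows; the hyperparameter values here are typical defaults, not values taken from the paper:

```python
import numpy as np

def rmsprop_step(w, g, cache, lr=1e-3, rho=0.9, eps=1e-8):
    """One RMSProp update: keep a running average of squared
    gradients and divide the step by its square root."""
    cache = rho * cache + (1.0 - rho) * g ** 2
    w = w - lr * g / (np.sqrt(cache) + eps)
    return w, cache
```

On a simple quadratic objective, repeatedly applying this step drives the parameter toward the minimum, which is the behavior exploited during training.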
When testing on the object, we have and , and want to estimate , and . From Eq. (3) we have and . Thus, we first standardize the data,
This yields and . This means we can directly feed into the trained neural network to obtain its depth . Then, the 3D shape of the object can be recovered as , a result that is defined up to scale.
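The test-time procedure just described can be sketched as follows; the trained network is abstracted as a callable `f`, which is an assumption of this sketch:

```python
import numpy as np

def reconstruct(x, y, f):
    """Standardize the observed 2D coordinates, run the trained
    network f to get the standardized depths, and assemble the
    3D shape (defined up to scale)."""
    x_bar = (x - x.mean()) / x.std()
    y_bar = (y - y.mean()) / y.std()
    z_bar = f(np.concatenate([x_bar, y_bar]))
    return np.stack([x_bar, y_bar, z_bar])   # 3xN, up to scale
```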
3.2.3 Missing Data.
To solve the problem of missing data, we add a recurrent layer  on top of the multi-layer neural network to jointly estimate both the 2D coordinates of missing 2D landmarks and their depth. This is illustrated in Figure 2. The module named “A” corresponds to the recurrent layer that estimates the 2D entries of the missing data, while “B” is the multi-layer neural network described before and summarized in Figure 1. The output of “A” is thus the full set of 2D landmarks and the output of “B” their corresponding depth values. The module “C” merges the outputs of “A” and “B” to generate the final output, , and is the loss function used in this module.
In the recurrent network, we use the notation and to specify the estimated values of and at iteration . Here represents the sample. The input to our above defined network can then be written as , with specifying the initial input. If the values of and are missing (i.e., occluded in the image), then and are set to zero. Otherwise the values of and are standardized using Eq. (8) to obtain and .
In subsequent iterations, from to , if the landmark is not missing, and . If the landmark is missing, then , , where can be the identity function or a nonlinear function (e.g. ) and , , , are the parameters of the recurrent layer. In our experiments we find that being the identity function works best.
The number of iterations is set to , which yields , where and , as the final output of the recurrent layer. The vector is fixed by hand. By using the weighted sum of the output at each step rather than the output at the last step as final output of the recurrent layer, we can enforce intermediate supervision to make the recurrent layer gradually converge to the correct output.
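The weighted-sum readout over the recurrent iterations can be sketched as follows; the per-iteration update itself is elided, and the names are illustrative:

```python
import numpy as np

def weighted_readout(outputs, w):
    """Final output of the recurrent layer: a fixed weighted sum
    of the per-iteration outputs, enforcing intermediate
    supervision rather than using only the last iteration."""
    return sum(wi * oi for wi, oi in zip(w, outputs))
```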
3.2.4 Data augmentation approach.
In many applications the number of training samples (i.e., 2D and corresponding 3D landmark points) is small. However, any regressor designed to learn the mapping function requires a large number of training samples with the 2D landmarks as seen from as many cameras, views (translation, rotation) and scales as possible. We resolve this with a simple, yet efficient data augmentation approach.
The key to our approach is to note that, for a given object, its 3D structure does not change. What changes are the 2D coordinates of the landmark points in the image. For example, scaling or rotating an object in 3D yields different 2D coordinates of the same object landmarks. Thus, our task is to generate as many of these possible sample views as possible. We do this with a camera model.
A camera model allows us to predict the 2D image coordinates of 3D landmark points. Here, we use an affine camera model to generate a very large number of images of the known 3D sample objects. We do this by varying the intrinsic parameters of the camera model (e.g., focal length) as well as the extrinsic parameters (e.g., 3D translation, rotation and scale). Specifically, we use the weak-perspective camera model defined above.
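The augmentation step can be sketched as follows. This sketch draws random rotations via a QR decomposition (a common trick) rather than the per-axis Euler-angle ranges the experiments use, and the scale range is an assumption:

```python
import numpy as np

def random_rotation(rng):
    """Random proper 3D rotation via QR decomposition."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q * np.sign(np.linalg.det(Q))   # ensure det(R) = +1

def augment(S, n_views, rng):
    """Generate n_views 2D projections of the 3D landmarks S
    (3xN) under random camera scale and rotation."""
    views = []
    for _ in range(n_views):
        s = rng.uniform(0.5, 2.0)            # random camera scale
        R = random_rotation(rng)
        views.append(s * (R[:2, :] @ S))     # weak-perspective
    return views
```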
We also use this data augmentation step to model imprecisely localized 2D landmark points. All detection algorithms include a detection error (even when fiducial detections are done by humans) . We address this problem by modeling the detection error as Gaussian noise with zero mean and a small variance, equivalent to about 3% of the size of the object. This means that, in addition to the 2D landmark points given by the camera models used above, we incorporate 2D landmark points that have been altered by adding this random Gaussian noise. This allows our neural network to learn to accurately recover the 3D shape of an object from imprecisely localized 2D landmark points.
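A sketch of this noise model is given below. The text states a noise level of about 3% of the object size; exactly how "object size" is measured is not specified, so the per-axis-extent measure here is an assumption:

```python
import numpy as np

def add_detection_noise(U, frac=0.03, rng=None):
    """Perturb 2D landmarks U (2xN) with zero-mean Gaussian noise
    whose std is a small fraction of the object extent
    (extent measure is an assumption of this sketch)."""
    rng = rng or np.random.default_rng()
    size = U.max(axis=1) - U.min(axis=1)   # per-axis extent
    sigma = frac * size.mean()
    return U + rng.normal(0.0, sigma, U.shape)
```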
It is important to note that, when the original training set is small, we can still train an efficiently-performing neural network using this trick. In fact, we have found experimentally that we do not need a large number of training samples to obtain extremely low reconstruction errors using our derived approach and this data augmentation trick. When the number of samples is large, this approach can help reduce the 3D reconstruction error by incorporating intrinsic or extrinsic camera parameters and detection errors not well represented in the samples.
3.2.5 Implementation Details.
Our feed-forward neural network contains six layers. The number of nodes in each layer is. We divide our training data into a training and a validation set. In each of these two sets, we perform data augmentation.
4 Experimental Results
We conduct experiments on a variety of databases to test the effectiveness of our algorithm. We used the following datasets: the CMU Motion Capture database , the fine-grained 3D car (FG3DCar) database , the Binghamton University 3D Facial Expression (BU3DFE) database [32, 33] and the flag flapping in the wind database . We also report our results on the sequestered dataset of the 3D Face Alignment in the Wild Challenge (3DFAW), held in conjunction with the 2016 European Conference on Computer Vision (ECCV), where the herein derived algorithm was a top performer.
Comparisons with the state-of-the-art demonstrate that our algorithm is significantly more accurate and efficient in recovering 3D shape from a single 2D image than previously defined methods. The 3D reconstruction error is evaluated using the Procrustes distance between the reconstructed shape and the ground-truth. Specifically, the ground-truth is given in millimeters (mm) and is normalized to a standard scale (given by the mean of all 3D landmark points) to make the error measure invariant to scale.
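A sketch of this scale-invariant error is given below. It normalizes both shapes by the mean landmark norm, matching the normalization described; a full Procrustes distance would additionally solve for the aligning rotation, which is omitted here for brevity:

```python
import numpy as np

def normalized_error(S_hat, S_gt):
    """Compare a reconstructed shape S_hat (3xN) to the
    ground-truth S_gt after normalizing each to unit scale
    (mean landmark norm); rotation alignment omitted."""
    S_hat = S_hat / np.linalg.norm(S_hat, axis=0).mean()
    S_gt = S_gt / np.linalg.norm(S_gt, axis=0).mean()
    return np.linalg.norm(S_hat - S_gt)
```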
Also, to demonstrate the robustness of our method, we perform sensitivity analysis on these databases. Results show that our method is tolerant to moderate Gaussian noise. We also show results of how our method handles missing data.
Our derived neural network runs much faster than real-time on a 3.40 GHz Intel Core i7 desktop computer.
4.1 CMU Motion Capture Database
Table I. 3D reconstruction errors on Subjects 13, 14 and 15 of the CMU Motion Capture database (methods: Zhou et al. , Ramakrishna et al. , and this paper).
The CMU motion capture database contains 3D human body joint locations of subjects performing various physical activities. Each body shape is defined by 15 3D landmark points. To provide a fair comparison, we follow the experimental setting of , where sequences of subject 86 are used for training and sequences of subjects 13, 14 and 15 are used for testing. During the training of the neural network, we split the data of subject 86 into five folds and use four folds for training and the remaining fold for validation. We train the neural network for 10,000 epochs. We train one batch for at most 300 iterations within one epoch. The batch is either the original training data (the first epoch) or a random rotation of the original training data in 3D space. To represent realistic human shapes in a 2D image, we set the rotation angle to be uniformly distributed within the range of [-20°, 20°] about the axis, [-20°, 20°] about the axis and [-180°, 180°] about the axis.
Comparative results are in Table I. As shown in this table, our results are significantly more accurate than those given by previously defined algorithms. To further demonstrate the effectiveness and generality of our method, we randomly selected several 2D images from the human-in-the-wild dataset  and used the herein derived algorithm to recover the 3D shape of the human bodies in these images. The results are in Figure 3.
4.2 3D Face Reconstruction
Table II. 3D reconstruction errors per database for this paper and Zhou et al. .
The BU3DFE database contains images of subjects performing six facial expressions in front of a 3D scanner. Each subject performed each expression 25 times, for a total of 3D sample images. Every sample has 83 annotated 3D landmarks. We randomly select 60 subjects for training, 10 for validation and the remaining 30 for testing.
We trained our neural network in epochs and used the same training strategy as described in the preceding section, with the difference that we restricted rotations about the axis in the range of [-60°, 60°] to more accurately reflect real head movement.
The mean testing error of our algorithm is . To provide comparative results against the state-of-the-art, we train and test the algorithm of  using the same experimental setting described above. Comparative results are in Table II. These results show that the proposed algorithm outperforms the state-of-the-art method of .
We also validate the cross-database generality of our trained model on the Helen database of . Here, we randomly selected several face images and manually labeled the face landmarks. The reconstructed 3D shapes are shown in Figure 4.
To provide an additional unbiased result, we participated in the 3DFAW competition (https://competitions.codalab.org/competitions/10261), which was conducted in conjunction with ECCV. Three of the four datasets in the challenge are subsets of the MultiPIE , BU-4DFE  and BP4D-Spontaneous  databases, respectively. The fourth dataset, TimeSlice3D, contains annotated 2D images extracted from online videos. In total, there are images. Each image has 66 labeled 3D fiducial points and a face bounding box centered around the mean 2D projection of the landmarks. The 2D-to-3D correspondence assumes a weak-perspective projection. The depth values have been normalized to have zero mean. Because this competition required that we also estimate the 2D landmark points in the image, we incorporated an additional deep network into the model summarized in Figure 1 and trained it to detect 2D face landmarks .
Reconstruction error was reported by the organizers of the competition on a sequestered dataset to which we did not have prior access. Our algorithm was a top performer, with a significant margin over other methods. Herein, we also provide comparative results against the algorithm of  in Table II.
4.3 FG3DCar Database
This is a fine-grained 3D car database. It consists of 300 images of 30 different car models as seen from multiple views. Each car model has 10 images. Each image has 64 annotated 2D landmarks and a reconstructed 3D CAD (Computer-Aided Design) model. We adopt the default setting for training and testing, i.e., half of the 3D shapes of each car model are used for training and the other half for testing. In order to train our neural network, we further split the 3D training sample shapes into training (120 shapes) and validation (30 shapes) sets. Testing is conducted on the remaining images.
The neural network is trained for 100,000 epochs. We follow the same procedure used in the preceding two sections and train our neural network on one batch for at most 300 iterations in each epoch. The batch is either composed of the 120 training samples (the first epoch) or a random 3D rotation of the 120 training samples. We augment the validation set times using the data augmentation approach described above, resulting in a total of validation sample images.
4.4 Flag Flapping in the Wind Database
The flag flapping in the wind database  is a motion capture sequence of a flag waving in the wind. There are 450 frames, each of which has 540 vertices. We use the first 300 frames for training and the rest for testing. The network is trained for 30,000 epochs and we use the same procedure described in the preceding sections.
The mean testing error is . Figure 6 shows some of our reconstructed results compared with the ground-truth. The reconstructed 3D shape is shown using filled red circles and the 3D ground-truth with open green circles. As we can see in these results, the reconstructed and true shape are almost identical.
4.5 Noise and missing data
To determine how sensitive the proposed neural network is to inaccurate 2D landmark detections, we add independent random Gaussian noise with variance to the databases used in the preceding sections. Figure (a) shows how little the performance degrades as the noise variance increases when noise is added to the CMU Motion Capture database. The average height of subjects in this dataset is mm, meaning the variance of the noise is about . In Figure (a), we can see the robustness of the proposed algorithm to inaccurate 2D landmark positions. Figure (b) shows the relative reconstruction error averaged across the testing subjects for each landmark with and without noise.
The results on the BU3DFE Face Database, FG3DCar Database and Flag Flapping in the Wind sequence when the data is distorted with Gaussian noise are shown in Figures (c)-(h). The average width of the faces in BU3DFE is 140 mm, hence the variance is . The mean width of the car models in FG3DCar is pixels, hence the variance is . The mean width of the flags is mm, hence the variance is .
Finally, we tested the ability of the trained system to deal with missing data. Here, each training and validation sample had one randomly selected landmark point missing during both training and testing. For the CMU Motion Capture database, the average 3D reconstruction errors for subjects 13, 14 and 15 are 0.0413, 0.0396 and 0.0307, respectively. Figure (a) shows qualitative results on three randomly selected images of humans in the wild. For the BU3DFE Face Database, the mean reconstruction error is 0.006. Figure (b) shows qualitative results on three randomly selected face images in the wild.
We have presented a very simple algorithm for the reconstruction of 3D shapes from 2D landmark points that yields extremely low reconstruction errors. Specifically, we proposed to use a feed-forward neural network to learn the mapping function between a set of 2D landmark points and an object’s 3D shape. The exact same neural network is used to learn the mappings of rigid (e.g., cars), articulated (e.g., human bodies), non-rigid (e.g., faces), and highly-deformable objects (e.g., flags). The system performs extremely well in all cases and yields results as much as two-fold better than previous state-of-the-art algorithms. This neural network runs much faster than real-time and can be trained with small sample sets.
This research was supported in part by the National Institutes of Health, grants R01-EY-020834 and R01-DC-014498, and by a Google Faculty Research Award to AMM.
-  X. Zhou, S. Leonardos, X. Hu, and K. Daniilidis, “3d shape estimation from 2d landmarks: A convex relaxation approach,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  V. Ramakrishna, T. Kanade, and Y. Sheikh, “Reconstructing 3d human pose from 2d image landmarks,” in ECCV. Springer, 2012, pp. 573–586.
-  Y.-L. Lin, V. I. Morariu, W. Hsu, and L. S. Davis, “Jointly optimizing 3d model fitting and fine-grained classification,” in ECCV. Springer, 2014, pp. 466–480.
-  A. Kar, S. Tulsiani, J. Carreira, and J. Malik, “Category-specific object reconstruction from a single image,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
-  O. C. Hamsici, P. F. U. Gotardo, and A. M. Martinez, “Learning spatially-smooth mappings in non-rigid structure from motion.” in ECCV, 2012, pp. 260–273.
-  J. Fayad, C. Russell, and L. Agapito, “Automated articulated structure and 3d shape recovery from point correspondences,” in Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011, pp. 431–438.
-  P. F. U. Gotardo and A. M. Martinez, “Kernel non-rigid structure from motion,” in Computer Vision (ICCV), 2011 IEEE International Conference on, 2011, pp. 802–809.
-  L. A. Jeni, J. F. Cohn, and T. Kanade, “Dense 3d face alignment from 2d videos in real-time,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol. 1. IEEE, 2015, pp. 1–8.
-  M. J. Leotta and J. L. Mundy, “Vehicle surveillance with a generic, adaptive, 3d vehicle model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 7, pp. 1457–1469, 2011.
-  H. J. Lee and Z. Chen, “Determination of 3D human body postures from a single view,” Computer Vision, Graphics, and Image Processing, vol. 30, pp. 148–168, 1985.
-  C. Russell, J. Fayad, and L. Agapito, “Energy based multiple model fitting for non-rigid structure from motion,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 3009–3016.
-  C. Russell, R. Yu, and L. Agapito, “Video pop-up: Monocular 3d reconstruction of dynamic scenes,” in European Conference on Computer Vision. Springer, 2014, pp. 583–598.
-  S. Rivera and A. M. Martinez, “Learning deformable shape manifolds,” Pattern Recognition, vol. 45, no. 4, pp. 1792–1801, 2012.
-  X. Xiong and F. De la Torre, “Supervised descent method and its applications to face alignment,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 532–539.
-  Y. Yang and D. Ramanan, “Articulated pose estimation with flexible mixtures-of-parts,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 1385–1392.
-  Google 3D Warehouse. https://3dwarehouse.sketchup.com/. Accessed: 2016-03-03.
-  J. Carreira, S. Vicente, L. Agapito, and J. Batista, “Lifting object detection datasets into 3d,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1342–1355, 2016.
-  Y. Xiang, R. Mottaghi, and S. Savarese, “Beyond pascal: A benchmark for 3d object detection in the wild,” in IEEE Winter Conference on Applications of Computer Vision (WACV), 2014.
-  S. Vicente, J. Carreira, L. Agapito, and J. Batista, “Reconstructing pascal voc,” in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014, pp. 41–48.
-  M. Zhu, X. Zhou, and K. Daniilidis, “Single image pop-up from discriminatively learned parts,” in The IEEE International Conference on Computer Vision (ICCV), 2015.
-  X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis, “Sparse representation for 3d shape estimation: A convex relaxation approach,” arXiv preprint arXiv: 1509.04309, 2015.
-  R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge university press, 2003.
-  P. F. Gotardo and A. M. Martinez, “Computing smooth time trajectories for camera and deformable shape in structure from motion with occlusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 2051–2065, 2011.
-  I. Akhter, Y. Sheikh, S. Khan, and T. Kanade, “Trajectory space: A dual representation for nonrigid structure from motion,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 33, no. 7, pp. 1442–1456, 2011.
-  P. F. Gotardo and A. M. Martinez, “Kernel non-rigid structure from motion,” in 2011 International Conference on Computer Vision. IEEE, 2011, pp. 802–809.
-  T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude,” in COURSERA: Neural Networks for Machine Learning, 2012.
-  Y. Bengio and F. Gingras, “Recurrent neural networks for missing or asynchronous data,” in Advances in Neural Information Processing Systems, 1996.
-  L. Ding and A. M. Martinez, “Features versus context: An approach for precise and detailed detection and delineation of faces and facial features,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11, pp. 2022–2038, 2010.
-  F. Chollet, “keras,” https://github.com/fchollet/keras, 2015.
-  F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio, “Theano: new features and speed improvements,” arXiv preprint arXiv:1211.5590, 2012.
-  MoCap. Carnegie mellon university graphics lab motion capture database. http://mocap.cs.cmu.edu/.
-  X. Zhang, L. Yin, J. F. Cohn, S. Canavan, M. Reale, A. Horowitz, P. Liu, and J. M. Girard, “Bp4d-spontaneous: a high-resolution spontaneous 3d dynamic facial expression database,” Image and Vision Computing, vol. 32, no. 10, pp. 692–706, 2014.
-  X. Zhang, L. Yin, J. F. Cohn, S. Canavan, M. Reale, A. Horowitz, and P. Liu, “A high-resolution spontaneous 3d dynamic facial expression database,” in Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on. IEEE, 2013, pp. 1–6.
-  R. White, K. Crane, and D. A. Forsyth, “Capturing and animating occluded cloth,” in ACM Transactions on Graphics (TOG), vol. 26, no. 3. ACM, 2007, p. 34.
-  D. Ramanan, “Learning to parse images of articulated bodies,” in Advances in Neural Information Processing Systems 19, B. Schölkopf, J. C. Platt, and T. Hoffman, Eds., 2007, pp. 1129–1136.
-  V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, “Interactive facial feature localization,” in Proceedings of the 12th European Conference on Computer Vision - Volume Part III, ser. ECCV’12, 2012, pp. 679–692.
-  R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, “Multi-pie,” Image and Vision Computing, vol. 28, no. 5, pp. 807–813, 2010.
-  L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale, “A high-resolution 3d dynamic facial expression database,” in Automatic Face & Gesture Recognition, 2008. FG’08. 8th IEEE International Conference On. IEEE, 2008, pp. 1–6.
-  X. Zhang, L. Yin, J. F. Cohn, S. Canavan, M. Reale, A. Horowitz, P. Liu, and J. M. Girard, “Bp4d-spontaneous: a high-resolution spontaneous 3d dynamic facial expression database,” Image and Vision Computing, vol. 32, no. 10, pp. 692–706, 2014.
-  R. Zhao, F. Benitez-Quiroz, Y. Wang, Y. Liu, and A. Martinez, “Fast and precise face alignment and 3d shape reconstruction from a single 2d image,” in Proceedings of the 14th European Conference on Computer Vision Workshop, ser. ECCVW’16, 2016.