EventHPE: Event-based 3D Human Pose and Shape Estimation

08/15/2021 · by Shihao Zou, et al. · Shandong University, University of Guelph, University of Alberta

The event camera is an emerging imaging sensor that captures the dynamics of moving objects as events, which motivates our work on estimating 3D human pose and shape from event signals. Events, on the other hand, have their unique challenges: rather than capturing static body postures, event signals are best at capturing local motions. This leads us to propose a two-stage deep learning approach, called EventHPE. The first stage, FlowNet, is trained by unsupervised learning to infer optical flow from events. Both events and optical flow are closely related to human body dynamics, and they are fed as input to ShapeNet in the second stage to estimate 3D human shapes. To mitigate the discrepancy between image-based flow (optical flow) and shape-based flow (movement of human body shape vertices), a novel flow coherence loss is introduced by exploiting the fact that both flows originate from the same human motion. An in-house event-based 3D human dataset with 3D pose and shape annotations is curated, which to our knowledge is by far the largest of its kind. Empirical evaluations on the DHP19 dataset and our in-house dataset demonstrate the effectiveness of our approach.


1 Introduction

Figure 1: An illustration of our approach, which takes as input a stream of events and only the first gray-scale frame from an event camera. Our goal is to estimate 3D human pose and shape over time from the event stream as the sole data source, given the beginning body posture in the first gray-scale frame; optical flow is inferred from events as an intermediate step.

Visual human pose and shape estimation has played a critical role in computer vision, with numerous research activities over the years [18, 25]. Existing research efforts are predominantly based on images from conventional RGB or RGB-D cameras [13, 15, 14, 29, 34]. Meanwhile, the recent development of event cameras [7] offers new opportunities. Their working mechanism is a paradigm shift from these conventional frame-based cameras. Inspired by the biological vision process, event cameras [7] asynchronously measure per-pixel brightness changes, which makes them well suited to detecting and capturing local object motions. This new imaging paradigm has already stimulated a range of computer vision research activities, such as camera pose estimation [8], gesture recognition [1] and 3D reconstruction [24], as well as commercial interest spanning a range of use scenarios including robotics, augmented and virtual reality, and autonomous driving [7]. However, its potential in estimating 3D human shape is rarely explored.

DHP19 [3] is an early work that estimates only 2D poses by treating a packet of events as a static image. The recent EventCap [28] is the first to estimate 3D human shape from an event camera. However, in addition to the input events, EventCap relies on an additional stream of gray-scale images as input to establish an initial shape estimate at each time step. This motivates us to consider the problem of inferring 3D human shape with events as the major source of input: events are the sole input used to estimate 3D human shapes over time, given that the beginning shape is known or extracted from the first gray-scale frame. Fig. 1 provides an overview of our two-stage approach, called EventHPE.

Considering that the two modalities, events and optical flow, are both closely related to human motion, and that optical flow provides explicit geometric information describing human body movement, we place the inference of optical flow from events (i.e., FlowNet) in the first stage of our framework, which is trained without supervision. The inclusion of optical flow makes it feasible to estimate human pose and shape mainly from events, so we do not require a stream of gray-scale images as input in addition to the events. The second stage, denoted ShapeNet, estimates shape variations over time, given the events and the inferred optical flows as input. A novel flow coherence loss is proposed to enforce consistency between image-based flow (optical flow) and shape-based flow (movement of human body shape vertices), as both originate from the same human motion.

Our main contributions are summarized below. (1) We present an approach to a new and challenging problem: estimating 3D human parametric shape mainly from events. We propose to leverage optical flow inferred from events to relieve the reliance on a gray-scale image sequence as additional input. A novel coherence loss is also introduced to ensure consistency between image-based flow (optical flow) and shape-based flow (movement of human body shape vertices). Empirical evaluations demonstrate the superior performance of our approach against several state-of-the-art methods. (2) A home-grown dataset is introduced, referred to as the Multi-Modality Human Pose and Shape Dataset (MMHPSD); our code and dataset are available at https://github.com/JimmyZou/EventHPE. It includes 240k frames, with each frame containing 12 images from multiple imaging modalities including an event camera. To our knowledge, MMHPSD is the largest event-based 3D human pose and shape dataset, and the first publicly available dataset of its type, since the dataset of EventCap [28] is not publicly available. The multi-modality property of MMHPSD also gives it great potential to facilitate existing and new research directions.

Figure 2: An overview of our EventHPE framework, which consists of two stages. Events within each time interval are accumulated into an event packet and then aggregated into an event frame. (i) In stage one, our FlowNet infers optical flow from events: event frames are fed into a CNN model to predict the optical flow. (ii) In stage two, our ShapeNet takes in a sequence of event frames and the corresponding optical flows. A CNN module extracts vectorized feature representations, which are passed to an RNN module to infer the pose variations over each time interval. After articulating the beginning pose and shape, the body shape at each time point is subsequently estimated.

2 Related Work

Human pose and shape estimation. Human pose estimation from RGB or depth images has been extensively investigated in the past few years. Approaches prior to the deep learning era were primarily dictionary-learning based [26, 31, 5]. This was followed by the widespread use of deep learning techniques with noticeable performance gains, including, e.g., direct regression of 3D pose [19, 16] and lifting 2D pose estimates to 3D [30, 27]. Progress in shape estimation has been especially fueled by the development of SMPL [17], a statistical low-dimensional representation of the human body that enables end-to-end human shape estimation. HMR [13] is a pioneering CNN-based method that predicts human pose and shape within the SMPL model from a single RGB image. It was followed by [22, 29], which further incorporate rendered silhouettes and texture maps for improved performance. Temporal information has also been exploited by [14, 15] to infer full-body poses and shapes from videos.

Event camera and applications. As an emerging bio-inspired imaging sensor, the event camera [7] differs from conventional frame cameras in many ways. Its central concept is the event, represented as a triplet $(\mathbf{u}, t, p)$ that records a noticeable change of brightness at pixel location $\mathbf{u}$, the time of occurrence $t$, and the polarity $p$ (whether the brightness increased or decreased); an event is registered only when the brightness change exceeds a preset threshold. Rather than capturing images at a fixed frame rate, events are registered asynchronously at the per-pixel level. The stream of events is also spatially much sparser than the output of conventional frame cameras, where each image is densely packed with a full stack of per-pixel values. Hence the event camera is capable of perceiving local motions in the scene as a stream of sparse and asynchronous events. Owing to its unique advantages of high temporal resolution, low latency, high dynamic range, and low power consumption, the event camera has found applications in a growing list of computer vision tasks, including camera pose estimation [8], feature tracking [10], optical flow [33], multi-view stereo [24], gesture recognition [1], and motion deblurring [12], among others.

Meanwhile, there has been little investigation into event-camera-based estimation of articulated 3D human pose and shape. The efforts of DHP19 [3] and EventCap [28] are perhaps the most related. In DHP19 [3], a CNN model is devised to estimate 2D human pose; it unfortunately cannot output 3D pose and shape. EventCap [28] aims to capture 3D human motion, but it demands more than just event signals: a stream of gray-scale images is also a necessary part of the input. Its workflow starts with a pre-trained CNN-based pose and shape detection module that takes as input a stream of low-frequency gray-scale images; the detected results are then used as initial estimates to infill the intermediate poses and reconstruct high-frequency motion details constrained by event trajectories, following [10]; the poses are further refined using silhouette information gathered from the events. Compared with EventCap, our work uses the optical flow inferred from events to alleviate the requirement of a stream of gray-scale images over time, which makes it feasible for our method to estimate 3D human pose and shape with events as the major input data source.

3 Our Approach

The event camera provides a stream of event signals, with each event being a triplet $(\mathbf{u}, t, p)$ of pixel location, time and polarity. At the same time, the event camera typically outputs low-frame-rate gray-scale images. In our work, the stream of events is decomposed into a sequence of event packets $\{\mathcal{E}_i\}$, where an event packet $\mathcal{E}_i$ is the set of events collected from time $t_{i-1}$ to $t_i$, as shown in Fig. 2. We then divide each event packet into $C$ sequential subsets in temporal order, and each subset is aggregated into one channel of the event frame $E_i$ [9]. Thus an event frame consists of $C$ temporally ordered channels. Intuitively, this representation of temporal channels is simple yet effective for including temporal information for human pose estimation. In addition, the event camera is assumed to be static and its intrinsic parameters are known.
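To make the representation concrete, the following is a minimal sketch (not the authors' released code) of aggregating one event packet into a $C$-channel event frame; the event array layout and the signed-polarity accumulation are assumptions for illustration.

```python
# A minimal sketch of aggregating an event packet into a C-channel event frame,
# following the description in Sec. 3. Array shapes and the polarity encoding
# are assumptions for illustration.
import numpy as np

def events_to_frame(events, t_start, t_end, height, width, num_channels=4):
    """events: array of shape (N, 4) with columns (x, y, t, polarity in {-1, +1})."""
    frame = np.zeros((num_channels, height, width), dtype=np.float32)
    bin_edges = np.linspace(t_start, t_end, num_channels + 1)
    for x, y, t, p in events:
        # Assign the event to the temporal sub-interval (channel) it falls into.
        c = int(np.searchsorted(bin_edges, t, side="right")) - 1
        c = max(0, min(c, num_channels - 1))
        frame[c, int(y), int(x)] += p  # accumulate signed polarity per pixel
    return frame
```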

Human shape is represented by the SMPL model [17]. SMPL is a differentiable function $M(\theta, \beta)$ that, given shape parameters $\beta$ and pose parameters $\theta$, outputs a triangular mesh with 6,890 vertices. The shape parameters $\beta$ are the linear coefficients of a PCA shape space that mainly determines individual body features such as height, weight and body proportions; the PCA shape space is learned from a large dataset of body scans. The pose parameters $\theta$ mainly describe the articulated pose, consisting of one global rotation of the body and the relative rotations of 24 joints in axis-angle representation. The human shape is produced by first applying shape-dependent and pose-dependent deformations to the template body, then using forward kinematics to articulate the template body into the target pose, and finally deforming the surface mesh by linear blend skinning. The 3D and 2D joint positions can then be obtained by linear regression from the output mesh vertices and by projection of the 3D joints, respectively.
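For illustration, the publicly available smplx package implements the SMPL function described above; the snippet below is a usage sketch with a hypothetical model path and default parameter values, and is not part of the EventHPE pipeline.

```python
# Querying the SMPL body model with the smplx package (usage sketch; the model
# path is hypothetical). The model maps pose/shape parameters to mesh vertices
# and 3D joints, which can then be projected to 2D with the camera intrinsics.
import torch
import smplx

model = smplx.create("models/", model_type="smpl", gender="male")  # hypothetical path
betas = torch.zeros(1, 10)          # shape coefficients
global_orient = torch.zeros(1, 3)   # global body rotation (axis-angle)
body_pose = torch.zeros(1, 69)      # 23 relative joint rotations (axis-angle)
transl = torch.zeros(1, 3)          # global translation

output = model(betas=betas, global_orient=global_orient,
               body_pose=body_pose, transl=transl)
vertices = output.vertices          # (1, 6890, 3) triangular mesh vertices
joints = output.joints              # (1, J, 3) regressed 3D joints
```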

Our method, EventHPE, is summarized in Fig. 2 and consists of two stages. (i) The first stage, described in Sec. 3.1, infers optical flow from events: the event stream is first converted into a sequence of event frames, which are fed into a CNN model to predict the corresponding optical flows. (ii) The second stage, described in Sec. 3.2, estimates shapes through time: a sequence of event frames together with the corresponding optical flows is passed through a CNN model to extract vectorized feature representations, which are then fed into an RNN model to estimate the pose variations and global translation variations across each time interval. In this work, we assume that the beginning pose and shape are known. If not, they can be extracted by a pre-trained CNN-based model such as VIBE [15], similar to previous work [28]. Note that we only need the pose and shape at the beginning time, i.e., for the first gray-scale frame. Finally, the estimated shapes through time are obtained accordingly.

3.1 Unsupervised Learning of Optical Flow

Based on the observation that the two modalities, events and optical flow, are closely related to human motion in the image, we introduce optical flow inferred from events to provide more explicit geometric cues. Using the event frame as input, we train a CNN model, denoted FlowNet, to predict the optical flow. FlowNet adopts an encoder-decoder architecture and can be trained by unsupervised learning. The loss functions used to train the model, similar to [33], include a photometric loss and a smoothness loss: the photometric loss measures the pixel-intensity difference between the warped and target images, and the smoothness loss penalizes the difference between the flow at each pixel and the flows of its neighboring pixels. More details can be found in the supplementary materials.

3.2 Pose and Shape Estimation

For each time interval, the event frame and its corresponding optical flow are concatenated and fed into a CNN model to extract a vectorized feature representation. A sequence of these temporal features is passed through a GRU model to obtain the desired outputs, namely the inter-frame pose variations and global translation variations for each time interval. After articulating the beginning pose and shape, we obtain the estimated shapes sequentially.
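The PyTorch sketch below illustrates this stage. The ResNet-50 backbone and the one-layer GRU with 2048 hidden units follow the implementation details in Sec. 4, while the input channel counts and the 6D-rotation output head are our assumptions for illustration rather than the exact configuration.

```python
# A minimal sketch (assumptions, not the authors' implementation) of the ShapeNet
# stage: a ResNet-50 backbone extracts one feature per time step from the
# concatenated event frame and optical flow, and a GRU predicts per-interval
# pose variations (24 joints in 6D rotation form) and translation variations.
import torch
import torch.nn as nn
import torchvision

class ShapeNetSketch(nn.Module):
    def __init__(self, hidden_dim=2048, num_joints=24):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Event frame (4 channels) + optical flow (2 channels) -> 6 input channels.
        backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()            # keep the 2048-d pooled feature
        self.backbone = backbone
        self.gru = nn.GRU(2048, hidden_dim, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_joints * 6 + 3)  # 6D rotation deltas + translation delta

    def forward(self, event_frames, flows):
        # event_frames: (B, T, 4, H, W), flows: (B, T, 2, H, W)
        B, T = event_frames.shape[:2]
        x = torch.cat([event_frames, flows], dim=2).flatten(0, 1)   # (B*T, 6, H, W)
        feats = self.backbone(x).view(B, T, -1)                     # (B, T, 2048)
        hidden, _ = self.gru(feats)
        out = self.head(hidden)                                     # (B, T, 24*6 + 3)
        return out[..., :-3], out[..., -3:]  # pose variations, translation variations
```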

Specifically, the predicted global translation at time $t_i$ is obtained as $\hat{\gamma}_i = \hat{\gamma}_{i-1} + \Delta\hat{\gamma}_i$, which leads to the translation loss

$\mathcal{L}_{trans} = \sum_{i=1}^{T} \big\| \hat{\gamma}_i - \gamma_i \big\|_2^2, \qquad (1)$

where $\gamma_i$ is the target global translation at time $t_i$.

As for the predicted pose at time $t_i$, instead of using the 3D axis-angle representation of the relative rotations of the 24 joints as in the SMPL model, we use the 6D representation of rotations, which has been shown to perform better for human pose estimation than axis angles [32]. The predicted $k$-th relative rotation at time $t_i$ is given by

$\hat{R}_i^k = \Phi\big(\Delta\hat{\theta}_i^k\big)\,\hat{R}_{i-1}^k, \qquad (2)$

where $\Phi(\cdot)$ is the function that transforms the 6D rotation representation into a rotation matrix and $\Delta\hat{\theta}_i^k$ is the predicted 6D pose variation. Instead of the Euclidean distance, we use the geodesic distance in $SO(3)$ to measure the distance between the predicted and target poses,

$\mathcal{L}_{pose} = \sum_{i=1}^{T}\sum_{k=1}^{24} \arccos\!\left(\frac{\mathrm{tr}\big(\hat{R}_i^k (R_i^k)^{\top}\big) - 1}{2}\right). \qquad (3)$
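The 6D-to-rotation mapping of [32] and the geodesic distance in Eq. (3) can be implemented as follows; this is a sketch of the standard formulas rather than the authors' code.

```python
# Standard 6D-to-rotation-matrix map (Gram-Schmidt, following [32]) and the SO(3)
# geodesic distance used in the pose loss. Tensor shapes are assumptions.
import torch

def rot6d_to_matrix(x6d):
    """x6d: (..., 6) -> rotation matrices (..., 3, 3)."""
    a1, a2 = x6d[..., :3], x6d[..., 3:]
    b1 = torch.nn.functional.normalize(a1, dim=-1)
    b2 = torch.nn.functional.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)

def geodesic_distance(R_pred, R_target, eps=1e-6):
    """Rotation angle (radians) between batches of rotation matrices (..., 3, 3)."""
    R = R_pred @ R_target.transpose(-1, -2)
    trace = R[..., 0, 0] + R[..., 1, 1] + R[..., 2, 2]
    cos = ((trace - 1.0) / 2.0).clamp(-1.0 + eps, 1.0 - eps)
    return torch.acos(cos)
```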

We also consider the position errors of the 3D and 2D joints,

$\mathcal{L}_{3D} = \sum_{i=1}^{T}\sum_{k} \big\| \hat{J}_{i}^{k} - J_{i}^{k} \big\|_2^2, \qquad (4)$

$\mathcal{L}_{2D} = \sum_{i=1}^{T}\sum_{k} \big\| \Pi\big(\hat{J}_{i}^{k}\big) - j_{i}^{k} \big\|_2^2, \qquad (5)$

where $\hat{J}_i^k$ is the predicted 3D position of joint $k$ at time $t_i$, $J_i^k$ and $j_i^k$ are the target 3D and 2D joint positions, and $\Pi(\cdot)$ is the camera projection function.

Figure 3: An illustration of image-based and shape-based flows. Image-based flow is the optical flow; shape-based flow refers to the movement of the projected vertices of the human body shape in the image. The two flows are shown separately on color images, with red arrows indicating local flow directions. Note that the pixel-level flow directions from the two modalities should be consistent.

Finally, we propose a novel coherence loss between the two types of flows, image-based flow (optical flow) and shape-based flow (movement of human body shape vertices). Both flows originate from the same human motion, and the coherence loss enforces consistency between the 2D optical flow and the flow of the 3D shape vertices, thus serving to regularize the motion estimation problem. Specifically, the optical flow inferred from events in Sec. 3.1 is the image-based flow, as shown in Fig. 3. The shape-based flow is obtained by projecting two sequential human body shapes onto the image and computing the movement of corresponding vertices,

$\tilde{f}_i^n = \Pi\big(\hat{v}_i^n\big) - \Pi\big(\hat{v}_{i-1}^n\big), \qquad (6)$

where $\hat{v}_i^n$ denotes the $n$-th vertex of the estimated body mesh at time $t_i$. Correspondingly, the image-based flow at each shape vertex is obtained via bilinear sampling of the optical flow,

$f_i^n = \mathcal{B}\big(F_i, \Pi(\hat{v}_{i-1}^n)\big), \qquad (7)$

where $F_i$ is the optical flow predicted by FlowNet and $\mathcal{B}(\cdot)$ denotes bilinear sampling. The coherence loss is then defined as the cosine distance between the two types of flows,

$\mathcal{L}_{coh} = \sum_{i=1}^{T}\sum_{n} \left(1 - \frac{\langle f_i^n, \tilde{f}_i^n\rangle}{\|f_i^n\|_2\,\|\tilde{f}_i^n\|_2}\right), \qquad (8)$

where $n$ is the vertex index and $\langle\cdot,\cdot\rangle$ denotes the inner product.
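A possible implementation of this loss is sketched below; the tensor conventions (pixel-space vertex coordinates, flow channel layout) are assumptions rather than the authors' exact setup.

```python
# A sketch of the flow coherence loss of Eq. (8): the optical flow is bilinearly
# sampled at the projected vertex locations and compared, via cosine distance,
# with the flow of the projected mesh vertices.
import torch
import torch.nn.functional as F

def flow_coherence_loss(flow, verts_2d_prev, verts_2d_curr, eps=1e-6):
    """
    flow:          (B, 2, H, W) optical flow predicted by FlowNet
    verts_2d_prev: (B, N, 2) projected mesh vertices at the previous time step, in pixels
    verts_2d_curr: (B, N, 2) projected mesh vertices at the current time step, in pixels
    """
    B, _, H, W = flow.shape
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * verts_2d_prev[..., 0] / (W - 1) - 1.0
    gy = 2.0 * verts_2d_prev[..., 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).unsqueeze(2)                 # (B, N, 1, 2)
    sampled = F.grid_sample(flow, grid, align_corners=True)           # (B, 2, N, 1)
    image_flow = sampled.squeeze(-1).permute(0, 2, 1)                 # (B, N, 2)
    shape_flow = verts_2d_curr - verts_2d_prev                        # (B, N, 2)
    cos = F.cosine_similarity(image_flow, shape_flow, dim=-1, eps=eps)
    return (1.0 - cos).mean()
```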

In summary, we train ShapeNet by minimizing the loss

$\mathcal{L} = \lambda_{1}\mathcal{L}_{trans} + \lambda_{2}\mathcal{L}_{pose} + \lambda_{3}\mathcal{L}_{3D} + \lambda_{4}\mathcal{L}_{2D} + \lambda_{5}\mathcal{L}_{coh}, \qquad (9)$

where the $\lambda$'s are the weights of the corresponding losses.

3.3 Our MMHPSD Dataset

An in-house multi-modality dataset, MMHPSD, has been curated to facilitate empirical evaluation of our approach. It is also in response to the fact that the only comparable existing dataset, that of EventCap [28], is not publicly available.

Data Acquisition. Our multi-camera acquisition system covers 4 different imaging modalities, with one event camera, one polarization camera, and five RGB-D cameras. Specifically, the event camera is a CeleX-V with a resolution of 1280x800 [6]. Images from the frame-based cameras are all soft-synchronized with the gray-scale images of the event camera; the events between two sequential gray-scale images are then collected synchronously. 15 subjects are recruited for data acquisition, 11 male and 4 female. Each subject performs 3 groups of actions (21 different actions in total) four times, where the groups contain fast-, medium- and slow-speed actions, respectively. In total, 12 short video clips are collected from each subject, with each video having around 1,300 frames at 15 FPS. This amounts to 180 videos, with each video lasting about 1.5 minutes. The dataset contains on average around 1 million events per second. Overall, our dataset consists of 240k frames, where each frame contains a set of images: a gray-scale image, a sequence of inter-frame events, a polarization image, and five RGB and depth images. Further details regarding the dataset can be found in the supplementary.

Annotation. The SMPL shape and pose are annotated mainly based on the five RGB-D cameras, as follows. For each frame, the 2D joints in all the RGB images are detected by OpenPose [4]; the depth of each 2D joint is then obtained by warping the corresponding depth image to its RGB counterpart. After aggregating the five-view initial 3D poses by averaging, we fit the SMPL male model to the initial pose via SMPLify-X [21] to obtain the initial SMPL parameters. For more precise annotations, the initial shape is fine-tuned by fitting it to the point cloud collected from the five depth views using the L-BFGS algorithm [2], where the average distance from each shape vertex to its nearest point in the point cloud is iteratively minimized [36, 35]. Exemplar annotated human shapes can be found in the supplementary.
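For concreteness, a rough sketch of this refinement step is shown below; the helper names and the use of PyTorch's built-in L-BFGS are our assumptions, and the actual annotation pipeline may differ in detail.

```python
# A simplified sketch of shape refinement in the annotation pipeline: iteratively
# minimize the average distance from each SMPL vertex to its nearest point in the
# merged multi-view point cloud, here with PyTorch's L-BFGS optimizer.
import torch

def fit_to_point_cloud(vertices_fn, params, point_cloud, iters=20):
    """
    vertices_fn:  callable mapping SMPL parameters -> mesh vertices (V, 3)
    params:       tensor of SMPL pose/shape parameters (requires_grad=True)
    point_cloud:  (P, 3) points merged from the five depth views
    """
    optimizer = torch.optim.LBFGS([params], max_iter=iters, line_search_fn="strong_wolfe")

    def closure():
        optimizer.zero_grad()
        verts = vertices_fn(params)               # (V, 3)
        dists = torch.cdist(verts, point_cloud)   # (V, P) pairwise distances
        loss = dists.min(dim=1).values.mean()     # vertex-to-nearest-point distance
        loss.backward()
        return loss

    optimizer.step(closure)
    return params
```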

Dataset Comparison. We compare our dataset with two existing event-based human motion datasets in Tab. 1. Our dataset has the largest number of frames and events. Although it has fewer subjects and sequences than DHP19, our dataset is multi-modal and accurately provides both SMPL pose and shape annotations, which gives it great potential in other research scenarios.

Dataset Seq#/Sub# Frame# Pose Shape MM
DHP19 [3] 33/17 87k Yes No No
EventCap [28] 2/6 - No No No
MMHPSD (ours) 12/15 240k Yes Yes Yes
Table 1: A tally of existing event-based human motion datasets, compared in terms of number of action sequences (Seq#) per subject and number of subjects (Sub#), number of frames (Frame#), availability of annotated poses (Pose) and shapes (Shape), and multi-modality (MM).

4 Experiments

Dataset   Models                    Input  MPJPE  PA-MPJPE  PEL-MPJPE  PCKh@0.5  PVE
DHP19     DHP19 [3]                 E      80.08  74.55     131.73     0.80      -
DHP19     DHP19 + Flow              E+F    76.76  71.68     130.37     0.82      -
MMHPSD    HMR [13]                  G      -      64.78     95.32      0.61      -
MMHPSD    VIBE [15]                 V      -      50.86     73.10      0.76      -
MMHPSD    EventCap(HMR) [28]        E+G    -      62.62     89.95      0.64      -
MMHPSD    EventCap(VIBE) [28]       E+G    -      50.35     71.85      0.77      -
MMHPSD    DHP19 [3]                 E      72.42  65.87     74.04      0.81      -
MMHPSD    EventHPE(HMR)             E+F    -      53.72     77.80      0.71      -
MMHPSD    EventHPE(VIBE)            E+F    -      48.87     69.58      0.79      -
MMHPSD    EventHPE                  E+F    71.79  43.90     54.96      0.85      53.90
Ablation  EventHPE(w/o flow)        E      80.99  49.43     60.90      0.82      59.77
Ablation  EventHPE(w/o flow loss)   E+F    78.48  47.36     57.09      0.83      56.58
Ablation  EventHPE(w/o geodesic)    E+F    77.29  49.02     60.55      0.83      59.84
Ablation  EventHPE(w/o joints)      E+F    73.79  44.59     55.91      0.84      54.73
Table 2: Quantitative evaluations on the DHP19 and MMHPSD datasets, and ablation studies on the MMHPSD dataset. The input sources are events (E), optical flow (F), gray-scale images (G) and video (V). Joint errors are in millimeters; PCKh@0.5 is a proportion.
Figure 4: A sampled sequence of event frames and the corresponding optical flows is shown in the first row; the second row shows each estimated shape together with two alternative views.

In this section, we first describe the implementation details for training, and explain the evaluation metrics reported. Then, we compare our method with several event-based, frame-based and video-based approaches. Finally, we present the ablation studies to show the effectiveness of individual components in our method.

Implementation Details. During training, we take the inter-frame events as one event packet and transform the packet into a 4-channel event frame, with each channel aggregating events over about 15 milliseconds. We also tried 1, 2 and 8 channels and empirically found that 4 channels gives better results. Event frames are resized to a fixed resolution before being fed to the networks. Note that at test time we do not have such a constraint: the temporal length of an event packet can be dynamic, depending on the rate of generated events, since fast motion generates events much faster than slow motion. ResNet-50 [11] is used as the CNN backbone, and a one-layer GRU with a hidden dimension of 2048 is used as the RNN in ShapeNet. The loss weights, the sequence length used in ShapeNet training and testing (roughly one second of events), the batch size and the learning-rate schedules are set empirically, with the ShapeNet learning rate decayed during training. Models are trained on a single RTX 2080Ti.

Evaluation. Similar to previous works [15, 13], we report five metrics: mean per-joint position error (MPJPE), Procrustes-aligned MPJPE (PA-MPJPE), pelvis-aligned MPJPE (PEL-MPJPE), percentage of correct key-points (PCKh@0.5), and per-vertex error (PVE). PA-MPJPE compares predicted and target joints after rotation and translation alignment, while PEL-MPJPE compares them after only translation alignment of the two pelvis joints. For PCKh@0.5, a joint whose distance error after pelvis alignment is less than 50% of the head-bone length is counted as a correct key-point. PVE averages the distance error over all vertices of the SMPL mesh.
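These metrics follow standard definitions; the sketch below (with assumed array conventions) shows how MPJPE and PA-MPJPE can be computed for a single frame.

```python
# Standard MPJPE and Procrustes-aligned MPJPE (sketch; array conventions assumed).
import numpy as np

def mpjpe(pred, target):
    """Mean per-joint position error; pred/target: (J, 3) in millimeters."""
    return np.linalg.norm(pred - target, axis=-1).mean()

def pa_mpjpe(pred, target):
    """MPJPE after Procrustes alignment (rotation, scale, translation) of pred to target."""
    mu_p, mu_t = pred.mean(0), target.mean(0)
    X, Y = pred - mu_p, target - mu_t
    U, S, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # fix a possible reflection so R is a proper rotation
        U[:, -1] *= -1
        S[-1] *= -1
        R = U @ Vt
    scale = S.sum() / (X ** 2).sum()
    aligned = scale * X @ R + mu_t
    return np.linalg.norm(aligned - target, axis=-1).mean()
```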

4.1 Empirical Results

Figure 5: Qualitative comparison to the baselines. Our method is able to rectify the pose and shape estimates through time even if the beginning pose and shape given by HMR or VIBE is incorrect. Note that our method only requires pose and shape detection on the first gray-scale frame, while EventCap requires it on a sequence of images.

The DHP19 dataset [3] only provides multi-view event streams and motion-capture joint data, without gray-scale images or SMPL shape annotations, so we use it to demonstrate the effect of optical flow in event-based pose estimation. We denote the method presented in [3] as DHP19, and DHP19 + Flow means the input consists of both event frames and optical flows predicted by our FlowNet trained on the MMHPSD dataset. The quantitative results in the first two rows of Tab. 2 show that the joint position error in terms of MPJPE and PA-MPJPE decreases by more than 3mm, while PEL-MPJPE does not show as large an improvement. The reason is that only 2D joints are detected rather than a whole body shape, so pelvis translation alignment may introduce larger distance errors for the other joints. PCKh also shows a consistent increase after optical flow is used as an additional input source. These results on DHP19 support our proposal of using optical flow to extract more explicit geometric information from events for event-based pose estimation.

MMHPSD dataset provides a variety of data sources and well-aligned pose and shape annotations, which enables us to compare our event-based method with various baselines. Quantitative results are reported in Tab. 2 and qualitative results are shown in Fig. 5.

HMR [13] is used as a frame-based baseline, and VIBE [15] is applied as a video-based baseline. Note that HMR or VIBE predicts a weak camera model without global translation, and both of them use the neutral SMPL model, which is different from the male model used in MMHPSD. Therefore, the quantitative evaluations of MPJPE and PVE will not be reported.

Another three categories of event-based baselines are described as follows. DHP19 [3] is a 2D pose estimation method from events; we assume the ground-truth depths of the detected 2D joints are known so that it can be compared in 3D. EventCap(HMR) or EventCap(VIBE) means that HMR or VIBE is used in EventCap [28] to detect pose and shape on a sequence of gray-scale images as initial values. Since the authors have not released their code and evaluation dataset, we re-implement EventCap using the PyTorch L-BFGS optimizer [2, 20] and the PyTorch3D differentiable renderer [23]. As for our method, EventHPE(HMR) or EventHPE(VIBE) means HMR or VIBE is used to detect the beginning pose and shape on the first gray-scale frame, corresponding to the two EventCap variants for comparison. EventHPE denotes the case where the ground-truth beginning pose and shape are used.

Since DHP19 uses the ground-truth depths of the detected 2D joints to obtain 3D pose, we compare it with EventHPE for a fair comparison. The quantitative results show that our method reduces the PA-MPJPE and PEL-MPJPE joint errors by more than 20mm, while the MPJPE errors differ by less than 1mm. This can be attributed to the fact that our method predicts a whole body shape with its topology as a constraint, whereas DHP19 detects individual joints independently. Therefore, our method gives much lower joint errors after alignment.

Figure 6: Qualitative results of ablation studies. Optical flow and geodesic distance are both important in inferring accurate shapes.

For a fair comparison of EventCap with our EventHPE, we can look at the quantitative results of EventCap(HMR) vs. EventHPE(HMR) and EventCap(VIBE) vs. EventHPE(VIBE). Note that our method only requires the extraction of pose and shape on the first gray-scale frame as a prior, while EventCap requires the extraction on all the gray-scale images over time. Though the accuracy of the priors affects the performance of both methods, there are still noticeable improvements in joint errors and PCK. In the case of HMR, EventCap(HMR) shows about a 2mm reduction in PA-MPJPE, a 5mm reduction in PEL-MPJPE and a 0.03 gain in PCK over HMR, while our EventHPE(HMR) yields improvements of about 11mm, 17mm and 0.1 respectively, more than triple those of EventCap(HMR). A similar trend is observed in the case of VIBE: EventHPE(VIBE) improves PA-MPJPE by about 2mm, PEL-MPJPE by 3.5mm and PCK by 0.03 over VIBE, whereas EventCap(VIBE) improves them by only 0.5mm, 1.5mm and 0.01 respectively. Although the improvements in the VIBE case are smaller than in the HMR case, which may be because HMR leaves more room for improvement than VIBE, the joint errors and PCK of both EventCap and EventHPE in the VIBE case are consistently better than in the HMR case. These observations show that the priors affect the overall performance of both methods, but our method is more robust than EventCap. The reason can be attributed to our setting, where events are the only source of input given the beginning pose and shape: events and the optical flow predicted from them provide a better chance to rectify future predictions when an incorrect beginning pose is given. In contrast, EventCap is constrained by the possibly incorrect initial estimates over the whole sequence of gray-scale images.

The qualitative results in Fig. 5 also demonstrate the effectiveness of our method. Eight examples are sampled from two test sequences and displayed in Fig. 5. We can see that even if the beginning pose and shape estimated by HMR or VIBE, displayed in the first row of each sequence, is not accurate or does not align well with the subject in the image, EventHPE can still rectify the following predictions and produce well-aligned poses and shapes given the events and the corresponding inferred optical flow. EventCap, however, can only slightly adjust the estimates, as it relies on the HMR or VIBE estimates for every gray-scale image in the sequence, which constrains the pose adjustments to the possibly incorrect ones across time.

4.2 Ablation Study

In this section, we conduct ablation studies to evaluate the individual components of our method. EventHPE(w/o flow) means neither optical flow nor the flow coherence loss is used; EventHPE(w/o flow loss) means optical flow is used as input but the flow coherence loss is not; EventHPE(w/o geodesic) means the Euclidean distance between poses is used during training instead of the geodesic distance; and EventHPE(w/o joints) means the joint supervision is not used in training. The quantitative and qualitative results are shown in Tab. 2 and Fig. 6. Comparing these four ablation models with EventHPE, a key observation is that the joint errors and shape vertex error increase by about 6mm for the models without optical flow or without geodesic distance, by 3-4mm for the model without the flow coherence loss, and by only 1-2mm for the model without the joint losses. The qualitative results in Fig. 6 further demonstrate that optical flow and the geodesic distance play important roles in our method. The models without optical flow or without geodesic distance show worse alignment of the human body with the underlying gray-scale images, which indicates that the geodesic distance is better than the Euclidean distance for measuring pose distance in $SO(3)$, and that optical flow with the flow coherence loss provides more explicit geometric information for human shape estimation from events.

5 Conclusion and Outlook

In this paper, we present an approach to estimating 3D human shape from an event camera. Empirical evaluations demonstrate the applicability and effectiveness of our method. A potential limitation of our work is that the beginning pose and shape must be provided or detected from the first gray-scale frame. For future work, we will focus on addressing the problem of inferring 3D shapes solely from event signals.

Acknowledgement

We thank all the volunteers who contributed to the dataset, and thank Shuang Wu and Wei Ji for their constructive advice. This work is supported by the University of Alberta Start-up Grant, the UAHJIC grants, and NSERC Discovery Grant No. RGPIN-2019-04575.

References

  • [1] A. Amir, B. Taba, D. Berg, T. Melano, J. McKinstry, C. Di Nolfo, T. Nayak, A. Andreopoulos, G. Garreau, M. Mendoza, et al. (2017) A low power, fully event-based gesture recognition system. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7243–7252. Cited by: §1, §2.
  • [2] R. Bollapragada, J. Nocedal, D. Mudigere, H. Shi, and P. T. P. Tang (2018) A progressive batching L-BFGS method for machine learning. In International Conference on Machine Learning, pp. 620–629. Cited by: §3.3, §4.1.
  • [3] E. Calabrese, G. Taverni, C. Awai Easthope, S. Skriabine, F. Corradi, L. Longinotti, K. Eng, and T. Delbruck (2019) Dhp19: dynamic vision sensor 3d human pose dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §1, §2, Table 1, §4.1, §4.1, Table 2.
  • [4] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh (2019) OpenPose: realtime multi-person 2d pose estimation using part affinity fields. IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (1), pp. 172–186. Cited by: §3.3.
  • [5] C. Chen and D. Ramanan (2017) 3d human pose estimation = 2d pose estimation + matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7035–7043. Cited by: §2.
  • [6] S. Chen and M. Guo (2019-06) Live demonstration: celex-v: a 1m pixel multi-mode event-based sensor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: Appendix B, §3.3.
  • [7] G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, and D. Scaramuzza (2020) Event-based vision: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (), pp. 1–30. Cited by: §1, §2.
  • [8] G. Gallego, J. E. Lund, E. Mueggler, H. Rebecq, T. Delbruck, and D. Scaramuzza (2017) Event-based, 6-dof camera tracking from photometric depth maps. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (10), pp. 2402–2412. Cited by: §1, §2.
  • [9] D. Gehrig, A. Loquercio, K. G. Derpanis, and D. Scaramuzza (2019) End-to-end learning of representations for asynchronous event-based data. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5633–5643. Cited by: §3.
  • [10] D. Gehrig, H. Rebecq, G. Gallego, and D. Scaramuzza (2018) Asynchronous, photometric feature tracking using events and frames. In European Conference on Computer Vision, pp. 750–765. Cited by: §2, §2.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §4.
  • [12] Z. Jiang, Y. Zhang, D. Zou, J. Ren, J. Lv, and Y. Liu (2020) Learning event-based motion deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3320–3329. Cited by: §2.
  • [13] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik (2018) End-to-end recovery of human shape and pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2, §4.1, Table 2, §4.
  • [14] A. Kanazawa, J. Y. Zhang, P. Felsen, and J. Malik (2019) Learning 3d human dynamics from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2.
  • [15] M. Kocabas, N. Athanasiou, and M. J. Black (2020-06) VIBE: video inference for human body pose and shape estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2, §3, §4.1, Table 2, §4.
  • [16] S. Li, W. Zhang, and A. B. Chan (2015) Maximum-margin structured learning with deep networks for 3d human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2848–2856. Cited by: §2.
  • [17] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black (2015) SMPL: a skinned multi-person linear model. ACM transactions on graphics (TOG) 34 (6), pp. 1–16. Cited by: §2, §3.
  • [18] T. Moeslund and E. Granum (2001) A survey of computer vision-based human motion capture. Computer Vision and Image Understanding 81 (3), pp. 231–268. Cited by: §1.
  • [19] S. Park, J. Hwang, and N. Kwak (2016) 3D human pose estimation using convolutional neural networks with 2D pose information. In European Conference on Computer Vision, pp. 156–169. Cited by: §2.
  • [20] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035. Cited by: §4.1.
  • [21] G. Pavlakos, V. Choutas, N. Ghorbani, T. Bolkart, A. A. Osman, D. Tzionas, and M. J. Black (2019) Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10975–10985. Cited by: §3.3.
  • [22] G. Pavlakos, L. Zhu, X. Zhou, and K. Daniilidis (2018) Learning to estimate 3d human pose and shape from a single color image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 459–468. Cited by: §2.
  • [23] N. Ravi, J. Reizenstein, D. Novotny, T. Gordon, W. Lo, J. Johnson, and G. Gkioxari (2020) Accelerating 3d deep learning with pytorch3d. arXiv:2007.08501. Cited by: §4.1.
  • [24] H. Rebecq, G. Gallego, E. Mueggler, and D. Scaramuzza (2018) EMVS: event-based multi-view stereo—3d reconstruction with an event camera in real-time. International Journal of Computer Vision 126 (12), pp. 1394–1414. Cited by: §1, §2.
  • [25] N. Sarafianos, B. Boteanu, B. Ionescu, and I. Kakadiaris (2016) 3D human pose estimation: a review of the literature and analysis of covariates. Computer Vision and Image Understanding 152, pp. 1–20. Cited by: §1.
  • [26] C. Wang, Y. Wang, Z. Lin, A. L. Yuille, and W. Gao (2014) Robust estimation of 3d human poses from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2361–2368. Cited by: §2.
  • [27] K. Wang, L. Lin, C. Jiang, C. Qian, and P. Wei (2020) 3D human pose machines with self-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (5), pp. 1069–1082. Cited by: §2.
  • [28] L. Xu, W. Xu, V. Golyanik, M. Habermann, L. Fang, and C. Theobalt (2020) EventCap: monocular 3d capture of high-speed human motions using an event camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4968–4978. Cited by: §1, §1, §2, §3.3, Table 1, §3, §4.1, Table 2.
  • [29] Y. Xu, S. Zhu, and T. Tung (2019) Denserac: joint 3d pose and shape estimation by dense render-and-compare. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7760–7770. Cited by: §1, §2.
  • [30] R. Zhao, Y. Wang, and A. M. Martinez (2017) A simple, fast and highly-accurate algorithm to recover 3d shape from 2d landmarks on a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (12), pp. 3059–3066. Cited by: §2.
  • [31] X. Zhou, M. Zhu, S. Leonardos, K. G. Derpanis, and K. Daniilidis (2016) Sparseness meets deepness: 3d human pose estimation from monocular video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4966–4975. Cited by: §2.
  • [32] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li (2019) On the continuity of rotation representations in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5745–5753. Cited by: §3.2.
  • [33] A. Zhu, L. Yuan, K. Chaney, and K. Daniilidis (2018-06) EV-flownet: self-supervised optical flow estimation for event-based cameras. In Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsylvania. External Links: Document Cited by: Appendix A, §2, §3.1.
  • [34] H. Zhu, X. Zuo, H. Yang, S. Wang, X. Cao, and R. Yang (2021) Detailed avatar recovery from single image. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §1.
  • [35] X. Zuo, S. Wang, M. Gong, and L. Cheng (2021) Unsupervised 3d human mesh recovery from noisy point clouds. arXiv preprint arXiv:2107.07539. Cited by: §3.3.
  • [36] X. Zuo, S. Wang, J. Zheng, W. Yu, M. Gong, R. Yang, and L. Cheng (2020) Sparsefusion: dynamic human avatar modeling from sparse rgbd images. IEEE Transactions on Multimedia 23, pp. 1617–1629. Cited by: §3.3.

Appendix A Unsupervised Learning of Optical Flow

FlowNet can be trained by unsupervised learning via a warping loss between two sequential gray-scale images $I_{i-1}$ and $I_i$. The loss functions used to train the model, similar to [33], include a photometric loss and a smoothness loss.

Given the predicted optical flow $F_i$, the warped image $\tilde{I}_{i-1}$ is obtained by warping the second image $I_i$ back to the first image via bilinear sampling guided by the flow. The photometric loss measures the difference between $\tilde{I}_{i-1}$ and $I_{i-1}$,

$\mathcal{L}_{photo} = \sum_{\mathbf{u}} \rho\big(\tilde{I}_{i-1}(\mathbf{u}) - I_{i-1}(\mathbf{u})\big), \qquad (10)$

where $\rho(x) = (x^2 + \epsilon^2)^{\alpha}$ is the Charbonnier loss function, which is more robust than the absolute difference. The smoothness loss constrains the output flow by minimizing the difference between the flow at each pixel and the flows of its neighboring pixels,

$\mathcal{L}_{smooth} = \sum_{\mathbf{u}} \sum_{\mathbf{v}\in\mathcal{N}(\mathbf{u})} \rho\big(F_i(\mathbf{u}) - F_i(\mathbf{v})\big), \qquad (11)$

where $\mathcal{N}(\mathbf{u})$ denotes the neighbors of pixel $\mathbf{u}$.

To summarize, FlowNet is trained by minimizing the loss

$\mathcal{L}_{flow} = \mathcal{L}_{photo} + \lambda\,\mathcal{L}_{smooth}, \qquad (12)$

where $\lambda$ weights the smoothness term.
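A compact sketch of these losses is given below; the Charbonnier parameters, the smoothness weight and the tensor conventions are placeholders, not the values used in the paper.

```python
# Sketch of the unsupervised FlowNet losses of Eqs. (10)-(12): warp the second image
# with the predicted flow, apply a Charbonnier photometric loss, and add a smoothness
# term over neighboring flow vectors.
import torch
import torch.nn.functional as F

def charbonnier(x, eps=1e-3, alpha=0.45):
    return (x * x + eps * eps) ** alpha

def unsupervised_flow_loss(flow, img_prev, img_next, smooth_weight=0.5):
    """flow: (B, 2, H, W); img_prev/img_next: (B, 1, H, W) gray-scale images."""
    B, _, H, W = flow.shape
    # Build a sampling grid: the warped image looks up img_next at (u + flow(u)).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float().to(flow.device)       # (H, W, 2)
    coords = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)              # (B, H, W, 2)
    gx = 2.0 * coords[..., 0] / (W - 1) - 1.0                          # normalize to [-1, 1]
    gy = 2.0 * coords[..., 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)
    warped = F.grid_sample(img_next, grid, align_corners=True)
    photo = charbonnier(warped - img_prev).mean()
    # Smoothness: penalize differences between neighboring flow vectors.
    smooth = charbonnier(flow[:, :, 1:, :] - flow[:, :, :-1, :]).mean() + \
             charbonnier(flow[:, :, :, 1:] - flow[:, :, :, :-1]).mean()
    return photo + smooth_weight * smooth
```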

Appendix B MMHPSD Dataset Details

Data Acquisition. Our multi-camera acquisition system has 12 cameras covering 4 different imaging modalities: one event camera, one polarization camera, and five RGB-D cameras. All the frame-based cameras are soft-synchronized with the gray-scale images of the event camera, and the events between two sequential gray-scale images are collected synchronously. 15 subjects are recruited for data acquisition, of whom 11 are male and 4 are female. Each subject is required to perform 3 groups of actions (21 different actions in total, as listed in Tab. 3) four times, where the groups contain fast-, medium- and slow-speed actions, respectively.

Finally, we collect 12 short videos for each subject, each with around 1,300 frames at 15 FPS, i.e., 180 videos in total with each video lasting about 1.5 minutes. We annotate each video and manually check whether the annotated shapes align well with the multi-view images, discarding unsatisfactory annotations. Details on the number of frames and the number of annotated frames per subject are presented in Tab. 4. The event camera used is a CeleX-V [6] with a resolution of 1280x800 and a sensor frequency of 20-70 MHz. Its MIPI interface supports up to a 2.4 Gbps transfer rate, while the parallel interface supports a maximum readout of 140M pixels/second. The average event rate over the dataset is around 1 million events per second. Fig. 7 presents the layout of our multi-camera system and three annotated shapes as examples. Overall, our dataset consists of 240k frames, with each frame including a gray-scale image and the inter-frame events, a polarization image, and five-view color and depth images.

group speed actions
1 medium jumping, jogging, waving hands, kicking legs, walk
2 fast boxing, javelin, fast running, shooting basketball, kicking football, playing tennis, playing badminton
3 slow warming up elbow/wrist/ankle/pectoral, lifting dumbbell, squatting down, drinking water
Table 3: Types of actions in each group. Subjects are required to do each group of actions for 4 times. The order of actions each time is random.
subject  gender  # of original frames  # of annotated frames  # of discarded frames
1 male 15911 15911 0 (0.0%)
2 male 15803 15803 0 (0.0%)
3 male 16071 16071 0 (0.0%)
4 male 16168 16152 16 (0.01%)
5 male 16278 16262 16 (0.01%)
6 male 16715 16384 331 (2.0%)
7 female 16091 16091 0 (0.0%)
8 male 16257 15642 715 (4.4%)
9 male 15467 15461 6 (0.03%)
10 male 16655 16655 0 (0.0%)
11 male 16464 16443 21 (0.13%)
12 male 16186 16186 0 (0.0%)
13 female 16064 14562 1502 (9.4%)
14 female 15726 15166 560 (3.6%)
15 female 14193 14075 118 (0.8%)
total - 240049 236764 3285 (1.4%)
Table 4: Number of frames for each subject and the number of frames with annotated SMPL pose and shape.
Figure 7: Layout of the multi-camera acquisition system and three examples of annotated shapes rendered on multi-view images. The top-left figure shows the layout, and the other three figures present three examples of pose and shape annotation.