Chirality Nets for Human Pose Regression

10/31/2019 ∙ by Raymond A. Yeh, et al.

We propose Chirality Nets, a family of deep nets that is equivariant to the "chirality transform," i.e., the transformation to create a chiral pair. Through parameter sharing, odd and even symmetry, we propose and prove variants of standard building blocks of deep nets that satisfy the equivariance property, including fully connected layers, convolutional layers, batch-normalization, and LSTM/GRU cells. The proposed layers lead to a more data efficient representation and a reduction in computation by exploiting symmetry. We evaluate chirality nets on the task of human pose regression, which naturally exploits the left/right mirroring of the human body. We study three pose regression tasks: 3D pose estimation from video, 2D pose forecasting, and skeleton based activity recognition. Our approach achieves/matches state-of-the-art results, with more significant gains on small datasets and limited-data settings.


1 Introduction

Human pose regression tasks such as human pose estimation, human pose forecasting and skeleton based action recognition, have numerous applications in video understanding, security and human-computer interaction. For instance, collaborative virtual reality applications rely on accurate pose estimation for which significant advances have been reported in recent years.

Specifically, recent state-of-the-art approaches use supervised learning to address pose regression and employ deep nets. Input and output of those nets depend on the task: inputs are typically 2D or 3D human pose key-points stacked into a vector; the output may represent human pose key-points for pose estimation or a classification probability for activity recognition. To improve accuracy on those tasks, a variety of deep net architectures have been proposed Martinez et al. (2017a); Chao et al. (2017); Hossain and Little (2018); Lee et al. (2018); Pavllo et al. (2019); Si et al. (2018), generally relying on common deep net building blocks such as fully connected, convolutional or recurrent layers. Unlike for image datasets, to enlarge the size of human pose datasets, a reflection (left-right flipping) of the pose coordinates, as illustrated in step (1) of Figure 1, is not sufficient. The chirality of the human pose additionally requires switching the labeling of left and right, as illustrated in step (2) of Figure 1.

However, while this two-step data augmentation is conceptually easy to employ during training, we argue that even better accuracy is possible for human pose regression tasks if this pose symmetry is directly built into the deep net. In particular, if confronted with either of the poses illustrated on the left or right hand side of Figure 1, the output of a deep net should be equivariant to the transformation, i.e., the output is also transformed in a "predefined way." For example, if the network's output is also a human pose, the output pose should follow the same transformation. On the other hand, for an activity recognition task, the output probability should remain unchanged. The equivariant map for pose estimation is illustrated in Figure 2, and we make the equivariance property more precise later.

To encode this form of equivariance for human pose regression tasks, we propose "chirality nets." Specifically, the output of a chirality net is guaranteed to be equivariant to a transformation composed of reflections and label switching. To build chirality nets, we develop chirality equivariant versions of commonly used layers. Specifically, we design and prove equivariance for versions of fully connected, convolutional, batch-normalization, dropout, and LSTM/GRU layers, as well as element-wise non-linearities such as tanh or soft-sign. The main common design principle for chirality equivariant layers is odd and even symmetric sharing of model parameters. Hence, in addition to being equivariant, transforming a typical deep net into its chiral counterpart results in a reduction of the number of trainable parameters and lower computational complexity due to the symmetry in the model weights. We find that a smaller number of trainable parameters reduces the sample complexity, i.e., the models need less training data.

We demonstrate the generality and effectiveness of our approach on three pose regression tasks over four datasets: 3D pose estimation on the Human3.6M Ionescu et al. (2014) and HumanEva-I Sigal et al. (2010) datasets, 2D pose forecasting on the Penn Action dataset Zhang et al. (2013), and skeleton-based action recognition on the Kinetics-400 dataset Kay et al. (2017). Our approach achieves state-of-the-art results with guarantees on equivariance, a lower number of parameters, and robustness in low-resource settings.

Figure 1: Illustration of the chirality transform. The transformation includes two operations: (1) a reflection of the pose, i.e., a negation of the x-coordinates; and (2) a switch of the left/right joint labeling. The order of the two operations is interchangeable.

2 Related Work

First we briefly review invariance and equivariance in machine learning and computer vision as well as human pose regression tasks.

Invariant and equivariant representation. Hand-crafted invariant and equivariant representations have been utilized widely in computer vision systems for decades, e.g., the scale invariance of SIFT Lowe et al. (1999), the orientation invariance of HOG Dalal and Triggs (2005), the affine invariance of the Harris detector Mikolajczyk and Schmid (2004), and shift-invariant systems in image processing Vetterli et al. (2014).

These properties have also been adapted to learned representations. A widely known property is the translation equivariance of convolutional neural nets (CNNs) LeCun et al. (1999): through spatial or temporal parameter sharing, a shifted input leads to a shifted output. Group-equivariant CNNs extend the equivariance to rotation, mirror reflection and translation Cohen and Welling (2016) by replacing the shift operation with a more general set of transformations. Other ways of building equivariance into deep nets have also been proposed, e.g., the Symmetric Network Gens and Domingos (2014), the Harmonic Network Worrall et al. (2017) and the Spherical CNN Cohen et al. (2018).

The aforementioned works focus on deep nets whose inputs are images. While related, they are not directly applicable to human pose. For example, a reflection with respect to the y-axis in the image domain corresponds to a permutation of the pixel locations, i.e., swapping the pixel intensity between each pixel and its reflected counterpart. In contrast, for human pose, where the input is a vector representing the human joints' spatial coordinates, a reflection corresponds to a negation of the value of each joint's reflected dimension.

The input representation of deep nets for human pose is more similar to point sets. Prior work has explored building permutation equivariant deep nets, i.e., nets where any permutation of input elements results in the same permutation of output elements Zaheer et al. (2017); Qi et al. (2017). Both works utilize parameter sharing to achieve permutation equivariance. Following these works, graph nets generalize the family of permutation equivariant networks and demonstrate success on numerous applications Scarselli et al. (2009); Kipf and Welling (2017); Hamilton et al. (2017); Gilmer et al. (2017); Battaglia et al. (2018); Kipf et al. (2018); Yeh et al. (2019); Liu* et al. (2019).

For human pose, equivariance to all permutations is too strong a property. Recall, our aim is to build models equivariant to the chiral symmetry, which only involves one specific permutation, i.e., the switch between left and right joints, shown in step (2) of Figure 1.

Most relevant to our approach is work by Ravanbakhsh et al. (2017), who explore which types of equivariance can be achieved through parameter sharing. Their approach captures the specific permutation in the chirality transform, but does not capture the negation from the reflection, shown in step (1) of Figure 1. In contrast, our approach considers both operations (1) and (2) jointly, which leads to a different formulation. Lastly, to the best of our knowledge, Ravanbakhsh et al. (2017) only discuss the construction of equivariant networks theoretically. In this work, we design and implement a variety of building blocks for deep nets and demonstrate the benefits on a wide range of practical human pose regression tasks.

Human pose applications. For 3D pose estimation from images, recent approaches utilize a two-step approach: (1) 2D pose keypoints are predicted given a video; (2) 3D keypoints are estimated given the 2D joint locations. The 2D to 3D estimation is formulated as a regression task via deep nets Pavlakos et al. ; Tekin et al. (2017); Martinez et al. (2017b); Sun et al. (2017); Fang et al. (2018); Pavlakos et al. (2018); Yang et al. (2018); Luvizon et al. (2018); Hossain and Little (2018); Lee et al. (2018); Pavllo et al. (2019). Capturing the temporal information is crucial and has been explored in 3D pose estimation Hossain and Little (2018); Lee et al. (2018) as well as in action recognition Tran et al. (2018); Hussein et al. (2019), video segmentation Hu et al. (2017, 2018) and learning object dynamics Martinez et al. (2017a); Minderer et al. (2019). Most recently, Pavllo et al. (2019) propose to use temporal convolutions to better capture the temporal information for 3D pose estimation, improving over previous RNN based methods. They also perform train- and test-time augmentation based on the chirality transform. For test-time augmentation, they compute the output for both the original input and the transformed input, transform the latter output back to the original pose, and use the average of the two as the final prediction; our approach requires no such second forward pass. To carefully assess the benefits of chirality nets, in this work, we closely follow the experimental setup of Pavllo et al. (2019).

For 2D keypoint forecasting, we follow the setup of standard temporal modeling: conditioning on past observations to predict the future. To improve temporal modeling, recent works have utilized different sequence-to-sequence models for this task Martinez et al. (2017a); Chao et al. (2017); Chiu et al. (2019). In this work, we closely follow the experimental setup of Chiu et al. (2019).

For action recognition, skeleton based methods have been explored extensively in recent years Yan et al. (2018); Zhang et al. (2018); Li et al. (2018); Si et al. (2018) due to their robustness to illumination changes and cluttered backgrounds. Here we closely follow the experimental setup of Yan et al. (2018).


Figure 2: Illustration of chirality equivariance for the task of 2D to 3D pose estimation.

3 Chirality Nets

In the following we first provide the problem formulation for human pose regression, before defining chirality nets, equivariance and the chirality transform. Subsequently we discuss how to develop the typical layers, such as the fully connected layer and the convolution, which make up chirality nets.

The PyTorch implementation and unit tests of the proposed layers are part of the supplementary material. We have also included a short Jupyter notebook demo to illustrate the key concepts.

3.1 Problem Formulation

Chirality nets can be applied to regression tasks on coordinates of joints for human pose related tasks, i.e., the input corresponds to 2D or 3D coordinates of human joints. For readability, we introduce the input and output representations for a single frame. Note that for our experiments we generalize chirality nets to multiple frames by introducing a time dimension.

We let $x \in \mathbb{R}^{|J^{in}|\cdot|D^{in}|}$ denote the chirality net input, where $J^{in}$ is the set of all joints and $D^{in}$ is the dimension index set for an input coordinate. For example, $D^{in} = \{x, y\}$ and $|D^{in}| = 2$ for 2D input joint coordinates. Similarly, we let $y \in \mathbb{R}^{|J^{out}|\cdot|D^{out}|}$ refer to the chirality net output. Note that the dimension of the spatial coordinates at the input and output may be different, e.g., for prediction from 2D to 3D. Also, the number of joints may differ, e.g., when mapping between different key-point sets.

For human pose regression, the task is to learn the parameters $\theta$ of a model $F_\theta: \mathbb{R}^{|J^{in}|\cdot|D^{in}|} \to \mathbb{R}^{|J^{out}|\cdot|D^{out}|}$ by minimizing a loss function
$$\min_\theta \; \sum_{(x, y^{gt}) \in \mathcal{D}} \ell\big(F_\theta(x), y^{gt}\big)$$
over the training dataset $\mathcal{D}$. Hereby, the sample loss $\ell$ compares the prediction $F_\theta(x)$ to the ground-truth $y^{gt}$.

3.2 Chirality Nets, Chirality Equivariance, and Chirality Transforms

Chirality nets exhibit chirality equivariance, i.e., their output is transformed in a "predefined manner" given that the chirality transform is applied at the input. Note that the input and output dimensions $|J^{in}|\cdot|D^{in}|$ and $|J^{out}|\cdot|D^{out}|$ may differ. To define chirality equivariance, we hence need to consider a pair of transformations, one for the input data, $T^{in}$, and one for the output data, $T^{out}$. The corresponding equivariance map is illustrated in Figure 2 for the task of 2D to 3D pose estimation. Formally, we say a function $F_\theta$ is chirality equivariant if
$$F_\theta(T^{in}(x)) = T^{out}(F_\theta(x)) \quad \forall x \in \mathbb{R}^{|J^{in}|\cdot|D^{in}|}.$$

To define the chirality transform on the input data, i.e., $T^{in}$, we split the set of joints $J^{in}$ into ordered tuples $J^{in}_l$, $J^{in}_r$, and $J^{in}_c$, each denoting left, right and center joints of the input. Importantly, these tuples are sorted such that corresponding left/right joints are at corresponding positions in the tuple. We also split the dimension index set $D^{in}$ into $D^{in}_{neg}$ and $D^{in}_{pos}$, indicating the coordinates to negate and not to negate, respectively.

For readability and without loss of generality, assume the dimensions of the input follow the order of $J^{in}_l$, $J^{in}_r$, $J^{in}_c$, i.e., $x = [x_l, x_r, x_c]$. Within each vector, we place the coordinates in the set $D^{in}_{neg}$ before the remaining ones in $D^{in}_{pos}$.

Given this construction of the input $x$, the reflection illustrated in step (1) of Figure 1 is a matrix multiplication with a diagonal matrix $T^{in}_{neg}$, defined as follows:
$$T^{in}_{neg} = I_{|J^{in}|} \otimes \mathrm{diag}\big([-\mathbf{1}_{|D^{in}_{neg}|},\, \mathbf{1}_{|D^{in}_{pos}|}]\big),$$
where $\mathbf{1}_k$ indicates a vector of ones of length $k$ and $\otimes$ denotes the Kronecker product.

The switch operation illustrated in step (2) of Figure 1 is a matrix multiplication with a permutation matrix $T^{in}_{swi}$ of dimension $|J^{in}|\cdot|D^{in}| \times |J^{in}|\cdot|D^{in}|$, defined as follows:
$$T^{in}_{swi} = \begin{bmatrix} 0 & I_{|J^{in}_l|\cdot|D^{in}|} & 0 \\ I_{|J^{in}_l|\cdot|D^{in}|} & 0 & 0 \\ 0 & 0 & I_{|J^{in}_c|\cdot|D^{in}|} \end{bmatrix},$$
where $I_k$ denotes an identity matrix of size $k \times k$ (note $|J^{in}_l| = |J^{in}_r|$).

Given those matrices, the chirality transform of the input is obtained via $T^{in}(x) = T^{in}_{neg} T^{in}_{swi}\, x$. The chirality transform of the output, $T^{out}$, is defined analogously, replacing "in" with "out".
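As an illustration, the transform above can be sketched for a toy skeleton with one left, one right, and one center joint, coordinates $(x, y)$, and $D_{neg} = \{x\}$. The joint layout and values below are illustrative assumptions, not the paper's actual skeleton definition.

```python
# Toy sketch of the chirality transform T = T_neg . T_swi.
# Vector layout (an assumption for this demo): [lx, ly, rx, ry, cx, cy].

def t_swi(v):
    """Switch the left and right joint blocks; each joint owns 2 dims (x, y)."""
    l, r, c = v[0:2], v[2:4], v[4:6]
    return r + l + c

def t_neg(v):
    """Negate every x-coordinate (reflection about the vertical axis)."""
    return [-u if i % 2 == 0 else u for i, u in enumerate(v)]

def chirality_transform(v):
    # The order of the two operations is interchangeable.
    return t_neg(t_swi(v))

pose = [1.0, 2.0, -3.0, 4.0, 0.5, 0.0]
mirrored = chirality_transform(pose)
print(mirrored)  # [3.0, 4.0, -1.0, 2.0, -0.5, 0.0]

# Applying the transform twice recovers the original pose (it is an involution).
assert chirality_transform(mirrored) == pose
```

The involution check mirrors the fact that $T_{neg}$ and $T_{swi}$ are their own inverses and commute.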

In the following, we introduce layers that satisfy the chirality equivariance property. This enables us to construct a chirality net $F_\theta$, as the composition of equivariant layers remains equivariant. Note that chirality equivariance can be specified separately for every deep net layer, which provides additional flexibility.

3.3 Chirality Layers

Fully connected layer. A fully connected layer performs the mapping $y = Wx + b$. We achieve equivariance through parameter sharing and odd symmetry. Omitting center joints for readability, and with the ordering introduced above (left before right joints, negated before non-negated dimensions), the weight matrix and bias take the block form
$$W = \begin{bmatrix} W^{ln}_{ln} & W^{ln}_{lp} & W^{ln}_{rn} & W^{ln}_{rp} \\ W^{lp}_{ln} & W^{lp}_{lp} & W^{lp}_{rn} & W^{lp}_{rp} \\ W^{ln}_{rn} & -W^{ln}_{rp} & W^{ln}_{ln} & -W^{ln}_{lp} \\ -W^{lp}_{rn} & W^{lp}_{rp} & -W^{lp}_{ln} & W^{lp}_{lp} \end{bmatrix}, \qquad b = \begin{bmatrix} b_n \\ b_p \\ -b_n \\ b_p \end{bmatrix}.$$
Each sub-matrix is named by the output (superscript) and input (subscript) block of its first occurrence in the top rows; the bottom rows reuse the top rows' parameters, up to sign. For example, $W^{ln}_{rp}$ computes the output's left ($l$) joints' negated ($n$) dimensions from the input's right ($r$) joints' non-negated, i.e., positive ($p$), dimensions; it is a matrix of dimension $|J^{out}_l|\cdot|D^{out}_{neg}| \times |J^{in}_r|\cdot|D^{in}_{pos}|$. We refer to this layer as the chiral fully connected layer.
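A minimal numerical check of this sharing pattern, assuming one left and one right joint with one negated and one non-negated dimension each and no center joints. The scalar weight values are arbitrary; only the sharing pattern matters.

```python
# Sketch: chiral fully connected layer on the layout
# [left_neg, left_pos, right_neg, right_pos]. Weight values are arbitrary.
import random

random.seed(0)
a, b, c, d, e, f, g, h = [random.uniform(-1, 1) for _ in range(8)]
b1, b2 = random.uniform(-1, 1), random.uniform(-1, 1)

W = [
    [ a,  b,  c,  d],   # output left, negated dim
    [ e,  f,  g,  h],   # output left, non-negated dim
    [ c, -d,  a, -b],   # output right, negated dim (shared, odd symmetry)
    [-g,  h, -e,  f],   # output right, non-negated dim (shared)
]
bias = [b1, b2, -b1, b2]  # the bias must itself satisfy T_out(bias) = bias

def fc(x):
    return [sum(W[i][j] * x[j] for j in range(4)) + bias[i] for i in range(4)]

def chirality_transform(v):
    s = v[2:4] + v[0:2]              # swap left/right joint blocks
    return [-s[0], s[1], -s[2], s[3]]  # negate the negated dims

x = [0.3, -1.2, 0.7, 2.5]
lhs = fc(chirality_transform(x))     # f(T_in(x))
rhs = chirality_transform(fc(x))     # T_out(f(x))
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```

The assertion holds for any weight values, since the sign pattern, not the magnitudes, enforces equivariance.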

1D convolution layers Waibel et al. (1995); LeCun et al. (1999). Pose symmetric 1D convolution layers can be based on fully connected layers: a 1D convolution is a fully connected layer with parameters shared across the time dimension, i.e., at each time step the computation is a sum of fully connected layers over a temporal window. Consequently, we enforce equivariance at each time step by employing the symmetry pattern of the chiral fully connected layer at each time slice.
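This construction can be sketched for a kernel of size 2: each kernel tap is a weight matrix with the chiral sharing pattern, and the transform applied per frame commutes with the convolution. The per-frame layout and weight values below are illustrative assumptions.

```python
# Sketch of a chiral 1D convolution (kernel size 2) as a time-shared chiral
# fully connected layer. Per-frame layout: [left_neg, left_pos, right_neg, right_pos].
import random

random.seed(2)

def chiral_weight():
    """Sample a 4x4 weight matrix obeying the chiral sharing pattern."""
    a, b, c, d, e, f, g, h = [random.uniform(-1, 1) for _ in range(8)]
    return [[a, b, c, d], [e, f, g, h], [c, -d, a, -b], [-g, h, -e, f]]

W = [chiral_weight(), chiral_weight()]   # one matrix per kernel tap

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def conv1d(frames):
    """Valid convolution over time: y_t = W[0] x_t + W[1] x_{t+1}."""
    return [[u + v for u, v in zip(matvec(W[0], frames[t]), matvec(W[1], frames[t + 1]))]
            for t in range(len(frames) - 1)]

def chirality_transform(v):
    s = v[2:4] + v[0:2]
    return [-s[0], s[1], -s[2], s[3]]

frames = [[0.1, 0.4, -0.2, 0.3], [1.0, -0.5, 0.6, 0.2], [0.0, 0.7, -0.9, 0.8]]
lhs = conv1d([chirality_transform(fr) for fr in frames])
rhs = [chirality_transform(y) for y in conv1d(frames)]
assert all(abs(u - v) < 1e-12 for rl, rr in zip(lhs, rhs) for u, v in zip(rl, rr))
```

Because each tap is equivariant and the transform is linear, the sum over the window, and hence every output time step, is equivariant as well.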

Element-wise nonlinearities. Nonlinearities are applied element-wise and do not contain parameters. These operations maintain the input dimension; therefore, $T^{in}$ and $T^{out}$ are identical. A nonlinearity that is an odd function, i.e., $f(-x) = -f(x)$, such as tanh, hardtanh, or soft-sign, satisfies the equivariance property, as the following shows:
$$T^{out}(f(x)) = T^{out}_{neg} T^{out}_{swi} f(x) \overset{\text{elementwise } f}{=} T^{out}_{neg} f(T^{out}_{swi} x) \overset{\text{odd } f}{=} f(T^{out}_{neg} T^{out}_{swi} x) = f(T^{in}(x)) \quad \forall x \in \mathbb{R}^{|J^{in}|\cdot|D^{in}|}.$$

LSTM and GRU layers Hochreiter and Schmidhuber (1997); Cho et al. (2014).

LSTM and GRU modules which satisfy chirality can be obtained from fully connected layers.

However, naïvely setting all matrix multiplies within an LSTM to satisfy the equivariance property will not lead to an equivariant LSTM, because gates are element-wise multiplied with the cell state: if both the gate and the cell preserve the negation, then their product will not. Therefore, we change the weight sharing scheme for the gates. We set $D^{out}_{neg}$ for the gates to be the empty set, i.e., the gates are invariant to the negation $T_{neg}$ at the input, but still equivariant to the switch operation $T_{swi}$. With this setup, the product of the gates and the cell's output preserves the sign, as the gates are invariant to negation and passed through a sigmoid to be within the range $(0, 1)$. GRU modules are modified in the same manner.
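A toy check of this gate design: with $D^{out}_{neg}$ empty for the gate, its output only swaps between left and right under the chirality transform, with no sign change, so the elementwise product with an equivariant cell state stays equivariant. The layout and all weight values below are illustrative assumptions.

```python
# Sketch of the gate weight sharing for chiral LSTM/GRU cells.
# Input layout: [left_neg, left_pos, right_neg, right_pos]; one gate value per joint.
import math
import random

random.seed(1)
w1, w2, w3, w4 = [random.uniform(-1, 1) for _ in range(4)]

# Odd sharing on the negated input dims, even sharing on the rest.
Wg = [
    [ w1, w2,  w3, w4],   # left gate
    [-w3, w4, -w1, w2],   # right gate (shared parameters)
]

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def gate(x):
    return [sigmoid(sum(Wg[i][j] * x[j] for j in range(4))) for i in range(2)]

def chirality_transform(v):
    s = v[2:4] + v[0:2]
    return [-s[0], s[1], -s[2], s[3]]

x = [0.2, 1.5, -0.6, 0.8]
g_t, g = gate(chirality_transform(x)), gate(x)
# Gate values swap between left and right but keep their sign.
assert abs(g_t[0] - g[1]) < 1e-12 and abs(g_t[1] - g[0]) < 1e-12
```

Since the gates carry no negation, the sign of the gated cell output is determined solely by the (equivariant) cell candidate, which is exactly what the construction above requires.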

Batch-normalization Ioffe and Szegedy (2015). A batch normalization layer performs an element-wise standardization, followed by an element-wise affine layer (with learnable parameters $\gamma$ and $\beta$). For $\gamma$ and $\beta$, we follow the principle applied to fully connected layers. Equivariance for the mean and standard deviation is obtained by computing them on the "augmented batch," i.e., the batch containing each pose and its chiral pair, and by keeping track of its running average.

Dropout Srivastava et al. (2014). At test time, dropout scales the input by a constant factor determined by the dropout probability. The equivariance property is satisfied because scalar multiplication commutes with the chirality transform.

3.4 Reduction in model parameters, FLOPS, and training/test details

Model parameters. Our model shares parameters between dimensions representing the left and right joints. Per layer, this roughly halves the number of free parameters: each weight block for the right joints is tied, up to sign, to a block for the left joints, and the same holds for the output dimensions.

FLOPS. Chirality nets also have lower FLOPS. Due to the symmetry, instead of multiplying and adding each of the elements independently, we add the symmetric values first before applying a single multiplication per symmetric pair. Concretely, consider a shared weight $w$, inputs $x_1, x_2$, and the partial inner product $w x_1 + w x_2$. We exploit symmetry and instead compute $w(x_1 + x_2)$, which removes one multiplication operation. This is a common speed-up trick used in symmetric FIR filters Note, Altera Application (1998); Yeh et al. (2016). The number of multiplications is roughly halved. Additionally, baseline models utilize test-time augmentation, which requires two forward passes through the network for each input, whereas the proposed nets only use a single forward pass.
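The multiplication-saving trick can be sketched in two lines; the weight and input values are illustrative.

```python
# Symmetric-FIR style speed-up: when two inputs share one weight,
# add first, then multiply once.
w = 0.75
x1, x2 = 1.25, -2.5

naive = w * x1 + w * x2    # two multiplications
shared = w * (x1 + x2)     # one multiplication, same result
assert abs(naive - shared) < 1e-12
```

Applied across every tied weight pair in a chiral layer, this halves the multiplications for those blocks.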

Training and test details. During training it is important to apply the chirality transform for data augmentation, i.e., with 50% probability we apply $T^{in}$ and $T^{out}$ to input and label, respectively. This ensures that the mini-batch statistics match our assumption on chirality, i.e., that the two poses forming a chiral pair are both valid, which is important for the batch-normalization layer. Moreover, during training we use a standard dropout layer. While we could impose dropped units to be chirally equivariant, we found this leads to over-fitting in practice. This is expected, as imposing chirality on the added noise reduces the randomness. Importantly, during testing no data augmentation is performed and a single forward pass is sufficient to obtain an 'averaged' result.
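The 50% augmentation step can be sketched as follows. The `augment` helper and the shared toy layout `[left_neg, left_pos, right_neg, right_pos]` for both input and label are assumptions for this demo; the key point is that input and label are flipped jointly, never independently.

```python
# Hedged sketch of the 50% chirality data augmentation used during training.
import random

def chirality_transform(v):
    s = v[2:4] + v[0:2]              # swap left/right joint blocks
    return [-s[0], s[1], -s[2], s[3]]  # negate the negated dims

def augment(x, y, rng):
    """With 50% probability, apply the transform to input AND label together."""
    if rng.random() < 0.5:
        return chirality_transform(x), chirality_transform(y)
    return x, y

rng = random.Random(0)
x, y = [0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0, 4.0]
for _ in range(4):
    xa, ya = augment(x, y, rng)
    # Either both are flipped or neither is.
    assert (xa == x) == (ya == y)
```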

Figure 3: Illustration of pose regression tasks: (a) 2D to 3D pose estimation; (b) 2D pose forecasting; and (c) skeleton-based action recognition.

4 Experiments

We evaluate our approach on a variety of tasks, including 2D to 3D pose estimation, 2D pose forecasting, and skeleton based action recognition. For each task, we describe the dataset, metric, and implementation before discussing the results.

Approach Dir. Disc. Eat Greet Phone Photo Pose Purch. Sit SitD. Smoke Wait WalkD. Walk WalkT. Avg
Pavlakos Pavlakos et al. (2018) (CVPR‘18) 48.5 54.4 54.4 52.0 59.4 65.3 49.9 52.9 65.8 71.1 56.6 52.9 60.9 44.7 47.8 56.2
Yang Yang et al. (2018) (CVPR‘18) 51.5 58.9 50.4 57.0 62.1 65.4 49.8 52.7 69.2 85.2 57.4 58.4 43.6 60.1 47.7 58.6
Luvizon Luvizon et al. (2018) (CVPR‘18) () 49.2 51.6 47.6 50.5 51.8 60.3 48.5 51.7 61.5 70.9 53.7 48.9 57.9 44.4 48.9 53.2
Hossain Hossain and Little (2018) (ECCV‘18)(, ) 48.4 50.7 57.2 55.2 63.1 72.6 53.0 51.7 66.1 80.9 59.0 57.3 62.4 46.6 49.6 58.3
Lee Lee et al. (2018) (ECCV‘18)(, ) 40.2 49.2 47.8 52.6 50.1 75.0 50.2 43.0 55.8 73.9 54.1 55.6 58.2 43.3 43.3 52.8
Pavllo Pavllo et al. (2019) (CVPR‘19) 47.1 50.6 49.0 51.8 53.6 61.4 49.4 47.4 59.3 67.4 52.4 49.5 55.3 39.5 42.7 51.8
Pavllo Pavllo et al. (2019) (CVPR‘19)() 45.9 47.5 44.3 46.4 50.0 56.9 45.6 44.6 58.8 66.8 47.9 44.7 49.7 33.1 34.0 47.7
Pavllo Pavllo et al. (2019) (CVPR‘19)(, ) 45.2 46.7 43.3 45.6 48.1 55.1 44.6 44.3 57.3 65.8 47.1 44.0 49.0 32.8 33.9 46.8
Ours, single-frame 47.4 49.9 47.4 51.1 53.8 61.2 48.3 45.9 60.4 67.1 52.0 48.6 54.6 40.1 43.0 51.4
Ours () 44.8 46.1 43.3 46.4 49.0 55.2 44.6 44.0 58.3 62.7 47.1 43.9 48.6 32.7 33.3 46.7
Table 1: Results on the Human3.6M dataset: reconstruction error using Protocol 1 (MPJPE) in mm. The best result is boldface and the second best is underlined. Markers next to each approach indicate temporal models, use of the ground-truth bounding box, and test-time augmentation.
Approach: Walk (S1, S2, S3) / Jog (S1, S2, S3) / Box (S1, S2, S3) / Avg.
Pavlakos Pavlakos et al.: 22.3 19.5 29.7 / 28.9 21.9 23.8 / - - - / -
Pavlakos Pavlakos et al. (2018): 18.8 12.7 29.2 / 23.5 15.4 14.5 / - - - / -
Lee Lee et al. (2018): 18.6 19.9 30.5 / 25.7 16.8 17.7 / 42.8 48.1 53.4 / -
Pavllo Pavllo et al. (2019): 14.1 10.4 46.8 / 21.1 13.3 14.0 / 23.8 34.5 32.3 / 31.1
Pavllo Pavllo et al. (2019) (): 13.9 10.2 46.6 / 20.9 13.1 13.8 / 23.8 33.7 32.0 / 30.8
Ours: 15.2 10.3 47.0 / 21.8 13.1 13.7 / 22.8 31.8 31.0 / 30.6
Table 2: Results on HumanEva-I for multi-action (MA) models reported in Protocol 2 (P-MPJPE); lower is better. () indicates test-time augmentation.
Figure 4: Comparisons between our approach and Pavllo et al. (2019) in limited data settings evaluated using Protocol 1 on Human3.6M.

4.1 2D to 3D pose estimation

Task. 3D human pose estimation can be decoupled into the tasks of 2D keypoint detection and 2D to 3D pose estimation. We focus on the latter task, i.e., given a sequence of 2D keypoints, the task is to estimate the corresponding 3D human pose. See Figure 3 (a) for an illustration.

Dataset and metric. We evaluate on two standard datasets, Human3.6M Ionescu et al. (2014) and HumanEva-I Sigal et al. (2010). Human3.6M is a large-scale dataset of human motion with 3.6 million video frames. The dataset consists of 11 subjects performing 15 different actions. Following prior work Pavlakos et al. ; Tekin et al. (2017); Martinez et al. (2017b); Sun et al. (2017); Luvizon et al. (2018); Pavllo et al. (2019), each human pose is represented by a 17-joint skeleton. We use the same train and test subject splits. HumanEva-I is a smaller dataset consisting of four subjects and six actions. To be consistent with prior work Pavlakos et al. (2018); Lee et al. (2018); Pavllo et al. (2019), we use the same train and test splits, evaluated over the actions walk, jog, and box. For both datasets, we consider the setting where we train one model for all actions.

We report the two standard metrics used in prior work: Protocol 1 (MPJPE), the mean per-joint position error between the prediction and the ground-truth Martinez et al. (2017b); Pavlakos et al. ; Pavllo et al. (2019), and Protocol 2 (P-MPJPE), the error after rigid alignment between the prediction and the ground-truth Martinez et al. (2017b); Sun et al. (2017); Hossain and Little (2018); Pavllo et al. (2019).

Implementation details. Our model follows the supervised training procedure and network design of Pavllo et al. (2019). Our network is the identical temporal convolutional architecture, where each layer is replaced with its chiral version, i.e., 1D dilated convolution, batch-normalization, and dropout layers. We also replace ReLU non-linearities with Tanh to achieve equivariance. No additional architectural changes were made. For Human3.6M, we use 2D keypoints extracted with CPN Chen et al. (2018) using Mask R-CNN He et al. (2017) bounding boxes, released by Pavllo et al. (2019). For HumanEva-I, we use the 2D keypoint detections from Mask R-CNN released by Pavllo et al. (2019).

Results. In Table 1, we report the performance on Human3.6M using Protocol 1 (MPJPE). Our approach outperforms the state-of-the-art Pavllo et al. (2019), which uses test-time augmentation, by 0.1 mm in overall average and achieves the best results in eight out of fifteen sub-categories. For the single-frame models, we observe a more significant reduction in error of 0.4 mm over Pavllo et al. (2019) with test-time augmentation. Additionally, when comparing without test-time augmentation, our approach outperforms by 1 mm. We note that test-time augmentation as employed by Pavllo et al. (2019) involves running the network twice for each input. In contrast, our approach only requires a single forward pass.

Next, on the HumanEva-I dataset, we also observe an improvement using Protocol 1. On average, our approach achieves a 32.2 mm error. This is a 0.8 mm decrease over the current state-of-the-art of 33.0 mm Pavllo et al. (2019), and a 1.1 mm decrease over the 33.3 mm of Pavllo et al. (2019) without test-time augmentation.

We also performed an evaluation using Protocol 2 (P-MPJPE). On Human3.6M we observe that our approach performs worse than Pavllo et al. (2019) by 0.3 mm. We note that the loss function is chosen to optimize Protocol 1; our models therefore perform better at what they are optimized for. In Table 2, we report the performance on HumanEva-I using Protocol 2 (P-MPJPE). Our model achieves a 0.2 mm reduction in error over Pavllo et al. (2019) on average. Most of the gain is obtained on the boxing action, possibly due to the symmetric nature of the movement.

Limited data settings. A benefit of fewer model parameters is the potential to obtain better models with less data. To confirm this, we perform experiments varying the amount of training data, from 0.1% of subject 1 (S1) up to the three subjects S1, S5, S6. The results, compared to Pavllo et al. (2019), are shown in Figure 4. We observe that our approach consistently outperforms Pavllo et al. (2019) in these low-resource settings, except at S1 0.1%. For the reported numbers, we use a batch size of 64, and all other hyper-parameters are identical between the models. If we further decrease the batch size to 32 for S1 0.1%, our approach improves to 100.4 mm while Pavllo et al. (2019) improves to 102.3 mm.

Prediction Steps Avg.
Approach 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 -
Residual Martinez et al. (2017a) (CVPR‘17) 82.4 68.3 58.5 50.9 44.7 40.0 36.4 33.4 31.3 29.5 28.3 27.3 26.4 25.7 25.0 24.5 39.5
3D-PFNet Chao et al. (2017)(CVPR‘17) 79.2 60.0 49.0 43.9 41.5 40.3 39.8 39.7 40.1 40.5 41.1 41.6 42.3 42.9 43.2 43.3 45.5
TP-RNN Chiu et al. (2019) (WACV‘19) 84.5 72.0 64.8 60.3 57.2 55.0 53.4 52.1 50.9 50.0 49.3 48.7 48.3 47.9 47.6 47.3 55.6
Baseline w/o aug. 87.3 75.7 68.5 64.0 61.0 59.1 57.6 56.3 55.4 54.9 54.5 54.5 54.4 54.5 54.6 54.7 60.4
Baseline w/ aug. 86.9 75.2 67.9 63.5 60.4 58.4 57.0 55.8 55.1 54.5 54.1 54.0 53.9 53.9 54.0 54.0 59.9
Baseline w/ aug.() 87.0 75.5 68.4 64.1 61.0 59.1 57.5 56.3 55.5 55.0 54.7 54.7 54.6 54.7 54.7 54.7 60.5
Ours 87.5 77.0 68.7 64.2 61.2 59.2 57.6 56.5 55.7 55.1 54.7 54.6 54.4 54.5 54.5 54.5 60.6
Table 3: Results on the Penn Action dataset; performance reported in terms of PCK@0.05 (higher is better). () indicates test-time augmentation.

4.2 2D pose forecasting

Task. 2D pose forecasting is the pose regression task of predicting future human pose, represented as 2D keypoints, given present and past human pose. See Figure 3 (b) for an illustration.

Dataset and metric. We evaluate on the Penn Action dataset Zhang et al. (2013). The dataset consists of 2236 videos with 15 actions. Each frame is annotated with 2D keypoints of 13 human joints. We use the same train and test split as Chao et al. (2017); Chiu et al. (2019). Following Chiu et al. (2019), we consider the initial velocity as part of the input, and a single model is used for all actions. For a fair comparison with prior work, we report the 'Percentage of Correct Keypoints' metric with a 0.05 threshold (PCK@0.05), which assesses the accuracy of the predicted keypoints. A predicted keypoint is considered correct if it is within a 0.05 radius of the ground-truth under normalized distance.

Implementation details. Our non-chiral-equivariant baseline model is a sequence-to-sequence model based on Martinez et al. (2017a). We made several modifications to match the hyperparameters in Chiu et al. (2019), e.g., we used a StackedRNN Pascanu et al. (2014) with 2 layers and added dropout layers. Additionally, we utilize teacher forcing Williams and Zipser (1989) during training, while prior work did not. We find this stabilizes training and enables the use of the Adam Kingma and Ba (2015); Reddi et al. (2018) optimizer without diverging. We performed data augmentation via the chirality transform, i.e., with 0.5 probability we apply $T^{in}$ and $T^{out}$ to the input and the ground-truth correspondingly. For our pose symmetric model, we replaced all the non-symmetric layers, i.e., fully connected layers and LSTM cells, with their corresponding chiral versions.

Results. In Table 3, we report the performance of our models and the state-of-the-art. The baseline model without augmentation already outperforms the state-of-the-art Chiu et al. (2019). The gain comes from the use of the Stacked-LSTM and teacher forcing during training. With additional train- and test-time data augmentation, our baseline model further improves. In addition, our pose symmetric model outperforms the baseline in terms of average PCK@0.05. We observe more significant improvements for the first ten prediction steps.

Approach Top-1 Top-5
Feature Encoding Fernando et al. (2015) 14.9% 25.8%
Deep LSTM Shahroudy et al. (2016) 16.4% 35.3%
Temporal-Conv Kim and Reiter (2017) 20.3% 40.0%
ST-GCN Yan et al. (2018) 30.7% 52.8%
Ours-Conv 30.8% 52.6%
Ours-Conv-Chiral 30.9% 53.0%
Table 4: Results of the skeleton based action recognition baselines on the Kinetics-400 dataset Kay et al. (2017) reported in Top-1 and Top-5 accuracy.

4.3 Skeleton based action recognition

Task. Skeleton based action recognition aims at predicting human actions from skeleton sequences. See Figure 3 (c) for an illustration.

Dataset and metric. We use the Kinetics-400 dataset Kay et al. (2017) in our experiments. The dataset contains 400 action classes and 306,245 clips in total. Following the experimental setup of Yan et al. (2018), we use OpenPose Cao et al. (2018) to locate the 18 human body joints. Each joint is represented as a triplet $(x, y, c)$, where $x$ and $y$ are the 2D coordinates of the joint and $c$ is the confidence score given by OpenPose. Following Kay et al. (2017), we report the Top-1 and Top-5 classification accuracy.

Implementation details. Our baseline model, ‘Ours-Conv,’ follows ‘Temporal-Conv’ Kim and Reiter (2017), modified to have not only temporal convolution but also spatial convolution. The spatial convolution captures intra-frame information while the temporal convolution captures inter-frame information. For the recognition task we need chiral invariance, i.e., a chiral pair should be classified as the same action class. To this end, we use a chiral invariance layer, where we let the left and right joint index sets, as well as the set of negated dimensions, be empty: the layer's output has no left or right joints, only center joints, and no dimension is negated after applying the chirality transform. Recall that the chirality transform exchanges the left and right joints and negates the dimensions in the negation index set. Given that all three sets are empty, the output is trivially chiral invariant. For the chiral invariant model, ‘Ours-Conv-Chiral,’ we replace all the non-symmetric layers before the chiral invariance layer with their corresponding chiral equivariant versions. All the layers after the chiral invariance layer remain identical to the ‘Ours-Conv’ model. There are in total 10 layers of spatial and temporal convolution, and we place the chiral invariance layer at the fourth layer. We use the SGD optimizer with momentum, as in Yan et al. (2018).
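As a concrete reference, the chirality transform just described (swap corresponding left/right joints, negate a subset of coordinate dimensions) can be sketched in a few lines of NumPy. The joint layout and index sets below are illustrative toy values, not the ones used in our experiments:

```python
import numpy as np

def chirality_transform(x, left, right, neg_dims):
    """Apply the chirality transform to a pose.

    x:        array of shape (num_joints, dims_per_joint)
    left:     indices of left joints
    right:    indices of the corresponding right joints
    neg_dims: per-joint coordinate dimensions to negate (e.g., the x-axis)
    """
    y = x.copy()
    y[left], y[right] = x[right], x[left]   # exchange left/right joints
    y[:, neg_dims] = -y[:, neg_dims]        # negate the mirrored coordinates
    return y

# Toy 4-joint 2D pose: joints 0/1 form a left/right pair, 2/3 are center joints.
pose = np.array([[1.0, 2.0], [-1.0, 2.0], [0.2, 0.5], [0.0, 1.0]])
mirrored = chirality_transform(pose, left=[0], right=[1], neg_dims=[0])
```

Applying the transform twice recovers the original pose, i.e., the transform is an involution.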

Results. In tab:action_results, we report the action recognition performance of our model and of skeleton-based approaches. We observe that the baseline model ‘Ours-Conv’ performs on par with ST-GCN Yan et al. (2018), and the chiral invariant model, ‘Ours-Conv-Chiral,’ outperforms both ST-GCN and Ours-Conv in top-1 and top-5 accuracy, achieving state-of-the-art performance on the Kinetics-400 dataset among skeleton based action recognition methods.

5 Conclusion

We introduce chirality equivariance for pose regression tasks and develop deep net layers that satisfy this property. Through parameter sharing and odd/even symmetry, we design equivariant versions of commonly used layers in deep nets, including fully connected, 1D convolution, LSTM/GRU cells, and batch normalization layers. With these equivariant layers at hand, we build Chirality Nets, which guarantee equivariance from the input to the output. Our models naturally lead to a reduction in trainable parameters and computation due to symmetry. Our experimental results on three human pose regression tasks over four datasets demonstrate state-of-the-art performance and the wide practical impact of the proposed layers.

Acknowledgments: This work is supported in part by NSF under Grant No. 1718221 and MRI #1725729, UIUC, Samsung, 3M, Cisco Systems Inc. (Gift Award CG 1377144) and Adobe. We thank NVIDIA for providing GPUs used for this work and Cisco for access to the Arcetri cluster. RY is supported by a Google PhD Fellowship.

References

  • Battaglia et al. (2018) P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
  • Cao et al. (2018) Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh. OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. In arXiv preprint arXiv:1812.08008, 2018.
  • Chao et al. (2017) Y.-W. Chao, J. Yang, B. Price, S. Cohen, and J. Deng. Forecasting human dynamics from static images. In Proc. CVPR, 2017.
  • Chen et al. (2018) Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun. Cascaded pyramid network for multi-person pose estimation. In Proc. CVPR, 2018.
  • Chiu et al. (2019) H.-k. Chiu, E. Adeli, B. Wang, D.-A. Huang, and J. C. Niebles. Action-agnostic human pose forecasting. In Proc. WACV, 2019.
  • Cho et al. (2014) K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. EMNLP, 2014.
  • Cohen and Welling (2016) T. Cohen and M. Welling. Group equivariant convolutional networks. In Proc. ICML, 2016.
  • Cohen et al. (2018) T. S. Cohen, M. Geiger, J. Köhler, and M. Welling. Spherical CNNs. In Proc. ICLR, 2018.
  • Dalal and Triggs (2005) N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. CVPR, 2005.
  • Fang et al. (2018) H.-S. Fang, Y. Xu, W. Wang, X. Liu, and S.-C. Zhu. Learning pose grammar to encode human body configuration for 3D pose estimation. In Proc. AAAI, 2018.
  • Fernando et al. (2015) B. Fernando, E. Gavves, J. M. Oramas, A. Ghodrati, and T. Tuytelaars. Modeling video evolution for action recognition. In Proc. CVPR, 2015.
  • Gens and Domingos (2014) R. Gens and P. M. Domingos. Deep symmetry networks. In Proc. NeurIPS, 2014.
  • Gilmer et al. (2017) J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In Proc. ICML, 2017.
  • Hamilton et al. (2017) W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
  • He et al. (2017) K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Proc. ICCV, 2017.
  • Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997.
  • Hossain and Little (2018) M. R. I. Hossain and J. J. Little. Exploiting temporal information for 3D human pose estimation. In Proc. ECCV, 2018.
  • Hu et al. (2017) Y.-T. Hu, J.-B. Huang, and A. G. Schwing. MaskRNN: Instance Level Video Object Segmentation. In Proc. NeurIPS, 2017.
  • Hu et al. (2018) Y.-T. Hu, J.-B. Huang, and A. G. Schwing. VideoMatch: Matching based Video Object Segmentation. In Proc. ECCV, 2018.
  • Hussein et al. (2019) N. Hussein, E. Gavves, and A. W. Smeulders. Timeception for complex action recognition. In Proc. CVPR, 2019.
  • Ioffe and Szegedy (2015) S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, 2015.
  • Ionescu et al. (2014) C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3D human sensing in natural environments. PAMI, 2014.
  • Kay et al. (2017) W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
  • Kim and Reiter (2017) T. S. Kim and A. Reiter. Interpretable 3D human action analysis with temporal convolutional networks. In Proc. CVPRW, 2017.
  • Kingma and Ba (2015) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proc. ICLR, 2015.
  • Kipf et al. (2018) T. Kipf, E. Fetaya, K.-C. Wang, M. Welling, and R. Zemel. Neural relational inference for interacting systems. In Proc. ICML, 2018.
  • Kipf and Welling (2017) T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proc. ICLR, 2017.
  • LeCun et al. (1999) Y. LeCun, P. Haffner, L. Bottou, and Y. Bengio. Object recognition with gradient-based learning. In Shape, contour and grouping in computer vision. 1999.
  • Lee et al. (2018) K. Lee, I. Lee, and S. Lee. Propagating lstm: 3D pose estimation based on joint interdependency. In Proc. ECCV, 2018.
  • Li et al. (2018) C. Li, Q. Zhong, D. Xie, and S. Pu. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. In Proc. IJCAI, 2018.
  • Liu* et al. (2019) I.-J. Liu*, R. A. Yeh*, and A. G. Schwing. PIC: Permutation invariant critic for multi-agent deep reinforcement learning. In Proc. CORL, 2019. *Equal contribution.
  • Lowe et al. (1999) D. G. Lowe et al. Object recognition from local scale-invariant features. In Proc. ICCV, 1999.
  • Luvizon et al. (2018) D. C. Luvizon, D. Picard, and H. Tabia. 2D/3D pose estimation and action recognition using multitask deep learning. In Proc. CVPR, 2018.
  • Martinez et al. (2017a) J. Martinez, M. J. Black, and J. Romero. On human motion prediction using recurrent neural networks. In Proc. CVPR, 2017a.
  • Martinez et al. (2017b) J. Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3D human pose estimation. In Proc. ICCV, 2017b.
  • Mikolajczyk and Schmid (2004) K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. IJCV, 2004.
  • Minderer et al. (2019) M. Minderer, C. Sun, R. Villegas, F. Cole, K. Murphy, and H. Lee. Unsupervised learning of object structure and dynamics from videos. In Proc. NeurIPS, 2019.
  • Altera Application Note (1998) Implementing FIR filters in FLEX devices. Altera Corporation, Feb. 1998. URL http://www.ee.ic.ac.uk/pcheung/teaching/ee3_dsd/fir.pdf.
  • Pascanu et al. (2014) R. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio. How to construct deep recurrent neural networks. In Proc. ICLR, 2014.
  • Pavlakos et al. (2017) G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse-to-fine volumetric prediction for single-image 3D human pose. In Proc. CVPR, 2017.
  • Pavlakos et al. (2018) G. Pavlakos, X. Zhou, and K. Daniilidis. Ordinal depth supervision for 3D human pose estimation. In Proc. CVPR, 2018.
  • Pavllo et al. (2019) D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli. 3D human pose estimation in video with temporal convolutions and semi-supervised training. In Proc. CVPR, 2019.
  • Qi et al. (2017) C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3D classification and segmentation. In Proc. CVPR, 2017.
  • Ravanbakhsh et al. (2017) S. Ravanbakhsh, J. Schneider, and B. Poczos. Equivariance through parameter-sharing. In Proc. ICML, 2017.
  • Reddi et al. (2018) S. Reddi, S. Kale, and S. Kumar. On the convergence of adam and beyond. In Proc. ICLR, 2018.
  • Scarselli et al. (2009) F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Trans. Neural Netw., 2009.
  • Shahroudy et al. (2016) A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. NTU RGB+ D: A large scale dataset for 3D human activity analysis. In Proc. CVPR, 2016.
  • Si et al. (2018) C. Si, Y. Jing, W. Wang, L. Wang, and T. Tan. Skeleton-based action recognition with spatial reasoning and temporal stack learning. In Proc. ECCV, 2018.
  • Sigal et al. (2010) L. Sigal, A. O. Balan, and M. J. Black. Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. IJCV, 2010.
  • Srivastava et al. (2014) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.
  • Sun et al. (2017) X. Sun, J. Shang, S. Liang, and Y. Wei. Compositional human pose regression. In Proc. ICCV, 2017.
  • Tekin et al. (2017) B. Tekin, P. Márquez-Neila, M. Salzmann, and P. Fua. Learning to fuse 2D and 3D image cues for monocular body pose estimation. In Proc. ICCV, 2017.
  • Tran et al. (2018) D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proc. CVPR, 2018.
  • Vetterli et al. (2014) M. Vetterli, J. Kovačević, and V. K. Goyal. Foundations of signal processing. Cambridge University Press, 2014.
  • Waibel et al. (1995) A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang. Phoneme recognition using time-delay neural networks. Backpropagation: Theory, Architectures and Applications, 1995.
  • Williams and Zipser (1989) R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1989.
  • Worrall et al. (2017) D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proc. CVPR, 2017.
  • Yan et al. (2018) S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proc. AAAI, 2018.
  • Yang et al. (2018) W. Yang, W. Ouyang, X. Wang, J. Ren, H. Li, and X. Wang. 3D human pose estimation in the wild by adversarial learning. In Proc. CVPR, 2018.
  • Yeh et al. (2016) R. A. Yeh, M. Hasegawa-Johnson, and M. N. Do. Stable and symmetric filter convolutional neural network. In Proc. ICASSP, 2016.
  • Yeh et al. (2019) R. A. Yeh, A. G. Schwing, J. Huang, and K. Murphy. Diverse generation for multi-agent sports games. In Proc. CVPR, 2019.
  • Zaheer et al. (2017) M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep sets. In Proc. NeurIPS, 2017.
  • Zhang et al. (2018) P. Zhang, J. Xue, C. Lan, W. Zeng, Z. Gao, and N. Zheng. Adding attentiveness to the neurons in recurrent neural networks. In Proc. ECCV, 2018.
  • Zhang et al. (2013) W. Zhang, M. Zhu, and K. G. Derpanis. From actemes to action: A strongly-supervised representation for detailed action understanding. In Proc. ICCV, 2013.

Appendix A Code and Test Cases

In the supplemental materials, we include a PyTorch implementation of the proposed layers. Each layer comes with unit tests validating chirality equivariance. Please read the README.md for the directory structure, usage, and required dependencies. We also include a Jupyter notebook and its HTML output visualizing the concepts introduced in the paper.

Appendix B Additional Description for Equivariant Layers

b.1 Equivariant fully connected layers

Recall, we achieve equivariance through parameter sharing and odd symmetry.

A fully connected layer performs the mapping $y = Wx + b$. Let $\mathcal{T}_{\text{in}}$ and $\mathcal{T}_{\text{out}}$ denote the chirality transforms on the input and output spaces; both are involutions composed of a left/right permutation and negation of a subset of dimensions. The parameter sharing and odd symmetry are chosen such that

$$W\,\mathcal{T}_{\text{in}} = \mathcal{T}_{\text{out}}\,W, \qquad \mathcal{T}_{\text{out}}\,b = b.$$

Here, we prove that this design is chirality equivariant, i.e., $W(\mathcal{T}_{\text{in}}x) + b = \mathcal{T}_{\text{out}}(Wx + b)$.

Proof: Using the two conditions above,

$$W(\mathcal{T}_{\text{in}}x) + b = \mathcal{T}_{\text{out}}Wx + \mathcal{T}_{\text{out}}b = \mathcal{T}_{\text{out}}(Wx + b),$$

which proves the claim.
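As a numerical sanity check of this proof: averaging an arbitrary weight matrix over the two-element group $\{I, \mathcal{T}\}$ is one way to produce parameters satisfying the two conditions (this projection is equivalent to a parameter-sharing pattern; the sizes, permutations, and sign sets below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def chirality_matrix(perm, signs):
    """Permutation-plus-sign matrix representing a chirality transform."""
    n = len(perm)
    T = np.zeros((n, n))
    T[np.arange(n), perm] = signs
    return T

# Input: 4 dims, pair (0,1) swapped and negated; output: 2 dims swapped and negated.
T_in = chirality_matrix([1, 0, 2, 3], [-1, -1, 1, 1])
T_out = chirality_matrix([1, 0], [-1, -1])

W0 = rng.standard_normal((2, 4))
b0 = rng.standard_normal(2)
# Average over {identity, transform}: projects onto the equivariant subspace,
# so W @ T_in == T_out @ W and T_out @ b == b by construction.
W = 0.5 * (W0 + T_out @ W0 @ T_in)
b = 0.5 * (b0 + T_out @ b0)

x = rng.standard_normal(4)
lhs = T_out @ (W @ x + b)     # transform the output
rhs = W @ (T_in @ x) + b      # transform the input first
```

The two results coincide, matching the derivation above.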

b.2 Equivariant 1D convolution layers

1D convolution layers Waibel et al. (1995); LeCun et al. (1999). Pose symmetric 1D convolution layers can be based on fully connected layers. A 1D convolution is a fully connected layer with parameters shared across the time dimension, i.e., at each time step the computation is a sum of fully connected layers over a window:

$$y_t = \sum_{k} W_k\, x_{t+k} + b.$$

Consequently, we enforce equivariance at each time step by employing the symmetry pattern of fully connected layers at each time slice, i.e., $W_k\,\mathcal{T}_{\text{in}} = \mathcal{T}_{\text{out}}\,W_k$ for all $k$. The bias of a 1D convolution is identical to that of a fully connected layer, i.e., the same bias is added at each time step. Hence the same parameter sharing is used.
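The per-time-step argument can be checked directly: if every tap of the kernel satisfies the fully connected constraint, the resulting 1D convolution commutes with the chirality transform at each step. A minimal NumPy sketch (sizes, permutations, and sign sets are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def chirality_matrix(perm, signs):
    n = len(perm)
    T = np.zeros((n, n))
    T[np.arange(n), perm] = signs
    return T

T_in = chirality_matrix([1, 0, 2], [-1, -1, 1])   # 3 input channels
T_out = chirality_matrix([1, 0], [-1, -1])        # 2 output channels

# Kernel with 3 taps; each tap is projected to the equivariant subspace,
# and the shared bias is projected likewise (T_out @ b == b).
taps = [rng.standard_normal((2, 3)) for _ in range(3)]
taps = [0.5 * (Wk + T_out @ Wk @ T_in) for Wk in taps]
b0 = rng.standard_normal(2)
b = 0.5 * (b0 + T_out @ b0)

def conv1d(seq):
    """'Valid' 1D convolution over a (time, channels) sequence."""
    return np.stack([sum(Wk @ seq[t + k] for k, Wk in enumerate(taps)) + b
                     for t in range(len(seq) - len(taps) + 1)])

seq = rng.standard_normal((8, 3))
out_then_transform = conv1d(seq) @ T_out.T    # transform each output frame
transform_then_conv = conv1d(seq @ T_in.T)    # transform each input frame
```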

b.3 Equivariant LSTM and GRU layers

LSTM and GRU modules which satisfy chirality equivariance can be obtained from fully connected layers. However, naïvely setting all matrix multiplies within an LSTM to satisfy the equivariance property will not lead to an equivariant LSTM, because gates are elementwise multiplied with the cell state: if both the gate and the cell preserve the negation, then their product will not. Therefore, we change the weight sharing scheme for the gates. We set the negation index set of the gates to be the empty set, i.e., the gates are invariant to negation at the input but still equivariant to the switch operation. With this setup, the product of the gates and the cell's output preserves the sign, as the gates are invariant to negation and passed through a sigmoid to lie in the range $(0, 1)$. GRU modules are modified in the same manner.

More formally, the computations in an LSTM module are as follows:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i) \quad \text{(input gate)}$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) \quad \text{(output gate)}$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f) \quad \text{(forget gate)}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c) \quad \text{(cell state)}$$
$$h_t = o_t \odot \tanh(c_t) \quad \text{(recurrent state)}$$

where $\sigma$ denotes an element-wise sigmoid non-linearity and $\odot$ element-wise multiplication.

Observe that the LSTM operations consist of fully connected layers. For the cell state's parameters, $W_c$, $U_c$ and $b_c$, we follow the weight sharing scheme discussed for fully connected layers.

Due to the multiplication in the cell state, we redesign the parameter sharing for the input, output and forget gates to be invariant to negation by setting their negation index set to be the empty set: no dimension of the gates' output is negated. This results in the corresponding parameter sharing scheme for the gate parameters $W_g$, $U_g$ and $b_g$, $g \in \{i, o, f\}$.

This LSTM is chirality equivariant, as the computation of the cell state is equivariant. The other computations are combinations of chirality equivariant operations, which remain equivariant. We note that a chirality equivariant GRU module is obtained by following the same sharing scheme for its gates.
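The argument above can be verified on a single LSTM step. Below, the gate weights are projected to be permutation-equivariant (no negation in their output transform), while the cell-candidate weights use the full chirality transform; all sizes, permutations, and sign sets are illustrative toy choices:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def chirality_matrix(perm, signs):
    n = len(perm)
    T = np.zeros((n, n))
    T[np.arange(n), perm] = signs
    return T

perm_h = [1, 0, 2, 3]                             # hidden dims 0/1 are a mirrored pair
T_h = chirality_matrix(perm_h, [-1, -1, 1, 1])    # full transform (swap + negate)
P_h = chirality_matrix(perm_h, [1, 1, 1, 1])      # permutation only, for the gates
T_x = chirality_matrix([1, 0, 2], [-1, -1, 1])    # input transform

def project(W, L, R):
    """Average over {identity, transform}: yields L @ W == W @ R."""
    return 0.5 * (W + L @ W @ R)

def make_gate():   # gates: permuted (never negated) under the transform
    return (project(rng.standard_normal((4, 3)), P_h, T_x),
            project(rng.standard_normal((4, 4)), P_h, T_h),
            project(rng.standard_normal(4)[:, None], P_h, np.eye(1))[:, 0])

def make_cell():   # cell candidate: fully chirality equivariant
    return (project(rng.standard_normal((4, 3)), T_h, T_x),
            project(rng.standard_normal((4, 4)), T_h, T_h),
            project(rng.standard_normal(4)[:, None], T_h, np.eye(1))[:, 0])

gates = {g: make_gate() for g in "iof"}
Wc, Uc, bc = make_cell()

def lstm_step(x, h, c):
    i, o, f = (sigmoid(W @ x + U @ h + b) for W, U, b in (gates[g] for g in "iof"))
    c_new = f * c + i * np.tanh(Wc @ x + Uc @ h + bc)
    return o * np.tanh(c_new), c_new

x, h, c = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(4)
h1, c1 = lstm_step(x, h, c)
h2, c2 = lstm_step(T_x @ x, T_h @ h, T_h @ c)   # chirality-transformed inputs
```

The transformed run produces exactly the transformed hidden and cell states, confirming that sign-invariant gates combined with an equivariant cell candidate keep the whole step equivariant.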

b.4 Equivariant batch-norm layers

A batch normalization layer performs an element-wise standardization, followed by an element-wise affine transformation (with learnable parameters $\gamma$ and $\beta$):

$$y = \gamma \odot \frac{x - \mu}{\sigma} + \beta.$$

Equivariance for $\gamma$ and $\beta$ is obtained by following the principle applied to fully connected layers: we achieve equivariance via parameter sharing and odd symmetry, sharing $\gamma$ across mirrored dimensions and enforcing odd symmetry on $\beta$.

Equivariance for $\mu$ and $\sigma$ is obtained by computing the mean and standard deviation on the “augmented batch,” i.e., the batch concatenated with its chirality-transformed copy, and by keeping track of its running average. Formally, given a batch of data $\{x^{(1)}, \dots, x^{(B)}\}$,

$$\mu = \frac{1}{2B}\sum_{n=1}^{B}\big(x^{(n)} + \mathcal{T}x^{(n)}\big), \qquad \sigma^2 = \frac{1}{2B}\sum_{n=1}^{B}\Big(\big(x^{(n)} - \mu\big)^{2} + \big(\mathcal{T}x^{(n)} - \mu\big)^{2}\Big),$$

where squares are taken element-wise.
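A numerical check of the augmented-batch construction (illustrative sizes; $\gamma$ is shared across mirrored pairs and $\beta$ follows the odd symmetry):

```python
import numpy as np

rng = np.random.default_rng(3)

def transform(X, perm, signs):
    """Chirality transform applied to each row of a batch."""
    return X[:, perm] * signs

perm = [1, 0, 2, 3]                                # dims 0/1 are a mirrored pair
signs = np.array([-1.0, -1.0, 1.0, 1.0])

# Learnable parameters with the required sharing: gamma even, beta odd.
g0, b0 = rng.standard_normal(4), rng.standard_normal(4)
gamma = 0.5 * (g0 + g0[perm])                      # gamma_j == gamma_{perm(j)}
beta = 0.5 * (b0 + signs * b0[perm])               # beta_j == signs_j * beta_{perm(j)}

def batch_norm(X):
    # Statistics over the batch concatenated with its chirality-transformed copy.
    aug = np.concatenate([X, transform(X, perm, signs)])
    mu, sd = aug.mean(0), aug.std(0) + 1e-8
    return gamma * (X - mu) / sd + beta

X = rng.standard_normal((16, 4))
lhs = transform(batch_norm(X), perm, signs)        # transform the output
rhs = batch_norm(transform(X, perm, signs))        # transform the input batch
```

Because the augmented batch is invariant (as a set) under the transform, the statistics inherit the required symmetry and the layer is equivariant.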

b.5 Dropout.

At test time, dropout scales the input by $1-p$, where $p$ is the dropout probability. The equivariance property is satisfied because scalar multiplication commutes with the chirality transform. The input and output dimensions and symmetry of a dropout layer are identical; therefore, $\mathcal{T}_{\text{in}}$ and $\mathcal{T}_{\text{out}}$ are identical. From the definition,

$$\mathcal{T}_{\text{out}}\big((1-p)\,x\big) = (1-p)\,\mathcal{T}_{\text{in}}\,x.$$

Hence, a dropout layer naturally satisfies the equivariance property. At training time, we do not enforce equivariance for the dropped units, i.e., we do not jointly drop symmetric units, as we found this to help prevent overfitting. This is likely application dependent.

Appendix C Additional Results

c.1 3D pose estimation

In tab:supp_eva_quan, we report results on HumanEva-I for multi-action models evaluated with Protocol 1 (MPJPE). Our approach benefits the most on the Boxing action while maintaining performance on the other actions. We also provide qualitative results in fig:supp_eva_walk and fig:supp_eva_box. We observe that our model successfully estimates 3D poses from 2D key-points. We have also attached animations in the supplemental.

Walk Jog Box Avg.
App. S1 S2 S3 S1 S2 S3 S1 S2 S3 -
Pavllo et al. (2019) 17.6 12.5 37.6 28.1 19.1 19.2 29.5 44.0 43.1 33.3
Pavllo et al. (2019) (†) 17.5 12.3 37.4 27.7 19.0 19.0 27.7 43.4 42.5 33.0
Ours 18.9 12.3 38.1 28.5 18.1 18.2 27.1 40.9 40.2 32.2
Table A1: Results on HumanEva-I for multi-action (MA) models reported in Protocol 1 (MPJPE); lower is better. † indicates test-time augmentation.
Figure A1: Qualitative visualization of 2D to 3D pose estimation for the action “Walking” on the HumanEva-I dataset.
Figure A2: Qualitative visualization of 2D to 3D pose estimation for the action “Boxing” on the HumanEva-I dataset.

c.2 Skeleton based action recognition

In fig:supp_action, we show visualizations of the input skeleton sequences computed by OpenPose Cao et al. (2018) and the action classes predicted by our chiral invariant skeleton based action recognition model.

Figure A3: Visualization of the input skeleton sequences and the corresponding predicted action classes of our method on the Kinetics-400 dataset Kay et al. (2017).

Appendix D Implementation Details

d.1 3D pose estimation

Implementation details. Our model follows the temporal convolutional architecture proposed by Pavllo et al. (2019), with all layers replaced by their chiral versions; code for the layers is attached in the supplemental as well. We also changed ReLU to tanh to achieve chiral equivariance. For the temporal models, we follow their 4-block design, which has a receptive field of 243 frames. For the single-frame model, we follow their 3-block design. These models all contain 1020 hidden dimensions, a multiple of the number of joints, 17; this is slightly smaller than the 1024 used in Pavllo et al. (2019). We also use their data processing and batching strategy as described in Section 5 and Appendix A.5 of Pavllo et al. (2019). For training the model, we utilize the Adam optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.9999$. We decay the batch normalization momentum as suggested in Pavllo et al. (2019). Other details follow the publicly available implementation by Pavllo et al. (2019). We enforce chiral equivariance by choosing the number of negated dimensions as a fixed fraction of the hidden dimension. The number of negated dimensions for both the input and output layers is 17, one for each joint.
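The switch from ReLU to tanh matters because the activation must commute with negation: tanh is odd while ReLU is not, so ReLU breaks equivariance on the negated dimensions. A two-line check (illustrative values):

```python
import numpy as np

z = np.array([-1.5, 0.3, 2.0])
relu = lambda v: np.maximum(v, 0.0)

tanh_commutes = np.allclose(np.tanh(-z), -np.tanh(z))   # tanh is odd
relu_commutes = np.allclose(relu(-z), -relu(z))         # ReLU is not odd
```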

d.2 2D pose forecasting

Implementation details. The non-chiral equivariant baseline is a seq2seq model consisting of an encoder and a decoder, which are stacked LSTMs with a hidden size of 1040 and 2 stacked layers. We train using teacher forcing with the Adam optimizer. The batch size is 256, and we train for 30 epochs. Dropout is applied to the LSTMs' hidden layers with a drop probability of 0.5. Following prior work, we use max-norm gradient clipping of 5 and a learning rate of 0.005 with a decay of 0.95 every 2 epochs. The data processing and evaluation setting follow Chiu et al. (2019). Other details follow the publicly available implementation by Chiu et al. (2019). We enforce chiral equivariance by choosing the number of negated hidden dimensions accordingly, as the output is two-dimensional per joint.

d.3 Skeleton-based action recognition

Implementation details. The non-chiral version of the model, Ours-Conv, follows Temporal-Conv Kim and Reiter (2017), modified to have not only temporal convolution but also spatial convolution. There are ten spatial-temporal convolution blocks; in each block we first perform spatial convolution and then temporal convolution. The spatial convolution captures intra-frame information while the temporal convolution captures inter-frame information. For the recognition task we need chiral invariance, i.e., a chiral pair should be classified as the same action class. To this end, we use a chiral invariance layer where we let the left and right joint index sets, as well as the set of negated dimensions, be empty: there are no left or right joints, only center joints, and no dimension is negated in the layer's output after applying the chirality transform. Recall that the chirality transform exchanges the left and right joints and negates the dimensions in the negation index set. Given that all three sets are empty, the output is trivially chiral invariant. For the chiral invariant model, Ours-Conv-Chiral, we replace all the non-symmetric layers before the chiral invariance layer with their corresponding chiral equivariant versions. All the layers after the chiral invariance layer remain the same as in the Ours-Conv model. Similar to Kim and Reiter (2017), there are in total 10 convolution blocks in Ours-Conv, and we place the chiral invariance layer at the fourth layer. We also gradually reduce the ratio of negated dimensions over the first three layers. We use the SGD optimizer with momentum, as in Yan et al. (2018), with a batch size of 256. We train the model for 90 epochs.