1 Introduction
Human pose regression tasks, such as human pose estimation, human pose forecasting, and skeleton-based action recognition, have numerous applications in video understanding, security, and human-computer interaction. For instance, collaborative virtual reality applications rely on accurate pose estimation, for which significant advances have been reported in recent years.
Specifically, recent state-of-the-art approaches use supervised learning and deep nets to address pose regression. Input and output of those nets depend on the task: inputs are typically 2D or 3D human pose keypoints stacked into a vector; the output may represent human pose keypoints for pose estimation or a classification probability for activity recognition. To improve accuracy on those tasks, a variety of deep net architectures have been proposed
Martinez et al. (2017a); Chao et al. (2017); Hossain and Little (2018); Lee et al. (2018); Pavllo et al. (2019); Si et al. (2018), generally relying on common deep net building blocks such as fully connected, convolutional, or recurrent layers. Unlike for image datasets, to enlarge the size of human pose datasets, a reflection (left-right flipping) of the pose coordinates as illustrated in step (1) of fig:sym_prop is not sufficient. The chirality of the human pose additionally requires switching the labeling of left and right, as illustrated in step (2) of fig:sym_prop. However, while this two-step data augmentation is conceptually easy to employ during training, we argue that even better accuracy is possible for human pose regression tasks if this pose symmetry is directly built into the deep net. In particular, if confronted with either of the poses illustrated on the left or right hand side of fig:sym_prop, the output of a deep net should be equivariant to the transformation, i.e., the output is also transformed in a “predefined way.” For example, if the network’s output is also a human pose, the output pose should follow the same transformation. On the other hand, for an activity recognition task, the output probability should remain unchanged. The equivariant map for pose estimation is illustrated in fig:equi_prop, and we make the equivariance property precise later.
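The two-step transformation is easy to state concretely; below is a minimal NumPy sketch (the joint ordering and names are illustrative, not a dataset convention):

```python
import numpy as np

# Toy 2D pose with 5 joints; rows are
# [left_wrist, right_wrist, left_ankle, right_ankle, head], columns (x, y).
pose = np.array([[-0.3, 0.9], [0.4, 0.9], [-0.2, 0.0], [0.2, 0.0], [0.0, 1.6]])
LEFT, RIGHT = [0, 2], [1, 3]  # indices of corresponding left/right joints

def chirality_transform(p):
    q = p.copy()
    q[:, 0] = -q[:, 0]                 # step (1): reflect, i.e., negate x
    q[LEFT + RIGHT] = q[RIGHT + LEFT]  # step (2): switch left/right labels
    return q

flipped = chirality_transform(pose)
# The transform is an involution: applying it twice recovers the original pose.
assert np.allclose(chirality_transform(flipped), pose)
```

Both the original pose and its transformed counterpart are valid human poses, which is exactly the symmetry exploited in the remainder of the paper.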
To encode this form of equivariance for human pose regression tasks, we propose “chirality nets.” Specifically, the output of a chirality net is guaranteed to be equivariant to a transformation composed of reflections and label switching. To build chirality nets, we develop chirality equivariant versions of commonly used layers. Concretely, we design and prove equivariance for versions of fully connected, convolutional, batch-normalization, dropout, and LSTM/GRU layers, as well as elementwise nonlinearities such as tanh or softsign. The main common design principle for chirality equivariant layers is odd and even symmetric sharing of model parameters. Hence, in addition to being equivariant, transforming a typical deep net into its chiral counterpart results in a reduction of the number of trainable parameters and lower computational complexity, due to the symmetry in the model weights. We find that a smaller number of trainable parameters reduces the sample complexity, i.e., the models need less training data.
We demonstrate the generality and effectiveness of our approach on three pose regression tasks over four datasets: 3D pose estimation on the Human3.6M Ionescu et al. (2014) and HumanEva Sigal et al. (2010) datasets, 2D pose forecasting on the Penn Action dataset Zhang et al. (2013), and skeleton-based action recognition on the Kinetics-400 dataset Kay et al. (2017). Our approach achieves state-of-the-art results with guarantees on equivariance, a lower number of parameters, and robustness in low-resource settings.
2 Related Work
First we briefly review invariance and equivariance in machine learning and computer vision as well as human pose regression tasks.
Invariant and equivariant representation. Hand-crafted invariant and equivariant representations have been utilized widely in computer vision systems for decades, e.g., scale invariance of SIFT Lowe et al. (1999), orientation invariance of HOG Dalal and Triggs (2005), affine invariance of the Harris detector Mikolajczyk and Schmid (2004), and shift-invariant systems in image processing Vetterli et al. (2014).
These properties have also been adapted to learned representations. A widely known property is the translation equivariance of convolutional neural nets (CNN) LeCun et al. (1999): through spatial or temporal parameter sharing, a shifted input leads to a shifted output. Group-equivariant CNNs extend the equivariance to rotation, mirror reflection and translation Cohen and Welling (2016) by replacing the shift operation with a more general set of transformations. Other representations for building equivariance into deep nets have also been proposed, e.g., the Symmetric Network Gens and Domingos (2014), the Harmonic Network Worrall et al. (2017) and the Spherical CNN Cohen et al. (2018).
The aforementioned works focus on deep nets whose inputs are images. While related, they are not directly applicable to human pose. For example, a reflection with respect to the y-axis in the image domain corresponds to a permutation of the pixel locations, i.e., swapping the pixel intensity between each pixel and its reflected counterpart. In contrast, for human pose, where the input is a vector representing the human joints’ spatial coordinates, a reflection corresponds to negating the value of each joint’s reflected dimension.
The input representation of deep nets for human pose is more similar to point sets. Prior work has explored building permutation equivariant deep nets, i.e., any permutation of the input elements results in the same permutation of the output elements.
Examples include Zaheer et al. (2017) and Qi et al. (2017); both works utilize parameter sharing to achieve permutation equivariance. Following these works, graph nets generalize the family of permutation equivariant networks and demonstrate success on numerous applications Scarselli et al. (2009); Kipf and Welling (2017); Hamilton et al. (2017); Gilmer et al. (2017); Battaglia et al. (2018); Kipf et al. (2018); Yeh et al. (2019); Liu* et al. (2019).
For human pose, equivariance to all permutations is too strong a property. Recall, our aim is to build models equivariant to the chiral symmetry, which only involves one specific permutation, i.e., the switch between left and right joints, shown in step (2) of fig:sym_prop.
Most relevant to our approach is work by Ravanbakhsh et al. (2017), who explore which types of equivariance can be achieved through parameter sharing. Their approach captures the specific permutation in the pose symmetric transform, but does not capture the negation from the reflection, shown in step (1) of fig:sym_prop. In contrast, our approach considers both operations (1) and (2) jointly, which leads to a different formulation. Lastly, to the best of our knowledge, Ravanbakhsh et al. (2017) only discuss the construction of equivariant networks theoretically. In this work, we design and implement a variety of building blocks for deep nets and demonstrate their benefits on a wide range of practical human pose regression tasks.
Human pose applications. For 3D pose estimation from images, recent approaches utilize a two-step approach: (1) 2D pose keypoints are predicted given a video; (2) 3D keypoints are estimated given the 2D joint locations. The 2D to 3D estimation is formulated as a regression task via deep nets Pavlakos et al. ; Tekin et al. (2017); Martinez et al. (2017b); Sun et al. (2017); Fang et al. (2018); Pavlakos et al. (2018); Yang et al. (2018); Luvizon et al. (2018); Hossain and Little (2018); Lee et al. (2018); Pavllo et al. (2019). Capturing the temporal information is crucial and has been explored in 3D pose estimation Hossain and Little (2018); Lee et al. (2018), as well as in action recognition Tran et al. (2018); Hussein et al. (2019), video segmentation Hu et al. (2017, 2018), and learning object dynamics Martinez et al. (2017a); Minderer et al. (2019). Most recently, Pavllo et al. (2019) propose to use temporal convolutions to better capture temporal information for 3D pose estimation, improving over previous RNN based methods. They also perform train and test time augmentation based on the chiral-symmetric transformation. For test time augmentation, they compute the output for both the original input and the transformed input, using the average of the two outputs as the final prediction. Note that, in contrast to our work, Pavllo et al. (2019) need to transform the output of the transformed input back to the original pose. To carefully assess the benefits of chirality nets, in this work we closely follow the experimental setup of Pavllo et al. (2019).
For 2D keypoint forecasting, we follow the standard temporal modeling setup: conditioning on past observations to predict the future. To improve temporal modeling, recent works have utilized different sequence-to-sequence models for this task Martinez et al. (2017a); Chao et al. (2017); Chiu et al. (2019). In this work, we closely follow the experimental setup of Chiu et al. (2019).
3 Chirality Nets
In the following, we first provide the problem formulation for human pose regression, before defining chirality nets, equivariance, and the chirality transform. Subsequently, we discuss how to develop the typical layers which make up chirality nets, such as the fully connected layer, the convolution, etc.
The PyTorch implementation and unit tests of the proposed layers are part of the supplementary material. We have also included a short Jupyter notebook demo to illustrate the key concepts.
3.1 Problem Formulation
Chirality nets can be applied to regression tasks on coordinates of joints for human pose related tasks, i.e., the input corresponds to 2D or 3D coordinates of human joints. For readability, we introduce the input and output representations for a single frame. Note that for our experiments we generalize chirality nets to multiple frames by introducing a time dimension.
We let x ∈ R^{|J^in||D^in|} denote the chirality net input, where J^in is the set of all input joints and D^in is the dimension index set for an input coordinate. For example, D^in = {1, 2}, i.e., |D^in| = 2, for 2D input joint coordinates. Similarly, we let y ∈ R^{|J^out||D^out|} refer to the chirality net output. Note that the dimension of the spatial coordinates at the input and output may differ, e.g., prediction from 2D to 3D. Also, the number of joints may differ, e.g., when mapping between different keypoint sets.
For human pose regression, the task is to learn the parameters θ of a model
y = f_θ(x)
by minimizing a loss function
L(θ) = Σ_{(x,y) ∈ D} ℓ(f_θ(x), y)
over the training dataset D. Hereby, the sample loss ℓ compares the prediction f_θ(x) to the ground-truth y.

3.2 Chirality Nets, Chirality Equivariance, and Chirality Transforms
Chirality nets exhibit chirality equivariance, i.e., their output is transformed in a “predefined manner” given that the chirality transform is applied at the input. Note that the input and output dimensions, |J^in||D^in| and |J^out||D^out|, may differ. To define this chirality equivariance, we hence need to consider a pair of transformations, one for the input data, T^in, and one for the output data, T^out. The corresponding equivariance map is illustrated in fig:equi_prop for the task of 2D to 3D pose estimation. Formally, we say a function f_θ is chirality equivariant if
T^out(f_θ(x)) = f_θ(T^in(x)) ∀ x ∈ R^{|J^in||D^in|}.
To define the chirality transform on the input data, T^in, we split the set of joints J^in into ordered tuples of left, right, and center joints, J^in_l, J^in_r, and J^in_c. Importantly, these tuples are sorted such that corresponding left/right joints are at corresponding positions in the tuple. We also split the dimension index set D^in into D^in_neg and D^in_pos, indicating the coordinates to, or not to, negate.
For readability and without loss of generality, assume the dimensions of the input follow the joint order J^in_l, J^in_r, J^in_c, i.e., x = [x_j]_{j ∈ (J^in_l, J^in_r, J^in_c)}, where x_j collects the coordinates of joint j. Within each vector x_j, we place the coordinates in the set D^in_neg before the remaining ones, i.e., those in D^in_pos.
Given this construction of the input x, the reflection illustrated in step (1) of fig:sym_prop is a matrix multiplication with a diagonal matrix T^in_neg, defined as follows:
T^in_neg = I_{|J^in|} ⊗ diag([−1_{|D^in_neg|}; 1_{|D^in_pos|}]),
where 1_n indicates a vector of ones of length n and ⊗ denotes the Kronecker product.
The switch operation illustrated in step (2) of fig:sym_prop is a matrix multiplication with a permutation matrix T^in_swi of dimension |J^in||D^in| × |J^in||D^in|, defined as follows:
T^in_swi = [[0, I_{|J^in_l|}, 0], [I_{|J^in_r|}, 0, 0], [0, 0, I_{|J^in_c|}]] ⊗ I_{|D^in|},
where I_n denotes an identity matrix of size n. Given those matrices, the chirality transform of the input is obtained via T^in = T^in_neg T^in_swi. The chirality transform of the output, T^out, is defined analogously, replacing “in” with “out”.
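A tiny numerical instance of these matrices (one left/right joint pair, 2D coordinates, only the first coordinate negated, following the layout assumed above):

```python
import numpy as np

# Input layout: (left_x, left_y, right_x, right_y); D_neg = {x}.
T_neg = np.diag([-1.0, 1.0, -1.0, 1.0])                          # step (1): reflection
T_swi = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2))   # step (2): switch
T_in = T_neg @ T_swi                                             # chirality transform

x = np.array([-0.3, 0.9, 0.4, 0.8])  # a toy pose
print(T_in @ x)                      # the mirrored pose with labels switched
# T_neg and T_swi commute here, and T_in is an involution: T_in @ T_in = I.
assert np.allclose(T_in @ T_in, np.eye(4))
```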
In the following, we introduce layers that satisfy the chirality equivariance property. This enables us to construct a chirality net f_θ, as the composition of equivariant layers remains equivariant. Note that chirality equivariance can be specified separately for every deep net layer, which provides additional flexibility.
3.3 Chirality Layers
Fully connected layer. A fully connected layer performs the mapping y = Wx + b. We achieve equivariance through parameter sharing and odd symmetry. Partition W into blocks W_{o_s, i_t}, where o, i ∈ {l, r, c} index the output/input joint groups and s, t ∈ {neg, pos} the output/input coordinate groups. The equivariance conditions W T^in = T^out W and T^out b = b then hold if and only if
W_{ō_s, i_t} = σ(s) σ(t) W_{o_s, ī_t}, with σ(neg) = −1, σ(pos) = +1,
where the bar swaps l and r (and fixes c), and the bias satisfies b_{l_s} = σ(s) b_{r_s} and b_{c_neg} = 0. In words: blocks whose coordinate groups match are shared between mirrored left/right positions (even symmetry), while blocks mixing negated and non-negated coordinates are shared with a sign flip (odd symmetry). Each W_{o_s, i_t} denotes a matrix, where the first and the second subscript characterize the dimensions of the output and the input. For example, W_{l_neg, r_pos} computes the output’s left (l) joints’ negated (neg) dimensions from the input’s right (r) joints’ non-negated, i.e., positive (pos), dimensions; it is a matrix of dimension |J^out_l||D^out_neg| × |J^in_r||D^in_pos|. We refer to this layer as the chiral fully connected layer.
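One compact way to realize this sharing, equivalent to tying the blocks explicitly but shorter to write down, is to project unconstrained parameters onto the equivariant subspace, using that T^in and T^out are involutions. A sketch (our own construction, not the released implementation; the dimensions are a toy choice):

```python
import numpy as np

def chirality_T(n_left, n_center, d, d_neg):
    """Chirality transform for joints ordered (left, right, center), with the
    first d_neg of d coordinates negated (equals T_neg @ T_swi; the two commute)."""
    signs = np.diag(np.r_[-np.ones(d_neg), np.ones(d - d_neg)])
    n = 2 * n_left + n_center
    P = np.zeros((n, n))
    P[:n_left, n_left:2 * n_left] = P[n_left:2 * n_left, :n_left] = np.eye(n_left)
    P[2 * n_left:, 2 * n_left:] = np.eye(n_center)
    return np.kron(P, signs)

rng = np.random.default_rng(0)
T_in = chirality_T(n_left=2, n_center=1, d=2, d_neg=1)   # 2D input pose
T_out = chirality_T(n_left=2, n_center=1, d=3, d_neg=1)  # 3D output pose

W = rng.standard_normal((T_out.shape[0], T_in.shape[0]))
b = rng.standard_normal(T_out.shape[0])
W_eq = 0.5 * (W + T_out @ W @ T_in)  # enforces W_eq @ T_in == T_out @ W_eq
b_eq = 0.5 * (b + T_out @ b)         # enforces T_out @ b_eq == b_eq

layer = lambda z: W_eq @ z + b_eq
x = rng.standard_normal(T_in.shape[0])
assert np.allclose(layer(T_in @ x), T_out @ layer(x))  # chirality equivariance
```

The averaged weights satisfy T^out W_eq T^in = W_eq, which is equivalent to the block-sharing constraints stated above.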
1D convolution layers Waibel et al. (1995); LeCun et al. (1999). Pose symmetric 1D convolution layers can be based on fully connected layers. A 1D convolution is a fully connected layer with parameters shared across the time dimension, i.e., at each time step the computation is the sum of fully connected layers over a temporal window:
y_t = Σ_{τ=−k}^{k} W_τ x_{t+τ} + b.
Consequently, we enforce equivariance at each time step by employing the symmetry pattern of fully connected layers for each time slice W_τ.
Elementwise nonlinearities. Nonlinearities are applied elementwise and do not contain parameters. These operations maintain the input dimension; therefore, T^in and T^out are identical. A nonlinearity that is an odd function, i.e., f(−x) = −f(x), such as tanh, hardtanh, or softsign, satisfies the equivariance property. See the following proof:
T^out(f(x)) = T^out_neg T^out_swi f(x) = T^out_neg f(T^out_swi x)   (elementwise f)
= f(T^out_neg T^out_swi x)   (odd function f)
= f(T^in(x))   ∀ x ∈ R^{|J^in||D^in|}.

LSTM and GRU layers Hochreiter and Schmidhuber (1997); Cho et al. (2014).
LSTM and GRU modules which satisfy chirality equivariance can be obtained from fully connected layers.
However, naïvely constraining all matrix multiplications within an LSTM to satisfy the equivariance property will not lead to an equivariant LSTM, because the gates are elementwise multiplied with the cell state: if both the gate and the cell preserve the negation, then their product will not. Therefore, we change the weight sharing scheme for the gates. We set D^out_neg for the gates to be the empty set, i.e., the gates are invariant to the negation at the input, T^in_neg, but still equivariant to the switch operation, T^in_swi. With this setup, the product of the gates and the cell’s output preserves the sign, as the gates are invariant to negation and passed through a sigmoid to lie in the range (0, 1). GRU modules are modified in the same manner.
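The gating scheme can be verified numerically. Below is a single simplified gating step (no recurrence or biases; the projection trick and toy dimensions are our own sketch): gate weights are invariant to negation but equivariant to the switch, candidate-cell weights are fully equivariant, and the gated product then remains equivariant:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy state: one left/right joint pair, 2 dims per joint, first dim negated.
T_neg = np.diag([-1.0, 1.0, -1.0, 1.0])
T_swi = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2))
T = T_neg @ T_swi  # full chirality transform (an involution)

def project(W, T_out):
    # Weights satisfying W @ T == T_out @ W, via averaging (T, T_out involutions).
    return 0.5 * (W + T_out @ W @ T)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W_gate = project(rng.standard_normal((4, 4)), T_swi)  # D_neg empty at the gate
W_cell = project(rng.standard_normal((4, 4)), T)      # fully equivariant

def gated_step(x):
    # Gate values are invariant to the negation, so the elementwise product
    # with the odd tanh of the cell preserves the sign pattern.
    return sigmoid(W_gate @ x) * np.tanh(W_cell @ x)

x = rng.standard_normal(4)
assert np.allclose(gated_step(T @ x), T @ gated_step(x))
```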
Batch-normalization Ioffe and Szegedy (2015). A batch-normalization layer performs an elementwise standardization, followed by an elementwise affine layer (with learnable parameters γ and β). For γ and β, we follow the principle applied to fully connected layers. Equivariance for the batch statistics, i.e., the mean μ and standard deviation σ, is obtained by computing them on the “augmented batch,” i.e., the batch together with its chirality transformed copy, and by keeping track of their running averages.
Dropout Srivastava et al. (2014). At test time, dropout scales the input by 1 − p, where p is the dropout probability. The equivariance property is satisfied because a scalar multiplication commutes with the matrix multiplications defining the chirality transform.
3.4 Reduction in model parameters, FLOPS, and training/test details
Model parameters. Our model shares parameters between dimensions representing the left and right joints. For each layer, the number of parameters is roughly halved: the blocks acting on the left and right joints are tied, so the effective input dimension reduces from |J^in||D^in| to (|J^in_l| + |J^in_c|)|D^in|. Recall |J^in| = |J^in_l| + |J^in_r| + |J^in_c| with |J^in_l| = |J^in_r|. The output dimension is reduced similarly.
FLOPS. Chirality nets also have lower FLOPS. Due to the symmetry, instead of multiplying and adding each of the elements independently, we add the symmetric values first before applying a single multiplication per symmetric pair. Concretely, consider a tied weight pair (a, a) and inputs (x_1, x_2): instead of computing a·x_1 + a·x_2, we exploit the symmetry and instead use a·(x_1 + x_2), which removes one multiplication. This is a common speed-up trick used in symmetric FIR filters Altera Application Note (1998); Yeh et al. (2016). The number of multiplications is roughly halved. Additionally, baseline models utilize test-time augmentation, which requires two forward passes through the network for each input, whereas the proposed nets only use a single forward pass.
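The identity behind this trick is elementary; a self-contained check (the function name and shapes are ours):

```python
import numpy as np

def symmetric_dot(a, x_pairs):
    """Inner product with weights tied in pairs (a_k, a_k): add the paired
    inputs first, then multiply once per pair, as in the symmetric-FIR trick."""
    x_pairs = np.asarray(x_pairs).reshape(-1, 2)
    return float(np.sum(a * (x_pairs[:, 0] + x_pairs[:, 1])))

a = np.array([3.0, -2.0])            # one shared weight per symmetric pair
x = [1.5, -0.25, 0.5, 4.0]
naive = 3.0 * 1.5 + 3.0 * (-0.25) + (-2.0) * 0.5 + (-2.0) * 4.0  # 4 multiplies
assert np.isclose(symmetric_dot(a, x), naive)                     # 2 multiplies
```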
Training and test details. During training it is important to apply the chirality transform for data augmentation, i.e., with 50% probability we apply T^in and T^out to the input and label, respectively. This ensures that the mini-batch statistics match our assumption on the chirality, i.e., poses that form a chiral pair are both valid, which is important for the batch-normalization layer. Moreover, during training we use a standard dropout layer. While we could impose the dropped units to be chirality equivariant, we found this leads to overfitting in practice. This is expected, as imposing chirality on the added noise reduces its randomness. Importantly, at test time no data augmentation is performed and a single forward pass is sufficient to obtain an ‘averaged’ result.
Figure: illustration of the three tasks, (a) 2D to 3D pose estimation, (b) 2D pose forecasting, and (c) skeleton-based action recognition.
4 Experiments
We evaluate our approach on a variety of tasks, including 2D to 3D pose estimation, 2D pose forecasting, and skeleton-based action recognition. For each task, we describe the dataset, metric, and implementation before discussing the results.
Approach  Dir.  Disc.  Eat  Greet  Phone  Photo  Pose  Purch.  Sit  SitD.  Smoke  Wait  WalkD.  Walk  WalkT.  Avg 
Pavlakos Pavlakos et al. (2018) (CVPR‘18)  48.5  54.4  54.4  52.0  59.4  65.3  49.9  52.9  65.8  71.1  56.6  52.9  60.9  44.7  47.8  56.2 
Yang Yang et al. (2018) (CVPR‘18)  51.5  58.9  50.4  57.0  62.1  65.4  49.8  52.7  69.2  85.2  57.4  58.4  43.6  60.1  47.7  58.6 
Luvizon Luvizon et al. (2018) (CVPR‘18) ()  49.2  51.6  47.6  50.5  51.8  60.3  48.5  51.7  61.5  70.9  53.7  48.9  57.9  44.4  48.9  53.2 
Hossain Hossain and Little (2018) (ECCV‘18)(, )  48.4  50.7  57.2  55.2  63.1  72.6  53.0  51.7  66.1  80.9  59.0  57.3  62.4  46.6  49.6  58.3 
Lee Lee et al. (2018) (ECCV‘18)(, )  40.2  49.2  47.8  52.6  50.1  75.0  50.2  43.0  55.8  73.9  54.1  55.6  58.2  43.3  43.3  52.8 
Pavllo Pavllo et al. (2019) (CVPR‘19)  47.1  50.6  49.0  51.8  53.6  61.4  49.4  47.4  59.3  67.4  52.4  49.5  55.3  39.5  42.7  51.8 
Pavllo Pavllo et al. (2019) (CVPR‘19)()  45.9  47.5  44.3  46.4  50.0  56.9  45.6  44.6  58.8  66.8  47.9  44.7  49.7  33.1  34.0  47.7 
Pavllo Pavllo et al. (2019) (CVPR‘19)(, )  45.2  46.7  43.3  45.6  48.1  55.1  44.6  44.3  57.3  65.8  47.1  44.0  49.0  32.8  33.9  46.8 
Ours, singleframe  47.4  49.9  47.4  51.1  53.8  61.2  48.3  45.9  60.4  67.1  52.0  48.6  54.6  40.1  43.0  51.4 
Ours ()  44.8  46.1  43.3  46.4  49.0  55.2  44.6  44.0  58.3  62.7  47.1  43.9  48.6  32.7  33.3  46.7 
4.1 2D to 3D pose estimation
Task. 3D human pose estimation can be decoupled into the tasks of 2D keypoint detection and 2D to 3D pose estimation. We focus on the latter, i.e., given a sequence of 2D keypoints, the task is to estimate the corresponding 3D human pose. See fig:task_ill (a) for an illustration.
Dataset and metric. We evaluate on two standard datasets, Human3.6M Ionescu et al. (2014) and HumanEva-I Sigal et al. (2010). Human3.6M is a large-scale dataset of human motion with 3.6 million video frames. The dataset consists of 11 subjects performing 15 different actions. Following prior work Pavlakos et al. ; Tekin et al. (2017); Martinez et al. (2017b); Sun et al. (2017); Luvizon et al. (2018); Pavllo et al. (2019), each human pose is represented by a 17-joint skeleton. We use the same train and test subject splits. HumanEva-I is a smaller dataset consisting of four subjects and six actions. To be consistent with prior work Pavlakos et al. (2018); Lee et al. (2018); Pavllo et al. (2019), we use the same train and test splits, evaluated over the actions walk, jog, and box. For both datasets, we consider the setting where one model is trained for all actions.
We report the two standard metrics used in prior work: Protocol 1 (MPJPE), the mean per-joint position error between prediction and ground-truth Martinez et al. (2017b); Pavlakos et al. ; Pavllo et al. (2019), and Protocol 2 (P-MPJPE), the error after rigid alignment between prediction and ground-truth Martinez et al. (2017b); Sun et al. (2017); Hossain and Little (2018); Pavllo et al. (2019).
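Protocol 1 reduces to a few lines; a sketch (the array shapes are our assumption; Protocol 2 applies a rigid alignment before the same computation):

```python
import numpy as np

def mpjpe(pred, gt):
    """Protocol 1: mean per-joint position error, i.e., the average Euclidean
    distance between predicted and ground-truth joints (mm for Human3.6M).
    pred, gt: arrays of shape (n_joints, 3)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

gt = np.zeros((17, 3))
pred = gt + np.array([3.0, 0.0, 4.0])  # every joint off by a 3-4-5 offset
assert np.isclose(mpjpe(pred, gt), 5.0)
```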
Implementation details. Our model follows the supervised training procedure and network design of Pavllo et al. (2019). Our network uses the identical temporal convolutional architecture, where each layer is replaced with its chiral version, i.e., 1D dilated convolution, batch-normalization, and dropout layers. We also replace the ReLU nonlinearities with Tanh to achieve equivariance. No additional architecture changes were made. For Human3.6M, we use 2D keypoints extracted from CPN Chen et al. (2018) with Mask R-CNN He et al. (2017) bounding boxes released by Pavllo et al. (2019). For HumanEva-I, we use the 2D keypoint detections from Mask R-CNN released by Pavllo et al. (2019).

Results. In tab:human36_results, we report the performance on Human3.6M using Protocol 1 (MPJPE). Our approach outperforms the state-of-the-art Pavllo et al. (2019), which uses test-time augmentation, by 0.1 mm in overall average and achieves the best results in eight out of fifteen sub-categories. For the single-frame models, we observe a more significant reduction in error of 0.4 mm over Pavllo et al. (2019) with test-time augmentation. Additionally, when comparing without test-time augmentation, our approach outperforms by 1 mm. We note that test-time augmentation as employed by Pavllo et al. (2019) involves running the network twice for each input. In contrast, our approach only requires a single forward pass.
Next, on the HumanEva-I dataset, we also observe an increase in performance using Protocol 1. On average, our approach achieves a 32.2 mm error. This is a 0.8 mm decrease over the current state-of-the-art of 33.0 mm Pavllo et al. (2019), and a 1.1 mm decrease over the 33.3 mm of Pavllo et al. (2019) without test-time augmentation.
We also evaluate using Protocol 2 (P-MPJPE). On Human3.6M, our approach performs worse than Pavllo et al. (2019) by 0.3 mm. We note that the loss function is chosen to optimize Protocol 1; hence our models perform better at what they are optimized for. In tab:human_eva_results, we report the performance on HumanEva-I using Protocol 2 (P-MPJPE). Our model achieves a 0.2 mm reduction in error over Pavllo et al. (2019) on average. Most of the gain is obtained for the boxing action, possibly due to the symmetric nature of the movement.
Limited data settings. A benefit of fewer model parameters is the potential to obtain better models with less data. To confirm this, we perform experiments varying the amount of training data, starting from 0.1% of subject 1 (S1) up to using the three subjects S1, S5, and S6. The results, compared to Pavllo et al. (2019), are shown in fig:limited_data_results. We observe that our approach consistently outperforms Pavllo et al. (2019) in these low-resource settings, except at S1 0.1%. For the reported numbers, we use a batch size of 64, and all other hyper-parameters are identical between the models. If we further decrease the batch size to 32 for S1 0.1%, our approach improves to 100.4 mm, whereas Pavllo et al. (2019) improves to 102.3 mm.
Prediction Steps  Avg.  
Approach  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16   
Residual Martinez et al. (2017a) (CVPR‘17)  82.4  68.3  58.5  50.9  44.7  40.0  36.4  33.4  31.3  29.5  28.3  27.3  26.4  25.7  25.0  24.5  39.5 
3DPFNet Chao et al. (2017)(CVPR‘17)  79.2  60.0  49.0  43.9  41.5  40.3  39.8  39.7  40.1  40.5  41.1  41.6  42.3  42.9  43.2  43.3  45.5 
TPRNN Chiu et al. (2019) (WACV‘19)  84.5  72.0  64.8  60.3  57.2  55.0  53.4  52.1  50.9  50.0  49.3  48.7  48.3  47.9  47.6  47.3  55.6 
Baseline w/o aug.  87.3  75.7  68.5  64.0  61.0  59.1  57.6  56.3  55.4  54.9  54.5  54.5  54.4  54.5  54.6  54.7  60.4 
Baseline w/ aug.  86.9  75.2  67.9  63.5  60.4  58.4  57.0  55.8  55.1  54.5  54.1  54.0  53.9  53.9  54.0  54.0  59.9 
Baseline w/ aug.()  87.0  75.5  68.4  64.1  61.0  59.1  57.5  56.3  55.5  55.0  54.7  54.7  54.6  54.7  54.7  54.7  60.5 
Ours  87.5  77.0  68.7  64.2  61.2  59.2  57.6  56.5  55.7  55.1  54.7  54.6  54.4  54.5  54.5  54.5  60.6 
4.2 2D pose forecasting
Task. 2D pose forecasting is the pose regression task of predicting the future human pose, represented in 2D keypoints, given present and past human pose. See fig:task_ill (b) for an illustration.
Dataset and metric. We evaluate on the Penn Action dataset Zhang et al. (2013). The dataset consists of 2236 videos with 15 actions. Each frame is annotated with 2D keypoints of 13 human joints. We use the same train and test split as Chao et al. (2017); Chiu et al. (2019). Following Chiu et al. (2019), we consider the initial velocity part of the input, and a single model is used for all actions. For a fair comparison with prior work, we report the ‘Percentage of Correct Keypoints’ metric with a 0.05 threshold (PCK@0.05), which assesses the accuracy of the predicted keypoints. A predicted keypoint is considered correct if it is within a 0.05 radius of the ground-truth, measured in normalized distance.
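A sketch of the metric (the normalizer below is our assumption for illustration; the benchmark fixes its own normalization convention):

```python
import numpy as np

def pck(pred, gt, norm, thresh=0.05):
    """PCK@thresh: fraction of keypoints whose normalized distance to the
    ground truth is within thresh. pred, gt: arrays of shape (n_joints, 2);
    `norm` is the normalization constant, e.g., a person's bounding-box size."""
    dist = np.linalg.norm(pred - gt, axis=-1) / norm
    return float(np.mean(dist <= thresh))

gt = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pred = gt + np.array([[1.0, 0.0], [0.0, 1.0], [20.0, 0.0], [0.0, 0.0]])
assert pck(pred, gt, norm=100.0) == 0.75  # 3 of 4 keypoints within 0.05 * norm
```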
Implementation details. Our non-chiral baseline model is a sequence-to-sequence model based on Martinez et al. (2017a). We made several modifications to match the hyper-parameters in Chiu et al. (2019), e.g., we used a Stacked-RNN Pascanu et al. (2014) with 2 layers and added dropout layers. Additionally, we utilize teacher forcing Williams and Zipser (1989) during training, while prior work did not. We find this stabilizes training and enables the use of the Adam Kingma and Ba (2015); Reddi et al. (2018) optimizer without diverging. We performed data augmentation via the chirality transform, i.e., with 0.5 probability we apply T^in and T^out to the input and the ground-truth, respectively. For our pose symmetric model, we replaced all the non-symmetric layers, i.e., fully connected layers and LSTM cells, with their corresponding chiral versions.

Results. In tab:penn_action_result, we report the performance of our models and the state-of-the-art. The baseline model without augmentation already outperforms the state-of-the-art Chiu et al. (2019). The gain comes from the use of the Stacked-LSTM and teacher forcing during training. With additional train- and test-time data augmentation, our baseline model further improves. In addition, our pose symmetric model outperforms the baseline in terms of average PCK@0.05. We observe more significant improvements for the first ten prediction steps.
Approach  Top1  Top5 

Feature Encoding Fernando et al. (2015)  14.9%  25.8% 
Deep LSTM Shahroudy et al. (2016)  16.4%  35.3% 
TemporalConv Kim and Reiter (2017)  20.3%  40.0% 
STGCN Yan et al. (2018)  30.7%  52.8% 
OursConv  30.8%  52.6% 
OursConvChiral  30.9%  53.0% 
4.3 Skeleton based action recognition
Task. Skeleton based action recognition aims at predicting human action based on skeleton sequences. See fig:task_ill (c) for an illustration.
Dataset and metric. We use the Kinetics-400 dataset Kay et al. (2017) in our experiments. The dataset contains 400 action classes and 306,245 clips in total. Following the experimental setup of Yan et al. (2018), we use OpenPose Cao et al. (2018) to locate the 18 human body joints. Each joint is represented as a tuple (x, y, c), where x and y are the 2D coordinates of the joint and c is the confidence score given by OpenPose. Following Kay et al. (2017), we report the top-1 and top-5 classification accuracy.
Implementation details. Our baseline model, ‘OursConv,’ follows ‘TemporalConv’ Kim and Reiter (2017), modified to have not only temporal but also spatial convolutions. The spatial convolution considers the intra-frame information, while the temporal convolution considers the inter-frame information. For the recognition task, we need chiral invariance, i.e., a chiral pair should be classified as the same action class. To this end, we use a chiral invariance layer where we let both J^out_l and J^out_r, as well as D^out_neg, be empty sets: there are no left and right joints but only center joints, and no dimension of the layer’s output is negated when applying the chirality transform. Recall that the chirality transform exchanges the left and right joints and negates the dimensions in the index set D^out_neg. Given that J^out_l, J^out_r, and D^out_neg are all empty, the output is trivially chiral invariant. For the chiral invariant model, ‘OursConvChiral,’ we replace all the non-symmetric layers before the chiral invariance layer with their corresponding chiral equivariant versions. All layers after the chiral invariance layer remain identical to the ‘OursConv’ model. There are in total 10 layers of spatial and temporal convolutions, and we place the chiral invariance layer at the fourth layer. We use the SGD optimizer with momentum, as in Yan et al. (2018).

Results. In tab:action_results, we report the action recognition performance of our models and prior skeleton-based approaches. We observe that the baseline model ‘OursConv’ performs on par with STGCN Yan et al. (2018), and the chiral invariant model, ‘OursConvChiral,’ outperforms both STGCN and ‘OursConv’ in top-1 and top-5 accuracy, achieving state-of-the-art performance on the Kinetics-400 dataset among skeleton-based action recognition methods.
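The chiral invariance layer can be checked in the same projection style used for the equivariant layers (a toy sketch with our own dimensions): with J^out_l, J^out_r, and D^out_neg empty, T^out is the identity, so equivariance degenerates to invariance:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy input transform: one left/right joint pair, 2D coords, x negated.
T_in = np.diag([-1.0, 1.0, -1.0, 1.0]) @ np.kron(
    np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2))

# With empty J_l^out, J_r^out and D_neg^out, T_out = I, so the equivariance
# condition W @ T_in == T_out @ W degenerates to W @ T_in == W: invariance.
W = rng.standard_normal((3, 4))
W_inv = 0.5 * (W + W @ T_in)  # projection (T_in is an involution)

x = rng.standard_normal(4)
assert np.allclose(W_inv @ (T_in @ x), W_inv @ x)  # chiral pair -> same features
```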
5 Conclusion
We introduce chirality equivariance for pose regression tasks and develop deep net layers that satisfy this property. Through parameter sharing and odd/even symmetry, we design equivariant versions of commonly used layers in deep nets, including fully connected, 1D convolution, LSTM/GRU cells, and batch normalization layers. With these equivariant layers at hand, we build Chirality Nets, which guarantee equivariance from the input to the output. Our models naturally lead to a reduction in trainable parameters and computation due to symmetry. Our experimental results on three human pose regression tasks over four datasets demonstrate stateoftheart performance and the wide practical impact of the proposed layers.
Acknowledgments: This work is supported in part by NSF under Grant No. 1718221 and MRI #1725729, UIUC, Samsung, 3M, Cisco Systems Inc. (Gift Award CG 1377144) and Adobe. We thank NVIDIA for providing GPUs used for this work and Cisco for access to the Arcetri cluster. RY is supported by a Google PhD Fellowship.
References
 Battaglia et al. (2018) P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
 Cao et al. (2018) Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh. OpenPose: real-time multi-person 2D pose estimation using Part Affinity Fields. arXiv preprint arXiv:1812.08008, 2018.
 Chao et al. (2017) Y.W. Chao, J. Yang, B. Price, S. Cohen, and J. Deng. Forecasting human dynamics from static images. In Proc. CVPR, 2017.
 Chen et al. (2018) Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun. Cascaded pyramid network for multiperson pose estimation. In Proc. CVPR, 2018.
 Chiu et al. (2019) H.-k. Chiu, E. Adeli, B. Wang, D.-A. Huang, and J. C. Niebles. Action-agnostic human pose forecasting. In Proc. WACV, 2019.
 Cho et al. (2014) K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. EMNLP, 2014.
 Cohen and Welling (2016) T. Cohen and M. Welling. Group equivariant convolutional networks. In Proc. ICML, 2016.
 Cohen et al. (2018) T. S. Cohen, M. Geiger, J. Köhler, and M. Welling. Spherical CNNs. In Proc. ICLR, 2018.
 Dalal and Triggs (2005) N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. CVPR, 2005.
 Fang et al. (2018) H.S. Fang, Y. Xu, W. Wang, X. Liu, and S.C. Zhu. Learning pose grammar to encode human body configuration for 3D pose estimation. In Proc. AAAI, 2018.
 Fernando et al. (2015) B. Fernando, E. Gavves, J. M. Oramas, A. Ghodrati, and T. Tuytelaars. Modeling video evolution for action recognition. In Proc. CVPR, 2015.
 Gens and Domingos (2014) R. Gens and P. M. Domingos. Deep symmetry networks. In Proc. NeurIPS, 2014.
 Gilmer et al. (2017) J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In Proc. ICML, 2017.
 Hamilton et al. (2017) W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
 He et al. (2017) K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Proc. ICCV, 2017.
 Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
 Hossain and Little (2018) M. R. I. Hossain and J. J. Little. Exploiting temporal information for 3D human pose estimation. In Proc. ECCV, 2018.
 Hu et al. (2017) Y.T. Hu, J.B. Huang, and A. G. Schwing. MaskRNN: Instance Level Video Object Segmentation. In Proc. NeurIPS, 2017.
 Hu et al. (2018) Y.T. Hu, J.B. Huang, and A. G. Schwing. VideoMatch: Matching based Video Object Segmentation. In Proc. ECCV, 2018.
 Hussein et al. (2019) N. Hussein, E. Gavves, and A. W. Smeulders. Timeception for complex action recognition. In Proc. CVPR, 2019.
 Ioffe and Szegedy (2015) S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, 2015.
 Ionescu et al. (2014) C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3D human sensing in natural environments. PAMI, 2014.
 Kay et al. (2017) W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
 Kim and Reiter (2017) T. S. Kim and A. Reiter. Interpretable 3D human action analysis with temporal convolutional networks. In Proc. CVPRW, 2017.
 Kingma and Ba (2015) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proc. ICLR, 2015.
 Kipf et al. (2018) T. Kipf, E. Fetaya, K.C. Wang, M. Welling, and R. Zemel. Neural relational inference for interacting systems. In Proc. ICML, 2018.
 Kipf and Welling (2017) T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proc. ICLR, 2017.
 LeCun et al. (1999) Y. LeCun, P. Haffner, L. Bottou, and Y. Bengio. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision, 1999.
 Lee et al. (2018) K. Lee, I. Lee, and S. Lee. Propagating LSTM: 3D pose estimation based on joint interdependency. In Proc. ECCV, 2018.
 Li et al. (2018) C. Li, Q. Zhong, D. Xie, and S. Pu. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. In Proc. IJCAI, 2018.

 Liu* et al. (2019) I.-J. Liu*, R. A. Yeh*, and A. G. Schwing. PIC: Permutation invariant critic for multi-agent deep reinforcement learning. In Proc. CoRL, 2019. (* equal contribution)
 Lowe (1999) D. G. Lowe. Object recognition from local scale-invariant features. In Proc. ICCV, 1999.

 Luvizon et al. (2018) D. C. Luvizon, D. Picard, and H. Tabia. 2D/3D pose estimation and action recognition using multitask deep learning. In Proc. CVPR, 2018.
 Martinez et al. (2017a) J. Martinez, M. J. Black, and J. Romero. On human motion prediction using recurrent neural networks. In Proc. CVPR, 2017.
 Martinez et al. (2017b) J. Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3D human pose estimation. In Proc. ICCV, 2017.
 Mikolajczyk and Schmid (2004) K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. IJCV, 2004.
 Minderer et al. (2019) M. Minderer, C. Sun, R. Villegas, F. Cole, K. Murphy, and H. Lee. Unsupervised learning of object structure and dynamics from videos. In Proc. NeurIPS, 2019.
 Altera Application Note (1998) Implementing FIR filters in FLEX devices. Altera Corporation, Feb. 1998. URL http://www.ee.ic.ac.uk/pcheung/teaching/ee3_dsd/fir.pdf.
 Pascanu et al. (2014) R. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio. How to construct deep recurrent neural networks. In Proc. ICLR, 2014.
 Pavlakos et al. (2017) G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse-to-fine volumetric prediction for single-image 3D human pose. In Proc. CVPR, 2017.
 Pavlakos et al. (2018) G. Pavlakos, X. Zhou, and K. Daniilidis. Ordinal depth supervision for 3D human pose estimation. In Proc. CVPR, 2018.
 Pavllo et al. (2019) D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli. 3D human pose estimation in video with temporal convolutions and semisupervised training. In Proc. CVPR, 2019.
 Qi et al. (2017) C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proc. CVPR, 2017.
 Ravanbakhsh et al. (2017) S. Ravanbakhsh, J. Schneider, and B. Poczos. Equivariance through parameter-sharing. In Proc. ICML, 2017.
 Reddi et al. (2018) S. Reddi, S. Kale, and S. Kumar. On the convergence of adam and beyond. In Proc. ICLR, 2018.
 Scarselli et al. (2009) F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Trans. Neural Netw., 2009.
 Shahroudy et al. (2016) A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proc. CVPR, 2016.
 Si et al. (2018) C. Si, Y. Jing, W. Wang, L. Wang, and T. Tan. Skeleton-based action recognition with spatial reasoning and temporal stack learning. In Proc. ECCV, 2018.
 Sigal et al. (2010) L. Sigal, A. O. Balan, and M. J. Black. Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. IJCV, 2010.
 Srivastava et al. (2014) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.
 Sun et al. (2017) X. Sun, J. Shang, S. Liang, and Y. Wei. Compositional human pose regression. In Proc. ICCV, 2017.
 Tekin et al. (2017) B. Tekin, P. MárquezNeila, M. Salzmann, and P. Fua. Learning to fuse 2D and 3D image cues for monocular body pose estimation. In Proc. ICCV, 2017.
 Tran et al. (2018) D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proc. CVPR, 2018.
 Vetterli et al. (2014) M. Vetterli, J. Kovačević, and V. K. Goyal. Foundations of signal processing. Cambridge University Press, 2014.
 Waibel et al. (1995) A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang. Phoneme recognition using time-delay neural networks. Backpropagation: Theory, Architectures and Applications, 1995.
 Williams and Zipser (1989) R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1989.
 Worrall et al. (2017) D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proc. CVPR, 2017.
 Yan et al. (2018) S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proc. AAAI, 2018.
 Yang et al. (2018) W. Yang, W. Ouyang, X. Wang, J. Ren, H. Li, and X. Wang. 3D human pose estimation in the wild by adversarial learning. In Proc. CVPR, 2018.

 Yeh et al. (2016) R. A. Yeh, M. Hasegawa-Johnson, and M. N. Do. Stable and symmetric filter convolutional neural network. In Proc. ICASSP, 2016.
 Yeh et al. (2019) R. A. Yeh, A. G. Schwing, J. Huang, and K. Murphy. Diverse generation for multi-agent sports games. In Proc. CVPR, 2019.
 Zaheer et al. (2017) M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep sets. In Proc. NeurIPS, 2017.

 Zhang et al. (2018) P. Zhang, J. Xue, C. Lan, W. Zeng, Z. Gao, and N. Zheng. Adding attentiveness to the neurons in recurrent neural networks. In Proc. ECCV, 2018.
 Zhang et al. (2013) W. Zhang, M. Zhu, and K. G. Derpanis. From actemes to action: A strongly-supervised representation for detailed action understanding. In Proc. ICCV, 2013.
Appendix A Code and Test Cases
In the supplemental material, we include a PyTorch implementation of the proposed layers. Each layer comes with unit tests validating the chirality equivariance. Please see the README.md for the directory structure, usage, and required dependencies. We also include a Jupyter notebook, and its HTML output, visualizing the concepts introduced in the paper.
Appendix B Additional Description for Equivariant Layers
b.1 Equivariant fully connected layers
Recall, we achieve equivariance through parameter sharing and odd symmetry.
A fully connected layer performs the mapping $f(x) = Wx + b$, where $x \in \mathbb{R}^{d_{\text{in}}}$, $W \in \mathbb{R}^{d_{\text{out}} \times d_{\text{in}}}$, and $b \in \mathbb{R}^{d_{\text{out}}}$. Let $T_{\text{in}}$ and $T_{\text{out}}$ denote the chirality transforms on the input and output spaces; both are involutions, i.e., $T_{\text{in}}^2 = I$ and $T_{\text{out}}^2 = I$. The parameter sharing and odd symmetry enforce
$$W T_{\text{in}} = T_{\text{out}} W \quad \text{and} \quad T_{\text{out}} b = b.$$
Here, we prove that this design is chirality equivariant, i.e., $f(T_{\text{in}} x) = T_{\text{out}} f(x)$.
Proof: By multiplying out the matrices,
$$f(T_{\text{in}} x) = W T_{\text{in}} x + b = T_{\text{out}} W x + T_{\text{out}} b = T_{\text{out}} (W x + b) = T_{\text{out}} f(x),$$
which proves the claim.
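The parameter-sharing constraint and the proof above can be checked numerically. The sketch below (NumPy; the helper names and the block layout of the transform are illustrative, not the paper's exact construction) builds involutory chirality transforms and projects an arbitrary weight matrix onto the constraint set $\{W : W T_{\text{in}} = T_{\text{out}} W\}$:

```python
import numpy as np

def chirality_transform(n_pairs, n_neg):
    """Signed permutation T: swap the left/right halves and negate the
    first n_neg dimensions of each half (so that T is an involution)."""
    d = 2 * n_pairs
    P = np.zeros((d, d))
    P[:n_pairs, n_pairs:] = np.eye(n_pairs)   # left block <- right block
    P[n_pairs:, :n_pairs] = np.eye(n_pairs)   # right block <- left block
    s = np.ones(d)
    s[:n_neg] = -1
    s[n_pairs:n_pairs + n_neg] = -1
    return np.diag(s) @ P

def share_parameters(W0, b0, T_in, T_out):
    """Project arbitrary (W0, b0) onto { (W, b) : W T_in = T_out W, T_out b = b }."""
    W = 0.5 * (W0 + T_out @ W0 @ T_in)
    b = 0.5 * (b0 + T_out @ b0)
    return W, b

rng = np.random.default_rng(0)
T_in, T_out = chirality_transform(4, 1), chirality_transform(3, 1)  # 8 -> 6 dims
W, b = share_parameters(rng.normal(size=(6, 8)), rng.normal(size=6), T_in, T_out)

x = rng.normal(size=8)
lhs = W @ (T_in @ x) + b      # f(T_in x)
rhs = T_out @ (W @ x + b)     # T_out f(x)
assert np.allclose(lhs, rhs)  # chirality equivariance holds by construction
```

Because the transforms are involutions, averaging $W_0$ with $T_{\text{out}} W_0 T_{\text{in}}$ is a projection onto the equivariant subspace, so equivariance holds exactly for any initialization.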
b.2 Equivariant 1D convolution layers
1D convolution layers Waibel et al. (1995); LeCun et al. (1999): pose-symmetric 1D convolution layers can be based on fully connected layers. A 1D convolution is a fully connected layer with parameters shared across the time dimension, i.e., at each time step the computation is a sum of fully connected layers over a window of size $k$:
$$y_t = \sum_{j=0}^{k-1} W_j x_{t+j} + b.$$
Consequently, we enforce equivariance at each time step by employing the symmetry pattern of fully connected layers at each time slice, i.e., $W_j T_{\text{in}} = T_{\text{out}} W_j$ for all $j$. The bias of a 1D convolution is identical to that of a fully connected layer, i.e., the same bias is added at each time step; hence the same parameter sharing, $T_{\text{out}} b = b$, is used.
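The per-time-slice sharing can be sketched the same way: each kernel slice $W_j$ is projected onto the equivariant subspace, and the resulting convolution then commutes with the chirality transform applied at every time step (NumPy sketch; helper names are illustrative):

```python
import numpy as np

def chirality_transform(n_pairs, n_neg):
    """Involutory signed permutation, as in the fully connected case."""
    d = 2 * n_pairs
    P = np.zeros((d, d))
    P[:n_pairs, n_pairs:] = np.eye(n_pairs)
    P[n_pairs:, :n_pairs] = np.eye(n_pairs)
    s = np.ones(d)
    s[:n_neg] = -1
    s[n_pairs:n_pairs + n_neg] = -1
    return np.diag(s) @ P

rng = np.random.default_rng(0)
T_in, T_out = chirality_transform(4, 1), chirality_transform(3, 1)

k = 3                                    # temporal kernel width
W = rng.normal(size=(k, 6, 8))           # one weight matrix per time offset
# Project every time slice W_j onto { W_j : W_j T_in = T_out W_j }.
W = 0.5 * (W + np.einsum('oi,kij,jl->kol', T_out, W, T_in))

def conv1d(x):
    """Valid 1D convolution over a (time, features) sequence."""
    return np.stack([sum(W[j] @ x[t + j] for j in range(k))
                     for t in range(len(x) - k + 1)])

x = rng.normal(size=(10, 8))             # a length-10 input sequence
# Transforming every time step of the input transforms every time step of the output.
assert np.allclose(conv1d(x @ T_in.T), conv1d(x) @ T_out.T)
```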
b.3 Equivariant LSTM and GRU layers
LSTM and GRU modules that satisfy chirality equivariance can be obtained from fully connected layers. However, naïvely constraining all matrix multiplications within an LSTM to satisfy the equivariance property will not lead to an equivariant LSTM, because the gates are multiplied elementwise with the cell state: if both the gate and the cell negate their output when the input is negated, their product will not. Therefore, we change the weight sharing scheme for the gates. We set the negation index set of the gates to be the empty set, i.e., the gates are invariant to negation of the input but still equivariant to the switch of left and right. With this setup, the product of the gates and the cell's output preserves the sign, as the gates are invariant to negation and passed through a sigmoid to lie within the range $(0, 1)$. GRU modules are modified in the same manner.
More formally, the computations in an LSTM module are as follows:
$$i_t = \sigma(W_{ii} x_t + W_{hi} h_{t-1} + b_i) \quad \text{(Input Gate)}$$
$$o_t = \sigma(W_{io} x_t + W_{ho} h_{t-1} + b_o) \quad \text{(Output Gate)}$$
$$f_t = \sigma(W_{if} x_t + W_{hf} h_{t-1} + b_f) \quad \text{(Forget Gate)}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{ic} x_t + W_{hc} h_{t-1} + b_c) \quad \text{(Cell State)}$$
$$h_t = o_t \odot \tanh(c_t) \quad \text{(Recurrent State)}$$
where $\sigma$ denotes an elementwise sigmoid nonlinearity and $\odot$ denotes elementwise multiplication.
Observe that the LSTM operations consist of fully connected layers. For the cell state's parameters, $W_{ic}$, $W_{hc}$, and $b_c$, we follow the weight sharing scheme discussed for fully connected layers.
Due to the multiplication in the cell state, we redesign the parameter sharing for the input, output, and forget gates to be invariant to negation by setting the negation index set to be empty: no dimension is negated. The gate parameters $W_{ii}, W_{hi}, b_i$, $W_{io}, W_{ho}, b_o$, and $W_{if}, W_{hf}, b_f$ hence follow the fully connected sharing scheme with an empty negation set.
This LSTM is chirality equivariant, as the computation of the cell state is equivariant, and the remaining computations are compositions of chirality equivariant operations, which remain equivariant. The chirality equivariant GRU module is obtained by following the same sharing scheme for its gates.
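A scalar toy example illustrates why the gates need an empty negation set: if the gate's pre-activation flips sign with the input, the gated product is not odd in the input, whereas a negation-invariant gate restores the odd symmetry (an illustrative sketch with arbitrary weights, not the full LSTM):

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
w_g, w_c, x = 0.5, 0.8, 0.7     # arbitrary gate/cell weights and input

# gated output: sigmoid(gate pre-activation) * tanh(cell pre-activation)
out = lambda gate_pre, x: sig(gate_pre) * np.tanh(w_c * x)

# Naive sharing: the gate pre-activation flips sign with x,
# so the gated product is NOT odd in x.
assert not np.isclose(out(w_g * (-x), -x), -out(w_g * x, x))

# Invariant gate (empty negation set): the gate pre-activation is
# unchanged under negation, so the gated product is odd, as required.
assert np.isclose(out(w_g * x, -x), -out(w_g * x, x))
```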
b.4 Equivariant batchnorm layers
A batch normalization layer performs an elementwise standardization, followed by an elementwise affine transform (with learnable parameters $\gamma$ and $\beta$):
$$y = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta.$$
Equivariance for $\gamma$ and $\beta$ is obtained by following the principle applied to fully connected layers, i.e., via parameter sharing and odd symmetry: $\gamma$ is shared between corresponding left and right dimensions, and $\beta$ additionally satisfies $T \beta = \beta$.
Equivariance for $\mu$ and $\sigma$ is obtained by computing the mean and standard deviation on the "augmented batch" and by keeping track of their running averages. Formally, given a batch of data $\{x_n\}_{n=1}^{N}$, the augmented batch is $\{x_n\}_{n=1}^{N} \cup \{T x_n\}_{n=1}^{N}$, over which $\mu$ and $\sigma$ are computed.
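The augmented-batch statistics can be sketched as follows (NumPy; `chirality_transform` and the pair/negation layout are illustrative assumptions). Computing $\mu$ and $\sigma$ over the batch together with its chiral transform makes $\mu$ a fixed point of the transform and $\sigma$ symmetric under the left/right permutation, so the standardization commutes with the transform:

```python
import numpy as np

def chirality_transform(x, n_pairs, n_neg):
    """Apply the chiral transform along the last axis: swap left/right halves
    and negate the first n_neg dimensions of each half."""
    y = np.concatenate([x[..., n_pairs:], x[..., :n_pairs]], axis=-1)
    y[..., :n_neg] *= -1
    y[..., n_pairs:n_pairs + n_neg] *= -1
    return y

rng = np.random.default_rng(1)
batch = rng.normal(size=(32, 8))                       # 32 samples, 8 features
aug = np.concatenate([batch, chirality_transform(batch, 4, 1)], axis=0)
mu, sigma = aug.mean(axis=0), aug.std(axis=0)          # augmented-batch statistics

# Standardizing a transformed sample equals transforming the standardized sample.
z = batch[0]
lhs = (chirality_transform(z[None], 4, 1)[0] - mu) / sigma
rhs = chirality_transform(((z - mu) / sigma)[None], 4, 1)[0]
assert np.allclose(lhs, rhs)
```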
b.5 Dropout.
At test time, dropout scales the input by $1 - p$, where $p$ is the dropout probability. The equivariance property is satisfied because scalar multiplication commutes with the (linear) chirality transform. The input and output dimensions and symmetry of a dropout layer are identical; therefore, $T_{\text{in}}$ and $T_{\text{out}}$ are identical, which we denote by $T$. From the definition,
$$f(T x) = (1 - p) \, T x = T \big( (1 - p) x \big) = T f(x).$$
Hence, a dropout layer naturally satisfies the equivariance property. At training time, we do not enforce equivariance for the dropped units, i.e., we do not jointly drop symmetric units, as we found independent dropping to help prevent overfitting. This choice is likely application dependent.
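Since the test-time behavior is just multiplication by a scalar, the commutation can be verified directly (toy transform; the value of $p$ is arbitrary):

```python
import numpy as np

p = 0.5                            # an arbitrary dropout probability
scale = 1.0 - p                    # classical test-time dropout scaling
T = np.array([[0, -1],
              [-1, 0]])            # toy chirality transform: swap two dims, negate both
x = np.array([0.3, -1.2])

# Scalar multiplication commutes with the linear transform T.
assert np.allclose(scale * (T @ x), T @ (scale * x))
```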
Appendix C Additional Results
c.1 3D pose estimation
In tab:supp_eva_quan, we report results on HumanEva-I for multi-action models evaluated under Protocol 1 (MPJPE). Our approach benefits most from the Boxing action while maintaining the performance on the other actions. We also provide qualitative results in fig:supp_eva_walk and fig:supp_eva_box, and observe that our model successfully estimates 3D poses from 2D keypoints. We have also attached animations in the supplemental material.
App.                      Walk (S1/S2/S3)      Jog (S1/S2/S3)       Box (S1/S2/S3)       Avg.
Pavllo et al. (2019)      17.6 / 12.5 / 37.6   28.1 / 19.1 / 19.2   29.5 / 44.0 / 43.1   33.3
Pavllo et al. (2019) ()   17.5 / 12.3 / 37.4   27.7 / 19.0 / 19.0   27.7 / 43.4 / 42.5   33.0
Ours                      18.9 / 12.3 / 38.1   28.5 / 18.1 / 18.2   27.1 / 40.9 / 40.2   32.2
c.2 Skeleton based action recognition
In fig:supp_action, we visualize input skeleton sequences computed by OpenPose Cao et al. (2018) together with the action class predicted by our chirality invariant skeleton-based action recognition model.
Appendix D Implementation Details
d.1 3D pose estimation
Implementation details. Our model follows the temporal convolutional architecture proposed by Pavllo et al. (2019), replacing all layers with their chiral versions; code for the layers is attached in the supplemental material as well. We also changed ReLU to tanh to achieve chirality equivariance. For the temporal models, we follow their 4-block design, which has a receptive field of 243 frames. For the single frame model, we follow their 3-block design. These models all contain 1020 hidden dimensions, a multiple of the number of joints (17); this is slightly smaller than the 1024 used in Pavllo et al. (2019). We also use their data processing and batching strategy as described in Section 5 and Appendix A.5 of Pavllo et al. (2019). For training, we utilize the Adam optimizer with beta1 = 0.9 and beta2 = 0.9999. We decay the batch normalization momentum as suggested in Pavllo et al. (2019). Other details follow the publicly available implementation by Pavllo et al. (2019). We enforce chirality equivariance by choosing the negation index set to be a fraction of the hidden dimensions. For both the input and the output layer, the number of negated dimensions is 17, one for each joint.
d.2 2D pose forecasting
Implementation details.
The non-chirality-equivariant baseline is a seq2seq model consisting of an encoder and a decoder, which are stacked LSTMs with a hidden size of 1040 and 2 stacked layers. We train using teacher forcing with the Adam optimizer, a batch size of 256, and 30 epochs. Dropout is applied to the LSTMs' hidden layers with a drop probability of 0.5. Following prior work, we use max-norm gradient clipping of 5 and a learning rate of 0.005 with a decay of 0.95 every 2 epochs. The data processing and evaluation setting follow Chiu et al. (2019); other details follow the publicly available implementation by Chiu et al. (2019). We enforce chirality equivariance by choosing the negation index set to be a fraction of the hidden dimensions, as the output is two dimensional per joint.
d.3 Skeleton-based action recognition
Implementation details. The non-chiral version of the model, OursConv, follows TemporalConv Kim and Reiter (2017), which we modified to perform not only temporal but also spatial convolution. There are ten spatial-temporal convolution blocks; in each block we first perform spatial convolution and then temporal convolution. The temporal convolution considers the inter-frame information while the spatial convolution considers the intra-frame information. For the recognition task, we need chirality invariance, i.e., a chiral pair should be classified as the same action class. To this end, we use a chirality invariance layer where we let the left and right joint index sets, as well as the set of negated dimensions, be empty sets: there are no left and right joints, only center joints, and no dimension of the layer's output is negated when applying the chirality transform. Since the chirality transform exchanges the left and right joints and negates the dimensions in the negation index set, and all three sets are empty, the output is trivially chirality invariant. For the chirality invariant model, OursConvChiral, we replace all non-symmetric layers before the chirality invariance layer with their corresponding chirality equivariant versions. All layers after the chirality invariance layer remain identical to the OursConv model. Similar to Kim and Reiter (2017), there are in total 10 convolution blocks in OursConv, and we place the chirality invariance layer at the fourth layer. We also gradually reduce the ratio of negated dimensions over the first three layers. We use the SGD optimizer with the same momentum as in Yan et al. (2018) and a batch size of 256, and train the model for 90 epochs.