Modeling Cross-view Interaction Consistency for Paired Egocentric Interaction Recognition

03/24/2020 ∙ by Zhongguo Li, et al. ∙ University of South Carolina ∙ IEEE ∙ Tianjin University

With the development of Augmented Reality (AR), egocentric action recognition (EAR) plays an important role in accurately understanding the demands of the user. However, EAR is designed to recognize human-machine interaction from a single egocentric view, and thus has difficulty capturing the interactions between two face-to-face AR users. Paired egocentric interaction recognition (PEIR) is the task of collaboratively recognizing the interactions between two persons from the videos in their corresponding views. Unfortunately, existing PEIR methods directly apply a linear decision function to fuse the features extracted from the two corresponding egocentric videos, which ignores the consistency of the interaction across the paired videos: the interactions in the paired videos are consistent, so the features extracted from them are correlated with each other. Building on this observation, we propose to model the relevance between the two views using bilinear pooling, which captures their consistency at the feature level. Specifically, each neuron in the feature maps from one view is connected to every neuron from the other view, which guarantees compact consistency between the two views; all possible paired neurons are then used for PEIR based on the consistent information they carry. For efficiency, we use compact bilinear pooling with Count Sketch to avoid directly computing the outer product. Experimental results on the PEV dataset show the superiority of the proposed method on the PEIR task.




1 Introduction

Due to the advance of Augmented Reality (AR) techniques, wearable AR devices like Microsoft HoloLens allow users to interact with the real world via gestures or voice commands. Egocentric action recognition (EAR) [li2019deep, Choutas2018PoTion, Sudhakaran2018LSTA, ryoo2015pooled] is the task of recognizing the action or gesture of users to achieve intelligent human-machine interaction. In this paper, we study the further problem of Paired Egocentric Interaction Recognition (PEIR), which recognizes the interactions between face-to-face AR users [syahputra2018interaction, karambakhsh2019deep, pan2018and]. Different from EAR, which only considers one egocentric video, PEIR needs to consider the paired face-to-face egocentric videos simultaneously. Utilizing paired egocentric videos recorded from face-to-face views yields more precise recognition than egocentric videos from single views [yonetani2016recognizing]. Enabling AR systems to understand the interactions between persons can provide more precise assistance in daily life. For example, when one user points at an object and the other shares his/her attention, the AR system could read the interaction and respond without explicit commands.

Figure 1: An illustration of the consistency of interactions between a pair of egocentric views. Here the right hand of person A, labeled by red boxes, is recorded in both the views of A and B.

Previous works have made many efforts on EAR [li2019deep, Choutas2018PoTion, Sudhakaran2018LSTA, ryoo2015pooled]. Ryoo et al. [ryoo2015pooled] try to capture both the entire scene dynamics and the salient local motions observed in videos to predict interactions. Li et al. [li2019deep] try to model the relationship between the camera wearer and the interactor. Starting from EAR methods, a naive solution to PEIR is to directly fuse the features from the two views into a single-head output. For instance, one can concatenate the features extracted from the paired egocentric videos and directly use the resulting feature for linear classification. To date, few works have tried to solve the PEIR problem. In [yonetani2016recognizing], Yonetani et al. adopt a linear decision function to combine two kinds of handcrafted features for PEIR.

In our view, naive feature fusion ignores the interaction consistency between the two videos taken from the two persons' views. As shown in Fig. 1, we consider two persons A and B, whose paired views are shown as A and B respectively. On the one hand, the tilting of A's head can be recorded in the view of B, and also leads to a shift of A's viewpoint, because the cameras are mounted over their heads. On the other hand, interactions may occur in common areas recorded in both the views of A and B. In both cases, there is explicit or implicit consistency of interaction information between the views of A and B.

Due to the consistency of interactions in the two views, neurons of the feature maps corresponding to the two views should describe the same information. We propose to classify interactions based on the consistent information represented by all possible paired neurons. Specifically, we use bilinear pooling, a second-order polynomial kernel function [gao2016compact], to capture the compact consistent information of the interaction in all pairs. To avoid directly computing the expensive outer product, we use compact bilinear pooling with Count Sketch to reduce the computation cost. We first extract features from the paired egocentric videos, then obtain the sketches mentioned above and transform them into the Fourier domain. Finally, we compute their element-wise product and transform the result back into the real domain for linear classification. Experimental results on the PEV dataset show that the proposed method outperforms methods using naive fusion.

2 Related Works

Paired egocentric interaction recognition (PEIR), extended from egocentric action recognition (EAR), aims at recognizing the interaction between two face-to-face persons from both of their views. Along with the successes of deep learning in image-level tasks [ISI:000457843607093, ISI:000476809700007], EAR has made great progress in recent years by using deep neural networks. For example, Li et al. [li2019deep] model the relationship between the camera wearer and the interactor. However, EAR assumes that only one person wears a camera. PEIR was first proposed by Yonetani et al. [yonetani2016recognizing], where interactions such as subtle head motions or small hand actions used in communication are recognized from paired face-to-face views; the video for each view is collected by a camera mounted over the head. To recognize interactions, Yonetani et al. [yonetani2016recognizing] combine two kinds of hand-crafted features – PoTCD [poleg2014temporal] for the head and iDT [wang2013action] for the body – from the two views with linear decision models. However, this approach ignores the consistency of interactions in the two views and tends to rely on only one view for prediction. Different from naive fusion such as concatenation, we propose to leverage bilinear pooling to model the consistency between the paired egocentric videos.

Bilinear pooling [lin2015bilinear] was proposed for fine-grained image classification. Bilinear methods have been used to fuse two kinds of features extracted by deep neural networks. Given two features $x, y \in \mathbb{R}^n$, bilinear methods compute the outer product of them by

$z = \mathrm{vec}(x \otimes y), \qquad (1)$

where $\otimes$ denotes the outer product of tensors and $\mathrm{vec}(\cdot)$ vectorizes a matrix, i.e., the elements of the matrix are arranged in order into a vector. The features generated by bilinear methods are then directly used for classification. However, when $n$ is large, the generated features are of dimension $n^2$. Such high-dimensional features make directly computing the outer product for linear classification very expensive. Gao et al. [gao2016compact] reduce the dimension of the generated feature from $n^2$ to $d$, where $d \ll n^2$. Following this idea, Fukui et al. [fukui2016multimodal] fuse visual features and text features for visual question answering and visual grounding.
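The outer-product construction and its polynomial-kernel property can be illustrated with a short NumPy sketch (toy dimensions and variable names are ours, not the paper's):

```python
import numpy as np

# Bilinear pooling of two feature vectors: z = vec(x outer y).
n = 4
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
y = rng.standard_normal(n)

z = np.outer(x, y).ravel()   # vectorized outer product, dimension n * n
assert z.shape == (n * n,)

# The bilinear feature realizes a second-order polynomial kernel:
# <vec(x1 outer y1), vec(x2 outer y2)> = <x1, x2> * <y1, y2>.
x2 = rng.standard_normal(n)
y2 = rng.standard_normal(n)
lhs = np.outer(x, y).ravel() @ np.outer(x2, y2).ravel()
rhs = (x @ x2) * (y @ y2)
assert np.isclose(lhs, rhs)
```

The kernel identity is what makes the Count Sketch approximation of the next sections applicable: inner products between bilinear features factor into products of ordinary inner products.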

3 Method

3.1 Problem formulation

Denote the person who starts the interaction as A and the one who receives the interaction as B, as shown in Fig. 1, and their recorded videos (taken by the wearable cameras mounted over their heads) as $V_A$ and $V_B$, respectively. PEIR tries to learn to recognize the interaction from such paired egocentric videos based on the hypothesis $F(V_A, V_B)$.

The naive method to tackle PEIR is directly fusing the information from the two views, as in [yonetani2016recognizing], and can be formulated as

$p = \sigma\big(W f(x_A, x_B)\big), \qquad (2)$

where $x_A$ and $x_B$ are the flattened feature maps extracted by a Convolutional Neural Network (CNN), $f(x_A, x_B)$ is the fused feature of $x_A$ and $x_B$, $W$ is the weight of the linear classifier, and $\sigma$ is the softmax activation function. The key step is the design of the fusing function $f$ to effectively combine the information from the two views. Common choices of the fusing function include

1) concatenation: $f(x_A, x_B) = [x_A; x_B]$,

2) element-wise summation: $f(x_A, x_B) = x_A + x_B$, and

3) element-wise product: $f(x_A, x_B) = x_A \odot x_B$.

These three common fusing methods are easy to implement but do not consider the possible interaction consistency between the two views. In the following, we propose a new method to address this problem.
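The three common fusing choices can be sketched as one-liners (toy vectors; the helper names are ours):

```python
import numpy as np

# The three naive fusing functions f(x_A, x_B) on flattened feature maps.
def fuse_concat(xa, xb):
    return np.concatenate([xa, xb])   # dimension 2n

def fuse_sum(xa, xb):
    return xa + xb                    # dimension n

def fuse_product(xa, xb):
    return xa * xb                    # element-wise product, dimension n

xa = np.array([1.0, 2.0, 3.0])
xb = np.array([4.0, 5.0, 6.0])
assert fuse_concat(xa, xb).shape == (6,)
assert np.allclose(fuse_sum(xa, xb), [5.0, 7.0, 9.0])
assert np.allclose(fuse_product(xa, xb), [4.0, 10.0, 18.0])
```

Note that none of these fusions forms a cross-term between $x_A[i]$ and $x_B[j]$ for $i \neq j$, which is exactly the limitation the bilinear method removes.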

3.2 Main Method

Figure 2: Framework of the proposed method. Given paired egocentric videos from views A and B, we first extract features from both videos, then calculate their sketches; finally we obtain the sketch of the outer product and transform it into the Fourier domain for classification. DFFT is the discrete fast Fourier transform.

The consistency of interactions exists in the paired egocentric videos as the common visual information of co-viewed objects, i.e., $x_A$ and $x_B$ are correlated. We first enumerate all pairs between $x_A$ and $x_B$ by

$z_{(i-1)n+j} = g\big(x_{A,i}, x_{B,j}\big), \quad i, j = 1, \dots, n, \qquad (6)$

where $z$ represents the consistent information in the form of pairs and $z_{(i-1)n+j}$ represents the $(i,j)$-th pair between $x_A$ and $x_B$; $x_{A,i}$ is the $i$-th element in $x_A$ and $g$ is the element-wise form of $f$. By Eq. (6), we construct element-wise information correlation between view A and view B, which yields a fine-grained interaction consistency representation $z$. In Eq. (6), if the product operator is adopted for $g$, then $z$ is exactly the outer product of $x_A$ and $x_B$, and Eq. (6) becomes the bilinear method described in Eq. (1). The procedure of the compact bilinear method is shown in Alg. 1 in the appendix. To reduce the computation cost, we use the compact bilinear method [pham2013fast, gao2016compact, fukui2016multimodal] to shrink the dimension of $z$ from $n^2$ to $d$, where $d \ll n^2$; the full framework is shown in Fig. 2. Gao et al. [gao2016compact] prove that the bilinear method is actually a second-order polynomial kernel function and use Count Sketch (called sketch in the remainder of this paper) [gao2016compact, pagh2013compressed] to reduce the computation cost.

In detail, we first calculate the sketches of the features $x_A$ and $x_B$. We define $(h, s)$ as follows: each element of $h \in \{1, \dots, d\}^n$ is uniformly randomly sampled from $\{1, \dots, d\}$ and each element of $s \in \{-1, +1\}^n$ is uniformly randomly sampled from $\{-1, +1\}$. We denote by $P \in \mathbb{R}^{n \times d}$ a matrix generated according to $h$, with each element defined as

$P_{ij} = \mathbb{1}[h_i = j].$

For a feature $x$, we first compute the element-wise product of $s$ and $x$, similar to the traditional Random Maclaurin method [kar2012random] for approximating the polynomial kernel. Then we compute the matrix product of this element-wise product and the matrix $P$ to get the sketch of the feature,

$\psi(x) = (s \odot x)^{\top} P,$

which projects $x$ from $\mathbb{R}^n$ into $\mathbb{R}^d$; the randomness of $(h, s)$ provides bounds on the variance of the estimates to guarantee reliability.

After that, we transform the sketches into the Fourier domain using the discrete fast Fourier transform (DFFT) and compute the element-wise product of the two sketches in the Fourier domain. It has been proved [pham2013fast] that the sketch of the outer product in the Fourier domain is exactly the element-wise product of $x_A$'s sketch and $x_B$'s sketch, both in the Fourier domain, so computing the outer product can be replaced by an element-wise product in the Fourier domain. Finally we obtain $z$ in the original domain by transforming the result back with the inverse DFFT ($\mathrm{DFFT}^{-1}$), i.e.,

$z = \mathrm{DFFT}^{-1}\big(\mathrm{DFFT}(\psi(x_A)) \odot \mathrm{DFFT}(\psi(x_B))\big).$
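A minimal NumPy sketch of this pipeline, assuming one independent $(h, s)$ pair per view as in Tensor Sketch [pham2013fast] (the helper names and toy sizes $n = 16$, $d = 8$ are ours), verifies that the FFT route agrees with sketching the outer product directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 8   # feature dimension n, sketch dimension d (toy sizes)

# One (h, s) pair per input: h maps each index to a bucket in {0,...,d-1},
# s holds random signs in {-1, +1}.
h1, h2 = rng.integers(0, d, n), rng.integers(0, d, n)
s1, s2 = rng.choice([-1.0, 1.0], n), rng.choice([-1.0, 1.0], n)

def count_sketch(x, h, s):
    """Project x from R^n to R^d: bucket j accumulates s_i * x_i over h_i == j."""
    out = np.zeros(d)
    np.add.at(out, h, s * x)
    return out

x_a, x_b = rng.standard_normal(n), rng.standard_normal(n)

# Sketch of the outer product via the Fourier domain:
# z = IDFFT( DFFT(psi(x_A)) * DFFT(psi(x_B)) ).
psi_a = count_sketch(x_a, h1, s1)
psi_b = count_sketch(x_b, h2, s2)
z = np.fft.ifft(np.fft.fft(psi_a) * np.fft.fft(psi_b)).real

# Reference: sketch vec(x_a outer x_b) directly, using the induced hash
# h(i, j) = (h1_i + h2_j) mod d and sign s(i, j) = s1_i * s2_j.
z_ref = np.zeros(d)
for i in range(n):
    for j in range(n):
        z_ref[(h1[i] + h2[j]) % d] += s1[i] * s2[j] * x_a[i] * x_b[j]

assert np.allclose(z, z_ref)   # the two computations agree
```

The agreement is exact (up to floating point): the FFT product implements a circular convolution of the two sketches, which is precisely the Count Sketch of the outer product under the induced hash and sign functions.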


3.3 Discussion

Although the concatenation method and the element-wise operations can also be written in the form of Eq. (2), they cannot describe the consistency of interaction in the two views, for the following reasons.

Concatenation. For the concatenation method, the fusing function is $f(x_A, x_B) = [x_A; x_B]$, and the logits of the PEIR prediction can be written as

$W [x_A; x_B] = W_A x_A + W_B x_B,$

where $W = [W_A, W_B]$. Since the first $n$ elements of $f(x_A, x_B)$ are all from $x_A$ and the rest are all from $x_B$, the predicted probability factorizes into two separate terms. Obviously, the feature $x_A$ is independent of $x_B$ in the final prediction, so the concatenation method does not consider the correlation between $x_A$ and $x_B$.

Element-wise product. The element-wise product is similar to Eq. (6), and the logits can be written as

$W (x_A \odot x_B) = \sum_{i} W_{\cdot i}\, x_{A,i}\, x_{B,i},$

where only feature elements with the same index are correlated.

Element-wise summation. If we adopt element-wise summation as $f$, we have

$W (x_A + x_B) = W x_A + W x_B.$

Similar to the concatenation-based fusion, the features $x_A$ and $x_B$ are independent of each other in the final prediction.

Ours. We adopt the bilinear pooling operation as $f$, which means the product operator is selected as $g$ in Eq. (6), i.e.,

$W\, \mathrm{vec}(x_A \otimes x_B) = \sum_{i}\sum_{j} W_{\cdot, (i-1)n+j}\, x_{A,i}\, x_{B,j},$

where the logits of PEIR are predicted based on the paired elements between $x_A$ and $x_B$. Each element in $x_A$ is correlated with all the elements in $x_B$ for the final prediction.
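The separability arguments above can be checked numerically; this small sketch (toy sizes and names are ours) confirms that concatenation and summation logits decompose additively, while bilinear pooling feeds every index pair into the classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 3   # toy feature dimension and number of classes
xa, xb = rng.standard_normal(n), rng.standard_normal(n)

# Concatenation: the logits W [x_A; x_B] split into two independent terms,
# so x_A never interacts with x_B before the softmax.
W = rng.standard_normal((c, 2 * n))
Wa, Wb = W[:, :n], W[:, n:]
logits_concat = W @ np.concatenate([xa, xb])
assert np.allclose(logits_concat, Wa @ xa + Wb @ xb)

# Element-wise summation separates the same way: W(x_A + x_B) = W x_A + W x_B.
Ws = rng.standard_normal((c, n))
assert np.allclose(Ws @ (xa + xb), Ws @ xa + Ws @ xb)

# Bilinear pooling: each logit is a weighted sum over all n*n products
# x_A[i] * x_B[j], so no additive split into a term per view exists.
Wz = rng.standard_normal((c, n * n))
logits_bilinear = Wz @ np.outer(xa, xb).ravel()
assert logits_bilinear.shape == (c,)
```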

Modality View Method d Split1 Split2 Split3 Avg Acc
RGB A - - 78.50 77.94 76.19 77.54
RGB B - - 73.52 68.73 71.75 71.33
RGB A+B Avg - 80.50 80.55 76.80 79.28
RGB A+B Svm - 84.96 78.70 80.40 81.35
RGB A+B Concat 2048 82.25 80.89 79.30 80.81
RGB A+B Ours 2048 84.35 83.09 83.47 83.63
OF A - - 84.96 81.35 84.24 83.51
OF B - - 79.74 78.00 78.46 78.73
OF A+B Avg - 86.01 82.62 85.86 84.83
OF A+B Svm - 89.40 83.42 87.97 86.93
OF A+B Concat 2048 87.37 87.56 88.21 87.71
OF A+B Ours 2048 88.10 89.06 89.12 88.76
RGB A+B Sum 1024 82.99 79.14 79.09 80.41
RGB A+B Product 1024 84.35 79.10 72.81 78.75
RGB A+B Ours 1024 86.08 81.34 83.78 83.73
OF A+B Sum 1024 87.24 87.26 87.64 87.78
OF A+B Product 1024 87.30 88.44 88.34 88.03
OF A+B Ours 1024 89.33 88.81 88.37 88.84
Table 1: Accuracy (%) on the validation sets with RGB or OpticalFlow input. 'd' is the dimension of the fused feature; '-' marks single-view or score-fusion methods without a fused feature.

4 Experiments

Modality View Method d Avg Acc
PoTCD+IDT A+B Yonetani et al.[yonetani2016recognizing] - 69.2
TwoStream A+B Svm - 87.27
TwoStream A+B Concat 2048 87.72
TwoStream A+B Ours 2048 88.91
TwoStream A+B Product 1024 85.17
TwoStream A+B Sum 1024 84.41
TwoStream A+B Ours 1024 88.45
Table 2: Mean accuracy of TwoStream nets.
Figure 3: Confusion matrix of our method, averaged over the three validation sets.
Figure 4: Accuracy for each category while input is RGB (left) or OpticalFlow (right). ‘A+B’ means both views are used.
Figure 5: Top 10 percent integrated gradients of our model in validation sets. The yellow dots show the model’s focuses.

4.1 Experimental details

Dataset and evaluation metric.

The PEV dataset, proposed in [yonetani2016recognizing], contains 1,226 pairs of videos covering 7 interaction categories. Each paired video is collected by cameras mounted on the heads of two persons standing face to face. We adopt three-fold cross validation for evaluation and report results on all three validation splits in Table 1.

Model settings. We adopt I3D [carreira2017quo] as the backbone, pretrained on Kinetics-400. $n$, the dimension of $x_A$ and $x_B$, is set to 1,024 as in the backbone. For fair comparison, $d$, the dimension of the fused feature $z$, is set to 2,048 as in the concatenation method, or to 1,024 as in the element-wise product and summation methods. Cross entropy is chosen as the loss function. Standard SGD is used to train the neural networks, with the learning rate set to 0.1.

Normalization. $x_A$ and $x_B$ do not need further normalization, since they are batch normalized in the backbone and fed into a ReLU [glorot2010deep] layer; we only scale them by a constant factor.

Data Processing. We sample one frame for every three frames. Sampled frames are randomly cropped and scaled before being fed to the network. Horizontal flipping is then used for data augmentation. The clip length is set to 32 frames, and the last frame is repeated if a clip is shorter than 32.
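The sampling and padding steps can be sketched as follows (helper names are ours; the crop and flip augmentations are omitted):

```python
import numpy as np

# Keep one frame in every three, then pad to a fixed clip length by
# repeating the last frame, as described in the data processing step.
CLIP_LEN = 32

def make_clip(frames):
    """frames: array of shape (T, H, W, C); returns (CLIP_LEN, H, W, C)."""
    sampled = frames[::3]                       # one frame every three frames
    if len(sampled) < CLIP_LEN:
        pad = np.repeat(sampled[-1:], CLIP_LEN - len(sampled), axis=0)
        sampled = np.concatenate([sampled, pad], axis=0)
    return sampled[:CLIP_LEN]

# Toy video of 45 frames, each frame filled with its own index.
video = np.arange(45, dtype=float).reshape(45, 1, 1, 1)
clip = make_clip(video)
assert clip.shape == (CLIP_LEN, 1, 1, 1)
assert clip[1, 0, 0, 0] == 3.0    # every third frame kept
assert clip[-1, 0, 0, 0] == 42.0  # short clip padded with the last frame
```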

4.2 Results

We explore different fusion methods under the RGB and OpticalFlow modalities. We choose feature-fusion methods including concatenation (Concat), element-wise product (Product) and element-wise summation (Sum) as baselines. We also report results of methods using averaging (Avg) or SVM for late fusion, i.e., training a classifier for each view first and then fusing the scores as the final result. Results are shown in Table 1.

Comparisons. All methods using the two views together outperform models that use only one view. SVM is consistently a better choice for fusing decision scores. Element-wise product achieves a higher score than element-wise summation in the OpticalFlow modality, but a lower one in the RGB modality. While under the RGB modality the end-to-end concatenation method does not outperform the score-fusion method (e.g., SVM), the result is the opposite when the input is OpticalFlow. Under the OpticalFlow modality, element-wise product is surprisingly better than concatenation or element-wise summation, with half the parameters in the last fully connected layer compared with the concatenation-based method. Our method performs best in both the RGB and OpticalFlow modalities: with $d$ set to 2,048, it outperforms the second best method by 2.28% in the RGB modality and by 1.05% in the OpticalFlow modality. When $d$ is set to 1,024, our method outperforms the second best by 3.32% in the RGB modality, but by only 0.81% (over element-wise product) in the OpticalFlow modality. A possible explanation is that only motion information is recorded in the OpticalFlow modality, so consistency involving appearance cannot be built there. When following the two-stream network [simonyan2014two] in using both RGB and OpticalFlow as input, our method still achieves the highest score among the fusing methods, as shown in Table 2.

Confusion matrix and category-grained accuracy. The confusion matrix of our method is shown in Fig. 3, and per-category accuracies in Fig. 4. The proposed method performs best or very close to the best in each category, while the other methods are less stable. We believe this is because our method makes its final prediction only when evidence is found in both paired egocentric videos. For the RGB modality, on the most difficult action the score-fusion SVM method performs best, with our method second. For the OpticalFlow modality, the proposed method achieves the highest score on the most difficult action.

4.3 Visualization

We use Integrated Gradients [sundararajan2017axiomatic] to visualize the model, as shown in Fig. 5. The yellow dots in the frames represent the focus of the model. For one category, the model focuses on person A's hand, and dots at the edges of the frame show that the model attends to these pixels to catch shifts of attention; a similar pattern appears in a related category. For the object-related actions, the visualization shows that the model successfully focuses on the hand and the object. For two further actions, the focus moves to A's and B's upper bodies. An interesting result occurs for the remaining action, where the model focuses on B's head, possibly because B responds to A with a subtle shake of the head.

5 Conclusion

In this paper, we proposed to build the relevance between paired egocentric videos for interaction recognition. We observed that the consistency of interactions exists in the paired videos, so the features extracted from them are correlated with each other. We proposed to use bilinear pooling to capture all possible consistent information represented in pairs of elements between the features from the two views. Moreover, we used compact bilinear pooling with Count Sketch to reduce the computational complexity. Experiments showed that our method achieves state-of-the-art performance on the PEV dataset.