AR/VR technology is finally starting to flourish with the advent of consumer head-mounted devices such as Oculus, Ray-Ban Stories, and HoloLens. These devices have the potential to fundamentally change our daily lives and our society, much as smartphones did in previous decades. To enable such AR/VR applications, one of the fundamental challenges that must be solved is egocentric action recognition – machine understanding of users' activities from head-mounted cameras.
With the progress of modern computer vision technologies, the now-familiar approach to action recognition is to train convolutional neural networks (CNNs) in a supervised manner on millions of video clips that are manually categorized into egocentric actions. This approach, however, has at least two limitations. First, annotating enough video clips to train CNNs is very expensive. Second, even with an unlimited budget, we could not cover all of the actions that humans can perform.
One of the promising ways to address these limitations is to train CNNs using self-supervised learning (SSL) [arandjelovic1712objects, chen2020simple], which has been making rapid progress in recent years. Instead of relying on human annotations, SSL utilizes intrinsic properties of the data (e.g., invariance under data augmentation [chen2020simple], multi-modality of the data [arandjelovic1712objects], etc.) to train representations for various downstream tasks, including recognition. Inspired by these approaches, we are particularly interested in using head-motion data as a self-supervision signal for egocentric action recognition. Our intuition for leveraging head-motion for SSL of egocentric video representations is based on two main factors. First, head-motion is inherently related to users' activities. For instance, head and gaze movements usually precede picking-up/putting-down actions; similarly, head-motion can reveal locomotion and changes of focus (see Figure 2). Second, head-motion data is easily accessible in AR/VR application scenarios via affordable, on-board IMU sensors on egocentric devices.
To harness the potential of head-motion data for SSL of egocentric video representations, a few fundamental questions need to be answered. Does head-motion data carry unique information that is not captured by the egocentric video representation? If so, what is an effective way to utilize the useful signals in head-motion to benefit egocentric video representation learning? Finally, do the learned representations work better than those trained with SSL on video-only data? In this work, we systematically answer these research questions. We empirically show that head-motion can provide additional advantages over video even for fully supervised learning. We then design a simple but effective SSL approach that learns egocentric video representations by classifying pairs of videos and head-motion data based on their correspondence (Figure 1). We train our model on the EPIC-KITCHENS dataset using this approach and show the effectiveness of the resulting representation on the downstream task of classifying actions in kitchen tasks. Furthermore, we also leverage the same representation to recognize dog-centric activities induced by dogs' head motion, demonstrating that our learned representation generalizes beyond the training domain.
SSL task formulation
Inspired by the limitations of labeled datasets, we aim to learn egocentric video representations using SSL for AR/VR applications. In particular, we want to leverage the multimodal data available in AR/VR – egocentric video and head-motion captured by a head-mounted camera with IMU sensors. SSL usually utilizes a proxy task to train representations without human annotations. For instance, we can learn image representations with a contrastive loss by maximizing the agreement between two different augmented views of the same image [chen2020simple]. That is, given a pair of randomly augmented images, their representations are encouraged to be similar if they are from the same image, and not if from different images. An extension of this for a multimodal case is to train on the correspondence between two modalities such as audio and video [arandjelovic1712objects]. Inspired by this audio-visual SSL framework, we propose a binary classification task to match the correspondence between egocentric video and IMU signals of head-motion captured by a head-mounted camera for learning egocentric video representations in AR/VR.
To train representations using the above SSL task, we randomly sample a batch of short (2 seconds in our experiments) video clips synchronized with head-motion signals captured by head-mounted IMU sensors. We then extract the feature vectors of video and IMU, compute the pairwise similarities, and encourage the similarities to be high only if they are from the same clip (Figure 1). Specifically, given a batch with $N$ pairs of video and head-motion feature vectors $(v_i, m_i)$ from the CNNs, we minimize the following contrastive loss:

$$\mathcal{L} = -\frac{1}{2N} \sum_{i=1}^{N} \left[ \log\frac{\exp(\mathrm{sim}(v_i, m_i))}{\sum_{k=1}^{N} \exp(\mathrm{sim}(v_i, m_k))} + \log\frac{\exp(\mathrm{sim}(v_i, m_i))}{\sum_{k=1}^{N} \exp(\mathrm{sim}(v_k, m_i))} \right],$$

where $\mathrm{sim}(\cdot,\cdot)$ is the cosine similarity. The first term matches each video clip against all head-motion clips in the batch, and the second term matches each head-motion clip against all video clips.
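The symmetric contrastive loss above can be sketched as follows (a minimal NumPy illustration of the formula, not the actual training code, which uses CNN feature extractors):

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def correspondence_loss(v, m):
    """Symmetric contrastive loss over a batch of N (video, head-motion)
    feature pairs: the i-th video clip should match the i-th motion clip."""
    n = v.shape[0]
    s = np.exp(cosine_sim(v, m))                # (N, N) similarity matrix
    # Video-to-motion direction: softmax over each row.
    v2m = np.log(np.diag(s) / s.sum(axis=1))
    # Motion-to-video direction: softmax over each column.
    m2v = np.log(np.diag(s) / s.sum(axis=0))
    return -(v2m + m2v).sum() / (2 * n)
```

As a sanity check, the loss is lower when the video and motion features in a batch are correctly paired than when they are shuffled.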
After the SSL training, we can use the video representation (and also head-motion representation if needed) for downstream tasks such as action recognition.
3.1 Dataset and backbone details
We use the EPIC-KITCHENS [Damen2021RESCALING] dataset for all experiments except the last one. For the final experiment, we use the DogCentric Activity dataset [iwashita2014first] to show that our approach generalizes beyond the training dataset. For EPIC-KITCHENS, we select the video clips accompanied by corresponding IMU signals of head (camera) motion, and create our own data split of train:validation:test = 30044:3032:4379 based on video ids, with no overlapping subjects among the splits. This split has 65 unique test verbs, so the random-guess baseline achieves an accuracy of 1.5%. However, due to the biased distribution of actions, the most frequent action (take) accounts for 27% of the test set. For experiments on the DogCentric Activity dataset, we choose the activity categories related to head-motion: Walk, Shake, Look at left, and Look at right. These four actions are nearly balanced, with the majority class (Walk) accounting for 30% of the dataset. This dataset is small (216 video clips in total, 86 after our selection), so we split it in half based on dog ids, perform 2-fold cross-validation, and report the mean accuracy.
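The subject-disjoint splitting described above can be sketched as follows (a minimal illustration; `clips` and `groups` are hypothetical stand-ins for clip records and their subject/dog ids):

```python
def group_split(clips, groups, held_out):
    """Split clips by subject (or dog) id so that no id appears in both
    splits -- the property our EPIC-KITCHENS split and the 2-fold
    DogCentric cross-validation both rely on."""
    train = [c for c, g in zip(clips, groups) if g not in held_out]
    test = [c for c, g in zip(clips, groups) if g in held_out]
    return train, test
```

For the 2-fold DogCentric protocol, the same function is called twice with the two halves of the dog ids swapped between train and test.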
To train representations with the SSL loss described in eq. 2, we use SlowFast50 as the backbone CNN for video and VGG16 for the head-motion IMU signals. The spatiotemporal input size of the video CNN corresponds to the width, height, and number of frames (at a frame rate of 24 fps). A raw IMU clip is represented as a matrix whose dimensions correspond to time (at a frequency of 198 Hz) and channels (XYZ axes of the accelerometers and gyroscopes). Our handling of IMU signals follows [laput2019sensing], where ordinary image classification CNNs can be applied after extracting spectrograms.
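The spectrogram extraction step can be sketched as below (a simplified NumPy version; the window and hop lengths are illustrative assumptions, not the values used in our experiments):

```python
import numpy as np

def imu_spectrogram(imu, win=64, hop=32):
    """Per-channel magnitude spectrogram of a raw IMU clip.

    imu: (T, 6) array of time x channels (accelerometer XYZ + gyroscope
    XYZ), sampled at ~198 Hz. Returns (6, F, S): one spectrogram "image"
    per channel, which can then be fed to an image classification CNN
    in the spirit of [laput2019sensing].
    """
    t, c = imu.shape
    starts = range(0, t - win + 1, hop)
    window = np.hanning(win)          # taper each frame to reduce leakage
    spec = np.stack([
        np.abs(np.stack([np.fft.rfft(imu[s:s + win, ch] * window)
                         for s in starts], axis=1))
        for ch in range(c)
    ])
    return spec                       # shape (channels, win//2 + 1, frames)
```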
3.2 Evaluating the utility of head-motion signals
Our goal is to utilize head-motion to learn better egocentric video representation for action recognition. However, since video is a rich modality with high fidelity of information, is there any room left for the head motion signals to improve the video representation for action recognition? To answer this question, we perform two preliminary experiments.
The first experiment is to train an action classifier from head-motion signals alone and compare it with a video-only classifier. We expect the video-based classifier to achieve higher action classification accuracy. However, if some classes can be correctly classified only from head-motion signals, this would imply that head-motion indeed has an advantage over video for at least those classes. We show the classification results for the five most frequent actions (verbs) in Tables 1 and 2. The video classifier has higher accuracy on average, as expected. However, some action clips are correctly classified only from head-motion (Table 2). Moreover, we also build a simple ensemble model by averaging the probability vectors (outputs of the softmax function) of the two classifiers, and confirm an improvement in overall accuracy (Table 1). These results demonstrate an advantage of the head-motion signals over video.
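The late-fusion ensemble is straightforward, as the following sketch shows (a minimal NumPy illustration; the actual classifiers are CNNs):

```python
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ensemble_predict(video_logits, motion_logits):
    """Late fusion: average the per-class probability vectors of the
    video and head-motion classifiers, then take the argmax."""
    p = 0.5 * (softmax(video_logits) + softmax(motion_logits))
    return p.argmax(axis=1)
```

Note that a confident head-motion prediction can override an uncertain video prediction, which is exactly how the ensemble improves over video alone.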
| Video CNN Pretrained on | Kinetics | EPIC-KITCHENS |
|---|---|---|
| Freeze Video CNN | 87.33 | 86.44 |
| Update Video CNN | 92.8 | 94.73 |
The second experiment tests whether existing video representations (CNN features pretrained on Kinetics) already capture head-motion information. This question is important because, if video representations pretrained without head-motion already contain all the information that can be extracted from head-motion, we cannot add any value to the video representation by using head-motion. To answer this question, we initialize the video CNN with pretrained weights from Kinetics or EPIC-KITCHENS, and compare the accuracy of our SSL task of matching the correspondence between video and head-motion in two settings. In the first setting, we train our model (Figure 1) with a frozen pretrained video CNN and update only the head-motion CNN weights. In the second setting, we update both the video and head-motion CNN weights. We compare the resulting ROC-AUC scores of the SSL correspondence classification task for both settings – without and with updating the video CNN weights (Table 3). If the head-motion information were already embedded in the pretrained video representation, updating the video CNN weights would provide no gain. Instead, we see increased performance for CNNs pretrained on both Kinetics and EPIC-KITCHENS, indicating that there is still room to improve the video representation by utilizing head-motion. Note that we use ROC-AUC instead of plain accuracy because most of the pairs are negative correspondences (always predicting negative already achieves high plain accuracy).
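ROC-AUC can be understood as the probability that a randomly chosen positive pair is scored above a randomly chosen negative pair, which makes it robust to the heavy class imbalance of the correspondence task (only $N$ of the $N^2$ pairs in a batch are positives). A minimal sketch of this rank-based formulation:

```python
def roc_auc(scores, labels):
    """ROC-AUC as the probability that a random positive is scored
    above a random negative (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An always-negative classifier scores 0.5 under this metric regardless of the class ratio, unlike plain accuracy.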
| Video CNN Pretraining | Accuracy (%) |
|---|---|
| Fully-Supervised on Kinetics | 36.58 |
| Self-Supervised on EPIC-KITCHENS (Ours) | 41.94 |
| Fully-Supervised on EPIC-KITCHENS | 55.61 |
3.3 Leveraging head-motion for action recognition
3.3.1 EPIC-KITCHEN action classification using SSL pretrained representation
Once pretrained with our SSL task, the learned video CNN can serve as a generic video representation backbone for downstream tasks such as egocentric action classification. To test the effectiveness of our SSL-learned video representations, we train a linear classifier of multiclass logistic regression (i.e., softmax regression) on top of the learned video representation. We also train the same linear classifier on top of representations learned with fully supervised training for action classification on Kinetics and EPIC-KITCHENS, and compare the results (Table 4). The classifier using the SSL representation achieves an accuracy of 41.94%. This is higher than the accuracy (27.01%) of the Kinetics representation, and lower than the fully supervised EPIC-KITCHENS counterpart (55.61%). While our SSL representation pretraining is thus effective, it still lags behind the fully supervised counterpart; we wish to close this gap in future work.
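The linear-probe protocol can be sketched as follows (a minimal NumPy softmax-regression trainer on frozen features; batch gradient descent with illustrative hyperparameters, not our exact training setup):

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.1, epochs=200):
    """Multiclass logistic (softmax) regression on top of frozen
    features -- the linear-probe protocol used to evaluate a learned
    representation. Plain batch gradient descent on cross-entropy."""
    n, d = feats.shape
    w = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        z = feats @ w + b
        z -= z.max(axis=1, keepdims=True)      # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / n                # dL/dz for cross-entropy
        w -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return w, b

def probe_accuracy(w, b, feats, labels):
    return ((feats @ w + b).argmax(axis=1) == labels).mean()
```

Because only the linear head is trained, the probe accuracy directly reflects the quality of the frozen backbone features.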
| Video CNN Pretraining | Accuracy (%) |
|---|---|
| Fully-Supervised on Kinetics | 46.98 |
| Self-Supervised on EPIC-KITCHENS (Ours) | 54.21 |
3.3.2 Generalization to DogCentric actions
We also want to see whether the representation learned with our SSL task (eq. 2) generalizes beyond the training domain of kitchens. To test this, we train a linear classifier (multiclass logistic/softmax regression) on the DogCentric Activity dataset using our SSL representation pretrained on EPIC-KITCHENS. We show the results in Table 5. While the classifier based on the Kinetics representation achieves an accuracy of 46.98%, our SSL representation achieves 54.21%. This indicates the effectiveness of our SSL approach beyond the training domain.
We explored self-supervised learning (SSL) of video representations by leveraging the multimodal egocentric video streams and IMU-captured head-motion available in AR/VR applications. While video carries much richer information, there is still room to improve the video representation using head-motion. Our SSL task, which simply classifies the correspondence between video and head-motion, can train representations that are effective for egocentric action recognition.