Modeling Cross-view Interaction Consistency for Paired Egocentric Interaction Recognition

03/24/2020
by   Zhongguo Li, et al.
With the development of Augmented Reality (AR), egocentric action recognition (EAR) plays an important role in accurately understanding a user's demands. However, EAR is designed to recognize human-machine interaction from a single egocentric view, making it difficult to capture interactions between two face-to-face AR users. Paired egocentric interaction recognition (PEIR) is the task of collaboratively recognizing the interactions between two persons from the videos in their corresponding views. Unfortunately, existing PEIR methods directly use a linear decision function to fuse the features extracted from the two corresponding egocentric videos, which ignores the consistency of the interaction across the paired videos: the interactions in paired videos are consistent, so the features extracted from them are correlated. Building on this observation, we propose to model the relevance between the two views using bilinear pooling, which captures the consistency of the two views at the feature level. Specifically, each neuron in the feature maps from one view is connected to every neuron from the other view, which guarantees a compact consistency between the two views. All possible neuron pairs are then used for PEIR to exploit the consistent information they contain. For efficiency, we use compact bilinear pooling with Count Sketch to avoid directly computing the outer product of bilinear pooling. Experimental results on the PEV dataset show the superiority of the proposed method on the PEIR task.
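The abstract's efficiency trick relies on a standard property of Count Sketch: the sketch of an outer product of two vectors equals the circular convolution of their individual sketches, computable in the FFT domain. Below is a minimal, self-contained sketch of this compact bilinear pooling step in NumPy; the feature dimensions, hash seeds, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def count_sketch(x, h, s, d):
    # Count Sketch projection: y[h[i]] += s[i] * x[i]
    # h maps each input index to one of d buckets; s is a random +/-1 sign.
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x1, x2, d, seed=0):
    """Approximate the Count Sketch of the outer product x1 x2^T
    without materializing it, via FFT-domain circular convolution
    (Tensor Sketch). x1, x2 are per-view feature vectors; d is the
    output dimension (an assumed hyperparameter)."""
    rng = np.random.default_rng(seed)
    h1 = rng.integers(0, d, x1.size)          # hash for view 1
    h2 = rng.integers(0, d, x2.size)          # hash for view 2
    s1 = rng.choice([-1.0, 1.0], x1.size)     # signs for view 1
    s2 = rng.choice([-1.0, 1.0], x2.size)     # signs for view 2
    y1 = count_sketch(x1, h1, s1, d)
    y2 = count_sketch(x2, h2, s2, d)
    # Circular convolution of the two sketches = sketch of the outer
    # product, done as elementwise multiplication in the FFT domain.
    return np.real(np.fft.ifft(np.fft.fft(y1) * np.fft.fft(y2)))

# Usage: fuse two 512-dim per-view features into one 4096-dim feature.
fused = compact_bilinear(np.random.rand(512), np.random.rand(512), d=4096)
```

The payoff is memory: a direct outer product of two 512-dim features is 512 x 512 = 262,144 values, while the sketched fusion is a single d-dim vector whose inner products approximate those of the full bilinear feature.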


