From Third Person to First Person: Dataset and Baselines for Synthesis and Retrieval

12/01/2018
by Mohamed Elfeki, et al.

First-person (egocentric) and third-person (exocentric) videos are drastically different in nature. The relationship between these two views has been studied in recent years, but it has yet to be fully explored. In this work, we introduce two datasets (synthetic and natural/real) containing simultaneously recorded egocentric and exocentric videos. We also relate the two domains in two ways. First, we synthesize images in the egocentric domain from the exocentric domain using a conditional generative adversarial network (cGAN). We show that with enough training data, our network can hallucinate how the world would look from an egocentric perspective, given an exocentric video. Second, we address the cross-view retrieval problem: given an egocentric query frame (or its momentary optical flow), we retrieve the corresponding exocentric frame (or optical flow) from a gallery set. We show that synthetic data can be beneficial for retrieving real data, and that domain adaptation from the synthetic to the natural/real domain helps in such retrieval tasks. We believe that the presented datasets and the proposed baselines offer new opportunities for further research in this direction. The code and dataset are publicly available.
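To make the synthesis baseline concrete, below is a minimal pix2pix-style sketch of exocentric-to-egocentric image translation with a cGAN. The abstract does not specify the paper's architecture, so the encoder-decoder generator, patch discriminator, L1 weight, and 64x64 image size here are all illustrative assumptions.

# Minimal cGAN sketch for exo-to-ego image translation (pix2pix-style).
# Architecture and hyperparameters are assumptions, not the paper's model.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, down=True):
    """4x4 stride-2 block; down=True halves the spatial size, else doubles it."""
    layer = (nn.Conv2d(c_in, c_out, 4, 2, 1) if down
             else nn.ConvTranspose2d(c_in, c_out, 4, 2, 1))
    return nn.Sequential(layer, nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class Generator(nn.Module):
    """Encoder-decoder mapping an exocentric frame to an egocentric one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 64), conv_block(64, 128), conv_block(128, 256),
            conv_block(256, 128, down=False), conv_block(128, 64, down=False),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
    def forward(self, exo):
        return self.net(exo)

class Discriminator(nn.Module):
    """Patch discriminator over the concatenated (exo, ego) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(6, 64), conv_block(64, 128),
            nn.Conv2d(128, 1, 4, 1, 1))  # per-patch real/fake logits
    def forward(self, exo, ego):
        return self.net(torch.cat([exo, ego], dim=1))

G, D = Generator(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

# Stand-in for one simultaneously recorded (exocentric, egocentric) batch.
exo, ego = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)

# Discriminator step: push real pairs toward 1, generated pairs toward 0.
fake = G(exo)
d_real, d_fake = D(exo, ego), D(exo, fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D, plus an L1 reconstruction term (weight assumed).
d_fake = D(exo, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ego)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()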
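The retrieval baseline can likewise be sketched as a two-branch embedding model followed by nearest-neighbor search over the gallery. The branch architecture and embedding dimension below are assumptions, since the abstract does not describe the retrieval network; the same setup would apply to optical-flow inputs by changing the input channel count.

# Minimal cross-view retrieval sketch: embed an egocentric query and an
# exocentric gallery into a shared space, then rank by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Small CNN mapping a frame (or flow field) to a unit-norm vector."""
    def __init__(self, in_channels=3, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, dim)
    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # unit norm: dot product = cosine

ego_net, exo_net = Embedder(), Embedder()  # one branch per view

gallery = torch.randn(100, 3, 64, 64)  # stand-in exocentric gallery frames
query = torch.randn(1, 3, 64, 64)      # stand-in egocentric query frame

with torch.no_grad():
    sims = ego_net(query) @ exo_net(gallery).T  # (1, 100) cosine scores
    ranked = sims.squeeze(0).argsort(descending=True)
print("top-5 retrieved gallery indices:", ranked[:5].tolist())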
