Few-shot Video-to-Video Synthesis

10/28/2019
by Ting-Chun Wang, et al.

Video-to-video synthesis (vid2vid) aims at converting an input semantic video, such as videos of human poses or segmentation masks, to an output photorealistic video. While the state-of-the-art of vid2vid has advanced significantly, existing approaches share two major limitations. First, they are data-hungry: numerous images of a target human subject or scene are required for training. Second, a learned model has limited generalization capability. A pose-to-human vid2vid model can only synthesize poses of the single person in the training set; it does not generalize to other humans not in the training set. To address these limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time. Our model achieves this few-shot generalization capability via a novel network weight generation module utilizing an attention mechanism. We conduct extensive experimental validations with comparisons to strong baselines using several large-scale video datasets, including human-dancing videos, talking-head videos, and street-scene videos. The experimental results verify the effectiveness of the proposed framework in addressing the two limitations of existing vid2vid approaches.
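
To make the weight-generation idea concrete, here is a minimal PyTorch-style sketch of one plausible reading of the abstract: a few example images of the target are encoded, an attention map conditioned on the current semantic frame aggregates their features, and a small MLP emits a weight vector for the synthesis network. This is not the authors' implementation; the FewShotWeightGenerator class, all module names, and all feature sizes are illustrative assumptions.

    # Hypothetical sketch of attention-based network weight generation
    # for few-shot vid2vid; sizes and structure are assumptions, not the
    # paper's actual architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FewShotWeightGenerator(nn.Module):
        def __init__(self, feat_dim=64, weight_dim=128):
            super().__init__()
            # Encoder for the K example (appearance) images.
            self.example_enc = nn.Sequential(
                nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1))
            # Encoder for the current semantic frame (the attention query).
            self.semantic_enc = nn.Sequential(
                nn.Conv2d(1, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1))
            # MLP mapping the aggregated appearance feature to a flat
            # weight vector consumed by the image synthesis network.
            self.to_weights = nn.Sequential(
                nn.Linear(feat_dim, weight_dim), nn.ReLU(),
                nn.Linear(weight_dim, weight_dim))

        def forward(self, examples, semantic):
            # examples: (B, K, 3, H, W) few example images of the target
            # semantic: (B, 1, H, W) current input semantic frame
            B, K = examples.shape[:2]
            ex = self.example_enc(examples.flatten(0, 1))     # (B*K, C, h, w)
            C, h, w = ex.shape[1:]
            ex = ex.view(B, K, C, h * w)                      # (B, K, C, hw)
            q = self.semantic_enc(semantic).view(B, C, h * w) # (B, C, hw)
            # Attention: score each example against the query, then
            # softmax over the K examples at every spatial location.
            attn = torch.einsum('bkcs,bcs->bks', ex, q)       # (B, K, hw)
            attn = F.softmax(attn, dim=1)
            agg = (ex * attn.unsqueeze(2)).sum(dim=1)         # (B, C, hw)
            agg = agg.mean(dim=-1)                            # (B, C) pooled
            return self.to_weights(agg)                       # (B, weight_dim)

    if __name__ == "__main__":
        gen = FewShotWeightGenerator()
        w = gen(torch.randn(2, 3, 3, 64, 64), torch.randn(2, 1, 64, 64))
        print(w.shape)  # torch.Size([2, 128])

The key design point this sketch illustrates is that the generated weights depend on both the example images and the current semantic input, so the synthesis network can be adapted to an unseen subject at test time without any fine-tuning.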


Related research

07/18/2023 · PixelHuman: Animatable Neural Radiance Fields from Few Images
In this paper, we propose PixelHuman, a novel human rendering model that...

11/30/2020 · Adaptive Compact Attention For Few-shot Video-to-video Translation
This paper proposes an adaptive compact attention model for few-shot vid...

08/20/2018 · Video-to-Video Synthesis
We study the problem of video-to-video synthesis, whose goal is to learn...

10/21/2022 · AROS: Affordance Recognition with One-Shot Human Stances
We present AROS, a one-shot learning approach that uses an explicit repr...

01/13/2022 · MetaDance: Few-shot Dancing Video Retargeting via Temporal-aware Meta-learning
Dancing video retargeting aims to synthesize a video that transfers the ...

04/24/2019 · A General Framework for Edited Video and Raw Video Summarization
In this paper, we build a general summarization framework for both of ed...

06/30/2023 · DisCo: Disentangled Control for Referring Human Dance Generation in Real World
Generative AI has made significant strides in computer vision, particula...
