Adaptive Compact Attention For Few-shot Video-to-video Translation

11/30/2020
by Risheng Huang, et al.

This paper proposes an adaptive compact attention model for few-shot video-to-video translation. Existing works in this domain use only pixel-wise attention features and ignore the correlations among multiple reference images, which leads to heavy computation yet limited performance. We therefore introduce a novel adaptive compact attention mechanism that efficiently extracts contextual features jointly from multiple reference images; the encoded view-dependent and motion-dependent information significantly benefits the synthesis of realistic videos. Our core idea is to extract compact basis sets from all the reference images as higher-level representations. To further improve reliability, we also propose a method based on Delaunay triangulation that automatically selects the most informative references according to the input label at inference time. We extensively evaluate our method on a large-scale talking-head video dataset and a human dancing dataset; the experimental results show that our method produces photorealistic and temporally consistent videos and achieves considerable improvements over the state-of-the-art method.
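To make the compact-attention idea concrete, here is a minimal PyTorch sketch (not the authors' code): the N reference feature maps are summarised into K basis vectors, and the query frame attends only to those bases rather than to every reference pixel. The module name, the 1x1-convolution projections, and the number of bases are assumptions made for this illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactAttention(nn.Module):
    """Sketch of compact attention over multiple reference images.

    Instead of pixel-wise attention over all reference pixels, the
    references are pooled into K basis vectors that the query attends to.
    Names and hyper-parameters are illustrative assumptions.
    """

    def __init__(self, channels: int, num_bases: int = 64):
        super().__init__()
        self.basis_logits = nn.Conv2d(channels, num_bases, kernel_size=1)
        self.query_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = 1.0 / math.sqrt(channels)

    def forward(self, query_feat: torch.Tensor, ref_feats: torch.Tensor) -> torch.Tensor:
        # query_feat: (B, C, H, W)    features of the current input label/frame
        # ref_feats:  (B, N, C, H, W) features of N reference images
        b, n, c, h, w = ref_feats.shape
        refs = ref_feats.reshape(b * n, c, h, w)

        # Soft-assign every reference pixel to one of K bases, then pool.
        logits = self.basis_logits(refs).reshape(b, n, -1, h * w)       # (B, N, K, HW)
        logits = logits.permute(0, 2, 1, 3).reshape(b, -1, n * h * w)   # (B, K, N*HW)
        assign = F.softmax(logits, dim=-1)
        refs_flat = ref_feats.permute(0, 2, 1, 3, 4).reshape(b, c, n * h * w)  # (B, C, N*HW)
        bases = torch.einsum('bkp,bcp->bkc', assign, refs_flat)         # (B, K, C)

        # The query frame attends to the K compact bases only.
        q = self.query_proj(query_feat).reshape(b, c, h * w)            # (B, C, HW)
        attn = F.softmax(torch.einsum('bcp,bkc->bpk', q, bases) * self.scale, dim=-1)
        out = torch.einsum('bpk,bkc->bcp', attn, bases).reshape(b, c, h, w)
        return out
```

With K much smaller than N*H*W, the attention cost scales with the number of bases rather than with the total number of reference pixels, which is the efficiency argument made in the abstract.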
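The reference-selection step can likewise be sketched with an off-the-shelf Delaunay triangulation (scipy.spatial.Delaunay). The 2-D embedding of each reference (e.g. pose coordinates), the choice of three returned references, and the nearest-neighbour fallback below are assumptions for illustration only; the paper's actual selection rule may differ.

```python
import numpy as np
from scipy.spatial import Delaunay

def select_references(ref_points: np.ndarray, query_point: np.ndarray) -> np.ndarray:
    """Sketch of Delaunay-based reference selection.

    ref_points:  (M, 2) embeddings of the M available reference images
    query_point: (2,)   embedding of the input label
    Returns the indices of the references forming the triangle that encloses
    the query, or the 3 nearest references if the query lies outside the hull.
    """
    tri = Delaunay(ref_points)                     # triangulate reference embeddings
    simplex = tri.find_simplex(query_point[None])  # triangle containing the query (-1 if outside)
    if simplex[0] >= 0:
        return tri.simplices[simplex[0]]           # indices of the 3 enclosing references
    # Fallback: query outside the convex hull -> pick the 3 nearest references.
    dists = np.linalg.norm(ref_points - query_point, axis=1)
    return np.argsort(dists)[:3]
```

Selecting the enclosing triangle gives references that bracket the query pose from complementary directions, which is one plausible reading of "resourceful references" in the abstract.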


Related research

10/28/2019 – Few-shot Video-to-Video Synthesis
Video-to-video synthesis (vid2vid) aims at converting an input semantic ...

08/31/2023 – GHuNeRF: Generalizable Human NeRF from a Monocular Video
In this paper, we tackle the challenging task of learning a generalizabl...

07/31/2023 – Towards Unbalanced Motion: Part-Decoupling Network for Video Portrait Segmentation
Video portrait segmentation (VPS), aiming at segmenting prominent foregr...

04/26/2022 – Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams
Devising intelligent agents able to live in an environment and learn by ...

04/16/2019 – What I See Is What You See: Joint Attention Learning for First and Third Person Video Co-analysis
In recent years, more and more videos are captured from the first-person...

01/14/2020 – Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation
Synthesizing realistic videos of humans using neural networks has been a...

02/27/2020 – BBAND Index: A No-Reference Banding Artifact Predictor
Banding artifact, or false contouring, is a common video compression imp...
