Automatic Portrait Video Matting via Context Motion Network

09/10/2021
by Qiqi Hou, et al.
Automatic portrait video matting is an under-constrained problem. Most state-of-the-art methods exploit only semantic information and process each frame individually; their performance suffers from the lack of temporal information between frames. To address this, we propose the context motion network, which leverages both semantic and motion information. To capture motion, we estimate the optical flow and design a context-motion updating operator that recurrently integrates features across frames. Our experiments show that our network significantly outperforms state-of-the-art matting methods on the Video240K SD dataset.
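The abstract does not give implementation details, but the core idea of the context-motion updating operator can be illustrated with a minimal NumPy sketch: warp the previous frame's hidden features along the estimated optical flow, then blend them with the current frame's features. The function names (`warp`, `context_motion_update`), the nearest-neighbor warping, and the fixed blending gate are all assumptions for illustration, not the paper's actual operator.

```python
import numpy as np

def warp(feat, flow):
    # Backward-warp a feature map (H, W, C) by a flow field (H, W, 2)
    # using nearest-neighbor sampling (a simplification; real models
    # typically use bilinear sampling).
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return feat[src_y, src_x]

def context_motion_update(hidden, feat_t, flow, gate=0.5):
    # Hypothetical recurrent update: align the previous hidden state
    # with the current frame via the flow, then fuse with the current
    # frame's features. A learned gate (e.g. a ConvGRU) would replace
    # the fixed scalar here.
    warped = warp(hidden, flow)
    return gate * warped + (1.0 - gate) * feat_t

# Toy run over a short clip with identity motion.
H, W, C = 4, 4, 8
hidden = np.zeros((H, W, C))
for _ in range(3):
    feat_t = np.random.rand(H, W, C)   # stand-in for per-frame semantic features
    flow = np.zeros((H, W, 2))         # zero flow: no motion between frames
    hidden = context_motion_update(hidden, feat_t, flow)
print(hidden.shape)  # (4, 4, 8)
```

With zero flow the warp is the identity, so the update reduces to an exponential moving average over frame features; the point of the real operator is that nonzero flow keeps this temporal aggregation spatially aligned as the subject moves.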

