Deep Space-Time Video Upsampling Networks

04/06/2020 ∙ by Jaeyeon Kang, et al. ∙ 2

Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems, and their performance has recently improved through the incorporation of deep learning. In this paper, we investigate the problem of jointly upsampling videos in both space and time, which is becoming more important with advances in display systems. One solution is to run VSR and FI independently, one after the other. This is highly inefficient, as heavy deep neural networks (DNNs) are involved in each solution. To this end, we propose an end-to-end DNN framework for space-time video upsampling that efficiently merges VSR and FI into a joint framework. Within this framework, a novel weighting scheme fuses input frames effectively without explicit motion compensation, enabling efficient video processing. Experiments show better results both quantitatively and qualitatively, while reducing the computation time (x7 faster) and the number of parameters (30
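The core idea, joint space-time upsampling with weighted frame fusion instead of explicit motion compensation, can be illustrated with a minimal sketch. The sketch below is a simplified stand-in, not the paper's method: the spatial upsampler is nearest-neighbor (the paper uses a learned VSR branch), and the fusion weights are uniform (the paper predicts them with a DNN). All function names are hypothetical.

```python
import numpy as np

def spatial_upsample(frame, scale):
    """Nearest-neighbor spatial upsampling; a stand-in for a learned VSR branch."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def fuse_frames(frame_a, frame_b, weight_a, weight_b):
    """Fuse two frames with per-pixel weights, normalized so they sum to 1.

    This mimics fusing inputs without explicit motion compensation:
    instead of warping frames by optical flow, each output pixel is a
    weighted blend of the co-located input pixels.
    """
    total = weight_a + weight_b + 1e-8  # avoid division by zero
    return (weight_a * frame_a + weight_b * frame_b) / total

def space_time_upsample(frame_a, frame_b, scale=2):
    """Produce a spatially upsampled intermediate frame between two inputs.

    Uniform 0.5/0.5 weights are an assumption for illustration; in the
    paper the weights are predicted by the network.
    """
    up_a = spatial_upsample(frame_a, scale)
    up_b = spatial_upsample(frame_b, scale)
    w = np.full_like(up_a, 0.5)
    return fuse_frames(up_a, up_b, w, w)
```

With learned, spatially varying weights, the fusion step can favor whichever input frame better explains each output region, which is how such schemes sidestep an explicit motion-compensation module.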



Code Repositories

STVUN-Pytorch

Deep Space-Time Video Upsampling Networks
