GlobalFlowNet: Video Stabilization using Deep Distilled Global Motion Estimates

10/25/2022
by Jerin Geo James, et al.

Videos shot by amateurs using hand-held cameras contain undesirable shaky motion. Estimating the global motion between successive frames, in a manner not influenced by moving objects, is central to many video stabilization techniques, but poses significant challenges. A large body of work uses 2D affine transformations or homography to represent the global motion. In this work, however, we introduce a more general representation scheme, which adapts any existing optical flow network to ignore moving objects and produce a spatially smooth approximation of the global motion between video frames. We achieve this through a knowledge distillation approach: we first introduce a low-pass filter module into the optical flow network to constrain the predicted optical flow to be spatially smooth. This becomes our student network, named GlobalFlowNet. Then, using the original optical flow network as the teacher network, we train the student network with a robust loss function. Given a trained GlobalFlowNet, we stabilize videos in a two-stage process. In the first stage, we correct the instability in affine parameters using a quadratic programming approach constrained by a user-specified cropping limit to control the loss of field of view. In the second stage, we stabilize the video further by smoothing the global motion parameters, expressed using a small number of discrete cosine transform (DCT) coefficients. In extensive experiments on a variety of videos, our technique outperforms state-of-the-art techniques in terms of subjective quality and several quantitative measures of video stability. The source code is publicly available at https://github.com/GlobalFlowNet/GlobalFlowNet
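The core idea of the low-pass filter module, representing a flow field with only its low-frequency DCT coefficients, can be illustrated independently of any network. The sketch below (my own minimal illustration, not the authors' implementation; the function name `lowpass_flow` and the cutoff parameter `keep` are assumptions) truncates the 2D DCT of each flow channel, which forces the result to be spatially smooth in the way the abstract describes:

```python
import numpy as np
from scipy.fft import dctn, idctn


def lowpass_flow(flow, keep=8):
    """Spatially smooth a dense flow field of shape (H, W, 2) by
    keeping only the lowest `keep` x `keep` 2D DCT coefficients
    of each channel and inverting the transform."""
    smooth = np.zeros_like(flow)
    for c in range(flow.shape[-1]):
        coeffs = dctn(flow[..., c], norm='ortho')
        mask = np.zeros_like(coeffs)
        mask[:keep, :keep] = 1.0  # retain low spatial frequencies only
        smooth[..., c] = idctn(coeffs * mask, norm='ortho')
    return smooth
```

A spatially constant (purely global) flow survives the truncation unchanged, since it lives entirely in the DC coefficient, while high-frequency deviations caused by independently moving objects are suppressed.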

