LucidDream: Controlled Temporally-Consistent DeepDream on Videos

11/27/2019
by   Joel Ruben Antony Moniz, et al.

In this work, we propose a set of techniques to improve controllability and aesthetic appeal when DeepDream, which uses a pre-trained neural network to modify images by hallucinating objects into them, is applied to videos. In particular, we demonstrate a simple modification that improves control over the class of object that DeepDream is induced to hallucinate. We also show that the flickering artifacts that frequently appear when DeepDream is applied to videos can be mitigated by an additional temporal consistency loss term.
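The abstract describes two ingredients: a class-controlled DeepDream objective (maximize the score of a chosen class) and a temporal consistency penalty that ties each dreamed frame to the previous one. The paper does not give its exact formulation here, so the following is only a minimal sketch under assumptions: a linear stand-in "classifier" replaces the pre-trained deep network (real DeepDream backpropagates through, e.g., Inception), and the consistency term is a simple squared distance to the previously dreamed frame rather than a flow-warped one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained network: class scores are
# Wc @ flattened_frame, so the input gradient is available in closed form.
H, W, n_classes = 8, 8, 4
Wc = rng.standard_normal((n_classes, H * W))

def dream_step(frame, prev_dreamed, target_class, lam=0.5, lr=0.1):
    """One gradient-ascent step of class-controlled DeepDream with a
    temporal consistency penalty (assumed form, not the paper's exact loss)."""
    x = frame.ravel()
    # Ascend the chosen class score: controls *what* gets hallucinated.
    grad = Wc[target_class].copy()
    # Subtract gradient of lam * ||x - prev||^2: pulls the frame toward
    # the previously dreamed frame, suppressing frame-to-frame flicker.
    grad -= 2.0 * lam * (x - prev_dreamed.ravel())
    # Normalized-gradient update, as in common DeepDream implementations.
    x = x + lr * grad / (np.abs(grad).mean() + 1e-8)
    return x.reshape(frame.shape)

# Usage: dream each frame against the previous dreamed frame.
frames = [rng.standard_normal((H, W)) for _ in range(3)]
dreamed = [frames[0]]
for f in frames[1:]:
    dreamed.append(dream_step(f, dreamed[-1], target_class=2))
```

With a real network, `grad` would come from backpropagating the target-class logit (or a chosen layer activation) to the input pixels; the update rule and the consistency term keep the same shape.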
