Two-Stream Convolutional Networks for Dynamic Texture Synthesis

06/21/2017
by Matthew Tesfaldet, et al.

We introduce a two-stream model for dynamic texture synthesis. Our model is based on pre-trained convolutional networks (ConvNets) that target two independent tasks: (i) object recognition, and (ii) optical flow prediction. Given an input dynamic texture, statistics of filter responses from the object recognition ConvNet encapsulate the per-frame appearance of the input texture, while statistics of filter responses from the optical flow ConvNet model its dynamics. To generate a novel texture, a noise input sequence is optimized to simultaneously match the feature statistics of an example texture in both streams. Inspired by recent work on image style transfer and enabled by the two-stream model, we also apply the synthesis approach to combine the texture appearance from one texture with the dynamics of another, generating entirely novel dynamic textures. We show that our approach generates novel, high-quality samples that match both the framewise appearance and temporal evolution of input texture imagery. Finally, we quantitatively evaluate our approach with a thorough user study.
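The matching of "feature statistics" in each stream can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes the statistics are Gram matrices of filter responses (as in Gatys-style texture synthesis), and the function names, feature shapes, and equal stream weighting are hypothetical choices for exposition.

```python
import numpy as np

def gram(features):
    """Gram matrix of a filter-response map.

    features: array of shape (channels, positions), i.e. one layer's
    responses flattened over space (and, for the flow stream, time).
    Normalized by the number of spatial positions.
    """
    c, n = features.shape
    return features @ features.T / n

def two_stream_loss(app_feats, flow_feats, target_app_grams, target_flow_grams):
    """Sum of squared Gram-matrix differences over both streams.

    app_feats / flow_feats: lists of per-layer response maps for the
    synthesized sequence; target_*_grams: Gram matrices precomputed
    from the example texture. Equal per-layer weights are assumed here.
    """
    loss = 0.0
    for f, g in zip(app_feats, target_app_grams):
        loss += np.sum((gram(f) - g) ** 2)   # appearance stream term
    for f, g in zip(flow_feats, target_flow_grams):
        loss += np.sum((gram(f) - g) ** 2)   # dynamics stream term
    return loss
```

In the full method this scalar loss would be minimized with respect to the noise input sequence (e.g. by L-BFGS, backpropagating through both frozen ConvNets); the sketch above only shows the objective being matched.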
