Learning the Synthesizability of Dynamic Texture Samples

02/03/2018
by Feng Yang et al.

A dynamic texture (DT) is a sequence of images that exhibits temporal regularities and has many applications in computer vision and graphics. Given an exemplar of a dynamic texture, generating new samples of high quality that are perceptually similar to the input exemplar is a challenging task, known as example-based dynamic texture synthesis (EDTS). Numerous approaches have been devoted to this problem over the past decades, but none of them can tackle all kinds of dynamic textures equally well. In this paper, we investigate the synthesizability of dynamic texture samples: given a DT sample, how synthesizable is it by EDTS, and which EDTS method is the most suitable for synthesizing it? To this end, we propose to learn regression models that connect dynamic texture samples with synthesizability scores, with the help of a compiled dynamic texture dataset annotated in terms of synthesizability. More precisely, we first define the synthesizability of DT samples and characterize them by a set of spatiotemporal features. Based on these features and the annotated dataset, we then train regression models to predict the synthesizability scores of texture samples and learn classifiers to select the most suitable EDTS methods. We further complete the selection, partition, and synthesizability prediction of dynamic texture samples in a hierarchical scheme. Finally, we apply the learned synthesizability to detecting synthesizable regions in videos. The experiments demonstrate that our method can effectively learn and predict the synthesizability of DT samples.
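
To make the two learning stages concrete, here is a minimal sketch (not the authors' code) of the pipeline the abstract outlines: a regressor mapping spatiotemporal features of a DT clip to a synthesizability score, plus a classifier selecting an EDTS method. The feature extractor, the choice of random forests, and the toy annotated dataset are all hypothetical placeholders; the paper's actual spatiotemporal features and models are not specified in this abstract.

```python
# Hedged sketch of "learn synthesizability" as described in the abstract.
# Assumptions (not from the paper): scikit-learn random forests, a crude
# hand-rolled feature extractor, and randomly generated stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

def spatiotemporal_features(clip: np.ndarray) -> np.ndarray:
    """Placeholder spatiotemporal descriptor for a (T, H, W) grayscale
    dynamic texture clip: per-frame intensity statistics plus
    temporal-difference statistics capturing motion regularity."""
    frame_means = clip.mean(axis=(1, 2))
    temporal_diff = np.abs(np.diff(clip, axis=0)).mean(axis=(1, 2))
    return np.array([
        frame_means.mean(), frame_means.std(),
        temporal_diff.mean(), temporal_diff.std(),
    ])

# Hypothetical annotated dataset: DT clips, synthesizability scores
# in [0, 1], and the index of the best-performing EDTS method per clip.
rng = np.random.default_rng(0)
clips = [rng.random((30, 64, 64)) for _ in range(100)]
scores = rng.random(100)               # annotated synthesizability scores
best_method = rng.integers(0, 3, 100)  # annotated best EDTS method ids

X = np.stack([spatiotemporal_features(c) for c in clips])

# Stage 1: regress synthesizability score from the features.
regressor = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, scores)
# Stage 2: classify which EDTS method suits the sample best.
selector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, best_method)

# Prediction for a new, unseen dynamic texture sample.
new_clip = rng.random((30, 64, 64))
x = spatiotemporal_features(new_clip)[None, :]
print("predicted synthesizability:", regressor.predict(x)[0])
print("suggested EDTS method id:", selector.predict(x)[0])
```

The same predict-then-select loop could, in principle, be slid over video regions to score local synthesizability, which is how the abstract's final application (detecting synthesizable regions in videos) would reuse these two models.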
