Similarity-DT: Kernel Similarity Embedding for Dynamic Texture Synthesis

11/11/2019
by Shiming Chen, et al.

Dynamic texture (DT) exhibits statistical stationarity in the spatial domain and stochastic repetitiveness in the temporal dimension, indicating that different frames of a DT are highly correlated. However, existing DT synthesis methods do not exploit this similarity prior in their representation of DT, even though it can explicitly capture the homogeneous and heterogeneous correlation between different frames. In this paper, we propose a novel DT synthesis method, named Similarity-DT, which embeds the similarity prior into the representation of DT. Specifically, we first raise two hypotheses: (1) the content of texture video frames varies over time, and frames closer in time should be more similar; (2) the transition from frame to frame can be modeled as a linear or nonlinear function that captures this similarity correlation. Our proposed Similarity-DT then integrates kernel learning and the extreme learning machine (ELM) into a unified synthesis model that learns a kernel similarity embedding to represent the spatial-temporal transitions between frames of DTs. Extensive experiments on DT videos collected from the internet and on two benchmark datasets, Gatech Graphcut Textures and DynTex, demonstrate that the learned kernel similarity embedding yields a discriminative representation for DTs. Our method is therefore able to preserve long-term temporal continuity in the synthesized DT sequences, with excellent sustainability and generalization. We also show that, compared with state-of-the-art approaches, our method generates realistic DT videos faster and at lower computational cost.
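The frame-to-frame transition idea in the second hypothesis can be illustrated with a minimal kernel ELM regressor that learns the mapping from frame x_t to frame x_{t+1}. The sketch below uses the standard kernel ELM closed form (beta = (K + I/C)^{-1} Y) with an RBF kernel; it is only an illustration of that general technique, not the authors' full Similarity-DT pipeline, and the kernel choice, the gamma and C values, and the toy "frame" data are all assumptions for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel extreme learning machine regressor (closed-form solution)."""

    def __init__(self, gamma=1.0, C=1e4):
        self.gamma, self.C = gamma, C

    def fit(self, X, Y):
        # Regularized closed form: beta = (K + I/C)^{-1} Y
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, Y)
        return self

    def predict(self, Xq):
        # Prediction is a kernel-weighted combination of training targets.
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Toy "texture" sequence: 40 tiny 16-dimensional frames from a random walk,
# so nearby frames are more similar than distant ones (hypothesis 1).
rng = np.random.default_rng(0)
frames = 0.1 * np.cumsum(rng.normal(size=(40, 16)), axis=0)

# Learn the nonlinear transition x_t -> x_{t+1} (hypothesis 2).
elm = KernelELM(gamma=1.0, C=1e4).fit(frames[:-1], frames[1:])
pred = elm.predict(frames[:-1])
```

Because prediction reduces to kernel evaluations against the stored training frames, synthesis proceeds by repeatedly feeding the predicted frame back in as the next input, which is one way a learned similarity-based transition can roll out a long sequence.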


