Motion Transformer for Unsupervised Image Animation

09/28/2022
by Jiale Tao, et al.

Image animation aims to animate a source image using motion learned from a driving video. Current state-of-the-art methods typically use convolutional neural networks (CNNs) to predict motion information, such as motion keypoints and their corresponding local transformations. However, these CNN-based methods do not explicitly model the interactions between motions; as a result, important underlying motion relationships may be neglected, which can lead to noticeable artifacts in the generated animation. To this end, we propose a new method, the motion transformer, which is the first attempt to build a motion estimator based on a vision transformer. More specifically, we introduce two types of tokens: i) image tokens formed from patch features and the corresponding position encoding; and ii) motion tokens encoded with motion information. Both types of tokens are fed into vision transformers to promote the underlying interactions between them through multi-head self-attention blocks. With this design, the motion information can be better learned, boosting model performance. The final embedded motion tokens are then used to predict the corresponding motion keypoints and local transformations. Extensive experiments on benchmark datasets show that our proposed method achieves promising results compared to state-of-the-art baselines. Our source code will be made publicly available.
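To make the token design concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: learnable motion tokens are concatenated with image (patch) tokens, the combined sequence passes through a transformer encoder so the two token types interact via multi-head self-attention, and the output motion tokens are regressed to keypoint coordinates and local transformations. The class name, hyperparameters, and the choice of a 2x2 Jacobian per keypoint to represent the local transformation (as in first-order-motion-style models) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MotionTransformer(nn.Module):
    """Sketch of a transformer-based motion estimator (assumed design)."""

    def __init__(self, num_keypoints=10, dim=256, depth=6, heads=8,
                 patch=16, image_size=256, in_ch=3):
        super().__init__()
        num_patches = (image_size // patch) ** 2
        # Image tokens: patch features plus a learned position encoding.
        self.to_patch = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.randn(1, num_patches, dim) * 0.02)
        # Motion tokens: one learnable embedding per keypoint.
        self.motion_tokens = nn.Parameter(
            torch.randn(1, num_keypoints, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Prediction heads: 2-D keypoint coordinates and a 2x2 local
        # transformation (Jacobian) per motion token.
        self.to_keypoint = nn.Linear(dim, 2)
        self.to_jacobian = nn.Linear(dim, 4)

    def forward(self, img):
        b = img.size(0)
        patches = self.to_patch(img).flatten(2).transpose(1, 2)  # (B, N, dim)
        image_tokens = patches + self.pos_embed
        motion_tokens = self.motion_tokens.expand(b, -1, -1)     # (B, K, dim)
        # Joint self-attention lets motion tokens interact with each other
        # and with the image tokens.
        tokens = self.encoder(torch.cat([motion_tokens, image_tokens], dim=1))
        motion_out = tokens[:, :motion_tokens.size(1)]
        keypoints = torch.tanh(self.to_keypoint(motion_out))     # in [-1, 1]
        jacobians = self.to_jacobian(motion_out).view(b, -1, 2, 2)
        return keypoints, jacobians

# Example: estimate motion for a batch of two 256x256 driving frames.
model = MotionTransformer()
kp, jac = model(torch.randn(2, 3, 256, 256))
print(kp.shape, jac.shape)  # torch.Size([2, 10, 2]) torch.Size([2, 10, 2, 2])
```

Keeping the motion tokens in the same attention sequence as the image tokens, rather than predicting heatmaps with a CNN, is what allows each predicted keypoint to condition on both image content and the other keypoints.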

Related research

06/04/2021 · RegionViT: Regional-to-Local Attention for Vision Transformers
Vision transformer (ViT) has recently shown its strong capability in ac...

11/14/2022 · CabViT: Cross Attention among Blocks for Vision Transformer
Since the vision transformer (ViT) has achieved impressive performance i...

03/15/2022 · Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution
Recent vision transformers along with self-attention have achieved promi...

07/12/2021 · The Brownian motion in the transformer model
Transformer is the state-of-the-art model for many language and visual t...

06/06/2021 · Transformer in Convolutional Neural Networks
We tackle the low-efficiency flaw of vision transformer caused by the hi...

03/21/2023 · Learning A Sparse Transformer Network for Effective Image Deraining
Transformer-based methods have achieved significant performance in imag...

02/20/2023 · STB-VMM: Swin Transformer Based Video Motion Magnification
The goal of video motion magnification techniques is to magnify small mo...
