Texture Reformer: Towards Fast and Universal Interactive Texture Transfer

12/06/2021
by   Zhizhong Wang, et al.

In this paper, we present the texture reformer, a fast and universal neural-based framework for interactive texture transfer with user-specified guidance. The challenges lie in three aspects: 1) the diversity of tasks, 2) the simplicity of guidance maps, and 3) the execution efficiency. To address these challenges, our key idea is to use a novel feed-forward multi-view and multi-stage synthesis procedure consisting of I) a global view structure alignment stage, II) a local view texture refinement stage, and III) a holistic effect enhancement stage to synthesize high-quality results with coherent structures and fine texture details in a coarse-to-fine fashion. In addition, we introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy to achieve more accurate semantic-guided and structure-preserved texture transfer. Experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. Compared with state-of-the-art interactive texture transfer algorithms, it not only achieves higher-quality results but, more remarkably, is also 2-5 orders of magnitude faster. Code is available at https://github.com/EndyWon/Texture-Reformer.
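To make the abstract's "learning-free, semantic-map-guided" idea concrete, the sketch below illustrates one simplified way such a reformation could work: for each semantic label shared by the source and target guidance maps, the target-region features are matched to the mean and standard deviation of the corresponding source-texture features. This is an assumption-laden stand-in, not the paper's actual VSTR operation; the function name `region_guided_reformation` and all arguments are hypothetical.

```python
import numpy as np

def region_guided_reformation(target_feat, source_feat, target_map, source_map, labels):
    """Simplified, learning-free semantic-guided feature reformation.

    target_feat, source_feat: (C, H, W) feature arrays
    target_map, source_map:   (H, W) integer semantic label maps
    labels:                   iterable of semantic labels to process
    """
    out = target_feat.copy()
    eps = 1e-5
    for lbl in labels:
        t_mask = target_map == lbl          # region to synthesize in the target
        s_mask = source_map == lbl          # corresponding region in the source texture
        if not t_mask.any() or not s_mask.any():
            continue
        t_region = target_feat[:, t_mask]   # (C, Nt)
        s_region = source_feat[:, s_mask]   # (C, Ns)
        t_mean = t_region.mean(1, keepdims=True)
        t_std = t_region.std(1, keepdims=True)
        s_mean = s_region.mean(1, keepdims=True)
        s_std = s_region.std(1, keepdims=True)
        # Per-region statistics matching: normalize target stats, apply source stats.
        out[:, t_mask] = (t_region - t_mean) / (t_std + eps) * s_std + s_mean
    return out
```

In the actual framework, this kind of reformation would be applied inside a feed-forward encoder-decoder at several views and scales, following the coarse-to-fine order described above (global structure alignment, then local texture refinement, then holistic enhancement); the real VSTR operation is more sophisticated than the per-region statistics matching shown here.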


