TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition

10/20/2022
by Yongwei Chen, et al.

Creation of 3D content by stylization is a promising yet challenging problem in computer vision and graphics research. In this work, we focus on stylizing photorealistic appearance renderings of a given surface mesh of arbitrary topology. Motivated by the recent surge of cross-modal supervision of the Contrastive Language-Image Pre-training (CLIP) model, we propose TANGO, which transfers the appearance style of a given 3D shape according to a text prompt in a photorealistic manner. Technically, we propose to disentangle the appearance style into the spatially varying bidirectional reflectance distribution function, the local geometric variation, and the lighting condition, which are jointly optimized, under supervision of the CLIP loss, by a spherical Gaussians based differentiable renderer. As such, TANGO enables photorealistic 3D style transfer by automatically predicting reflectance effects even for bare, low-quality meshes, without training on a task-specific dataset. Extensive experiments show that TANGO outperforms existing methods of text-driven 3D style transfer in terms of photorealistic quality, consistency of 3D geometry, and robustness when stylizing low-quality meshes. Our code and results are available at our project webpage https://cyw-3d.github.io/tango/.
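The abstract mentions a spherical Gaussians based differentiable renderer for modeling the lighting condition. As a rough illustration only (not TANGO's actual implementation; the function names and lobe parameters below are assumptions), environment lighting can be represented as a small mixture of spherical Gaussian lobes, each of the form G(v) = a * exp(lambda * (v . mu - 1)):

```python
import numpy as np

def spherical_gaussian(v, lobe_axis, sharpness, amplitude):
    """Evaluate one spherical Gaussian lobe G(v) = a * exp(lam * (v . mu - 1)).

    v: (..., 3) unit query direction(s); lobe_axis: (3,) unit vector mu;
    sharpness: scalar lam >= 0; amplitude: scalar or (3,) RGB value a.
    """
    cos_angle = np.sum(v * lobe_axis, axis=-1, keepdims=True)
    return amplitude * np.exp(sharpness * (cos_angle - 1.0))

def environment_radiance(v, lobes):
    """Approximate incoming radiance L(v) as a sum of SG lobes."""
    return sum(spherical_gaussian(v, mu, lam, a) for mu, lam, a in lobes)

# Toy example: a single white lobe pointing up, so radiance peaks along +Y
# and falls off smoothly for directions away from the lobe axis.
lobes = [(np.array([0.0, 1.0, 0.0]), 8.0, np.array([1.0, 1.0, 1.0]))]
up = np.array([[0.0, 1.0, 0.0]])
side = np.array([[1.0, 0.0, 0.0]])
print(environment_radiance(up, lobes))    # ~[1, 1, 1] along the lobe axis
print(environment_radiance(side, lobes))  # much dimmer off-axis
```

Because each lobe is a smooth exponential in the lobe parameters, gradients from an image-space loss (such as the CLIP loss) can flow back into the lighting representation during optimization.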


Related research:

- 02/28/2022: Name Your Style: An Arbitrary Artist-aware Image Style Transfer
  Image style transfer has attracted widespread attention in the past few ...
- 04/29/2021: Exemplar-Based 3D Portrait Stylization
  Exemplar-based portrait stylization is widely attractive and highly desi...
- 11/26/2020: 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer
  Transferring the style from one image onto another is a popular and wide...
- 03/24/2023: Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
  Automatic 3D content creation has achieved rapid progress recently due t...
- 12/15/2022: NeRF-Art: Text-Driven Neural Radiance Fields Stylization
  As a powerful representation of 3D scenes, the neural radiance field (Ne...
- 03/16/2023: SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective
  Contrastive Language-Image Pre-Training (CLIP) has refreshed the state o...
- 12/06/2021: Text2Mesh: Text-Driven Neural Stylization for Meshes
  In this work, we develop intuitive controls for editing the style of 3D ...
