3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer

11/26/2020
by   Mattia Segù, et al.

Transferring the style of one image onto another is a popular and widely studied task in computer vision. Yet learning-based style transfer in the 3D setting remains largely unexplored. To our knowledge, we propose the first learning-based generative approach for style transfer between 3D objects. Our method combines the content of a source 3D model with the style of a target to generate a novel shape that resembles the target in style while retaining the source's content. The proposed framework can synthesize new 3D shapes both as point clouds and as meshes. Furthermore, we extend our technique to implicitly learn the underlying multimodal style distribution of each category domain. By sampling style codes from the learned distributions, we increase the variety of styles that our model can confer on a given reference object. Experimental results validate the effectiveness of the proposed 3D style transfer method on a number of benchmarks.
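The core idea of the abstract, disentangling a shape into a content code and a style code and recombining them across objects, can be sketched in a toy form. The encoders and decoder below are hypothetical stand-ins (the paper's actual networks are learned); here "content" is the centered point cloud and "style" is its per-axis scale statistics, purely for illustration.

```python
import numpy as np

# Hypothetical stand-ins for 3DSNet's learned encoders/decoder.
def encode_content(points):
    # Content code: a style-invariant representation (here: the centered shape).
    return points - points.mean(axis=0)

def encode_style(points):
    # Style code: a low-dimensional descriptor (here: per-axis scale).
    return points.std(axis=0)

def decode(content_code, style_code):
    # Recombine: impose the target's scale "style" on the source content.
    return content_code / content_code.std(axis=0) * style_code

rng = np.random.default_rng(0)
# Two toy point clouds with different per-axis scales.
source = rng.normal(size=(1024, 3)) * np.array([1.0, 2.0, 0.5])
target = rng.normal(size=(1024, 3)) * np.array([0.3, 0.3, 3.0])

# Shape-to-shape transfer: source content, target style.
stylized = decode(encode_content(source), encode_style(target))
print(np.round(stylized.std(axis=0), 2))  # matches the target's scale statistics
```

In the actual method the style code can also be sampled from a learned multimodal distribution instead of being encoded from a single target, which is what increases the stylistic variety mentioned above.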


Related research

06/13/2018
A Unified Framework for Generalizable Style Transfer: Style and Content Separation
Image style transfer has drawn broad attention in recent years. However,...

06/02/2020
Distribution Aligned Multimodal and Multi-Domain Image Stylization
Multimodal and multi-domain stylization are two important problems in th...

03/24/2022
Industrial Style Transfer with Large-scale Geometric Warping and Content Preservation
We propose a novel style transfer method to quickly create a new visual ...

09/03/2021
3D Human Shape Style Transfer
We consider the problem of modifying/replacing the shape style of a real...

08/30/2021
3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations
We propose a method to create plausible geometric and texture style vari...

12/13/2019
A Method for Arbitrary Instance Style Transfer
The ability to synthesize style and content of different images to form ...

10/20/2022
TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition
Creation of 3D content by stylization is a promising yet challenging pro...
