Selective Style Transfer for Text

06/04/2019
by Raul Gomez, et al.

This paper explores the possibilities of image style transfer applied to text while maintaining the original transcriptions. Results on different text domains (scene text, machine-printed text, and handwritten text) and cross-modal results demonstrate that this is feasible and open up new research lines. Furthermore, two architectures for selective style transfer, i.e., transferring style only to the desired image pixels, are proposed. Finally, scene text selective style transfer is evaluated as a data augmentation technique to expand scene text detection datasets, resulting in a boost in text detector performance. Our implementation of the described models is publicly available.
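To make the notion of "transferring style to only desired image pixels" concrete, here is a minimal sketch of the masked blending it implies. It assumes a stylized image produced by any style transfer model and a precomputed text-pixel mask; the function name and toy data below are hypothetical and do not reproduce the architectures proposed in the paper, which are not detailed in this abstract.

```python
import numpy as np

def selective_style_transfer(original, stylized, text_mask):
    """Keep the stylized appearance only at text pixels.

    original, stylized: float arrays of shape (H, W, 3) in [0, 1]
    text_mask: float array of shape (H, W) in [0, 1], 1 where text is present
    """
    mask = text_mask[..., None]                 # broadcast mask over color channels
    return mask * stylized + (1.0 - mask) * original

# Toy usage: random "images" and a box-shaped mask standing in for a text region
h, w = 64, 128
original = np.random.rand(h, w, 3)
stylized = np.random.rand(h, w, 3)
text_mask = np.zeros((h, w))
text_mask[20:40, 30:100] = 1.0                  # pretend this region contains text
augmented = selective_style_transfer(original, stylized, text_mask)
```

In a data augmentation setting such as the one evaluated in the paper, images produced this way keep their original text transcriptions and bounding boxes, so they can be added to a scene text detection training set without relabeling.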


Related research

04/08/2021
XFORMAL: A Benchmark for Multilingual Formality Style Transfer
We take the first step towards multilingual style transfer by creating a...

12/24/2022
Meta-Learning for Color-to-Infrared Cross-Modal Style Transfer
Recent object detection models for infrared (IR) imagery are based upon ...

08/23/2023
ARF-Plus: Controlling Perceptual Factors in Artistic Radiance Fields for 3D Scene Stylization
The radiance fields style transfer is an emerging field that has recentl...

02/17/2023
Paint it Black: Generating paintings from text descriptions
Two distinct tasks - generating photorealistic pictures from given text ...

11/19/2022
DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization
Despite the impressive results of arbitrary image-guided style transfer ...

12/20/2022
SimpleStyle: An Adaptable Style Transfer Approach
Attribute-controlled text rewriting, also known as text style-transfer, ...

12/02/2021
StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions
We apply style transfer on mesh reconstructions of indoor scenes. This e...
