Font Shape-to-Impression Translation

03/11/2022
by Masaya Ueda et al.

Different fonts have different impressions, such as elegant, scary, and cool. This paper tackles part-based shape-impression analysis based on the Transformer architecture, which can capture the correlations among local parts through its self-attention mechanism. This ability reveals how combinations of local parts realize a specific impression of a font. The versatility of the Transformer allows us to realize two very different approaches to the analysis: multi-label classification and translation. A quantitative evaluation shows that our Transformer-based approaches estimate font impressions from a set of local parts more accurately than other approaches. A qualitative evaluation then indicates the local parts that are important for a specific impression.
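
As a concrete illustration of the multi-label classification route, the sketch below shows how a Transformer encoder could map a set of local part descriptors of a font to per-impression scores. This is not the authors' implementation: the part feature dimension, the learnable aggregation token, the PyTorch modules, and the number of impression labels are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): a Transformer encoder
# that maps a set of local part descriptors of one font to multi-label
# impression scores, with one sigmoid output per impression word.
import torch
import torch.nn as nn


class PartToImpression(nn.Module):
    def __init__(self, part_dim=128, d_model=256, n_heads=4, n_layers=4, n_impressions=100):
        super().__init__()
        self.embed = nn.Linear(part_dim, d_model)             # project each local part
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # learnable aggregation token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_impressions)         # one logit per impression word

    def forward(self, parts, pad_mask=None):
        # parts: (batch, n_parts, part_dim); pad_mask: (batch, n_parts), True where padded
        x = self.embed(parts)
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)                        # prepend aggregation token
        if pad_mask is not None:
            pad_mask = torch.cat([torch.zeros_like(pad_mask[:, :1]), pad_mask], dim=1)
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.head(h[:, 0]))              # multi-label impression scores


# Example: 8 fonts, each described by 32 local part descriptors of dimension 128.
model = PartToImpression()
scores = model(torch.randn(8, 32, 128))
print(scores.shape)  # torch.Size([8, 100])
```

The self-attention layers inside the encoder let each part attend to every other part, which is the property the paper relies on to analyze how combinations of local parts produce a given impression; the dimensions and label count above are placeholders.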


Related research

03/26/2021 - Which Parts determine the Impression of the Font?
Various fonts give different impressions, such as legible, rough, and co...

03/01/2023 - Label Attention Network for sequential multi-label classification
Multi-label classification is a natural problem statement for sequential...

11/01/2018 - Hybrid Self-Attention Network for Machine Translation
The encoder-decoder is the typical framework for Neural Machine Translat...

04/08/2022 - Points to Patches: Enabling the Use of Self-Attention for 3D Shape Recognition
While the Transformer architecture has become ubiquitous in the machine ...

09/05/2022 - SEFormer: Structure Embedding Transformer for 3D Object Detection
Effectively preserving and encoding structure features from objects in i...

12/22/2020 - Multi-Head Self-Attention with Role-Guided Masks
The state of the art in learning meaningful semantic representations of ...