Evaluating Synthetic Pre-Training for Handwriting Processing Tasks

04/04/2023
by   Vittorio Pippi, et al.

In this work, we explore massive pre-training on synthetic word images to enhance performance on four benchmark downstream handwriting-analysis tasks. To this end, we build a large synthetic dataset of word images rendered in several handwriting fonts, which provides a complete supervision signal. We use it to train a simple convolutional neural network (ConvNet) with a fully supervised objective. The vector representations of the images obtained from the pre-trained ConvNet can then be regarded as encodings of handwriting style. We exploit these representations for Writer Retrieval, Writer Identification, Writer Verification, and Writer Classification, and demonstrate that our pre-training strategy extracts rich representations of writer style that support all four tasks with results competitive with task-specific state-of-the-art approaches.
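To make the downstream use of the pre-trained features concrete, here is a minimal sketch of how such style embeddings could serve Writer Retrieval and Writer Verification. It assumes the embeddings have already been extracted by the pre-trained ConvNet; the function names, the cosine-similarity metric, and the verification threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between row vectors of a and b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def retrieve_writers(query_emb, gallery_embs, gallery_writers, top_k=3):
    """Writer Retrieval: rank gallery samples by similarity to the query."""
    sims = cosine_sim(query_emb[None, :], gallery_embs)[0]
    order = np.argsort(-sims)[:top_k]  # indices of the top_k most similar samples
    return [(gallery_writers[i], float(sims[i])) for i in order]

def verify_writer(emb_a, emb_b, threshold=0.8):
    """Writer Verification: decide whether two samples share a writer
    by thresholding the similarity of their style embeddings.
    The threshold value here is a hypothetical choice."""
    return float(cosine_sim(emb_a[None, :], emb_b[None, :])[0, 0]) >= threshold

# Toy usage with 2-D stand-in embeddings (real ones would come from the ConvNet)
gallery_embs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
gallery_writers = ["A", "A", "B"]
query = np.array([1.0, 0.05])
print(retrieve_writers(query, gallery_embs, gallery_writers, top_k=1))
```

Identification and classification would follow the same pattern, e.g. assigning the query the writer label of its nearest gallery neighbor.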

Related research

12/06/2019 - ClusterFit: Improving Generalization of Visual Representations
Pre-training convolutional neural networks with weakly-supervised and se...

06/21/2022 - Insights into Pre-training via Simpler Synthetic Tasks
Pre-training produces representations that are effective for a wide rang...

01/28/2019 - Using Pre-Training Can Improve Model Robustness and Uncertainty
Tuning a pre-trained network is commonly thought to improve data efficie...

03/30/2021 - DAP: Detection-Aware Pre-training with Weak Supervision
This paper presents a detection-aware pre-training (DAP) approach, which...

12/26/2017 - Advances in Pre-Training Distributed Word Representations
Many Natural Language Processing applications nowadays rely on pre-train...

12/03/2009 - Behavior and performance of the deep belief networks on image classification
We apply deep belief networks of restricted Boltzmann machines to bags o...

03/11/2022 - Masked Visual Pre-training for Motor Control
This paper shows that self-supervised visual pre-training from real-worl...
