Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation

07/26/2021
by Ayan Kumar Bhunia, et al.

Text recognition remains a fundamental and extensively researched topic in computer vision, largely owing to its wide array of commercial applications. The challenging nature of the problem, however, has dictated a fragmentation of research efforts: Scene Text Recognition (STR), which deals with text in everyday scenes, and Handwriting Text Recognition (HTR), which tackles hand-written text. In this paper, for the first time, we argue for their unification – we aim for a single model that can compete favourably with two separate state-of-the-art STR and HTR models. We first show that cross-utilisation of STR and HTR models triggers significant performance drops due to differences in their inherent challenges. We then tackle their union by introducing a knowledge distillation (KD) based framework. This is however non-trivial, largely due to the variable-length and sequential nature of text, which renders off-the-shelf KD techniques, which mostly work on global fixed-length data, inadequate. To this end, we propose three distillation losses, all of which are specifically designed to cope with the aforementioned unique characteristics of text recognition. Empirical evidence suggests that our proposed unified model performs on par with the individual models, even surpassing them in certain cases. Ablative studies demonstrate that naive baselines such as a two-stage framework, and domain adaptation/generalisation alternatives, do not work as well, further verifying the appropriateness of our design.
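The three distillation losses themselves are not detailed in this abstract. Purely to illustrate why variable-length, sequential outputs complicate off-the-shelf KD, here is a minimal, hypothetical PyTorch sketch of per-time-step logit distillation between a specialist teacher (STR or HTR) and a unified student, with padding masked by sequence length; the function name, tensor shapes, and temperature value are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def sequence_kd_loss(student_logits, teacher_logits, lengths, temperature=2.0):
    """Per-time-step logit distillation for variable-length character sequences.

    student_logits, teacher_logits: (batch, max_len, vocab) decoder logits.
    lengths: (batch,) number of valid decoding steps per sample.
    Illustrative sketch only -- not the paper's three proposed losses.
    """
    # Softened distributions, as in standard Hinton-style distillation.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # Element-wise KL divergence, summed over the character vocabulary
    # to give one value per decoding step: shape (batch, max_len).
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)

    # Mask padded steps so sequences of different lengths are comparable.
    max_len = student_logits.size(1)
    mask = torch.arange(max_len, device=kl.device)[None, :] < lengths[:, None]

    # Average over valid steps; the T^2 factor keeps gradient scale stable.
    return (kl * mask).sum() / mask.sum().clamp(min=1) * temperature ** 2
```

In the unified setting described above, such a step-wise loss would presumably be applied against both the STR and HTR specialist teachers, alongside the usual recognition objective.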


