
Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation

07/26/2021
by Ayan Kumar Bhunia, et al.

Text recognition remains a fundamental and extensively researched topic in computer vision, largely owing to its wide array of commercial applications. The challenging nature of the problem, however, has led to a fragmentation of research efforts: Scene Text Recognition (STR), which deals with text in everyday scenes, and Handwriting Text Recognition (HTR), which tackles hand-written text. In this paper, for the first time, we argue for their unification: we aim for a single model that can compete favourably with two separate state-of-the-art STR and HTR models. We first show that cross-utilisation of STR and HTR models triggers significant performance drops due to differences in their inherent challenges. We then tackle their union by introducing a knowledge distillation (KD) based framework. This is however non-trivial, largely because of the variable-length and sequential nature of text, which renders off-the-shelf KD techniques, designed mostly for global fixed-length data, inadequate. To that end, we propose three distillation losses, all specifically designed to cope with these unique characteristics of text recognition. Empirical evidence suggests that our proposed unified model performs on par with the individual models, even surpassing them in certain cases. Ablative studies demonstrate that naive baselines such as a two-stage framework, and domain adaptation/generalisation alternatives, do not work as well, further verifying the appropriateness of our design.
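The paper's three distillation losses are not reproduced here. As a rough illustration of why off-the-shelf KD needs adapting for text, the following is a minimal sketch of a per-timestep, length-masked KL-divergence distillation loss over variable-length character sequences; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' formulation.

import torch
import torch.nn.functional as F

def sequence_kd_loss(student_logits, teacher_logits, lengths, temperature=2.0):
    """Per-timestep KL-divergence distillation for variable-length sequences.

    student_logits, teacher_logits: (batch, max_len, vocab) raw scores.
    lengths: (batch,) true decoding lengths, used to mask padded steps.
    """
    # Soften both distributions with a shared temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # KL divergence at every timestep: shape (batch, max_len).
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)

    # Mask out padded timesteps so only valid characters contribute.
    max_len = student_logits.size(1)
    mask = (torch.arange(max_len, device=lengths.device)[None, :]
            < lengths[:, None]).float()
    loss = (kl * mask).sum() / mask.sum()

    # Scale by T^2, as in standard knowledge distillation.
    return loss * temperature ** 2

Unlike distillation on fixed-length global features, every batch element contributes a different number of valid timesteps, which is why the explicit length mask is needed before averaging.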

