Perception and Semantic Aware Regularization for Sequential Confidence Calibration

05/31/2023
by Zhenghua Peng et al.

Deep sequence recognition (DSR) models have received increasing attention due to their superior performance in various applications. Most DSR models use only the target sequences as supervision, without considering other related sequences, which leads to over-confident predictions. DSR models trained with label smoothing regularize the labels by smoothing each token equally and independently, reallocating a small probability mass to the other tokens to mitigate over-confidence. However, they do not consider token/sequence correlations, which may provide more effective information for regularizing training, and thus lead to sub-optimal performance. In this work, we find that tokens/sequences with high perceptive and semantic correlation to the target ones contain more correlated and effective information and thus enable more effective regularization. To this end, we propose a Perception and Semantic aware Sequence Regularization framework, which exploits perceptively and semantically correlated tokens/sequences for regularization. Specifically, we introduce a semantic context-free recognition model and a language model to acquire similar sequences with high perceptive similarity and semantic correlation, respectively. Moreover, the degree of over-confidence varies across samples according to their difficulty. We therefore design an adaptive calibration intensity module that computes a difficulty score for each sample to obtain finer-grained regularization. Extensive experiments on canonical sequence recognition tasks, including scene text and speech recognition, demonstrate that our method achieves new state-of-the-art results. Code is available at https://github.com/husterpzh/PSSR.
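To make the contrast in the abstract concrete, the sketch below compares standard uniform label smoothing with a correlation-weighted variant in the spirit the abstract describes. This is a minimal illustration, not the paper's actual implementation: the `weights` vector stands in for whatever perceptive/semantic similarity scores the framework would produce, and both function names are hypothetical.

```python
import numpy as np

def label_smoothing(num_classes, target, eps=0.1):
    # Uniform label smoothing: the target token keeps 1 - eps, and the
    # remaining eps mass is spread equally over all other tokens.
    dist = np.full(num_classes, eps / (num_classes - 1))
    dist[target] = 1.0 - eps
    return dist

def weighted_smoothing(target, weights, eps=0.1):
    # Correlation-weighted smoothing (illustrative): the eps mass is
    # reallocated in proportion to each token's similarity weight, so
    # tokens correlated with the target receive more of it.
    w = np.asarray(weights, dtype=float).copy()
    w[target] = 0.0          # the target gets its own 1 - eps share
    w = w / w.sum()          # normalize the remaining weights
    dist = eps * w
    dist[target] = 1.0 - eps
    return dist
```

Both functions return a valid probability distribution summing to 1; the weighted variant simply biases the smoothed mass toward correlated tokens instead of distributing it uniformly.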


Related research:

- Context-Aware Selective Label Smoothing for Calibrating Sequence Recognition Model (03/13/2023): Despite the success of deep neural network (DNN) on sequential data (i.e...
- Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition (09/11/2021): Despite impressive accuracy, deep neural networks are often miscalibrate...
- Confidence-Aware Calibration and Scoring Functions for Curriculum Learning (01/29/2023): Despite the great success of state-of-the-art deep neural networks, seve...
- Two Sides of Miscalibration: Identifying Over and Under-Confidence Prediction for Network Calibration (08/06/2023): Proper confidence calibration of deep neural networks is essential for r...
- Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models (08/24/2022): The ability to extrapolate, i.e., to make predictions on sequences that ...
- Towards Universal Speech Discrete Tokens: A Case Study for ASR and TTS (09/14/2023): Self-supervised learning (SSL) proficiency in speech-related tasks has d...
- Distinguishability Calibration to In-Context Learning (02/13/2023): Recent years have witnessed increasing interests in prompt-based learnin...
