Rethinking Label Smoothing on Multi-hop Question Answering

12/19/2022
by Zhangyue Yin, et al.

Label smoothing is a regularization technique widely used in supervised learning to improve model generalization on tasks such as image classification and machine translation. However, its effectiveness in multi-hop question answering (MHQA) has yet to be well studied. In this paper, we systematically analyze the role of label smoothing in the various modules of an MHQA pipeline and propose F1 smoothing, a novel label smoothing technique designed specifically for machine reading comprehension (MRC) tasks. We evaluate our method on the HotpotQA dataset and demonstrate its superiority over several strong baselines, including models that use complex attention mechanisms. Our results suggest that label smoothing can be effective in MHQA, but the choice of smoothing strategy significantly affects performance.
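For reference, below is a minimal sketch of standard (uniform) label smoothing, the baseline technique the abstract builds on, applied to a span-start classifier as in extractive MRC. This is background only: it is not the paper's F1 smoothing, whose formulation appears in the full text, and the function and variable names are illustrative, not taken from the paper's code.

```python
# Standard (uniform) label smoothing over candidate answer-span start
# positions. Illustrative sketch; NOT the paper's F1 smoothing.
import numpy as np

def smoothed_cross_entropy(logits: np.ndarray, gold_index: int,
                           epsilon: float = 0.1) -> float:
    """Cross-entropy against a smoothed target distribution.

    The one-hot target at `gold_index` is mixed with a uniform
    distribution: q(k) = (1 - epsilon) * [k == gold] + epsilon / K.
    """
    k = logits.shape[0]
    # Numerically stable log-softmax over the candidate positions.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    # Smoothed target: uniform mass epsilon/K everywhere,
    # plus the remaining 1 - epsilon on the gold position.
    target = np.full(k, epsilon / k)
    target[gold_index] += 1.0 - epsilon
    return float(-(target * log_probs).sum())

# Example: scores over 5 candidate start positions, gold span starts at 2.
scores = np.array([0.5, 1.2, 3.0, -0.3, 0.1])
print(smoothed_cross_entropy(scores, gold_index=2))
```

With epsilon = 0 this reduces to ordinary one-hot cross-entropy; per the abstract, the paper's F1 smoothing is an MRC-specific alternative to this uniform scheme.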

Related research

10/16/2019 · Why can't memory networks read effectively?
Memory networks have been a popular choice among neural architectures fo...

01/23/2017 · Regularizing Neural Networks by Penalizing Confident Output Distributions
We systematically explore regularizing neural networks by penalizing low...

08/29/2022 · Temporal Label Smoothing for Early Prediction of Adverse Events
Models that can predict adverse events ahead of time with low false-alar...

10/23/2020 · An Investigation of how Label Smoothing Affects Generalization
It has been hypothesized that label smoothing can reduce overfitting and...

06/13/2023 · Improving Opinion-based Question Answering Systems Through Label Error Detection and Overwrite
Label error is a ubiquitous problem in annotated data. Large amounts of ...

11/27/2019 · Label Dependent Deep Variational Paraphrase Generation
Generating paraphrases that are lexically similar but semantically diffe...
