Learning ULMFiT and Self-Distillation with Calibration for Medical Dialogue System

07/20/2021
by Shuang Ao, et al.

A medical dialogue system is essential to healthcare services, as it provides primary clinical advice and diagnoses. Largely due to advances in NLP, such systems have gradually been adopted and practiced in medical organizations in the form of conversational bots. In recent years, the introduction of state-of-the-art deep learning models and transfer learning techniques such as Universal Language Model Fine-tuning (ULMFiT) and Knowledge Distillation (KD) has contributed greatly to the performance of NLP tasks. However, some deep neural networks are poorly calibrated and estimate uncertainty incorrectly, which makes the model untrustworthy, especially in sensitive medical decision-making and safety-critical tasks. In this paper, we investigate well-calibrated models for ULMFiT and self-distillation (SD) in a medical dialogue system. The calibrated ULMFiT (CULMFiT) is obtained by incorporating label smoothing (LS), a regularization technique commonly used to achieve a well-calibrated model. Moreover, we apply temperature scaling (TS), a technique for recalibrating confidence scores, together with KD to observe its correlation with network calibration. To further understand the relation between SD and calibration, we fine-tune the whole model with both fixed and optimal temperatures. All experiments are conducted on a consultation back-pain dataset collected by experts and are then further validated on a large, publicly available medical dialogue corpus. We empirically show that our proposed methodologies outperform conventional methods in terms of accuracy and robustness.
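To make the calibration techniques named in the abstract concrete, the following is a minimal PyTorch sketch of label smoothing, post-hoc temperature scaling, and a temperature-scaled (self-)distillation objective. The function names, the smoothing factor `epsilon`, the blend weight `alpha`, and the temperature-fitting recipe are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, epsilon=0.1):
    """Cross-entropy against smoothed targets (LS).

    The smoothed target is (1 - epsilon) * one_hot + epsilon * uniform,
    which discourages over-confident predictions; epsilon = 0.1 is a
    common default, not necessarily the paper's setting.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # cross-entropy to the uniform distribution
    return ((1.0 - epsilon) * nll + epsilon * uniform).mean()

def fit_temperature(val_logits, val_labels, lr=0.01, steps=200):
    """Post-hoc temperature scaling (TS): learn a single scalar T > 0
    that minimizes validation NLL without changing the argmax prediction.
    The optimizer and step count are illustrative choices.
    """
    log_t = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def self_distillation_loss(student_logits, teacher_logits, targets,
                           temperature=2.0, alpha=0.5):
    """KD/SD objective: soft targets from the teacher at a given
    temperature, blended with the hard-label cross-entropy. In
    self-distillation the teacher is a snapshot of the same network;
    `temperature` may be fixed or set to the T found by fit_temperature.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps soft-target gradients comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce
```

In this sketch, CULMFiT would correspond to fine-tuning the ULMFiT classifier with `label_smoothing_loss`, and the fixed-versus-optimal temperature comparison to passing either a constant or the output of `fit_temperature` into `self_distillation_loss`.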
