Automatic Assignment of Radiology Examination Protocols Using Pre-trained Language Models with Knowledge Distillation

09/01/2020
by Wilson Lau, et al.

Selecting a radiology examination protocol is a repetitive, error-prone, and time-consuming process. In this paper, we present a deep learning approach to automatically assign protocols to computed tomography examinations by pre-training a domain-specific BERT model (BERT_rad). To handle the high data imbalance across exam protocols, we used a knowledge distillation approach that up-sampled the minority classes through data augmentation. We compared the classification performance of the described approach with statistical n-gram models using Support Vector Machine (SVM) and Random Forest (RF) classifiers, as well as Google's BERT_base model. SVM and RF achieved macro-averaged F1 scores of 0.45 and 0.6, while BERT_base and BERT_rad achieved 0.61 and 0.63. Knowledge distillation improved overall performance on the minority classes, achieving an F1 score of 0.66. Additionally, by choosing the optimal decision threshold, the BERT models could classify over 50% of the examinations within a 5% error rate, reducing protocoling workload.
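The knowledge distillation step described above can be illustrated with a generic Hinton-style soft-label objective. The sketch below is only an illustration under stated assumptions, not the paper's exact recipe: the temperature, the mixing weight alpha, and the way teacher predictions on augmented minority-class examples enter the loss are assumptions introduced here for clarity.

```python
# Minimal sketch of a soft-label distillation objective (Hinton-style).
# Hypothetical hyperparameters: temperature and alpha are not specified in the abstract.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soften teacher and student distributions with the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term pulls the student toward the teacher's soft labels
    # (e.g., its predictions on augmented minority-class protocol examples).
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)
    # Standard cross-entropy on the hard protocol labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In such a setup, the teacher model would label the augmented (up-sampled) minority-class exam descriptions, and the student would be trained on this combined loss; the exact blending used in the paper may differ.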


