Learning structures of the French clinical language: development and validation of word embedding models using 21 million clinical reports from electronic health records

07/26/2022
by   Basile Dura, et al.

Background: Clinical studies using real-world data may benefit from exploiting clinical reports, a particularly rich albeit unstructured medium. To that end, natural language processing (NLP) can extract relevant information. Methods based on transfer learning with pre-trained language models have achieved state-of-the-art results in most NLP applications; however, publicly available models lack exposure to speciality languages, especially in the medical field.

Objective: We aimed to evaluate the impact of adapting a language model to French clinical reports on downstream medical NLP tasks.

Methods: We leveraged a corpus of 21M clinical reports collected from August 2017 to July 2021 at the Greater Paris University Hospitals (APHP) to pre-train two language models based on the CamemBERT architecture on this speciality language: one trained from scratch and one using the original CamemBERT weights as initialisation. We used two French annotated medical datasets to compare our language models to the original CamemBERT network, evaluating the statistical significance of improvement with the Wilcoxon test.

Results: Our models pre-trained on clinical reports increased the average F1-score on APMed (an APHP-specific task) by 3 percentage points, to 91%, a statistically significant improvement. They also achieved performance comparable to the original CamemBERT on QUAERO. These results hold for both the fine-tuned and the from-scratch versions, even with very few pre-training samples.

Conclusions: We confirm previous literature showing that adapting generalist pre-trained language models such as CamemBERT to speciality corpora improves their performance on downstream clinical NLP tasks. Our results suggest that retraining from scratch does not induce a statistically significant performance gain compared to fine-tuning.
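The adaptation strategy described in the Methods amounts to continued pre-training with a masked language modelling objective. The sketch below shows how such a setup could look with the Hugging Face transformers and datasets libraries. It is a minimal illustration, not the authors' pipeline: the file clinical_reports.txt and every hyperparameter are hypothetical placeholders.

```python
# Minimal sketch of continued pre-training (masked language modelling)
# on a domain corpus. File names and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("camembert-base")

# Fine-tuned variant: initialise from the public CamemBERT weights.
# Training from scratch would instead instantiate the model from a
# fresh configuration rather than from pre-trained weights.
model = AutoModelForMaskedLM.from_pretrained("camembert-base")

# One clinical report per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "clinical_reports.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens: the standard BERT-style MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="camembert-clinical",
        per_device_train_batch_size=16,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=train_set,
    data_collator=collator,
)
trainer.train()
```

After training, the adapted model can be loaded with the same AutoModel classes and fine-tuned on downstream tasks such as named entity recognition.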
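The abstract also reports testing the significance of score differences with the Wilcoxon test. A minimal sketch of that comparison using scipy's Wilcoxon signed-rank test, with hypothetical paired F1 scores (the actual per-run scores are not given here):

```python
from scipy.stats import wilcoxon

# Hypothetical paired F1 scores from repeated fine-tuning runs
# (e.g. different random seeds) on the same downstream task.
baseline_f1 = [0.88, 0.87, 0.89, 0.88, 0.88]
adapted_f1 = [0.91, 0.90, 0.91, 0.92, 0.91]

# Signed-rank test on the paired differences; a small p-value
# indicates the improvement is unlikely to be due to chance.
stat, p_value = wilcoxon(baseline_f1, adapted_f1)
print(f"statistic={stat:.3f}, p-value={p_value:.4f}")
```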


Related research

EriBERTa: A Bilingual Pre-Trained Language Model for Clinical Natural Language Processing (06/12/2023)
The utilization of clinical reports for various secondary purposes, incl...

ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation (06/16/2023)
Large language models have exhibited exceptional performance on various ...

FlauBERT: Unsupervised Language Model Pre-training for French (12/11/2019)
Language models have become a key step to achieve state-of-the-art resul...

An Investigation into the Effects of Pre-training Data Distributions for Pathology Report Classification (05/27/2023)
Pre-trained transformer models have demonstrated success across many nat...

The 2022 n2c2/UW Shared Task on Extracting Social Determinants of Health (01/13/2023)
Objective: The n2c2/UW SDOH Challenge explores the extraction of social ...
