MEDBERT.de: A Comprehensive German BERT Model for the Medical Domain

03/14/2023
by Keno K. Bressem, et al.

This paper presents medBERT.de, a pre-trained German BERT model specifically designed for the German medical domain. The model was trained on a large corpus of 4.7 million German medical documents and achieves new state-of-the-art performance on eight medical benchmarks covering a wide range of disciplines and medical document types. Beyond evaluating overall performance, the paper conducts a more in-depth analysis of the model's capabilities. We investigate the impact of data deduplication on the model's performance, as well as the potential benefits of more efficient tokenization methods. Our results indicate that domain-specific models such as medBERT.de are particularly useful for longer texts, and that deduplication of the training data does not necessarily improve performance. Furthermore, we found that efficient tokenization plays only a minor role, and we attribute most of the performance gains to the large amount of training data. To encourage further research, the pre-trained model weights and new benchmarks based on radiological data are made publicly available to the scientific community.
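Since the released weights follow the standard BERT format, they can be used with common BERT tooling. The sketch below shows how one might load the model with the Hugging Face transformers library and gauge tokenizer efficiency on a German clinical sentence; note that the model identifier GerMedBERT/medbert-512 and the example sentence are assumptions for illustration only, so the actual repository name should be taken from the official release.

```python
from transformers import AutoTokenizer, AutoModel

# Assumed Hugging Face model ID -- check the official medBERT.de release
# for the exact identifier under which the weights were published.
MODEL_ID = "GerMedBERT/medbert-512"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Illustrative German medical sentence (not taken from the training corpus).
text = "Der Patient klagt über thorakale Schmerzen mit Ausstrahlung in den linken Arm."

# Tokenizer "fertility": average number of subword tokens per whitespace word.
# A lower value suggests a vocabulary better adapted to medical German.
tokens = tokenizer.tokenize(text)
fertility = len(tokens) / len(text.split())
print(f"{len(tokens)} subword tokens for {len(text.split())} words "
      f"(fertility = {fertility:.2f})")

# Encode the sentence and obtain contextual embeddings.
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```

Fertility (subword tokens per word) is a common proxy for tokenizer efficiency; comparing it against a general-domain German BERT tokenizer on the same text is one way to reproduce the kind of tokenization analysis discussed in the abstract.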


