RuMedBench: A Russian Medical Language Understanding Benchmark

01/17/2022
by Pavel Blinov, et al.

This paper describes an open Russian medical language understanding benchmark covering several task types (classification, question answering, natural language inference, named entity recognition) on a number of novel text sets. Given the sensitive nature of healthcare data, such a benchmark partially addresses the scarcity of Russian medical datasets. For the new tasks, we prepare unified labeling, data splits, and evaluation metrics; the remaining tasks are adapted from existing datasets with a few modifications. A single-number metric expresses a model's overall ability to cope with the benchmark. Moreover, we implement several baseline models, from simple ones to neural networks with the Transformer architecture, and release the code. As expected, the more advanced models yield better performance, although even a simple model achieves a decent result on some tasks. Furthermore, we provide a human evaluation for all tasks. Interestingly, the models outperform humans on the large-scale classification tasks; however, the advantage of natural intelligence remains in the tasks requiring more knowledge and reasoning.
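The single-number metric mentioned above can be sketched as an unweighted mean of the per-task scores. This is a minimal illustrative sketch, not the paper's actual aggregation code; the task names and score values below are hypothetical placeholders, and the paper may weight or scale tasks differently.

```python
# Hypothetical sketch of a single-number benchmark metric:
# average the per-task scores (e.g., accuracy or F1, in percent)
# into one overall number. Task names/values are illustrative only.

def overall_score(task_scores: dict) -> float:
    """Unweighted mean of per-task metric values."""
    return sum(task_scores.values()) / len(task_scores)

scores = {
    "classification": 70.0,
    "question_answering": 60.0,
    "nli": 65.0,
    "ner": 55.0,
}
print(overall_score(scores))  # 62.5
```

An unweighted mean keeps the aggregate easy to interpret: improving any single task by one point moves the overall score by the same amount, regardless of task size.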


Related research:

- KLEJ: Comprehensive Benchmark for Polish Language Understanding (05/01/2020)
- CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark (06/15/2021)
- ViMQ: A Vietnamese Medical Question Dataset for Healthcare Dialogue System Development (04/27/2023)
- Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding (04/09/2023)
- An Ensemble Approach to Question Classification: Integrating Electra Transformer, GloVe, and LSTM (08/13/2023)
- Towards a Unified Natural Language Inference Framework to Evaluate Sentence Representations (04/23/2018)
- KLUE: Korean Language Understanding Evaluation (05/20/2021)
