HeRo: RoBERTa and Longformer Hebrew Language Models

04/18/2023
by Vitaly Shalumov, et al.

In this paper, we fill a gap in the resources available to the Hebrew NLP community by providing HeDC4, the largest Hebrew pre-training dataset to date; HeRo, a state-of-the-art pre-trained language model for standard-length inputs; and LongHeRo, an efficient transformer for long input sequences. HeRo was evaluated on sentiment analysis, named entity recognition, and question answering, while LongHeRo was evaluated on document classification with a dataset composed of long documents. Both models achieved state-of-the-art performance. The dataset and the model checkpoints used in this work are publicly available.
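Since the checkpoints are stated to be public, the sketch below shows how one might load and probe them with the Hugging Face transformers library. The hub identifiers "HeNLP/HeRo" and "HeNLP/LongHeRo" are assumptions for illustration, not names confirmed by the abstract; substitute the actual repository names from the paper's release.

```python
# Minimal sketch of loading the released checkpoints with Hugging Face
# transformers. The hub identifiers "HeNLP/HeRo" and "HeNLP/LongHeRo"
# are assumptions; replace them with the paper's actual repository names.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# HeRo: a RoBERTa-style model for standard-length Hebrew inputs.
tokenizer = AutoTokenizer.from_pretrained("HeNLP/HeRo")
model = AutoModelForMaskedLM.from_pretrained("HeNLP/HeRo")

# Masked-token prediction as a quick sanity check
# ("Hello, my name is <mask>." in Hebrew).
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("שלום, קוראים לי <mask>."))

# LongHeRo: a Longformer-style model for long input sequences,
# loaded the same way. Longformer variants typically accept inputs
# up to 4,096 tokens; treat that figure as an assumption here.
long_tokenizer = AutoTokenizer.from_pretrained("HeNLP/LongHeRo")
long_model = AutoModelForMaskedLM.from_pretrained("HeNLP/LongHeRo")
```

The masked-language-modeling head is used here only because it is the pre-training objective of RoBERTa-style models; for the downstream tasks the paper evaluates (sentiment analysis, NER, question answering, document classification), the same checkpoints would be loaded with the corresponding task-specific head classes instead.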


