Automatic punctuation restoration with BERT models

01/18/2021
by Attila Nagy, et al.

We present an approach for automatic punctuation restoration with BERT models for English and Hungarian. For English, we conduct our experiments on TED Talks, a commonly used benchmark for punctuation restoration, while for Hungarian we evaluate our models on the Szeged Treebank dataset. Our best models achieve a macro-averaged F1-score of 79.8 in English and 82.2 in Hungarian. Our code is publicly available.
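
The abstract does not spell out the modeling details, so the snippet below is only a minimal sketch of how punctuation restoration is commonly framed with BERT: token-level classification over unpunctuated text, where each token receives a label for the punctuation mark (if any) that should follow it. The label set, the bert-base-uncased checkpoint, and the untrained classification head are illustrative assumptions, not the authors' exact setup.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label inventory: which punctuation mark to insert after each token.
LABELS = ["O", "COMMA", "PERIOD", "QUESTION"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The classification head is freshly initialized here; in practice it would be
# fine-tuned on punctuation-annotated text (e.g. transcripts whose labels are
# derived from the original punctuation).
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

text = "hello how are you i am fine thank you"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Print the predicted label for each subword token (meaningless until the head
# is fine-tuned, but it shows the input/output structure of the task).
for token, label_id in zip(tokens, predictions):
    print(f"{token:12s} -> {LABELS[label_id]}")

At inference time, the predicted labels would be mapped back to punctuation marks and inserted after the corresponding words; the macro-averaged F1 reported in the abstract is presumably computed over per-token predictions of this kind.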
