Punctuation restoration in Swedish through fine-tuned KB-BERT

02/14/2022
by John Björkman Nilsson, et al.

Presented here is a method for automatic punctuation restoration in Swedish using a BERT model. The method is based on KB-BERT, a publicly available neural network language model pre-trained on a Swedish corpus by the National Library of Sweden. This model was then fine-tuned for the punctuation restoration task on a corpus of government texts. Given lower-cased, unpunctuated Swedish text as input, the model should return a grammatically correct, punctuated copy of the text as output. A successful solution to this problem benefits a range of NLP domains, such as speech-to-text and automated text generation. Only periods, commas, and question marks were considered, owing to a lack of data for rarer marks such as the semicolon. Additionally, some marks are largely interchangeable with more common ones; exclamation points, for instance, can usually stand in for periods, so the data set had all exclamation points replaced with periods. The fine-tuned Swedish BERT model, dubbed prestoBERT, achieved an overall F1-score of 78.9, on par with international counterparts: Hungarian and Chinese models obtained F1-scores of 82.2 and 75.6, respectively. As a further comparison, a human evaluation case study was carried out. The human test group achieved an overall F1-score of 81.7, but scored substantially worse than prestoBERT on both periods and commas. Inspecting output sentences from the model and the humans shows satisfactory results, despite the difference in F1-score. The disconnect seems to stem from the evaluation's insistence on replicating the exact punctuation used in the test set, rather than accepting any of the several grammatically correct interpretations. If the loss function could be rewritten to reward all grammatically correct outputs, rather than only the single original reference, performance could improve significantly for both prestoBERT and the human group.
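The abstract describes the data preparation as lower-casing the text, restricting the marks to period, comma, and question mark, and mapping exclamation points to periods. The sketch below illustrates one way to turn punctuated Swedish text into (token, tag) pairs for such a setup; the label set and function name are illustrative assumptions, not taken from the paper.

```python
# Illustrative label set: one tag per token, marking the punctuation
# mark that should follow it (O = no punctuation). Framing restoration
# as token tagging is an assumption about the setup, not quoted code.
LABELS = ["O", "PERIOD", "COMMA", "QUESTION"]

def make_example(text: str):
    """Turn punctuated Swedish text into (tokens, tags) lists.

    Exclamation points are mapped to periods, mirroring the
    data-set substitution described in the abstract.
    """
    text = text.replace("!", ".")
    tokens, tags = [], []
    for word in text.split():
        stripped = word.rstrip(".,?").lower()
        if not stripped:
            continue
        if word.endswith("."):
            tag = "PERIOD"
        elif word.endswith(","):
            tag = "COMMA"
        elif word.endswith("?"):
            tag = "QUESTION"
        else:
            tag = "O"
        tokens.append(stripped)
        tags.append(tag)
    return tokens, tags

tokens, tags = make_example("Hej! Vad heter du? Jag heter Anna.")
print(tokens)  # ['hej', 'vad', 'heter', 'du', 'jag', 'heter', 'anna']
print(tags)    # ['PERIOD', 'O', 'O', 'QUESTION', 'O', 'O', 'PERIOD']
```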
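For concreteness, here is a minimal sketch of fine-tuning KB-BERT for this task as token classification with the Hugging Face transformers library. The checkpoint KB/bert-base-swedish-cased is the published KB-BERT model; the four-way tag set and the restore helper are assumptions about the setup rather than the paper's actual code, and the classification head is randomly initialized until fine-tuned (e.g., on the government-text corpus).

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# KB-BERT checkpoint published by the National Library of Sweden (KBLab).
MODEL_ID = "KB/bert-base-swedish-cased"
LABELS = ["O", "PERIOD", "COMMA", "QUESTION"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID, num_labels=len(LABELS)
)

def restore(words):
    """Predict a punctuation mark after each word and re-insert it."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    preds = logits.argmax(-1).tolist()
    out = []
    for i, word in enumerate(words):
        # Use the prediction of the word's first sub-token.
        sub = enc.word_ids().index(i)
        mark = {"PERIOD": ".", "COMMA": ",", "QUESTION": "?"}
        out.append(word + mark.get(LABELS[preds[sub]], ""))
    return " ".join(out)

# Lower-cased, unpunctuated input, as in the paper's task definition.
# Output is meaningless until the head has been fine-tuned.
print(restore(["vad", "heter", "du"]))
```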
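The reported numbers are per-class and overall F1 over the three punctuation marks. One plausible way to compute them is shown below; the micro-averaging scheme is an assumption, and the tag sequences are made up purely for illustration.

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted tag sequences for illustration only.
y_true = ["PERIOD", "O", "COMMA", "O", "QUESTION", "O"]
y_pred = ["PERIOD", "O", "O",     "O", "QUESTION", "COMMA"]

marks = ["PERIOD", "COMMA", "QUESTION"]
print(f1_score(y_true, y_pred, labels=marks, average=None))     # per class
print(f1_score(y_true, y_pred, labels=marks, average="micro"))  # overall
```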
