Comparison of Pre-trained Language Models for Turkish Address Parsing

06/24/2023
by Muhammed Cihat Ünal, et al.

Transformer-based pre-trained models such as BERT and its variants, which are trained on large corpora, have demonstrated tremendous success in natural language processing (NLP) tasks. Most academic work is based on the English language; however, the number of multilingual and language-specific studies is increasing steadily. Furthermore, several studies have claimed that language-specific models outperform multilingual models on various tasks. Therefore, the community tends to train or fine-tune models specifically for the language of their case study. In this paper, we focus on Turkish maps data and thoroughly evaluate both multilingual and Turkish-based BERT, DistilBERT, ELECTRA and RoBERTa models. In addition, we propose a MultiLayer Perceptron (MLP) head for fine-tuning BERT, as an alternative to the standard one-layer fine-tuning approach. For the dataset, a mid-sized, relatively high-quality address parsing corpus is constructed. Experiments conducted on this dataset indicate that Turkish language-specific models with MLP fine-tuning yield slightly better results than multilingual fine-tuned models. Moreover, visualization of the address tokens' representations further indicates the effectiveness of BERT variants for classifying a variety of addresses.
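To make the MLP fine-tuning idea concrete, here is a minimal sketch of a pre-trained Turkish BERT encoder with a small two-layer MLP head for token-level classification of address fields, in place of the usual single linear layer. The model name, hidden size, dropout rate, and label set below are illustrative assumptions, not details taken from the paper.

```python
# Sketch: BERT encoder + MLP head for token-level address parsing.
# The tag set and hyperparameters are hypothetical placeholders.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Hypothetical address-field labels; the paper's tag set may differ.
LABELS = ["O", "CITY", "DISTRICT", "NEIGHBORHOOD", "STREET", "BUILDING_NO"]

class BertMlpTokenClassifier(nn.Module):
    def __init__(self, model_name="dbmdz/bert-base-turkish-cased",
                 mlp_hidden=256, num_labels=len(LABELS)):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # MLP head: one hidden layer instead of the standard single
        # linear classification layer used in one-layer fine-tuning.
        self.head = nn.Sequential(
            nn.Linear(hidden, mlp_hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(mlp_hidden, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Per-token logits: shape (batch, seq_len, num_labels)
        return self.head(out.last_hidden_state)

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = BertMlpTokenClassifier()
batch = tokenizer("Atatürk Mah. Cumhuriyet Cad. No:12 Çankaya Ankara",
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
predicted_label_ids = logits.argmax(-1)  # per-token label ids
```

In the paper's setting, such a head would be trained jointly with the encoder on labeled address spans; the sketch only shows the architecture and an untrained forward pass.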

