Language Identification with a Reciprocal Rank Classifier

09/20/2021
by   Dominic Widdows, et al.

Language identification is a critical component of language processing pipelines (Jauhiainen et al., 2019) and is not a solved problem in real-world settings. We present a lightweight and effective language identifier that is robust to changes of domain and to the absence of copious training data. The key idea for classification is that the reciprocal of the rank in a frequency table makes an effective additive feature score, hence the term Reciprocal Rank Classifier (RRC). The key finding for language classification is that ranked lists of words and frequencies of characters form a sufficient and robust representation of the regularities of key languages and their orthographies. We test this on two 22-language data sets and demonstrate zero-effort domain adaptation from a Wikipedia training set to a Twitter test set. When trained on Wikipedia but applied to Twitter, the macro-averaged F1-score of a conventionally trained SVM classifier drops from 90.9%, whereas the macro F1-score of RRC drops only from 93.1%. These classifiers are compared with those from fastText and langid. The RRC performs better than these established systems in most experiments, especially on short Wikipedia texts and Twitter. The RRC classifier can be improved for particular domains and conversational situations by adding words to the ranked lists. Using new terms learned from such conversations, we demonstrate a further 7.9% improvement in sample message classification, and 1.7% in conversation classification. Surprisingly, this made results on Twitter data slightly worse. The RRC classifier is available as an open source Python package (https://github.com/LivePersonInc/lplangid).
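To illustrate the core idea, here is a minimal Python sketch of a reciprocal rank classifier over ranked word lists: each candidate language contributes 1/rank for every token found in its frequency table, and the language with the highest additive score wins. This is not the lplangid implementation; function names and the toy ranked lists are illustrative only, and the character-frequency component used in the paper is omitted.

from collections import Counter

def build_rank_table(ranked_words):
    # Map each word to its 1-based rank in the language's frequency-ordered list.
    return {word: rank for rank, word in enumerate(ranked_words, start=1)}

def rrc_score(tokens, rank_table):
    # Additive reciprocal-rank score: sum 1/rank for every token present in the table.
    return sum(1.0 / rank_table[tok] for tok in tokens if tok in rank_table)

def classify(text, language_tables):
    # Score the text against each language's rank table and return the best language.
    tokens = text.lower().split()
    scores = {lang: rrc_score(tokens, table) for lang, table in language_tables.items()}
    return max(scores, key=scores.get)

# Toy example with hypothetical ranked lists; real lists come from training corpora.
language_tables = {
    "en": build_rank_table(["the", "of", "and", "to", "in"]),
    "fr": build_rank_table(["de", "la", "le", "et", "les"]),
}
print(classify("the cat sat on the mat", language_tables))  # -> "en"

Because the score is a simple additive function of ranks, a deployment can adapt the classifier to a new domain just by inserting domain-specific words into the ranked lists, which is the mechanism the paper uses for conversational data.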


