Language Identification with a Reciprocal Rank Classifier

by Dominic Widdows, et al.

Language identification is a critical component of language processing pipelines (Jauhiainen et al., 2019) and is not a solved problem in real-world settings. We present a lightweight and effective language identifier that is robust to changes of domain and to the absence of copious training data. The key idea for classification is that the reciprocal of the rank in a frequency table makes an effective additive feature score, hence the term Reciprocal Rank Classifier (RRC). The key finding for language classification is that ranked lists of words and frequencies of characters form a sufficient and robust representation of the regularities of key languages and their orthographies. We test this on two 22-language data sets and demonstrate zero-effort domain adaptation from a Wikipedia training set to a Twitter test set. When trained on Wikipedia but applied to Twitter, the macro-averaged F1-score of a conventionally trained SVM classifier drops substantially from 90.9%, whereas the macro F1-score of the RRC drops only slightly from 93.1%. We compare these results with those from fastText and langid. The RRC performs better than these established systems in most experiments, especially on short Wikipedia texts and on Twitter. The RRC classifier can be improved for particular domains and conversational situations by adding words to the ranked lists. Using new terms learned from such conversations, we demonstrate a further 7.9% improvement in sample message classification and a 1.7% improvement in conversation classification. Surprisingly, this made results on Twitter data slightly worse. The RRC classifier is available as an open-source Python package.
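To make the key idea concrete, here is a minimal sketch of a reciprocal rank classifier, assuming one ranked word-frequency list per language. The tiny ranked lists below are illustrative toy data, not the models released with the paper; each word contributes 1/rank to the score of any language whose list contains it, and the highest-scoring language wins.

```python
# Minimal sketch of a Reciprocal Rank Classifier (RRC) for language ID.
# The ranked word lists are toy examples for illustration only.

def build_rank_table(ranked_words):
    """Map each word to the reciprocal of its 1-based frequency rank."""
    return {word: 1.0 / rank for rank, word in enumerate(ranked_words, start=1)}

# Toy ranked lists: most frequent words first (hypothetical data).
MODELS = {
    "en": build_rank_table(["the", "of", "and", "to", "in"]),
    "fr": build_rank_table(["de", "la", "le", "et", "les"]),
    "es": build_rank_table(["de", "la", "que", "el", "en"]),
}

def classify(text, models=MODELS):
    """Score each language by summing reciprocal ranks of known words."""
    tokens = text.lower().split()
    scores = {
        lang: sum(table.get(tok, 0.0) for tok in tokens)
        for lang, table in models.items()
    }
    return max(scores, key=scores.get)

print(classify("the cat and the dog"))  # -> en
```

Because the score is a simple additive sum over ranked lists, adapting the classifier to a new domain amounts to inserting new words into a language's list, which is the mechanism the abstract describes for domain and conversation adaptation.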



