
Language Model Adaptation for Language and Dialect Identification of Text

This article describes an unsupervised language model adaptation approach that can be used to enhance the performance of language identification methods. The approach is applied to a current version of the HeLI language identification method, which is now called HeLI 2.0. We describe the HeLI 2.0 method in detail. The resulting system is evaluated using the datasets from the German dialect identification and Indo-Aryan language identification shared tasks of the VarDial workshops 2017 and 2018. The new approach with language identification provides considerably higher F1-scores than the previous HeLI method or the other systems which participated in the shared tasks. The results indicate that unsupervised language model adaptation should be considered as an option in all language identification tasks, especially in those where encountering out-of-domain data is likely.


1 Introduction

Automatic language identification of text has been researched since the 1960s. It has been considered a subtype of general text categorization, and most of the methods used are similar to those used in categorizing texts according to their topic. However, deep learning techniques have not proven to be as efficient in language identification as they have been in other categorization tasks [Medvedeva et al. 2017].

For the past six years, we have been developing a language identification method, which we call HeLI, for the Finno-Ugric Languages and the Internet project [Jauhiainen et al. 2015]. The HeLI method is a supervised general-purpose language identification method relying on observations of word and character n-gram frequencies in a language-labeled corpus. The method is similar to Naive Bayes when using only relative frequencies of words as probabilities. Unlike Naive Bayes, it uses a back-off scheme to approximate the probabilities of individual words if the words themselves are not found in the language models. As language models, we use word unigrams and character-level n-grams. The optimal combination of the language models used with the back-off scheme depends on the situation and is determined empirically using a development set. The latest evolution of the HeLI method, HeLI 2.0, is described in this article.

One of the remaining difficult cases in language identification is the identification of language varieties or dialects. The task of language identification is less difficult if the set of possible languages does not include very similar languages. If we try to discriminate between very close languages or dialects, the task becomes increasingly difficult [Tiedemann and Ljubešić 2012]. The first to experiment with language identification for close languages were Sibun and Reynar, who had Croatian, Serbian, and Slovak as part of their language repertoire. The differences between definitions of dialects and languages are usually not clearly defined, at least not in terms that would help us automatically decide whether we are dealing with languages or dialects. Furthermore, the methods used for dialect identification are most of the time exactly the same as those used for general language identification. During the last five years, the state-of-the-art language identification methods have been put to the test in a series of shared tasks as part of the VarDial workshops [Zampieri et al. 2014, Zampieri et al. 2015, Malmasi et al. 2016, Zampieri et al. 2017, Zampieri et al. 2018].
We have used the HeLI method and its variations in the shared tasks of the four latest VarDial workshops [Jauhiainen et al. 2015, Jauhiainen et al. 2016, Jauhiainen et al. 2017a, Jauhiainen et al. 2018a, Jauhiainen et al. 2018b, Jauhiainen et al. 2018c]. The HeLI method has proven to be robust and competes well with other state-of-the-art language identification methods.

Another remaining difficult case in language identification is when the training data is not in the same domain as the data to be identified. Being out-of-domain can mean several things. For example, the training data can be from a different genre, a different time period, and/or produced by different writers than the data to be identified. The identification accuracies are considerably lower on out-of-domain data [Li et al. 2018], depending on the degree of out-of-domainness. The extreme example of in-domainness is when the training data and the test data are from different parts of the same text, as has been the case in several language identification experiments in the past [Vatanen et al. 2010, Brown 2012, Brown 2013, Brown 2014]. Classifiers can be more or less sensitive to the domain differences between the training and the testing data depending on the machine learning methods used [Blodgett et al. 2017]. One way to diminish the effects of this phenomenon is to create domain-general language models using adversarial supervision, which reduces the amount of domain-specific information in the language models [Li et al. 2018]. We suggest that another way is to use active language model adaptation.

In language model (LM) adaptation, we use the unlabelled mystery text itself to enhance the language models used by a language identifier. The language identification method used in combination with the language model adaptation approach presented in this article must be able to produce a confidence score indicating how well the identification has performed. As the language models are updated regularly while the identification is ongoing, the approach also benefits from the language identification method being non-discriminative. If the method is non-discriminative, the training material does not have to be re-processed when new information is added to the language models. To our knowledge, language model adaptation had not been used in language identification of digital text before the first versions of the method presented in this article were used in the shared tasks of the 2018 VarDial workshop [Jauhiainen et al. 2018a, Jauhiainen et al. 2018b, Jauhiainen et al. 2018c]. Concurrently with our current work, Ionescu and Butnaru presented an adaptive version of the Kernel Ridge Classifier, which they evaluated on the Arabic Dialect Identification (ADI) dataset from the 2017 VarDial workshop [Zampieri et al. 2017].

In this article, we first review the previous work relating to German dialect identification, Indo-Aryan language identification, and language model adaptation (Section 2). We then present the methods used in the article: the HeLI 2.0 method for language identification, three confidence estimation methods, and the algorithm for language model adaptation (Section 3). In Section 4, we introduce the datasets used for evaluating the methods and, in Section 5, we evaluate the methods and present the results of the experiments.

2 Related work

The first automatic language identifier for digital text was described by Mustonen. Since this first article, hundreds of conference and journal articles describing language identification experiments and methods have been published. For a recent survey of language identification and the methods used in the literature, see Jauhiainen et al. (2018). The HeLI method was first published in 2010 as part of a master's thesis [Jauhiainen 2010], and has since been used, outside the VarDial workshops, for language set identification [Jauhiainen et al. 2015] as well as general language identification with a large number of languages [Jauhiainen et al. 2017b].

2.1 German dialect identification

German dialect identification had earlier been considered by Scherrer and Rambow, who used a lexicon of dialectal words. Hollenstein and Aepli experimented with a perplexity-based language identifier using character trigrams. They reached an average F-score of 0.66 on the sentence level, distinguishing between five German dialects.

The results of the first shared task on German dialect identification are described by Zampieri et al. (2017). Ten teams submitted results on the task, utilizing a variety of machine learning methods used for language identification. The team MAZA [Malmasi and Zampieri 2017] experimented with different types of support vector machine (SVM) ensembles: plurality voting, mean probability, and meta-classifier. The meta-classifier ensemble using the Random Forest algorithm for classification obtained the best results. The team CECL [Bestgen 2017] used SVMs as well, and their best results were obtained using an additional procedure to equalize the number of sentences assigned to each category. Team CLUZH experimented with naïve Bayes (NB), conditional random fields (CRF), as well as a majority voting ensemble consisting of NB, CRF, and SVM [Clematide and Makarov 2017]. Their best results were reached using CRF. Team qcri_mit used an ensemble of two SVMs and a stochastic gradient descent (SGD) classifier. Team unibuckernel experimented with different kernels using kernel ridge regression (KRR) and kernel discriminant analysis (KDA) [Ionescu and Butnaru 2017]. They obtained their best results using KRR based on the sum of three kernels. Team tubasfs [Coltekin and Rama 2017] used SVMs with features weighted using sub-linear TF-IDF (product of term frequency and inverse document frequency) scaling. Team ahaqst used cross entropy (CE) with character and word n-grams [Hanani et al. 2017]. Team Citius_Ixa_Imaxin used perplexity with different features [Gamallo et al. 2017]. Team XAC_Bayesline used NB [Barbaresi 2017] and team deepCybErNet used Long Short-Term Memory (LSTM) neural networks. We report the F1-scores obtained by the teams in Table 8 together with the results presented in this article.

The second shared task on German dialect identification was organized as part of the 2018 VarDial workshop [Zampieri et al. 2018]. We participated in the shared task with an early version of the method described in this article, and our submission using the language model adaptation scheme reached a clear first place [Jauhiainen et al. 2018b]. Seven other teams submitted results on the shared task. Teams Twist Bytes [Benites et al. 2018], Tübingen-Oslo [Coltekin et al. 2018], and GDI_classification [Ciobanu et al. 2018a] used SVMs. The team safina used convolutional neural networks (CNN) with direct one-hot encoded vectors, with an embedding layer, as well as with a Gated Recurrent Unit (GRU) layer [Ali 2018a]. The team LaMa used a voting ensemble of eight classifiers. The best results for the team XAC were achieved using naïve Bayes, but they experimented with Ridge regression and SGD classifiers as well [Barbaresi 2018]. The team dkosmajac used normalized Euclidean distance. After the shared task, the team Twist Bytes was able to slightly improve their F1-score by using a higher number of features [Benites et al. 2018]. However, the exact number of included features was not determined using the development set; rather, it was the optimal number for the test set. Using the full set of features resulted again in a lower score. We report the F1-scores obtained by the teams in Table 11 together with the results obtained in this article.

2.2 Language identification for Devanagari script

Language identification research distinguishing between languages written in the Devanagari script is much less common than for the Latin script. However, there had been some research already before the Indo-Aryan Language Identification (ILI) shared task at VarDial 2018 [Zampieri et al. 2018]. Kruengkrai et al. presented results from language identification experiments between ten Indian languages, including four languages written in Devanagari: Sanskrit, Marathi, Magahi, and Hindi. For the ten Indian languages, they obtained over 90% accuracy with 70-byte-long mystery text sequences. As the language identification method, they used SVMs with string kernels. Murthy and Kumar compared the use of language models based on bytes with models based on aksharas. Aksharas are the syllables or orthographic units of the Brahmi scripts [Vaid and Gupta 2002]. After evaluating language identification between different pairs of languages, they concluded that the akshara-based models perform better than the byte-based ones. They used multiple linear regression as the classification method.

Sreejith et al. tested language identification with Markovian character and word n-grams from one to three for Hindi and Sanskrit. A character-bigram-based language identifier fared the best and managed to reach an accuracy of 99.75% for sentence-sized mystery texts. Indhuja et al. continued the work of Sreejith et al., investigating language identification between Hindi, Sanskrit, Marathi, Nepali, and Bhojpuri. In a similar fashion, they evaluated the use of Markovian character and word n-grams from one to three. For this set of languages, word unigrams performed the best, obtaining 88% accuracy with the sentence-sized mystery texts.

Bergsma et al. collected tweets in three languages written with the Devanagari script: Hindi, Marathi, and Nepali. They managed to identify the language of the tweets with 96.2% accuracy using a logistic regression (LR) classifier [Hosmer et al. 2013] with up to 4-grams of characters. Using an additional training corpus, they reached 97.9% accuracy with the A-variant of prediction by partial matching (PPM). Later, Pla and Hurtado experimented with the corpus of Bergsma et al. Their approach using words weighted with TF-IDF and SVMs reached 97.7% accuracy on the tweets when using only the provided tweet training corpus. Hasimu and Silamu included the same three languages in their test setting. They used a two-stage language identification system where the languages were first identified as a group using Unicode code ranges. In the second stage, the languages written with the Devanagari script were individually identified using SVMs with character bigrams. Their tests resulted in an F1-score of 0.993 within the group of languages using Devanagari with the 700 best distinguishing bigrams. Indhuja et al. provided test results for several different combinations of the five languages; for the set of languages used by Hasimu and Silamu, they reached 96% accuracy with word unigrams.

Rani et al. described a language identification system which they used for discriminating between Hindi and Magahi. Their language identifier, using lexicons and suffixes of three characters, obtained an accuracy of 86.34%. Kumar et al. provided an overview of experiments on an earlier version of the dataset used in the ILI shared task, including five closely related Indo-Aryan languages: Awadhi, Bhojpuri, Braj, Hindi, and Magahi. They managed to obtain an accuracy of 96.48% and a macro F1-score of 0.96 on the sentence level. For sentence-level language identification, these results are quite good, and as such they indicate that the languages, at least in their written form as evidenced by the corpus, are not as closely related as, for example, the Balkan languages Croatian, Serbian, and Bosnian.

The results of the first shared task on Indo-Aryan language identification are described by Zampieri et al. (2018). Eight teams submitted results on the task. As in the second edition of the GDI shared task, we participated with an early version of the method described in this article. Again, our submission using a language model adaptation scheme reached a clear first place [Jauhiainen et al. 2018c]. Seven other teams submitted results on the shared task. The team with the second best results, Tübingen-Oslo, submitted their best results using SVMs [Coltekin et al. 2018]. In addition to the SVMs, they experimented with Recurrent Neural Networks (RNN) with GRUs and LSTMs, but their RNNs never achieved results comparable to the SVMs. The team ILIdentification used an SVM ensemble [Ciobanu et al. 2018b]. The best results for the team XAC were achieved using Ridge regression [Barbaresi 2018]. In addition to Ridge regression, they experimented with NB and SGD classifiers, which did not perform as well. The team safina used CNNs with direct one-hot encoded vectors, with an embedding layer, as well as with a GRU layer [Ali 2018b]. The team dkosmajac used normalized Euclidean distance. The team we_are_indian used word-level LSTM RNNs in their best submission and a statistical n-gram approach with mutual information in their second submission [Gupta et al. 2018]. The team LaMa used NB. We report the F1-scores obtained by the teams in Table 14 together with the results presented in this article.

2.3 Language model adaptation

Even though language model adaptation has not been used in language identification of text in the past, it has been used in other areas of natural language processing. Jelinek et al. used a dynamic LM and Bacchiani and Roark used self-adaptation on a test set in speech recognition. Bacchiani and Roark experimented with iterative adaptation of their language models and noticed that one iteration improved the results but that subsequent iterations made them worse. Zlatkova et al. used a logistic regression classifier in the Style Change Detection shared task [Rangel et al. 2018]. Their winning system fitted their TF-IDF features on the testing data in addition to the training data.

Language model adaptation was used by Chen and Liu for identifying the language of speech. In their system, the speech is first run through Hidden Markov Model-based phone recognizers (one for each language) which tokenize the speech into sequences of phones. The probabilities of those sequences are calculated using corresponding language models and the most probable language is selected. An adaptation routine is then used so that each of the phonetic transcriptions of the individual speech utterances is used to calculate probabilities for words w, given a word n-gram history h, as in Equation 1.

(1)   P_a(w|h) = λ P_t(w|h) + (1 − λ) P_d(w|h)

where P_t(w|h) is the original probability calculated from the training material, P_d(w|h) the probability calculated from the data being identified, and P_a(w|h) the new adapted probability. λ is the weight given to the original probabilities. This adaptation method decreased the error rate in three-way identification between Chinese, English, and Russian by 2.88% and 3.84% on out-of-domain (different channels) data and by 0.44% on in-domain (same channel) data.
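The interpolation in Equation 1 is straightforward to implement. The sketch below is illustrative only (the function name and default weight are ours, not taken from the systems described): it combines a probability estimated from the training corpus with one estimated from the data being identified.

```python
def adapt_probability(p_train, p_data, weight=0.9):
    """Linearly interpolate a training-corpus probability with a
    probability estimated from the data being identified.

    `weight` plays the role of the interpolation weight given to the
    original (training) probabilities; (1 - weight) goes to the new data.
    """
    return weight * p_train + (1.0 - weight) * p_data
```

With `weight` close to 1, the adapted model stays near the original training estimates; lowering it lets the mystery data dominate.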

Later, Zhong et al. also used language model adaptation in language identification of speech. They evaluated three different confidence measures, and the best faring measure is defined as follows:

(2)   C(X) = (1/n) ( log P(X|l_1) − log P(X|l_2) )

where X is the sequence to be identified, n the number of frames in the utterance, l_1 the best identified language, and l_2 the second best identified language. The two other evaluated confidence measures were clearly inferior. Although this measure performed the best of the individual measures, a Bayesian classifier-based ensemble using all three measures gave slightly higher results. Zhong et al. use the same language model adaptation method as Chen and Liu, using the confidence measures to set the interpolation weight λ for each utterance.

We used an early version of the language model adaptation technique presented in this article in three of the 2018 VarDial workshop shared tasks [Jauhiainen et al. 2018a, Jauhiainen et al. 2018b, Jauhiainen et al. 2018c].

The adaptive language identification method presented by Ionescu and Butnaru improved the accuracy from 76.27% to 78.35% on the ADI dataset. In their method, they retrain the language models once, adding the 1,000 best identified unlabelled test samples (sorted by the confidence scores produced by their language identification method) to the training data.

3 The Methods

In this section, we present the detailed descriptions of the methods used in the experiments. First we describe HeLI 2.0, the language identification method used. Then we present the confidence measures we consider in this article. We conclude this section by describing the language model adaptation method used.

3.1 Language Identification

We use the HeLI method [Jauhiainen et al. 2016] for language identification. The HeLI method rivalled SVMs already before language model adaptation was added, reaching a shared first place in the 2016 Discriminating between Similar Languages (DSL) shared task [Malmasi et al. 2016]. The HeLI method is mostly non-discriminative, and it is relatively quick to incorporate new material into the LMs of the language identifier. We have made a modification to the method in which the original penalty value is replaced with a smoothing value calculated from the sizes of the LMs. This modification is needed especially in cases where the language models grow considerably because of LM adaptation, as the original penalty value depended on the size of the training corpus during the development phase. The penalty modifier p is introduced to penalize those languages from whose models features encountered during the identification are absent. The parameter p is optimized using the development corpus; in the experiments presented in this article, the optimal value varies between 1.09 and 1.16. The complete formulation of the HeLI 2.0 method is presented here, and we provide the modified equations for the values used in the LMs in a notation similar to that used by Jauhiainen et al. (2016).

The method aims to determine the language in which the mystery text has been written, when all languages in the set G are known to the language identifier. Each language is represented by several different language models, only one of which is used for each word found in the mystery text. The language models for each language are: a model based on words (there can be several models for words, depending on the preprocessing scheme) and one or more models based on character n-grams from one to n_max. The mystery text is processed one word at a time. The word-based models are used first, and if an unknown word is encountered in the mystery text, the method backs off to using the character n-grams of the size n_max. If it is not possible to apply the character n-grams of the size n_max, the method backs off to lower-order character n-grams and, if needed, continues backing off until character unigrams.

Creating the language models: The training data is preprocessed in different ways to produce different types of language models. The most usual way is to lowercase the text and tokenize it into words using non-alphabetic and non-ideographic characters as delimiters. It is possible to generate several language models for words using different preprocessing schemes, and then use the development material to determine which models and in which back-off order are usable for the current task.

The relative frequencies of the words are calculated. Also the relative frequencies of character n-grams from 1 to n_max are calculated inside the words, so that the preceding and the following space characters are included (a space character is added to the beginning and the end of each word even if it was not there originally). The character n-grams are overlapping, so that, for example, a word with three characters includes three character trigrams. Word n-grams were not used in the experiments of this article, so all subsequent references to n-grams refer to n-grams of characters. After calculating the relative frequencies, we transform them into scores using 10-based logarithms.
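As a concrete illustration of the n-gram extraction described above, the following minimal sketch (the function name is ours, not part of HeLI) pads a word with spaces and produces all overlapping n-grams; a three-character word yields exactly three trigrams.

```python
def char_ngrams(word, n):
    """Overlapping character n-grams of a word, with a space character
    added to the beginning and the end of the word, as in the HeLI
    language models."""
    padded = " " + word + " "
    if len(padded) < n:
        return []
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]
```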

The corpus containing only the word tokens in the language models is called C. A corpus in language g is denoted by C_g. dom(O(C)) is the set of all words found in the models of any of the languages g ∈ G. For each word t ∈ dom(O(C)), the values v_{C_g}(t) for each language g are calculated, as in Equation 3.

(3)   v_{C_g}(t) = −log10( c(C_g, t) / l_{C_g} ),   if c(C_g, t) > 0
      v_{C_g}(t) = −log10( 1 / l_{C_g} ) · p,       if c(C_g, t) = 0

where c(C_g, t) is the number of occurrences of the word t and l_{C_g} is the total number of all words in the corpus of language g. The parameter p is the penalty modifier, which is determined empirically using the development set.

The corpus containing the n-grams of the size n in the language models is called C^n. The domain dom(O(C^n)) is the set of all character n-grams of length n found in the models of any of the languages g ∈ G. The values v_{C_g^n}(u) are calculated in the same way for all n-grams u ∈ dom(O(C^n)) for each language g, as shown in Equation 4.

(4)   v_{C_g^n}(u) = −log10( c(C_g^n, u) / l_{C_g^n} ),   if c(C_g^n, u) > 0
      v_{C_g^n}(u) = −log10( 1 / l_{C_g^n} ) · p,         if c(C_g^n, u) = 0

where c(C_g^n, u) is the number of the n-grams u found in the corpus of the language g and l_{C_g^n} is the total number of the n-grams of length n in the corpus of language g. These values are used when scoring the words while identifying the language of a text.
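The two-case value definition above can be sketched as a single stand-alone function; this is a simplified illustration (the function name and default penalty modifier are ours), not the implementation used in the experiments.

```python
import math

def feature_value(count, total, p=1.09):
    """Negative 10-based logarithm of the relative frequency of a feature
    (a word or a character n-gram) in one language's model.

    For features absent from the model, the smoothing value
    -log10(1 / total) is scaled by the penalty modifier p (the optimal
    value varied between 1.09 and 1.16 in the experiments).
    """
    if count > 0:
        return -math.log10(count / total)
    return -math.log10(1.0 / total) * p
```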

Scoring the text: The mystery text is tokenized into words using the same tokenization scheme as when creating the language models. The words are lowercased when lowercased models are being used. After this, a score v_g(t) is calculated for each word t in the mystery text for each language g. If the word t is found in the set of words dom(O(C)), the corresponding value v_{C_g}(t) for each language g is assigned as the score v_g(t), as shown in Equation 5.

(5)   v_g(t) = v_{C_g}(t),   if t ∈ dom(O(C))

If a word t is not found in the set of words dom(O(C)) and the length of the word is at least n_max − 2, the language identifier backs off to using character n-grams of the length n_max. In case the word is shorter than n_max − 2 characters, n = l_t + 2, where l_t is the length of the word in characters.

When using n-grams, the word t is split into overlapping character n-grams u_i^n, i = 1, ..., l_t − n + 3, of the length n. Each of the n-grams is then scored separately for each language g in the same way as the words.

If the n-gram u is found in dom(O(C^n)), the values v_{C_g^n}(u) in the models are used. If the n-gram u is not found in any of the models, it is simply discarded. We define the function d_g(t, n) for counting the n-grams of t found in a model in Equation 6.

(6)   d_g(t, n) = | { u ∈ t : u ∈ dom(O(C^n)) } |

When all the n-grams of the size n in the word t have been processed, the word gets the value of the average of the scored n-grams for each language, as in Equation 7.

(7)   v_g(t) = (1 / d_g(t, n)) Σ_{i=1..l_t−n+3} v_{C_g^n}(u_i^n)

where d_g(t, n) is the number of the n-grams of t found in the domain dom(O(C^n)). If all of the n-grams of the size n were discarded, that is d_g(t, n) = 0, the language identifier backs off to using n-grams of the size n − 1.

The whole mystery text M gets the score R_g(M) equal to the average of the word scores v_g(t) for each language g, as in Equation 8.

(8)   R_g(M) = (1 / l_M) Σ_{i=1..l_M} v_g(t_i)

where t_1, ..., t_{l_M} is the sequence of words and l_M is the number of words in the mystery text M. Since we are using negative logarithms of probabilities, the language having the lowest score is returned as the language with the maximum probability for the mystery text M.
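The word scoring and back-off described above can be sketched compactly as follows. This is a simplified, self-contained illustration under an assumed model layout (per-language count dictionaries keyed by "word" and by n-gram length), not the implementation used in the experiments.

```python
import math

def heli_score(text, models, n_max=4, p=1.09):
    """Simplified sketch of HeLI 2.0 scoring (hypothetical data layout).

    `models[g]["word"]` maps words to counts for language g, and
    `models[g][n]` maps character n-grams of length n to counts.
    Returns the language with the lowest average negative-log score.
    """
    langs = list(models)

    def known(feature, key):
        # dom(O(C)): features present in the model of any language
        return any(feature in models[g][key] for g in langs)

    def value(g, feature, key):
        counts = models[g][key]
        total = sum(counts.values()) or 1
        if feature in counts:
            return -math.log10(counts[feature] / total)
        return -math.log10(1.0 / total) * p  # penalty-modified smoothing

    def score_word(g, word):
        if known(word, "word"):
            return value(g, word, "word")
        padded = " " + word + " "
        # back off from the highest applicable n-gram order downwards
        for n in range(min(n_max, len(padded)), 0, -1):
            grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
            found = [value(g, u, n) for u in grams if known(u, n)]
            if found:
                return sum(found) / len(found)
        return 0.0  # no feature of this word known in any model

    words = text.lower().split()
    scores = {g: sum(score_word(g, w) for w in words) / len(words)
              for g in langs}
    # negative logarithms of probabilities: the lowest score wins
    return min(scores, key=scores.get)
```

Real HeLI models also support several word models with different preprocessing schemes; the sketch keeps a single lowercased word model for brevity.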

3.2 Confidence estimation

In order to be able to select the best candidate for language model adaptation, the language identifier needs to provide a confidence score for the identified language. We evaluated three different confidence measures that seemed applicable to the HeLI 2.0 method.

In the first measure, we estimate the confidence of the identification as the difference between the scores of the best and the second best identified language. Zhong et al. evaluated this confidence score as well, and in our case it is calculated using the following equation:

(9)   R_diff(M) = R_{g''}(M) − R_{g'}(M)

where g' is the best scoring language and g'' the second best scoring language.
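This first measure is simple to compute from the per-language scores; a minimal sketch (the function name is ours) assuming scores are negative log probabilities, so lower is better:

```python
def score_difference(scores):
    """Confidence as the difference between the scores of the best
    (lowest) and the second best identified language.

    `scores` maps language codes to the negative-log scores produced
    by the identifier for one mystery text.
    """
    best, second = sorted(scores.values())[:2]
    return second - best
```

A large difference means the winner is well separated from the runner-up, which is exactly the property the adaptation algorithm ranks by.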

The second confidence measure was presented by Chen and Liu. Adapted to our situation, it is calculated as follows:

(10)

The third measure, presented by Zhong et al., is calculated with the following equation:

(11)

3.3 Language model adaptation algorithm

In the first step of our adaptation algorithm, all the mystery texts in the mystery text collection (for example, a test set) are preliminarily identified using the HeLI 2.0 method. They are subsequently ranked by their confidence scores, and the preliminarily identified collection is split into k parts M_1, ..., M_k. k is a number between 1 and the total number of mystery texts, depending on how many parts we want to split the mystery text collection into. (The only difference between the language model adaptation method presented here and the earlier version of the method we used at the shared tasks is that in the shared tasks, the number of parts was always equal to the number of mystery texts.) The higher k is, the longer the identification of the whole collection will take. The number of finally identified parts is e, which in the beginning is 0. After ranking, the part M_1 includes the most confidently identified texts and M_k the least confidently identified texts.

Words and character n-grams up to the length n_max are extracted from each mystery text in the part M_{e+1} and added to the respective language models. Then, all the mystery texts in the part M_{e+1} are set as finally identified and e is increased by 1.

Then, for as long as e < k, the process is repeated, using the newly adapted language models to perform a new preliminary identification of those texts that are not yet finally identified. In the end, all features from all of the mystery texts are included in the language models. This constitutes one epoch of adaptation.

In iterative language model adaptation, the previous algorithm is repeated from the beginning several times.
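One epoch of the adaptation algorithm can be sketched as follows. This is an illustrative outline only: `identify`, `confidence`, and `update_models` are hypothetical callables standing in for HeLI 2.0, its confidence measure, and its language-model update, respectively.

```python
def adaptation_epoch(collection, identify, confidence, update_models, k):
    """One epoch of confidence-ranked language model adaptation.

    Repeatedly re-identify the not-yet-final texts, move roughly 1/k of
    the original collection (the most confidently identified texts) into
    the language models, and continue until every text is final.
    Returns a dict mapping each text to its final language label.
    """
    remaining = list(collection)
    part_size = max(1, len(collection) // k)
    final = {}
    while remaining:
        labelled = [(identify(t), t) for t in remaining]
        # most confident identifications first
        labelled.sort(key=lambda lt: confidence(lt[1]), reverse=True)
        best, rest = labelled[:part_size], labelled[part_size:]
        for lang, text in best:
            update_models(lang, text)  # add its words/n-grams to the LM
            final[text] = lang
        remaining = [t for _, t in rest]
    return final
```

Iterative adaptation simply calls this function again with the already-adapted models; with k equal to the number of texts, one text is finalized per round, as in the shared-task version.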

4 Test setting

We evaluate the methods presented in the previous section using three standard datasets. The first two datasets are from the GDI shared tasks held at the VarDial workshops in 2017 and 2018. The third dataset is from the ILI shared task held at VarDial 2018.

4.1 GDI 2017 dataset

The dataset used in the GDI 2017 shared task consists of manual transcriptions of speech utterances by speakers from different areas in Switzerland: Bern, Basel, Lucerne, and Zurich. The variety of German spoken in Switzerland is considered a separate language (Swiss German, gsw) by the ISO 639-3 standard [Lewis, Simons,  FennigLewis et al.2013], and these four areas correspond to separate varieties of it. The transcriptions in the dataset are written entirely in lowercase letters. Samardžić et al. samardzic2016archimob describe the ArchiMob corpus, which is the source for the shared task dataset. Zampieri et al. zampieri9 describe how the training and test sets were extracted from the ArchiMob corpus for the 2017 shared task. The sizes of the training and test sets can be seen in Table 1. The shared task was a four-way language identification task between the four German dialects present in the training set.

Variety (code) Training Test
Bern (BE) 28,558 7,025
Basel (BS) 28,680 7,064
Lucerne (LU) 28,653 7,509
Zurich (ZH) 28,715 7,949
Table 1: List of the Swiss German varieties used in the datasets distributed for the 2017 GDI shared task. The sizes of the training and the test sets are in words.

4.2 GDI 2018 dataset

The dataset used in the GDI 2018 shared task was similar to the one used in GDI 2017. The sizes of the training, the development, and the test sets can be seen in Table 2. The first track of the shared task was a standard four-way language identification task between the four German dialects present in the training set. The GDI 2018 shared task included an additional second track dedicated to unknown dialect detection. The unknown dialect was included in neither the training set nor the development set, but it was present in the test set. The test set was identical for both tracks, but the lines containing the unknown dialect were ignored when calculating the results for the first track.

Variety (code) Training Development Test
Bern (BE) 28,558 7,404 12,013
Basel (BS) 27,421 9,544 9,802
Lucerne (LU) 29,441 8,887 11,372
Zurich (ZH) 28,820 8,099 9,610
Unknown dialect (XY) 8,938
Table 2: List of the Swiss German varieties used in the datasets distributed for the 2018 GDI shared task. The sizes of the training, the development, and the test sets are in words.

4.3 ILI 2018 dataset

The dataset used for the ILI 2018 shared task included text in five languages: Bhojpuri, Hindi, Awadhi, Magahi, and Braj. As can be seen in Table 3, there was considerably less training material for the Awadhi language than for the other languages. The training corpus for Awadhi had only slightly over 9,000 lines, whereas the other languages had around 15,000 lines of text for training. An early version of the dataset, as well as its creation, was described by Kumar et al. KUMAR18.26. The ILI 2018 shared task was an open one, allowing the use of any additional data or means. However, we did not use any external data, and our results would be exactly the same on a closed version of the task.

Language (code) Training Development Test
Bhojpuri (BHO) 258,501 56,070 50,206
Hindi (HIN) 325,458 44,215 35,863
Awadhi (AWA) 123,737 19,616 22,984
Magahi (MAG) 234,649 37,809 35,358
Braj (BRA) 249,243 40,023 31,934
Table 3: List of the Indo-Aryan languages used in the datasets distributed for the 2018 ILI shared task. The sizes of the training, the development, and the test sets are in words.

5 Experiments and results

In our experiments, we evaluate the HeLI 2.0 method, the HeLI 2.0 method using language model adaptation, as well as the iterative version of the adaptation. We test all three methods with all of the datasets described in the previous section. First we evaluate the confidence measures using the GDI 2017 dataset and afterwards we use the best performing confidence measure in all further experiments.

We are measuring language identification performance using the macro and the weighted F1-scores. These are the same performance measures that were used in the GDI 2017, GDI 2018, and ILI 2018 shared tasks [Zampieri, Malmasi, Ljubešic, Nakov, Ali, Tiedemann, Scherrer,  AepliZampieri et al.2017, Zampieri, Malmasi, Nakov, Ali, Shuon, Glass, Scherrer, Samardžić, Ljubešić, Tiedemann, van der Lee, Grondelaers, Oostdijk, van den Bosch, Kumar, Lahiri,  JainZampieri et al.2018]. F1-score is calculated using the precision and the recall as in Equation 12.

(12)    F1 = (2 × precision × recall) / (precision + recall)

The macro F1-score is the average of the individual F1-scores for the languages and the weighted F1-score is similar, but weighted by the number of instances for each language.
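Spelled out in code (the helper names are ours), the two aggregate scores are direct transcriptions of these definitions:

```python
def f1_score(precision, recall):
    """Equation 12: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(per_language_f1):
    """Unweighted mean of the per-language F1-scores."""
    return sum(per_language_f1.values()) / len(per_language_f1)

def weighted_f1(per_language_f1, instances):
    """Mean of the per-language F1-scores, weighted by the number of
    test instances of each language."""
    total = sum(instances.values())
    return sum(f * instances[lang] / total
               for lang, f in per_language_f1.items())
```

When every language has the same number of test instances, the two aggregates coincide.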

5.1 Evaluating the confidence measures

We evaluated the three confidence measures presented in Section 3.2 using the GDI 2017 training data. The results of the evaluation are presented in Table 4. The underlying data for the table consist of pairs of confidence values and corresponding Boolean values indicating whether the identification result was correct or not. The data were ordered by confidence score separately for each of the three measures. The first column in the table gives the percentage of examined top scores. The other columns give the average accuracy in that examined portion of the identification results for each confidence measure.

The first row tells us that among the 10% of identification results with the highest confidence according to the C_1-measure, 98.5% of the performed identifications were correct. The two other confidence measures, on the other hand, fail to arrange the identification results so that the most confident 10% would also be the most accurate 10%. As a whole, this experiment tells us that the C_1-measure is stable and performs well when compared with the other two.

% most confident   C_1 accuracy   C_2 accuracy   C_3 accuracy
0-10% 98.5% 95.6% 88.3%
10-20% 98.3% 96.3% 91.4%
20-30% 98.2% 96.2% 92.6%
30-40% 98.2% 95.1% 92.1%
40-50% 97.8% 94.6% 92.0%
50-60% 96.9% 94.9% 91.7%
60-70% 96.0% 94.0% 91.5%
70-80% 94.4% 93.2% 91.0%
80-90% 92.4% 91.7% 89.9%
90-100% 89.0% 89.0% 89.0%
Table 4: Average accuracies within the 10% portions when the results are sorted by the confidence scores C_1, C_2, and C_3.
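The evaluation behind Table 4 can be reproduced with a short routine of our own devising: sort the (confidence, correct) pairs by descending confidence and average the correctness within each 10% portion:

```python
def portion_accuracies(confidences, correct, portions=10):
    """Average accuracy within each portion of the identification
    results when they are sorted by descending confidence, as in
    Table 4.

    confidences: one confidence score per identified text
    correct: matching booleans, True when the identification was right
    """
    ranked = sorted(zip(confidences, correct), key=lambda pair: -pair[0])
    n = len(ranked)
    accuracies = []
    for p in range(portions):
        lo, hi = p * n // portions, (p + 1) * n // portions
        part = [ok for _, ok in ranked[lo:hi]]
        accuracies.append(sum(part) / len(part))
    return accuracies
```

A well-behaved confidence measure yields a (near-)monotonically decreasing list, as the C_1 column of Table 4 does.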

In addition to evaluating each individual confidence measure, Zhong et al. zhong1 evaluated an ensemble combining all three measures, gaining somewhat better results than with the otherwise best performing C_1-measure. However, in their experiments the two other measures were much more stable than in ours. We decided to use only the simple and well performing C_1-measure with our LM adaptation algorithm in the following experiments.

5.2 Experiments on the GDI 2017 dataset

5.2.1 Baseline results and parameter estimation

As no separate development set was provided for the GDI 2017 shared task, we divided the training set into training and development partitions. The last 500 lines of the original training data for each language were used for development. The development partition was then used to find the best parameters for the HeLI 2.0 method, using the macro F1-score as the performance measure. The macro F1-score is equal to the weighted F1-score, which was used as the ranking measure in the shared task, when the number of tested instances in each class is equal. On the development set, the best macro F1-score of 0.890 was reached with the language identifier using words. We then used the whole training set to train the LMs. On the test set, the language identifier using the same parameters reached a macro F1-score of 0.659 and a weighted F1-score of 0.639.

5.2.2 Experiments with language model adaptation

First, we determined the best value for the number of splits k using the development partition. Table 5 shows the increment of the weighted F1-score with different values of k on the development partition, using the same parameters for the HeLI 2.0 method as for the baseline. The results with k = 1 are always equal to the baseline. If k is very high, the identification becomes computationally costly, as the number of identifications performed grows in proportion to k. The absolute increase of the F1-score on the development partition was 0.010 with the best-performing values of k.
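The cost can be made concrete with a quick count. Assuming near-equal part sizes and that each round re-identifies every not-yet-final text, one epoch performs roughly n(k+1)/2 preliminary identifications for a collection of n texts. A small counting sketch (the function name is ours):

```python
def total_identifications(n, k):
    """Number of preliminary identifications performed during one
    adaptation epoch when a collection of n texts is split into k
    near-equal parts: every round re-identifies all texts that are
    not yet finally identified."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    count, remaining = 0, n
    for size in sizes:
        count += remaining
        remaining -= size
    return count
```

For k = 1 this is simply n, and for k = n it reaches n(n+1)/2, the most expensive setting.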

k   weighted F1-score
1 or 2   0.890
3   0.893
4   0.894
5   0.896
10 or 15   0.899
20 or 25   0.898
30, 35, or 40   0.899
45   0.900
50   0.899
60   0.900
70, 80, 90, 100, 150, or 200   0.899
Table 5: The weighted F1-scores obtained by the identifier using LM adaptation with different values of when tested on the development partition of the GDI 2017 dataset.

5.2.3 Experiments with thresholding

We experimented with setting a confidence threshold for the inclusion of new data into the language models. Table 6 shows the results on the development partition. The results show that there is no confidence threshold that would improve the results, at least not with the development partition of GDI 2017.

Conf. threshold   weighted F1-score
0.00-0.01   0.900
0.02   0.899
0.04   0.898
0.06 or 0.08   0.899
0.10   0.898
0.16   0.896
0.32   0.889
Table 6: Weighted F1-scores with confidence threshold for LM adaptation on the development set.

5.2.4 Results of the LM adaptation on the test data

Based on the evaluations using the development partition, we used the best-performing value of k for the test run. All the training data was used for the initial LM creation. The language identifier using LM adaptation reached a macro F1-score of 0.689 and a weighted F1-score of 0.687 on the test set. The weighted F1-score was 0.048 higher than the one obtained by the non-adaptive version and clearly higher than the other results obtained using the GDI 2017 dataset.

5.2.5 Iterative adaptation

We tested repeating the LM adaptation algorithm for several epochs; the results of those trials on the development partition can be seen in Table 7. Using 13-956 epochs brought a further improvement of 0.003 over the original macro F1-score. The results seem to indicate that the language models become very stable with repeated adaptation.

Number of epochs   Macro F1-score
1   0.900
2-4   0.901
5-12   0.902
13-956   0.903
957-999   0.902
Table 7: Macro F1-scores with iterative LM adaptation on the development partition.

We decided to try iterative LM adaptation using 485 epochs with the test set. The tests resulted in a weighted F1-score of 0.700, which was a further 0.013 increase on top of the score obtained without additional iterations. We report the weighted F1-scores from the GDI 2017 shared task together with our own results in Table 8. The methods used are listed in the first column, used features in the second column, and the best reached weighted F1-score in the third column. The results from this paper are bolded. The results using other methods (team names are in parentheses) are collected from the shared task report [Zampieri, Malmasi, Ljubešic, Nakov, Ali, Tiedemann, Scherrer,  AepliZampieri et al.2017] as well as from the individual system description articles. The 0.013 point increase obtained with the iterative LM adaptation over the non-iterative version might seem small when compared with the overall increase over the scores of the HeLI 2.0 method, but the increase is still more than the difference between the 1st and 3rd best submitted methods on the original shared task.

Method (Team) Features used wgh. F1
HeLI 2.0 + iterative LM-adapt. ch. n-grams 1-5 and words 0.700
HeLI 2.0 + LM-adapt. ch. n-grams 1-5 and words 0.687
SVM meta-classifier ensemble (MAZA) ch. n-grams 1-6 and words 0.662
SVM, cat. equal. 2 (CECL) BM25 ch. n-grams 1-5 0.661
CRF (CLUZH) ch. n-grams, affixes… 0.653
NB, CRF, and SVM ensemble (CLUZH) ch. n-grams, affixes… 0.653
SVM probability ensemble (MAZA) ch. n-grams 1-6 and words 0.647
SVM + SGD ensemble (qcri_mit) n-grams 1-8 0.639
HeLI 2.0 ch. n-grams 1-5 and words 0.639
SVM, cat. equal. 1 (CECL) BM25 ch. n-grams 1-5 0.638
KRR, sum of 3 kernels (unibuckernel) n-grams 3-6 0.637
KDA, sum of 2 kernels (unibuckernel) n-grams 3-6 0.635
KDA, sum of 3 kernels (unibuckernel) n-grams 3-6 0.634
SVM voting ensemble (MAZA) ch. n-grams 1-6 and words 0.627
Linear SVM (tubasfs) TF-IDF ch. n-grams and words 0.626
SVM (CECL) BM25 ch. n-grams 1-5 0.625
NB (CLUZH) ch. n-grams 2-6 0.616
Cross Entropy (ahaqst) ch. n-grams up to 25 bytes 0.614
Perplexity (Citius_Ixa_Imaxin) words 0.612
Perplexity (Citius_Ixa_Imaxin) ch. 5-7 and word 1-3 n-grams 0.611
Naive Bayes (XAC_Bayesline) TF-IDF 0.605
Perplexity (Citius_Ixa_Imaxin) ch. 7-grams 0.577
Cross Entropy (ahaqst) word n-grams 1-3 0.548
LSTM NN (deepCybErNet) characters or words 0.263
Table 8: The weighted F1-scores using different methods on the 2017 GDI test set. The results from the experiments presented in this article are bolded.

5.3 Experiments on the GDI 2018 dataset

5.3.1 Baseline results and parameter estimation

The GDI 2018 dataset included a separate development set (Table 2). We used the development set to find the best parameters for the HeLI 2.0 method, using the macro F1-score as the performance measure. The macro F1-score of 0.659 was obtained by the HeLI 2.0 method using only character n-grams of the size 4. The corresponding recall, 66.17%, was slightly higher than the 66.10% obtained with the HeLI method used in the GDI 2018 shared task. We then used the combined training and development sets to train the language models. On the test set, the language identifier using these parameters obtained a macro F1-score of 0.650. The HeLI 2.0 method reached a 0.011 higher macro F1-score than the HeLI method we used in the shared task. Even without LM adaptation, the HeLI 2.0 method beats all the other reported methods.

5.3.2 Experiments with language model adaptation

Table 9 shows the increment of the macro F1-score with different values of k, the number of parts the examined mystery text collection is split into, on the development set, using the same parameters for the HeLI 2.0 method as for the baseline. On the development set, the best value of k gave an absolute increase of 0.116 in F1-score over the baseline. The corresponding recall was 77.74%, which was somewhat lower than the 77.99% obtained at the shared task.

k   Macro F1-score
1   0.659
2   0.719
4   0.755
8   0.769
16   0.773
32, 40, 44, or 46   0.774
48   0.775
52, 54, or 55   0.774
56 or 57   0.776
58   0.774
60 or 64   0.775
96 or 128   0.774
256 or 512   0.775
1024, 2048, or 4658   0.774
Table 9: The macro F1-scores gained with different values of when evaluated on the development set.

5.3.3 Results of the LM adaptation on the test set

Based on the evaluations using the development set, we used the best-performing value of k for the test run. All the training and the development data were used for the initial LM creation. The method using the LM adaptation algorithm reached a macro F1-score of 0.707. This macro F1-score is 0.057 higher than the one obtained by the non-adaptive version and 0.021 higher than the results we obtained using language model adaptation in the GDI 2018 shared task.

5.3.4 Iterative adaptation

We tested repeating the LM adaptation algorithm for several epochs; the results of those trials on the GDI 2018 development set can be seen in Table 10. There was a clear improvement of 0.041 on the original macro F1-score at 477-999 epochs. It would again seem that the language models become very stable with repeated adaptation, at least when there is no unknown language present in the data, which is the case with the development set. Good scores were obtained already at 20 iterations, after which the results started to fluctuate up and down.

Number of epochs   Macro F1-score
1   0.776
2   0.787
3   0.792
4   0.797
5   0.800
6   0.801
7   0.804
8   0.806
9   0.807
10   0.808
11   0.809
12-13   0.810
14   0.811
15-16   0.812
17-19   0.813
20   0.814
21   0.813
22-33   0.814
34-54   0.815
55-82   0.816
83-88   0.815
89-94   0.816
95-111   0.815
112-122   0.816
123-129   0.815
130-476   0.816
477-999   0.817
Table 10: Macro F1-scores with iterative LM adaptation on the GDI 2018 development set.

Based on the results on the development set, we decided to try two different numbers of iterations: 738, which lies in the middle of the best-scoring range of epochs, and 20, after which the results started to fluctuate. The tests resulted in a macro F1-score of 0.696 for 738 epochs and 0.704 for 20 epochs. As an additional experiment, we evaluated the iterative adaptation on a test set from which the unknown dialects had been removed, and obtained an F1-score of 0.729 with 738 epochs. From the results, it is clear that the presence of the unknown language is detrimental to repeated language model adaptation. In Table 11, we report the macro F1-scores obtained by the teams participating in the GDI 2018 shared task, as well as our own. The methods used are listed in the first column, the features used in the second column, and the best macro F1-score reached in the third column.

Method (Team) Features used F1
HeLI 2.0 with LM adapt. ch. 4-grams 0.707
HeLI 2.0 with iter. LM adapt. ch. 4-grams 0.696
HeLI with LM adapt. (SUKI) ch. 4-grams 0.686
HeLI 2.0 ch. 4-grams 0.650
SVM ensemble (Twist Bytes) ch. and word n-grams 1-7 0.646
CNN with GRU (safina) characters 0.645
CNN (safina) characters 0.645
CNN with embedding (safina) characters 0.645
SVMs (Tübingen-Oslo) ch. n-grams 1-6, word n-grams 1-3 0.640
HeLI (SUKI) ch. 4-grams 0.639
Voting ensemble (LaMa) ch. n-grams 1-8, word n-grams 1-6 0.637
Naïve Bayes (XAC) TF-IDF ch. n-grams 1-6 0.634
Ridge regression (XAC) TF-IDF ch. n-grams 1-6 0.630
SGD (XAC) TF-IDF ch. n-grams 1-6 0.630
SVM ensemble (GDI_classification) ch. n-grams 2-5 0.620
RNN with LSTM (Tübingen-Oslo) 0.616
Euclidean distance (dkosmajac) ch. n-grams 0.591
Table 11: The macro F1-scores using different methods on the 2018 GDI test set. The results from the experiments presented in this article are bolded.

5.4 Experiments on the ILI 2018 dataset

5.4.1 Baseline results and parameter estimation

We used the development set to find the best parameters for the HeLI 2.0 method, using the macro F1-score as the measure. Using both original and lowercased character n-grams from one to six, the method obtained a macro F1-score of 0.954. The corresponding recall was 95.26%, which was exactly the same as we obtained with the HeLI method used in the ILI 2018 shared task. We then used the combined training and development sets to train the language models. On the test set, the language identifier using the above parameters obtained a macro F1-score of 0.880, which was clearly lower than the score we obtained using the HeLI method in the shared task.

5.4.2 Experiments with language model adaptation

Table 12 shows the increment of the macro F1-score with different values of k on the development set, using the same parameters for the HeLI 2.0 method as for the baseline. On the development set, the best F1-score was 0.964, an absolute increase of 0.010 on the original F1-score. The corresponding recall was 96.29%, which was a bit better than the 96.22% obtained in the shared task.

k   Macro F1-score
1   0.954
2   0.958
4   0.960
8 or 16   0.963
32 or 48   0.964
58   0.963
60 or 62   0.964
Table 12: The macro F1-scores gained with different values of when tested on the ILI 2018 development set.

5.4.3 Results of the LM adaptation on the test data

Based on the evaluations using the development data, we used the best-performing number of splits k for the actual test run. All the training and the development data were used for the initial LM creation. The identifier using the LM adaptation algorithm obtained a macro F1-score of 0.955. This macro F1-score is essentially the same as the one we obtained with language model adaptation in the ILI 2018 shared task, only a small fraction lower.

5.4.4 Iterative adaptation

We experimented with repeating the LM adaptation algorithm for several epochs; the results of those trials on the development set can be seen in Table 13. There was a very small improvement of 0.001 on the original macro F1-score. The best absolute F1-scores were reached at epochs 17 and 18. It would again seem that the language models become very stable with repeated adaptation.

Number of epochs   Macro F1-score
1   0.964
2-999   0.965
Table 13: Macro F1-scores with iterative LM adaptation on the ILI 2018 development set.

Based on the results on the development set, we decided to use LM adaptation with 18 iterations on the test set. The test resulted in a macro F1-score of 0.958, which is again almost the same as in the shared task, though this time a small fraction higher. We report the F1-scores obtained by the different teams participating in the ILI 2018 shared task in Table 14, with the results from this article in bold. The methods used are listed in the first column, the features used in the second column, and the macro F1-scores in the third column.

Method (Team) Features used F1
HeLI 2.0 with iter. LM adapt. ch. n-grams 1-6 0.958
HeLI with iter. LM adapt. (SUKI) ch. n-grams 1-6 0.958
HeLI with LM adapt. (SUKI) ch. n-grams 1-6 0.955
HeLI 2.0 with LM adapt. ch. n-grams 1-6 0.955
SVM (Tübingen-Oslo) ch. n-grams 1-6, word n-grams 1-3 0.902
Ridge regression (XAC) ch. n-grams 2-6 0.898
SVM ensemble (ILIdentification) ch. n-grams 2-4 0.889
HeLI (SUKI) ch. n-grams 1-6 0.887
SGD (XAC) ch. n-grams 2-6 0.883
HeLI 2.0 ch. n-grams 1-6 0.880
CNN (safina) characters 0.863
CNN with embedding (safina) characters 0.863
NB (XAC) ch. n-grams 2-6 0.854
Euclidean distance (dkosmajac) 0.847
LSTM RNN (we_are_indian) words 0.836
CNN with GRU (safina) characters 0.826
NB (LaMa) 0.819
RNN with GRU (Tübingen-Oslo) 0.753
Mutual information (we_are_indian) 0.744
Table 14: The macro F1-scores using different methods on the 2018 ILI test set. The results presented for the first time are in bold.

6 Discussion

The 26% difference in F1-scores between the development portion (0.890) and the test set (0.659) of the GDI 2017 data obtained by the HeLI 2.0 method is considerable. It seems to indicate that the test set contains more out-of-domain material than the partition of the training set we used for development. In order to validate this hypothesis, we divided the test set into two parts. The second part was used for testing in four scenarios with the HeLI 2.0 method. In the scenarios, we used different combinations of data for training: the original training set, the training set augmented with the first part of the test set, the training set with a part replaced by the first part of the test set, and the first part of the test set alone. The results of these experiments support our hypothesis, as can be seen in Table 15. The domain difference between the two sets explains why iterative adaptation performs better with the test set than with the development set. After each iteration, the relative amount of the original training data gets smaller, as the information from the test data is repeatedly added to the language models.

Data used for the language models Macro F1
training set 0.656
training set + 1st part of test set 0.801
part of training set replaced with 1st part of test set 0.803
1st part of test set 0.858
Table 15: The macro F1-scores for the second part of test set using different training data combinations.

In the GDI 2018 dataset, there is only a 1.4% difference between the macro F1-scores obtained on the development and the test sets. This indicates that, relative to the training set, the GDI 2018 development set is out-of-domain in much the same way as the actual test set.

There is a small difference (7.8%) between the F1-scores attained using the development set and the test set of the ILI 2018 data as well. However, such small differences can be partly due to the fact that the parameters of the identification method have been optimized using the development set.

7 Conclusions

The results indicate that unsupervised LM adaptation should be considered in all language identification tasks, especially in those where the amount of out-of-domain data is significant. If the presence of unseen languages is to be expected, the use of language model adaptation could still be beneficial, but special care must be taken as repeated adaptation in particular could decrease the identification accuracy.

Though the iterative LM adaptation is computationally costly when compared with the baseline HeLI 2.0 method, it must be noted that the final identifications with 485 epochs on the GDI 2017 test set took only around 20 minutes using one computing core of a modern laptop.

Acknowledgments

This research was partly conducted with funding from the Kone Foundation Language Programme [Kone FoundationKone Foundation2012].

References

  • [AliAli2018a] Ali, M. 2018a. Character level convolutional neural network for German dialect identification  In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018),  172–177.
  • [AliAli2018b] Ali, M. 2018b. Character level convolutional neural network for Indo-Aryan language identification  In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018),  283–287.
  • [Bacchiani  RoarkBacchiani  Roark2003] Bacchiani, M.  Roark, B. 2003. Unsupervised language model adaptation  In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03),  Vol. 1,  I–I. IEEE.
  • [BarbaresiBarbaresi2017] Barbaresi, A. 2017. Discriminating between similar languages using weighted subword features  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial),  184–189, Valencia, Spain.
  • [BarbaresiBarbaresi2018] Barbaresi, A. 2018. Computationally efficient discrimination between language varieties with large feature vectors and regularized classifiers  In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018),  164–171.
  • [Benites, Grubenmann, von Däniken, von Grünigen, Deriu,  CieliebakBenites et al.2018] Benites, F., Grubenmann, R., von Däniken, P., von Grünigen, D., Deriu, J.,  Cieliebak, M. 2018. Twist Bytes: German dialect identification with data mining optimization  In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018),  218–227.
  • [Bergsma, McNamee, Bagdouri, Fink,  WilsonBergsma et al.2012] Bergsma, S., McNamee, P., Bagdouri, M., Fink, C.,  Wilson, T. 2012. Language Identification for Creating Language-specific Twitter Collections  In Proceedings of the Second Workshop on Language in Social Media (LSM2012),  65–74, Montréal, Canada.
  • [BestgenBestgen2017] Bestgen, Y. 2017. Improving the character ngram model for the DSL task with BM25 weighting and less frequently used feature sets  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial),  115–123, Valencia, Spain.
  • [Blodgett, Wei,  O’ConnorBlodgett et al.2017] Blodgett, S. L., Wei, J. T.-Z.,  O’Connor, B. 2017. A Dataset and Classifier for Recognizing Social Media English 

    In Proceedings of the 3rd Workshop on Noisy User-generated Text,  56–61, Copenhagen, Denmark.

  • [BrownBrown2012] Brown, R. D. 2012. Finding and Identifying Text in 900+ Languages  Digital Investigation, 9, S34–S43.
  • [BrownBrown2013] Brown, R. D. 2013.

    Selecting and Weighting N-grams to Identify 1100 Languages 

    In Proceedings of the 16th International Conference on Text, Speech and Dialogue (TSD 2013),  475–483, Plzeň, Czech Republic.
  • [BrownBrown2014] Brown, R. D. 2014. Non-linear Mapping for Improved Identification of 1300+ Languages  In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014),  627–632, Doha, Qatar.
  • [Chen  MaisonChen  Maison2003] Chen, S. F.  Maison, B. 2003. Using Place Name Data to Train Language Identification Models  In 8th European Conference on Speech Communication and Technology EUROSPEECH 2003 - INTERSPEECH 2003,  1349–1352, Geneva, Switzerland.
  • [Ciobanu, Malmasi,  DinuCiobanu et al.2018a] Ciobanu, A. M., Malmasi, S.,  Dinu, L. P. 2018a. German dialect identification using classifier ensembles  arXiv preprint arXiv:1807.08230.
  • [Ciobanu, Zampieri, Malmasi, Pal,  DinuCiobanu et al.2018b] Ciobanu, A. M., Zampieri, M., Malmasi, S., Pal, S.,  Dinu, L. P. 2018b. Discriminating between Indo-Aryan languages using SVM ensembles  arXiv preprint arXiv:1807.03108.
  • [Clematide  MakarovClematide  Makarov2017] Clematide, S.  Makarov, P. 2017. CLUZH at VarDial GDI 2017: Testing a variety of machine learning tools for the classification of Swiss German dialects  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial),  170–177, Valencia, Spain.
  • [Coltekin  RamaColtekin  Rama2017] Coltekin, C.  Rama, T. 2017. Tübingen System in VarDial 2017 Shared Task: Experiments with Language Identification and Cross-lingual Parsing  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects,  146–155, Valencia, Spain.
  • [Coltekin, Rama,  BlaschkeColtekin et al.2018] Coltekin, C., Rama, T.,  Blaschke, V. 2018. Tübingen-Oslo team at the VarDial 2018 evaluation campaign: An analysis of n-gram features in language variety identification  In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018),  55–65.
  • [Gamallo, Pichel,  AlegriaGamallo et al.2017] Gamallo, P., Pichel, J. R.,  Alegria, I. 2017. A perplexity-based method for similar languages discrimination  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial),  109–114, Valencia, Spain.
  • [Gupta, Dhakad, Gupta,  SinghGupta et al.2018] Gupta, D., Dhakad, G., Gupta, J.,  Singh, A. K. 2018. IIT (BHU) system for Indo-Aryan language identification (ILI) at VarDial 2018  In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018),  185–190.
  • [Hanani, Qaroush,  TaylorHanani et al.2017] Hanani, A., Qaroush, A.,  Taylor, S. 2017. Identifying dialects with textual and acoustic cues  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial),  93–101, Valencia, Spain.
  • [Hasimu  SilamuHasimu  Silamu2018] Hasimu, M.  Silamu, W. 2018. On Hierarchical Text Language-Identification Algorithms  Algorithms, 11(39).
  • [Hollenstein  AepliHollenstein  Aepli2015] Hollenstein, N.  Aepli, N. 2015. A Resource for Natural Language Processing of Swiss German Dialects  In Proceedings of GSCL,  108–109.
  • [Hosmer, Lemeshow,  SturdivantHosmer et al.2013] Hosmer, D. W., Lemeshow, S.,  Sturdivant, R. X. 2013. Applied logistic regression (3rd ed.). Wiley Series in Probability and Statistics. Wiley, Hoboken, N.J., USA.
  • [Indhuja, Indu, Sreejith,  Reghu RajIndhuja et al.2014] Indhuja, K., Indu, M., Sreejith, C.,  Reghu Raj, P. C. 2014. Text Based Language Identification System for Indian Languages Following Devanagiri Script  International Journal of Engineering Research and Technology, 3(4), 327–331.
  • [IonescuIonescu2013] Ionescu, R. T. 2013. Local rank distance  In Björner, N., Negru, V., Ida, T., Jebelean, T., Petcu, D., Watt, S.,  Zaharie, D., Proceedings of the 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2013),  219–226, Timisoara, Romania.
  • [Ionescu  ButnaruIonescu  Butnaru2017] Ionescu, R. T.  Butnaru, A. 2017. Learning to identify arabic and german dialects using multiple kernels  In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial),  200–209, Valencia, Spain.
  • [Jauhiainen et al. 2015] Jauhiainen, H., Jauhiainen, T., & Lindén, K. 2015. The Finno-Ugric Languages and The Internet Project. Septentrio Conference Series, 0(2), 87–98.
  • [Jauhiainen 2010] Jauhiainen, T. 2010. Tekstin kielen automaattinen tunnistaminen [Automatic language identification of text]. Master’s thesis, University of Helsinki, Helsinki.
  • [Jauhiainen et al. 2015] Jauhiainen, T., Jauhiainen, H., & Lindén, K. 2015. Discriminating similar languages with token-based backoff. In Proceedings of the Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects (LT4VarDial), pp. 44–51, Hissar, Bulgaria.
  • [Jauhiainen et al. 2018a] Jauhiainen, T., Jauhiainen, H., & Lindén, K. 2018a. HeLI-based Experiments in Discriminating Between Dutch and Flemish Subtitles. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pp. 137–144, Santa Fe, NM.
  • [Jauhiainen et al. 2018b] Jauhiainen, T., Jauhiainen, H., & Lindén, K. 2018b. HeLI-based Experiments in Swiss German Dialect Identification. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pp. 254–262, Santa Fe, NM.
  • [Jauhiainen et al. 2018c] Jauhiainen, T., Jauhiainen, H., & Lindén, K. 2018c. Iterative Language Model Adaptation for Indo-Aryan Language Identification. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pp. 66–75, Santa Fe, NM.
  • [Jauhiainen et al. 2015] Jauhiainen, T., Lindén, K., & Jauhiainen, H. 2015. Language Set Identification in Noisy Synthetic Multilingual Documents. In Proceedings of the Computational Linguistics and Intelligent Text Processing 16th International Conference, CICLing 2015, pp. 633–643, Cairo, Egypt.
  • [Jauhiainen et al. 2016] Jauhiainen, T., Lindén, K., & Jauhiainen, H. 2016. HeLI, a Word-Based Backoff Method for Language Identification. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3), pp. 153–162, Osaka, Japan.
  • [Jauhiainen et al. 2017a] Jauhiainen, T., Lindén, K., & Jauhiainen, H. 2017a. Evaluating HeLI with non-linear mappings. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pp. 102–108, Valencia, Spain.
  • [Jauhiainen et al. 2017b] Jauhiainen, T., Lindén, K., & Jauhiainen, H. 2017b. Evaluation of Language Identification Methods Using 285 Languages. In Proceedings of the 21st Nordic Conference on Computational Linguistics (NoDaLiDa 2017), pp. 183–191, Gothenburg, Sweden. Linköping University Electronic Press.
  • [Jauhiainen et al. 2018] Jauhiainen, T., Lui, M., Zampieri, M., Baldwin, T., & Lindén, K. 2018. Automatic Language Identification in Texts: A Survey. arXiv preprint arXiv:1804.08186.
  • [Jelinek et al. 1991] Jelinek, F., Merialdo, B., Roukos, S., & Strauss, M. 1991. A dynamic language model for speech recognition. In Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19–22, 1991.
  • [Kone Foundation 2012] Kone Foundation. 2012. The Language Programme 2012–2016. http://www.koneensaatio.fi/en.
  • [Kruengkrai et al. 2006] Kruengkrai, C., Sornlertlamvanich, V., & Isahara, H. 2006. Language, Script, and Encoding Identification with String Kernel Classifiers. In Proceedings of the 1st International Conference on Knowledge, Information and Creativity Support Systems (KICSS 2006), Ayutthaya, Thailand.
  • [Kumar et al. 2018] Kumar, R., Lahiri, B., Alok, D., Ojha, A. K., Jain, M., Basit, A., & Dawar, Y. 2018. Automatic Identification of Closely-related Indian Languages: Resources and Experiments. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC).
  • [Lewis et al. 2013] Lewis, M. P., Simons, G. F., & Fennig, C. D. (Eds.). 2013. Ethnologue: Languages of the World (17th ed.). SIL International, Dallas, Texas.
  • [Li et al. 2018] Li, Y., Cohn, T., & Baldwin, T. 2018. What’s in a Domain? Learning Domain-Robust Text Representations Using Adversarial Training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2018), pp. 474–479, New Orleans, USA.
  • [Malmasi & Zampieri 2017] Malmasi, S. & Zampieri, M. 2017. German Dialect Identification in Interview Transcriptions. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pp. 164–169, Valencia, Spain.
  • [Malmasi et al. 2016] Malmasi, S., Zampieri, M., Ljubešić, N., Nakov, P., Ali, A., & Tiedemann, J. 2016. Discriminating Between Similar Languages and Arabic Dialect Identification: A Report on the Third DSL Shared Task. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects, pp. 1–14, Osaka, Japan.
  • [Medvedeva et al. 2017] Medvedeva, M., Kroon, M., & Plank, B. 2017. When Sparse Traditional Models Outperform Dense Neural Networks: the Curious Case of Discriminating between Similar Languages. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects, pp. 156–163, Valencia, Spain.
  • [Murthy & Kumar 2006] Murthy, K. N. & Kumar, G. B. 2006. Language Identification from Small Text Samples. Journal of Quantitative Linguistics, 13(1), 57–80.
  • [Mustonen 1965] Mustonen, S. 1965. Multiple Discriminant Analysis in Linguistic Problems. Statistical Methods in Linguistics, 4, 37–44.
  • [Pla & Hurtado 2017] Pla, F. & Hurtado, L.-F. 2017. Language Identification of Multilingual Posts from Twitter: A Case Study. Knowledge and Information Systems, 51(3), 965–989.
  • [Rangel et al. 2018] Rangel, F., Rosso, P., Montes-y-Gómez, M., Potthast, M., & Stein, B. 2018. Overview of the 6th Author Profiling Task at PAN 2018: Multimodal Gender Identification in Twitter. Working Notes Papers of the CLEF.
  • [Rani et al. 2018] Rani, P., Ojha, A. K., & Jha, G. N. 2018. Automatic Language Identification System for Hindi and Magahi. arXiv preprint arXiv:1804.05095.
  • [Samardžić et al. 2016] Samardžić, T., Scherrer, Y., & Glaser, E. 2016. ArchiMob – a corpus of spoken Swiss German. In Proceedings of LREC.
  • [Scherrer & Rambow 2010] Scherrer, Y. & Rambow, O. 2010. Word-based Dialect Identification with Georeferenced Rules. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010), pp. 1151–1161, Massachusetts, USA. Association for Computational Linguistics.
  • [Sibun & Reynar 1996] Sibun, P. & Reynar, J. C. 1996. Language Identification: Examining the Issues. In Proceedings of the 5th Annual Symposium on Document Analysis and Information Retrieval (SDAIR-96), pp. 125–135, Las Vegas, USA.
  • [Sreejith et al. 2013] Sreejith, C., Indu, M., & Reghu Raj, P. C. 2013. N-gram based Algorithm for Distinguishing Between Hindi and Sanskrit Texts. In Proceedings of the Fourth IEEE International Conference on Computing, Communication and Networking Technologies, Tiruchengode, India.
  • [Tiedemann & Ljubešić 2012] Tiedemann, J. & Ljubešić, N. 2012. Efficient Discrimination Between Closely Related Languages. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pp. 2619–2634, Mumbai, India.
  • [Vaid & Gupta 2002] Vaid, J. & Gupta, A. 2002. Exploring word recognition in a semi-alphabetic script: The case of Devanagari. Brain and Language, 81(1–3), 679–690.
  • [Vatanen et al. 2010] Vatanen, T., Väyrynen, J. J., & Virpioja, S. 2010. Language Identification of Short Text Segments with N-gram Models. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), pp. 3423–3430, Valletta, Malta.
  • [Zampieri et al. 2017] Zampieri, M., Malmasi, S., Ljubešić, N., Nakov, P., Ali, A., Tiedemann, J., Scherrer, Y., & Aepli, N. 2017. Findings of the VarDial Evaluation Campaign 2017. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects, pp. 1–15, Valencia, Spain.
  • [Zampieri et al. 2018] Zampieri, M., Malmasi, S., Nakov, P., Ali, A., Shon, S., Glass, J., Scherrer, Y., Samardžić, T., Ljubešić, N., Tiedemann, J., van der Lee, C., Grondelaers, S., Oostdijk, N., van den Bosch, A., Kumar, R., Lahiri, B., & Jain, M. 2018. Language Identification and Morphosyntactic Tagging: The Second VarDial Evaluation Campaign. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), Santa Fe, USA.
  • [Zampieri et al. 2014] Zampieri, M., Tan, L., Ljubešić, N., & Tiedemann, J. 2014. A Report on the DSL Shared Task 2014. In Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects, pp. 58–67, Dublin, Ireland.
  • [Zampieri et al. 2015] Zampieri, M., Tan, L., Ljubešić, N., Tiedemann, J., & Nakov, P. 2015. Overview of the DSL Shared Task 2015. In Proceedings of the Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects (LT4VarDial), pp. 1–9, Hissar, Bulgaria.
  • [Zhong et al. 2007] Zhong, S., Chen, Y., Zhu, C., & Liu, J. 2007. Confidence measure based incremental adaptation for online language identification. In Proceedings of International Conference on Human-Computer Interaction (HCI 2007), pp. 535–543, Beijing, China.
  • [Zlatkova et al.] Zlatkova, D., Kopev, D., Mitov, K., Atanasov, A., Hardalov, M., Koychev, I., & Nakov, P. An ensemble-rich multi-aspect approach for robust style change detection.