Learning multilingual sentence representations is a key technique for building NLP applications with multilingual support. A primary advantage of multilingual sentence embeddings is that they enable us to train a single classifier on a single language (e.g., English) and then apply it to other languages (e.g., German) without training models for those languages. Furthermore, recent multilingual sentence embedding techniques [30, 6] have been shown to exhibit competitive performance on several downstream NLP tasks compared to the two-stage approach that relies on machine translation followed by monolingual sentence embedding techniques.
The main challenge of multilingual sentence embeddings is that they are sensitive to textual similarity (textual similarity bias), which negatively affects their ability to capture the semantic similarity of sentences. The following example illustrates this point:
S1: What time is the pool open tonight?
S2: What time are the stores on 5th open tonight?
S3: When does the pool open this evening?
S1 and S3 have similar intents. They ask for the opening hours of the pool in the evening. S2 has a different intent: it asks about the opening hour of stores. We expect embeddings of sentences of the same intent to be closer (e.g., to have higher cosine similarity) to one another than embeddings of sentences with different intents.
We tested several pre-trained (multilingual) sentence embedding models [24, 8, 30, 6] in both monolingual and cross-lingual settings. Somewhat surprisingly, every model provided lower similarity scores between S1 and S3 (compared to S1 and S2, or S2 and S3). This is mainly because S1 and S2 are more textually similar (because both sentences contain “what time” and “tonight”) compared to S1 and S3. This example highlights that general-purpose multilingual sentence embeddings exhibit textual similarity bias, which is a fundamental limitation as they may not correctly capture the semantic similarity of sentences.
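The bias can be checked directly by comparing cosine similarities between the three embeddings. The sketch below uses toy, hand-made 3-dimensional vectors in place of real model output (the vectors and the `cosine` helper are ours, purely for illustration):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" (placeholders, not real model output) that mimic
# the observed bias: s1/s2 share surface tokens, s1/s3 share intent.
s1 = [0.9, 0.3, 0.1]   # "What time is the pool open tonight?"
s2 = [0.8, 0.4, 0.0]   # "What time are the stores on 5th open tonight?"
s3 = [0.2, 0.3, 0.9]   # "When does the pool open this evening?"

# A textually biased encoder yields sim(s1, s2) > sim(s1, s3),
# even though s1 and s3 share the same intent.
assert cosine(s1, s2) > cosine(s1, s3)
```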
Motivated by the need for sentence embeddings that better reflect the semantics of sentences, we examine multilingual semantic specialization, which tailors pre-trained multilingual sentence embeddings to handle semantic similarity. Although prior work has developed semantic specialization methods for word embeddings and analyzed semantic and linguistic properties of sentence embeddings [36, 9], no prior work has considered semantic specialization of multilingual sentence embeddings.
In this paper, we develop a “lightweight” approach for semantic specialization of multilingual embeddings that can be applied to any base model. Our approach fine-tunes a pre-trained multilingual sentence embedding model on a classification task that considers semantic similarity. This aligns with common fine-tuning techniques for pre-trained NLP models [19, 27, 11]. We explore several loss functions to determine which is appropriate for the semantic specialization of cross-lingual sentence embeddings. We found that naive choices of loss function, such as the softmax loss, which is a common choice for classification, may significantly degrade the original multilingual sentence embedding model.
We also design Emu to specialize multilingual sentence embeddings using only monolingual training data as it is expensive to collect parallel training data in multiple languages. Our solution incorporates language adversarial training to enhance the multilinguality of sentence embeddings. Specifically, we implemented a language discriminator that tries to identify the language of an input sentence given its embedding and optimizes multilingual sentence embeddings to confuse the language discriminator.
We conducted experiments on three cross-lingual intent classification tasks that involve 6 languages. The results show that Emu successfully specializes a state-of-the-art multilingual sentence embedding technique, namely LASER, using only monolingual training data along with unlabeled data in other languages. It outperforms the original LASER model and monolingual sentence embeddings with machine translation by up to 47.7% and 86.2%, respectively.
The contributions of the paper are as follows:
We developed Emu, a system that semantically enhances pre-trained multilingual sentence embeddings. Emu incorporates multilingual adversarial training on top of fine-tuning to enhance multilinguality without using parallel sentences.
We experimented with several loss functions and show that two of them, namely the ℓ2-constrained softmax loss and the center loss, outperform common loss functions used for fine-tuning.
We show that Emu successfully specializes multilingual sentence embeddings using only monolingual labeled data.
Multilingual Semantic Specialization
The architecture of Emu is depicted in Figure 1. There are three main components, which we detail next: the multilingual encoder, the semantic classifier, and the language discriminator. The solid lines show the forward-propagation flow for fine-tuning the encoder and the semantic classifier, and the dotted lines show the flow for the language discriminator. These arrows are reversed during backpropagation. The semantic classifier and language discriminator are used only during fine-tuning.
After fine-tuning, Emu uses the fine-tuned multilingual encoder to obtain sentence embeddings for input sentences. More specifically, we expect the embeddings of two related sentences in any languages to be closer to each other (e.g., to have higher cosine similarity). We use cosine similarity as it is the most common choice and can be calculated efficiently.
A multilingual encoder is a language-agnostic sentence encoder that converts sentences in any language into embedding vectors in a common space. Emu is flexible with respect to the choice of multilingual encoders and their architectures; the only requirement is that the encoder maps a sentence in any language to a sentence embedding.
In this paper, we use LASER as the base multilingual sentence embedding model. LASER is a multilingual sentence embedding model that covers more than 93 languages written in more than 23 different alphabets. It is an encoder-decoder model that shares a single BiLSTM encoder with max-pooling and uses BPE to accept sentences in any language as input. The model is trained on a set of bilingual translation tasks and has shown state-of-the-art performance on cross-lingual NLP tasks, including bitext mining. We use LASER instead of multilingual BERT models because (1) LASER outperformed the BERT model on the XNLI task, and (2) a LASER model can be used as a sentence encoder without any change (a BERT model needs to be fine-tuned to use the first vector, corresponding to [CLS], as a sentence embedding).
The semantic classifier categorizes input sentences into groups that share the same intent, such as “seeking pool information” or “seeking restaurant information”. We expect the semantic classifier to enhance multilingual sentence embeddings to better reflect the semantic similarity of related sentences, where the semantic similarity is calculated as the cosine similarity between the embeddings of the two sentences.
Additionally, we expect the learned embeddings to retain semantic similarity with respect to cosine similarity. Thus, we propose the use of the ℓ2-constrained softmax loss and the center loss, which are known to be effective for image recognition tasks. To the best of our knowledge, we are the first to apply these loss functions to fine-tuning embedding models. We describe these loss functions next.
ℓ2-constrained softmax loss The ℓ2-constrained softmax loss adds hard constraints on the norm of the embedding vectors on top of the softmax loss:

$$\min\; -\frac{1}{M}\sum_{i=1}^{M}\log\frac{e^{W_{y_i}^{\top}f(\mathbf{x}_i)+b_{y_i}}}{\sum_{j=1}^{C}e^{W_{j}^{\top}f(\mathbf{x}_i)+b_j}} \quad \text{subject to}\quad \|f(\mathbf{x}_i)\|_2 = \alpha,\; i=1,\ldots,M,$$

where $C$ denotes the number of classes, $M$ the number of training samples, and $f(\mathbf{x}_i)$ and $y_i$ are the $i$-th sentence embedding vector and its true label, respectively.
The constraint ensures that the embedding vectors are distributed on a hypersphere of radius $\alpha$. Therefore, the Euclidean distance between two vectors on the hypersphere is approximately proportional to their cosine distance. This property is helpful for specializing sentence embeddings to learn semantic similarity in the form of cosine similarity. Note that this ℓ2-constraint is different from ℓ2 regularization applied to the weight parameters of the output layer; in that case, the regularization term is added to the loss function instead.
To implement the ℓ2-constrained softmax loss, the model inserts an ℓ2-normalization layer that normalizes the encoder output $f(\mathbf{x}_i)$, followed by a layer that scales it by a hyper-parameter $\alpha$. The scaled vectors are then fed into the output layer, where the model evaluates the softmax loss.
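Under the definitions above, the loss can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the paper's PyTorch code; the function name and the default value of α are ours:

```python
import numpy as np

def l2_constrained_softmax_loss(f, W, b, y, alpha=16.0):
    """Sketch of the l2-constrained softmax loss.

    f: (M, d) sentence embeddings from the encoder
    W: (d, C) output-layer weights, b: (C,) biases
    y: (M,) true class indices
    alpha: norm constraint / scaling hyper-parameter (illustrative value)
    """
    # Project embeddings onto the hypersphere of radius alpha.
    f = alpha * f / np.linalg.norm(f, axis=1, keepdims=True)
    logits = f @ W + b                           # (M, C)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()
```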
Center loss The center loss is defined as

$$L_C = \frac{1}{2}\sum_{i=1}^{M}\|f(\mathbf{x}_i) - \mathbf{c}_{y_i}\|_2^2,$$

where $\mathbf{c}_{y_i}$ denotes the centroid of the sentence embedding vectors of class $y_i$. The loss function pulls the embedding vector of the $i$-th sample toward the centroid of its true class. Our motivation for using this loss function is to enhance the intra-class compactness of sentence embeddings: we want sentence embeddings that share the same intent to form compact clusters, which other loss functions, such as the softmax loss, do not enforce. The center loss also works as a cross-lingual center loss; if multilingual training data are available, it pulls sentences of the same intent, in any language, into the same cluster.

We combine the center loss with the ℓ2-constrained softmax loss using a hyper-parameter $\lambda$:

$$L = L_{\ell_2\text{-softmax}} + \lambda L_C,$$

where $L_{\ell_2\text{-softmax}}$ denotes the ℓ2-constrained softmax loss.
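A minimal NumPy sketch of the center loss and the combined objective (the helper names and the default λ are ours, and the sketch averages over the batch rather than summing):

```python
import numpy as np

def center_loss(f, y, centroids):
    """Center loss: mean squared distance to the true-class centroid.

    f: (M, d) embeddings, y: (M,) class indices,
    centroids: (C, d) per-class centroid vectors.
    """
    diffs = f - centroids[y]                  # (M, d)
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def combined_loss(softmax_loss_value, f, y, centroids, lam=0.1):
    """L = L_softmax + lambda * L_center (lam is an illustrative value)."""
    return softmax_loss_value + lam * center_loss(f, y, centroids)
```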
The semantic classifier does not directly consider multilinguality, so the model, which is fine-tuned on a single language, may perform worse on other languages after fine-tuning. To avoid this problem, we incorporate multilingual adversarial learning into the framework. Specifically, the language discriminator aims to identify the language of an input sentence given its embedding, whereas the multilingual sentence encoder incorporates an additional loss term to “confuse” the discriminator. The idea was inspired by related work that used adversarial learning for multilingual NLP models [5, 4]. We hypothesize, and our experiments confirm, that incorporating adversarial learning also enhances the multilinguality of sentence embeddings.
The language discriminator is trained to determine whether the languages of two input embeddings are different. Simultaneously, the other part of the model is trained to confuse the discriminator. In our implementation, we use Wasserstein GAN  because it is known to be more robust than the original GAN .
Algorithm 1 shows a single training step of Emu. Each step consists of two training routines: one for the language discriminator, and one for the other components (the multilingual sentence encoder and the semantic classifier). The target language is the language used for training (e.g., English); it is chosen at random from the training language set if multiple languages are used for training. The adversarial languages are a set of languages from which adversarial sentences are retrieved. To train the language discriminator, training sentences in the target language and adversarial sentences from a randomly chosen adversarial language are used to evaluate the discriminator loss. Formally, the loss for a training language is the cross-entropy loss of the discriminator's predictions on embedding vectors (encoded by the multilingual encoder) of sentences in that language and of sentences in a different, randomly chosen adversarial language. Our design implements one language discriminator per training language; for instance, the English language discriminator predicts whether an input multilingual sentence embedding belongs to an English sentence.
Next, labeled sentences in the target language and adversarial sentences are sampled to update the parameters of the multilingual encoder and the semantic classifier while the parameters of the language discriminator are fixed. The overall loss function now takes the discriminator loss into account, so that the multilingual encoder learns to generate embeddings for sentences in the target and adversarial languages that cannot be distinguished by the language discriminator. We use a hyper-parameter to balance the two loss terms.
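The two routines of Algorithm 1 can be sketched as the following control flow. Every callable here is a placeholder for the real PyTorch module or optimizer update, and the sign of the confusion term reflects one common convention for adversarial objectives:

```python
import random

def emu_training_step(labeled_batch, target_lang, adv_sentences,
                      encoder, classifier, discriminators,
                      disc_loss, task_loss, update, mu=0.1):
    """One training step of Emu (control-flow sketch of Algorithm 1).

    All callables (`encoder`, `classifier`, `disc_loss`, `task_loss`,
    `update`) stand in for the real PyTorch modules and optimizer
    updates; `mu` is the balancing hyper-parameter (illustrative value).
    """
    # 1) Train the target language's discriminator: it sees embeddings
    #    of target-language sentences and of sentences sampled from a
    #    randomly chosen adversarial language.
    adv_lang = random.choice(list(adv_sentences))
    z_target = [encoder(s) for s, _ in labeled_batch]
    z_adv = [encoder(s) for s in adv_sentences[adv_lang]]
    d = discriminators[target_lang]
    update(d, disc_loss(d, z_target, z_adv))

    # 2) Update the encoder and semantic classifier with the
    #    discriminator frozen: the task loss is combined with the
    #    negated discriminator loss, so the encoder learns to produce
    #    embeddings the discriminator cannot tell apart.
    labels = [y for _, y in labeled_batch]
    total = (task_loss([classifier(z) for z in z_target], labels)
             - mu * disc_loss(d, z_target, z_adv))
    update((encoder, classifier), total)
    return total
```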
Table 1: Dataset statistics.

| | HotelQA | ATIS | Quora |
|---|---|---|---|
| # of classes | 28 | 13 | 50 |
| # of training data | 676 | 1,195 | 1,059 |
| # of test data | 144 | 252 | 353 |
| Vocab. size (en) | 977 | 626 | 1,308 |
We evaluated Emu on the cross-lingual intent classification task. The task is to detect the intent of an input sentence in a source language (e.g., German) based on labeled sentences associated with intent labels in a target language (e.g., English). We consider similarity-based intent detection, which assigns an input sentence the label of its nearest neighbor, i.e., the labeled sentence with the highest cosine similarity to the input. We adopted this evaluation method since it is widely used in search-based QA systems and works robustly, especially when training data are sparse. An intuitive alternative for intent detection is to directly use the trained semantic classifier (see Figure 1). We evaluated the classification results using the semantic classifier, but the performance was poor; therefore, we excluded those results from the tables.
(Table 2 note: significance markers are based on the binomial proportion confidence intervals of Acc@1 values against the baseline methods.)
Table 3: Relative performance of Emu w/o LD against the original LASER model on HotelQA with different training-language combinations.

| Training data | en-en | en-de | en-fr | de-en | de-de | de-fr | fr-en | fr-de | fr-fr |
|---|---|---|---|---|---|---|---|---|---|
| En + De | +37.5% | +58.5% | +27.1% | +50.8% | +17.5% | +23.3% | +9.6% | +12.7% | +14.8% |
| En + Fr | +40.0% | +60.0% | +50.0% | +46.0% | +12.5% | +33.3% | +35.6% | +25.4% | +23.5% |
| De + Fr | +28.7% | +50.8% | +37.1% | +55.6% | +12.5% | +46.7% | +31.5% | +25.4% | +17.3% |
| En + De + Fr | +41.2% | +63.1% | +47.1% | +60.3% | +20.0% | +56.7% | +31.5% | +34.9% | +25.9% |
We used three datasets for evaluation. Some statistics of these datasets are shown in Table 1.
HotelQA is a real-world private corpus of 820 questions collected via a multi-channel communication platform for hotel guests and hotel staff. Questions are always made by guests and have ground-truth labels for 28 intent classes (e.g., check-in, pool). The utterances are professionally translated into 5 non-English languages: German (de), Spanish (es), French (fr), Japanese (ja), and Chinese (zh). We split the dataset into training and test sets so that the sentences used for fine-tuning do not appear in the test set.
ATIS is a publicly available corpus for spoken dialog systems and is widely used in intent classification research. The dataset consists of more than 5k sentences annotated with 22 intent labels. We excluded the “flights” class from the dataset since it accounts for about 75% of the dataset, and we ensured that each class has at least 5 sentences in each of the training and test sets. As a result, 13 classes remained. Similar to previous studies [10, 14], we used Google Translate to generate corresponding translations in the same 5 non-English languages as HotelQA.
Quora (https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) is a publicly available paraphrase detection dataset that contains over 400k question pairs with duplicate labels. Duplicate questions can be considered sentences that belong to the same intent. Therefore, we created a graph in which each node is a question and an edge between two nodes denotes that the questions are duplicates. Each disjoint clique in the graph can then be considered a single intent class. Specifically, we kept only complete subgraphs whose size (i.e., number of nodes) is less than 30, to avoid extremely large clusters that are too general, and chose the 50 largest clusters after filtering. The original dataset contains only English sentences; we used Google Translate to translate them into the same 5 languages in the same manner as ATIS.
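The graph construction just described can be sketched as follows (function and parameter names are ours; connected components are checked for completeness so that only cliques survive):

```python
from collections import defaultdict

def build_intent_classes(duplicate_pairs, max_size=30, top_k=50):
    """Group duplicate questions into intent classes (sketch of the
    Quora preprocessing described above)."""
    adj = defaultdict(set)
    for q1, q2 in duplicate_pairs:
        adj[q1].add(q2)
        adj[q2].add(q1)

    # Connected components via iterative DFS.
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)

    # Keep complete subgraphs (every pair marked duplicate) under max_size.
    cliques = [c for c in components
               if len(c) < max_size
               and all(adj[u] >= c - {u} for u in c)]
    return sorted(cliques, key=len, reverse=True)[:top_k]
```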
MT + sent2vec We consider the two-stage approach that uses machine translation and monolingual sentence embeddings in a pipeline. (The non-English sentences obtained through MT from English had to be translated back to English; we observed some degradation in ja and zh due to the repeated application of MT.) We used Google Translate for translation and sent2vec as the monolingual baseline. (We also tested the official implementation of InferSent, but found its performance unstable and often significantly lower than that of sent2vec; thus, we used sent2vec in the experiments.)
Softmax loss Softmax loss is the most common loss function for classification, and thus a natural choice for fine-tuning the embeddings. We used the softmax loss function to train the semantic classifier and adjust the embeddings.
Contrastive loss Contrastive loss is a widely used pairwise loss function for metric learning. The loss function minimizes the squared distance between two embeddings if their labels are the same, and otherwise pushes the two samples apart up to a fixed margin. For contrastive loss, we use the Siamese (i.e., dual-encoder) architecture, in which two input sentences are fed into a shared encoder (i.e., the multilingual encoder) to obtain sentence embeddings.
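For reference, a sketch of the pairwise contrastive loss (the function name and the default margin are illustrative, not the values used in the experiments):

```python
import numpy as np

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Pairwise contrastive loss sketch.

    Pulls embeddings of same-intent pairs together (squared distance)
    and pushes different-intent pairs at least `margin` apart.
    """
    d = np.linalg.norm(z1 - z2)
    if same_label:
        return d ** 2
    return max(0.0, margin - d) ** 2
```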
N-pair loss As another metric learning method, we used the N-pair sampling cosine loss, which first samples one positive sample and multiple negative samples and then minimizes a cosine similarity-based loss function.
For each dataset, we used only English training data to fine-tune the models with Emu and the baseline methods. To train Emu’s language discriminator, we used unlabeled training data in other non-English languages (i.e., de, es, fr, ja, zh.)
Emu variants To verify the effect of the language discriminator and the center loss, we also evaluated Emu without the language discriminator (Emu w/o LD) and Emu without the language discriminator or the center loss (Emu w/o LD+CL) as a part of an ablation study. Finally, we evaluated Emu-Parallel, which uses parallel sentences instead of randomly sampled sentences for cross-lingual adversarial training.
Hyper-parameters We used the official implementation of LASER (https://github.com/facebookresearch/LASER) and its pre-trained models, including BPE. We implemented our proposed method and the baseline methods using PyTorch. We optimized the models with Adam using a batch size of 16, and all models were trained for 3 epochs. The language discriminator has two 900-dimensional fully-connected layers with a dropout rate of 0.2 and was also optimized with Adam.
We used leave-one-out evaluation on the test data. For each sentence, we treat the other sentences in the test data as labeled sentences and find the nearest neighbor to predict the label. The idea is to exclude the direct translation of the input sentence in the target language, which makes the nearest-neighbor search more challenging and simulates the real-world setting where parallel sentences are missing. We used Acc@1 (the ratio of test sentences correctly categorized into their intent classes) as our evaluation metric.
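The leave-one-out protocol can be sketched as follows (helper names are ours; a real evaluation would use the actual encoder output rather than toy vectors):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def leave_one_out_acc1(embeddings, labels):
    """Acc@1 with leave-one-out nearest-neighbor intent detection:
    each sentence is classified by the label of its nearest neighbor
    among all *other* test sentences."""
    correct = 0
    for i, (emb, lab) in enumerate(zip(embeddings, labels)):
        j = max((k for k in range(len(labels)) if k != i),
                key=lambda k: cosine(emb, embeddings[k]))
        correct += labels[j] == lab
    return correct / len(labels)
```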
Results and Discussion
Table 2 shows the experimental results on the three datasets. In Table 2 (a), Emu achieved the best performance for all 11 tasks (en-fr, en-ja, and ja-en by Emu w/o LD, and en-ja by Emu w/o LD+CL), outperforming the baseline methods including the original LASER model. In Table 2 (b), Emu achieved the best performance for 10 tasks (en-fr by Emu w/o LD+CL); the original LASER model showed the best performance for zh-en, and all of the Emu variants degraded performance on that task. In Table 2 (c), Emu achieved the best performance for 7 tasks (en-zh by Emu w/o LD), whereas the original LASER model achieved the best performance for the remaining tasks. Overall, Emu consistently outperformed the baseline methods, including the original LASER model, while failing to improve on five tasks: zh-en on ATIS (Table 2 (b)) and en-fr, fr-en, ja-en, and ja-zh on Quora (Table 2 (c)). We would like to emphasize that the Emu models were trained using labeled data only in English, along with unlabeled data in non-English languages. It is therefore noteworthy that our framework successfully specializes multilingual sentence embeddings for multiple language pairs involving English using only English labeled data. The results support that Emu is effective in semantically specializing multilingual sentence embeddings.
For all tasks, we observe that the baseline fine-tuning methods (i.e., contrastive loss, N-pair loss, softmax loss) do not improve performance but instead decrease accuracy compared to the original LASER model. The results indicate that fine-tuning multilingual sentence embeddings is sensitive to the choice of loss function, and that the ℓ2-constrained softmax loss is the best choice among the loss functions we tested.
Ablation study We conducted an ablation study to quantitatively evaluate the contribution of each component of Emu, namely, the language discriminator and the center loss. First, we compared Emu w/o LD with Emu to verify the effect of the language discriminator, and then compared Emu w/o LD and Emu w/o LD+CL to determine the effect of the center loss.
Table 5 shows the average percentage point drop (i.e., the degree of contributions) of each component. The language discriminator had a significant contribution of 2.81 points on ATIS. The contributions were 1.45 points and 1.05 points on HotelQA and Quora respectively. Similarly, the center loss had a significant impact on Quora, whereas it had almost no effect on ATIS and had a negative impact on HotelQA.
Sentence Embedding Visualization We conducted a qualitative analysis to observe how our framework with the language discriminator specialized multilingual sentence embeddings and enhanced their multilinguality. We filtered English and German sentences from the test data of the ATIS dataset and visualized the sentence embeddings of (a) the original LASER model, (b) the softmax loss, (c) Emu w/o LD, and (d) Emu in the same 2D space using t-SNE.
Figure 2 shows the visualizations for these methods. Figure 2 (a) shows that the original LASER sentence embeddings exhibit multilinguality, as sentences with the same intent in English and German are embedded close to each other. Figure 2 (b) shows that fine-tuning the model with the softmax loss not only broke the intent clusters but also spoiled the multilinguality. In Figure 2 (c), Emu w/o LD successfully specialized the sentence embeddings, but multilinguality was degraded, as sentence embeddings of the same intent class were separated compared to the original LASER model. Finally, as shown in Figure 2 (d), Emu (with the language discriminator) moved sentence embeddings of the same intent in English and German close to each other.
From the results, we observe that incorporating the language discriminator enriches the multilinguality in the embedding space.
Do we need parallel sentences for Emu? We compared Emu to Emu-Parallel, which uses parallel sentences instead of randomly sampled sentences, to verify whether using parallel sentences makes multilingual adversarial learning more effective. The results are shown in Tables 2 (a)-(c). Compared to Emu, Emu-Parallel showed lower Acc@1 values on the three datasets: -0.5 points, -1.2 points, and -5.9 points on HotelQA, ATIS, and Quora, respectively. The differences are not statistically significant except on Quora. The results show that the language discriminator of Emu does not need a costly parallel corpus; it can improve performance using unlabeled, non-parallel sentences in other languages.
What language(s) should we use for training? We also investigated how performance changes when fine-tuning with training data in multiple languages other than English. To isolate the effect of the training languages, we turned off the language discriminator in this analysis so that Emu uses data only in the specified languages. We summarize the relative performance of Emu w/o LD against the original LASER model on the HotelQA dataset. As discussed above, the accuracy values of tasks that involve English on at least one side (i.e., source language, target language, or both) show larger improvements than pairs that involve only non-English languages. This is likely because the sentence embeddings of those languages were not appropriately fine-tuned compared to those of English, since no training data in those languages were used.
Therefore, we hypothesized that using training data in the same language for a target and/or source language would be the best choice. To test the hypothesis, we chose English, German, and French as source/target languages and conducted additional experiments on the HotelQA dataset. The experimental settings, including the hyper-parameters, followed the main experiments, with only the training data used for fine-tuning being different.
Table 3 shows the results. When only using training data in a single language (i.e., En only, De only, Fr only), the target language was the best training data for monolingual intent classification, as this method achieved the best performance on the en-en, de-de, and fr-fr tasks, respectively. Similarly, using the source and target languages as training data was the best configuration among methods trained on two languages: En+De achieved the best performance for the en-de and de-en tasks, and En+Fr (De+Fr) achieved the best performance for the en-fr (de-fr) and fr-en (fr-de) tasks. Finally, the method that used training data in all three languages (En+De+Fr) showed the best accuracy values for 7 out of 9 tasks. The degradation in the remaining two tasks (en-fr and fr-en) occurred when En+De+Fr incorporated a language that was neither the source nor the target language.
From the results, we conclude that we should focus on creating training data in a target or source language to obtain the best performance with Emu and use our budget effectively.
Multilingual embedding techniques have been well studied, and most of the prior work has focused on word embeddings. Relatively few techniques have been developed for multilingual sentence embeddings, because such techniques [18, 30] require parallel sentences for training, and some use both sentence-level and word-level alignment information. The recently developed LASER [30, 3] trains a language-agnostic sentence embedding model on a large number of translation tasks over large-scale parallel corpora.
Similar to the center loss used in this paper, two techniques have incorporated cluster-level information [20, 12] to enhance the compactness of word clusters and improve the quality of multilingual word embedding models. However, neither of them directly used the centroid of each class to calculate loss values for training.
Prior work developed a technique that uses a language discriminator to train a cross-lingual sentiment classifier. Our framework is similar in its use of a language discriminator, but our novelty is that we use the discriminator for learning multilingual sentence embeddings rather than for cross-lingual transfer.
There is a line of work on post-processing word embedding models called word embedding specialization [13, 21, 23]. Prior work specialized word embeddings with different external resources, such as semantic information. The common approaches are (1) post-hoc learning, which uses an additional loss function to tune pre-trained embeddings, (2) learning an additional model [15, 32], and (3) the fine-tuning approach, which is similar to ours. However, to the best of our knowledge, we are the first to address semantic specialization of multilingual sentence embeddings.
We have presented Emu, a semantic specialization framework for multilingual sentence embeddings. Emu incorporates multilingual adversarial training on top of fine-tuning to enhance multilinguality without using parallel sentences.
Our experimental results show that Emu outperformed the baseline methods, including the state-of-the-art multilingual sentence embedding model LASER and monolingual sentence embeddings after machine translation, across multiple language pairs. The results also show that Emu can successfully train a model using only monolingual labeled data and unlabeled data in other languages.
-  (2019) Enriching word embeddings with a regressor instead of labeled corpora. In Proc. AAAI ’19, Cited by: Related Work.
-  (2017) Wasserstein generative adversarial networks. In Proc. ICML ’17, Vol. 70, pp. 214–223. Cited by: Language Discriminator.
-  (2019) Margin-based parallel corpus mining with multilingual sentence embeddings. In Proc. ACL ’19, Cited by: Related Work.
-  (2018) Unsupervised multilingual word embeddings. In Proc. EMNLP ’18, pp. 261–270. Cited by: Language Discriminator, Related Work.
-  (2018) Adversarial deep averaging networks for cross-lingual sentiment classification. Transactions of the Association for Computational Linguistics 6 (), pp. 557–570. Cited by: Language Discriminator, Related Work.
-  (2018) Learning cross-lingual sentence representations via a multi-task dual-encoder model. arXiv preprint arXiv:1810.12836. Cited by: Introduction, Introduction.
-  (2005) Learning a similarity metric discriminatively, with application to face verification. In Proc. CVPR ’05, pp. 539–546. Cited by: Baselines.
-  (2017) Supervised learning of universal sentence representations from natural language inference data. In Proc. EMNLP ’17, Cited by: Introduction, footnote 4.
-  (2018) What you can cram into a single $&!#* vector: probing sentence embeddings for linguistic properties. In Proc. ACL ’18, pp. 2126–2136. Cited by: Introduction.
-  (2018) Word translation without parallel data. In Proc. ICLR ’18, Cited by: Dataset, Related Work.
-  (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: Introduction, Multilingual Encoder.
-  (2018) Improving cross-lingual word embeddings by meeting in the middle. In Proc. EMNLP ’18, pp. 294–304. Cited by: Related Work.
-  (2015) Retrofitting word vectors to semantic lexicons. In Proc. NAACL-HLT ’15, pp. 1606–1615. Cited by: Related Work.
-  (2019) How to (properly) evaluate cross-lingual word embeddings: on strong baselines, comparative analyses, and some misconceptions. In Proc. ACL ’19 (to appear), Cited by: Dataset.
-  (2018) Explicit retrofitting of distributional word vectors. In Proc. ACL ’18, pp. 34–45. Cited by: Related Work.
-  (2014) Generative adversarial nets. In Proc. NIPS ’14, pp. 2672–2680. Cited by: Language Discriminator, Related Work.
-  (1990) The ATIS spoken language systems pilot corpus. In Proc. the Workshop on Speech and Natural Language, HLT ’90, pp. 96–101. Cited by: Dataset.
-  (2014) Multilingual Models for Compositional Distributional Semantics. In Proc. ACL ’14, Cited by: Related Work.
-  (2018) Universal language model fine-tuning for text classification. In Proc. ACL ’18, pp. 328–339. Cited by: Introduction.
-  (2018) Multi-lingual common semantic space construction via cluster-consistent word embedding. In Proc. EMNLP ’18, Cited by: Related Work.
-  (2015) Specializing word embeddings for similarity or relatedness. In Proc. EMNLP ’15, pp. 2044–2048. Cited by: Related Work.
-  (2015) Bilingual word representations with monolingual quality in mind. In Proc. RepL4NLP ’15, pp. 151–159. Cited by: Related Work.
-  (2017) Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics 5, pp. 309–324. Cited by: Introduction, Related Work.
-  (2018) Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. In NAACL-HLT ’18, Cited by: Introduction, Baselines.
-  (2003) Open-domain question answering from large text collections. MIT Press. Cited by: Evaluation.
-  (2018) Learning semantic sentence embeddings using pair-wise discriminator. In Proc. COLING ’18, Cited by: Related Work.
-  (2018) Deep contextualized word representations. In Proc. NAACL-HLT ’18, pp. 2227–2237. Cited by: Introduction.
-  (2017) L2-constrained softmax loss for discriminative face verification. arXiv preprint arXiv:1703.09507. Cited by: Semantic Classifier, Semantic Classifier.
-  (2019) Cross-lingual word embeddings. Morgan & Claypool Publishers. Cited by: Introduction, Related Work.
-  (2019) Analysis of joint multilingual sentence representations and semantic k-nearest neighbor graphs. In Proc. AAAI ’19, pp. 6982–6990. Cited by: Introduction, Introduction, Multilingual Encoder, Related Work.
-  (2016) Neural machine translation of rare words with subword units. In Proc. ACL ’16, pp. 1715–1725. Cited by: Multilingual Encoder.
-  (2018) Post-specialisation: retrofitting vectors of words unseen in lexical resources. In Proc. NAACL-HLT ’18, Cited by: Related Work.
-  (2017) A survey on learning to hash. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 40 (4), pp. 769–790. Cited by: Multilingual Semantic Specialization.
-  (2016) A discriminative feature learning approach for deep face recognition. In Proc. ECCV ’16, pp. 499–515. Cited by: Semantic Classifier, Semantic Classifier.
-  (2019) Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. arXiv preprint arXiv:1902.08564. Cited by: Baselines.
-  (2018) Exploring semantic properties of sentence embeddings. In Proc. ACL ’18, pp. 632–637. Cited by: Introduction, Introduction.