
USCORE: An Effective Approach to Fully Unsupervised Evaluation Metrics for Machine Translation

02/21/2022
by   Jonas Belouadi, et al.

The vast majority of evaluation metrics for machine translation are supervised, i.e., (i) assume the existence of reference translations, (ii) are trained on human scores, or (iii) leverage parallel data. This hinders their applicability to cases where such supervision signals are not available. In this work, we develop fully unsupervised evaluation metrics. To do so, we leverage similarities and synergies between evaluation metric induction, parallel corpus mining, and MT systems. In particular, we use an unsupervised evaluation metric to mine pseudo-parallel data, which we use to remap deficient underlying vector spaces (in an iterative manner) and to induce an unsupervised MT system, which then provides pseudo-references as an additional component in the metric. Finally, we also induce unsupervised multilingual sentence embeddings from pseudo-parallel data. We show that our fully unsupervised metrics are effective, i.e., they beat supervised competitors on 4 out of our 5 evaluation datasets.


1 Introduction

Evaluation metrics are essential for judging progress in natural language generation (NLG) tasks such as machine translation (MT) and summarization, as they identify the state-of-the-art in a key NLP technology. Despite their wide dissemination, it has recently become more and more evident that classical lexical overlap evaluation metrics like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) are unsuitable, especially when judging the quality of modern NLG systems (Mathur et al., 2020; Marie et al., 2021), creating the need for novel (BERT-based) evaluation metrics that correlate better with human judgments. This has been a very active research area in the last 2-3 years, cf. (Zhang et al., 2020; Zhao et al., 2020, 2019; Colombo et al., 2021; Yuan et al., 2021; Zhao et al., 2022).[1]

[1] Of course, the search for high-quality metrics dates back at least to the invention of BLEU and its predecessors.

Figure 1: Relationship between metrics, representation spaces, parallel data, and MT systems.

A deficit of most evaluation metrics (classical or more recent ones) is their need for some form of supervision, requiring human involvement: (i) TYPE-1: most metrics use supervision in the form of human references which they compare to system outputs (reference-based metrics) (Yuan et al., 2021; Zhao et al., 2019; Zhang et al., 2020); (ii) TYPE-2: some metrics are trained on human assessments such as Direct-Assessment (DA) or Post-Editing (PE) scores (Sellam et al., 2020; Rei et al., 2020); (iii) TYPE-3: there are also so-called reference-free metrics which do not necessarily use supervision in the form of (i) or (ii). However, to work well, they still use parallel data (Zhao et al., 2020; Song et al., 2021), which is considered a form of supervision, e.g., in the MT community (Artetxe et al., 2018; Lample et al., 2018), or are fine-tuned as in (ii) (Ranasinghe et al., 2021).

In this work, we aim for fully unsupervised evaluation metrics (for MT) that do not use any form of supervision. In addition, subject to the constraint that no supervision is allowed, our metrics should be of maximally high quality, i.e., correlation with human assessments. We have two use cases in mind: (a) Such sample efficiency[2] is a prerequisite for the wide applicability of the metrics. This is especially important when we want to overcome the current English-centricity (Anastasopoulos and Neubig, 2020) of MT systems and evaluation metrics and also cover much lower-resource language pairs (Fomicheva et al., 2021). We point out that the languages involved for which our approach is relevant need not necessarily be low-resource individually; the particular pairing can also be low-resource, e.g., Latvian-Chinese, for which it may be difficult to obtain human supervision signals. (b) Our fully unsupervised evaluation metrics should be considered strong lower bounds for any future work that uses (mild) forms of supervision for metric induction, i.e., we want to push the lower bounds for newly developed TYPE-k metrics.

[2] We use the term sample efficiency in a generalized sense to denote the amount of supervision required.

To achieve our goals, we employ self-learning (He et al., 2020; Wei et al., 2021) and in particular, we leverage the following dualities to make our metrics maximally effective, cf. Figure 1: (a) Evaluation metrics and NLG systems are closely related; e.g., a metric can be an optimization criterion for an NLG system (Böhm et al., 2019), and a system can conversely generate pseudo references (a.o.) from which to improve a metric; (b) evaluation metrics and parallel corpus mining (Artetxe and Schwenk, 2019) are closely related; e.g., a metric can be used to mine parallel data, which in turn can be used to improve the metric (Zhao et al., 2020), e.g., by remapping deficient embedding spaces. Our contributions are the following:

We show that effective unsupervised evaluation metrics can be obtained by exploiting relationships with parallel corpus mining approaches and MT system induction;

to do so, we explore ways to (a) make parallel corpus mining with Word Mover's Distance-based metrics efficient (e.g., overcome cubic runtime complexity) and (b) induce unsupervised multilingual sentence embeddings from pseudo-parallel data;

we show that pseudo-parallel data can rectify deficient vector spaces such as mBERT;

we show that our metrics beat three current state-of-the-art supervised metrics on four out of five datasets that we evaluate on.

2 Methods

We take inspiration from three recent supervised (reference-free; TYPE-3) metrics to induce our own unsupervised metric UScore. The three metrics are: XMoverScore (Zhao et al., 2020), DistilScore (Reimers and Gurevych, 2020), and SentSim (Song et al., 2021). Below, we briefly review key aspects of each of the three metrics (§2.1, §2.2, §2.3), show where supervision plays a role, and describe how we plan to eliminate it in our new contributions (§2.4, §2.5, §2.6).

2.1 XMoverScore

Central to XMoverScore is the use of Word Mover’s Distance (WMD) as a measure of similarity between an MT hypothesis and a source text (Zhao et al., 2020). WMD and further enhancements are discussed below.

Word Mover’s Distance

Word Mover's Distance is a distance function that compares sentences at the token level (Kusner et al., 2015) by leveraging word embeddings. XMoverScore obtains these word embeddings by extracting the hidden states of the last layer of mBERT (Devlin et al., 2019). From a source sentence $x$ and an MT hypothesis $y$, WMD constructs a (word travel cost) distance matrix $C$, where $C_{ij}$ is the distance between the embeddings of words $x_i$ and $y_j$; $i, j$ index the respective words in $x$ and $y$. WMD uses these word travel costs to compute a distance between the two sentences. This cost can be defined as the linear programming problem

$$\mathrm{WMD}(x, y) = \min_{F \ge 0} \sum_{i,j} F_{ij} C_{ij} \quad \text{s.t.} \quad \sum_j F_{ij} = \frac{1}{|x|}, \;\; \sum_i F_{ij} = \frac{1}{|y|} \qquad (1)$$

where $F$ is an alignment matrix with $F_{ij}$ denoting how much of word $x_i$ travels to word $y_j$. The two constraints prevent the degenerate solution where $F$ is a zero matrix.
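To make the optimization in Equation 1 concrete, the following minimal Python sketch solves the WMD linear program with SciPy over precomputed word-embedding matrices; uniform word weights and Euclidean costs are simplifying assumptions, and the extraction of mBERT embeddings is assumed to happen elsewhere.

```python
import numpy as np
from scipy.optimize import linprog

def word_movers_distance(src_emb: np.ndarray, hyp_emb: np.ndarray) -> float:
    """Solve the WMD linear program of Eq. 1 for two sentences.

    src_emb: (n, d) word embeddings of the source sentence.
    hyp_emb: (m, d) word embeddings of the MT hypothesis.
    """
    n, m = len(src_emb), len(hyp_emb)
    # Word travel cost matrix C_ij: Euclidean distance between embeddings.
    C = np.linalg.norm(src_emb[:, None, :] - hyp_emb[None, :, :], axis=-1)

    # Equality constraints on the flattened alignment matrix F (row-major):
    # each source word sends mass 1/n, each hypothesis word receives mass 1/m.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # sum_j F_ij = 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # sum_i F_ij = 1/m
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])

    result = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return float(result.fun)
```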

Monolingual Subspace Realignment

Zhao et al. (2020), akin to similar earlier and subsequent work (Cao et al., 2020; Schuster et al., 2019), argue that the monolingual subspaces of mBERT are not well aligned, i.e., the embeddings for similar words in different languages could be far apart. As a remedy, they investigate linear projection methods which alter the vector representations of the source language words using parallel data (i.e., a supervision signal), with the goal of post-hoc improving cross-lingual alignments. We refer to this approach as vector space remapping. XMoverScore explores two different remapping approaches, CLP and UMD. Both leverage sentence-level parallel data from which they extract word-level alignments using fast-align; these alignments are then used for the vector space remapping. We give more details in the appendix (§A.1).

Language Model

XMoverScore linearly combines WMD with the perplexity of a GPT-2 language model (Radford et al., 2019). Allegedly, this penalizes ungrammatical translations. This updates the scoring function of XMoverScore to

$$\mathrm{XMoverScore}(x, y) = w_1 \cdot \mathrm{WMD}(x, y) + w_2 \cdot \mathrm{LM}(y) \qquad (2)$$

where $w_1$ and $w_2$ are weights for the cross-lingual WMD and LM components of XMoverScore.
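As an illustration of how the language-model term could enter the score, the sketch below computes GPT-2 perplexity with the Hugging Face transformers library and combines it linearly with a precomputed cross-lingual WMD-based similarity; the default weights and the sign convention for the LM term are illustrative placeholders, not the settings used by XMoverScore.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_score(hypothesis: str) -> float:
    """Negative GPT-2 perplexity of the hypothesis (higher = more fluent)."""
    ids = tokenizer(hypothesis, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss    # mean token-level cross-entropy
    return -math.exp(loss.item())

def xmover_style_score(wmd_similarity: float, hypothesis: str,
                       w1: float = 1.0, w2: float = 0.1) -> float:
    """Linear combination of a cross-lingual WMD-based score and an LM term (Eq. 2)."""
    return w1 * wmd_similarity + w2 * lm_score(hypothesis)
```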

2.2 DistilScore

Reimers and Gurevych (2020) show that the cosine between multilingual sentence embeddings captures semantic similarity and can be used to assess multi- and cross-lingual semantic textual similarity. Their approach to inducing embedding models is based on multilingual knowledge distillation. We refer to this metric as DistilScore. Their approach requires supervision at multiple levels. First, parallel sentences are needed to induce multilingual models, and second, NLI and STS corpora are required to induce teacher embeddings in the source language.

2.3 SentSim

A key difference between XMoverScore and DistilScore is that one approach is based on word embeddings and the other one on sentence embeddings. Song et al. (2021) and Kaster et al. (2021) show that combining approaches based on word-level and sentence-level representations can substantially improve metrics. The metric of Song et al. (2021), which is called SentSim, combines supervised DistilScore and a word embedding-based metric. Overall, the authors explore two word embedding-based metrics. The first one is quite similar to XMoverScore, as it is also based on WMD. The other one is a multilingual variant of BERTScore (Zhang et al., 2020).

2.4 UScorewmd

To overcome the need for supervision in XMoverScore, we aim to replace the parallel sentences used during remapping with pseudo-parallel data (Tran et al., 2020). Since fast-align’s parameter optimization depends directly on how well sentences are aligned (Dyer et al., 2013), we replace it in XMoverScore with an unsupervised variant of awesome-align (Dou and Neubig, 2021) which only relies on pre-trained language models. This allows for completely unsupervised remapping, and we call the resulting metric UScorewmd. In the following, we explain our approach to obtaining pseudo-parallel data.

Figure 2: UScorewmd with pseudo translations (left), unsupervised remapping (middle), and an LM (right).

Efficient WMD-based Corpus Mining

Metrics such as XMoverScore could in principle be used for pseudo-parallel corpus mining since they allow arbitrary sentences to be compared. However, when WMD-based metrics are scaled to corpus mining, algorithmic efficiency problems arise: (a) the computational complexity of WMD scales cubically with the number of words in a sentence (Kusner et al., 2015); (b) to compare $n$ source sentences to $m$ target sentences, $n \times m$ WMD invocations are necessary, which quickly becomes intractable. Thus, we explore ways to improve the performance of WMD for efficient pseudo-parallel corpus mining. Kusner et al. (2015) define an approximation of WMD called word centroid distance (WCD; linear complexity) and use it to define a prefetch and pruning algorithm for fast similarity search; they include a second approximation called relaxed word moving distance (RWMD; quadratic complexity), which we omit, however. The algorithm first sorts all target samples according to their WCD to a given query and then computes exact WMD only for the nearest neighbors.
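A minimal sketch of the prefetch-and-pruning idea follows; WCD serves as a cheap filter over all target sentences, and exact WMD is computed only for the prefetched candidates. The `wmd_fn` argument is an assumption, e.g., the `word_movers_distance` helper sketched above.

```python
import numpy as np

def mine_nearest(src_embs, tgt_embs, wmd_fn, n_prefetch=20):
    """For each source sentence, return the index of its best-matching target sentence.

    src_embs, tgt_embs: lists of (num_words, dim) word-embedding matrices.
    wmd_fn: an exact WMD function, e.g. the word_movers_distance sketch above.
    n_prefetch: number of WCD nearest neighbours for which exact WMD is computed.
    """
    # Word centroid distance (WCD): distance between mean word embeddings.
    src_centroids = np.stack([e.mean(axis=0) for e in src_embs])
    tgt_centroids = np.stack([e.mean(axis=0) for e in tgt_embs])

    matches = []
    for i, src in enumerate(src_embs):
        wcd = np.linalg.norm(tgt_centroids - src_centroids[i], axis=1)
        candidates = np.argsort(wcd)[:n_prefetch]                 # cheap prefetch by WCD
        exact = [wmd_fn(src, tgt_embs[j]) for j in candidates]    # prune with exact WMD
        matches.append(int(candidates[int(np.argmin(exact))]))
    return matches
```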

We use this approach iteratively (cf. Figure 1): we start out with an initial WMD metric (based on mBERT), obtain pseudo-parallel data with it (via the approach described), use UMD and CLP to remap mBERT (using word alignments from unsupervised awesome-align) from which we obtain a better WMD metric; then we iterate.

Pseudo References

Apart from remapping, pseudo-parallel corpora could potentially overcome subspace realignment problems in other ways. Specifically, we want to mine enough pseudo-parallel data to train an unsupervised MT system to translate source sentences into the target language and thereby create so-called pseudo references (Albrecht and Hwa, 2007; Gao et al., 2020; Fomicheva et al., 2020). This would allow for a comparison with the hypothesis in the target language only, similar to reference-based metrics, eliminating the problem of mismatches in multilingual embeddings. This approach updates UScorewmd to

$$\mathrm{UScore}_{\mathrm{wmd}}(x, y) = w_1 \cdot \mathrm{WMD}^{(k)}(x, y) + w_2 \cdot \mathrm{LM}(y) + w_3 \cdot \mathrm{WMD}(\tilde{y}, y) \qquad (3)$$

where $k$ denotes the iterations of remapping, $\tilde{y}$ is the pseudo reference, and $w_3$ is a new weight to control the influence of the pseudo reference on the total score. Tran et al. (2020) show that fine-tuning the mBART transformer using pseudo-parallel data leads to very promising results, so we use it for our experiments as well. All components of UScorewmd are illustrated in Figure 2.
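A minimal sketch of how the three components of Equation 3 could be combined; the component scores are assumed to be precomputed (e.g., with the helpers sketched above), and the default weights are illustrative placeholders, not the tuned values.

```python
def uscore_wmd(cross_lingual_wmd: float, lm_term: float, pseudo_ref_wmd: float,
               w1: float = 1.0, w2: float = 0.1, w3: float = 1.0) -> float:
    """Sketch of Eq. 3: combine the (remapped) cross-lingual WMD score between
    source and hypothesis, the language-model term for the hypothesis, and the
    target-language WMD score between pseudo reference and hypothesis."""
    return w1 * cross_lingual_wmd + w2 * lm_term + w3 * pseudo_ref_wmd
```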

2.5 UScorecos

Besides our word-based metric, we induce an unsupervised metric based on the cosine similarity between sentence embeddings, which we refer to as UScorecos. One could, similarly to UScorewmd, use pseudo-parallel data to perform knowledge distillation, e.g., in DistilScore, to induce unsupervised multilingual sentence embeddings. As our initial experiments in this direction were unsuccessful, we chose another approach to induce unsupervised sentence embeddings.

Contrastive Learning

We explore contrastive learning for unsupervised multilingual sentence embedding induction, which has recently been successfully used to train unsupervised monolingual sentence embeddings (Gao et al., 2021). In our context, the basic idea behind contrastive learning is to pull semantically close sentences together and to push distant sentences apart in the embedding space. Let $h_i$ and $h_i^+$ be the embeddings of two sentences that are semantically related and $N$ an arbitrary batch size. The contrastive training objective for this pair can be formulated as

$$\ell_i = -\log \frac{e^{\cos(h_i, h_i^+)/\tau}}{\sum_{j=1}^{N} e^{\cos(h_i, h_j^+)/\tau}} \qquad (4)$$

where $\tau$ is a temperature hyperparameter that can be used to either amplify or dampen the assessed distances. For each sentence $h_i$, all remaining sentences in the current batch can be used as so-called in-batch negatives; those should be pushed apart in the embedding space. For positive sentences that should be pulled together in the embedding space, we again use pseudo-parallel sentence pairs as positive training instances. We use pooled XLM-R embeddings as sentence representations, and, as with unsupervised remapping, we plan to experiment with multiple iterations of successive pseudo-parallel sentence mining and sentence embedding induction operations. To make this possible, the pseudo-parallel data must be mined using UScorecos itself.
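A minimal PyTorch sketch of the in-batch contrastive objective in Equation 4, assuming a batch of pooled XLM-R embeddings for pseudo-parallel source-target pairs; it illustrates the loss only, not the full training loop.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor, tau: float = 0.05):
    """In-batch contrastive objective of Eq. 4.

    src_emb, tgt_emb: (batch, dim) sentence embeddings of pseudo-parallel pairs;
    row i of tgt_emb is the positive for row i of src_emb, and all other rows in
    the batch serve as in-batch negatives.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = src @ tgt.T / tau                          # cosine similarities scaled by temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)              # -log softmax over the diagonal positives
```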

Ratio Margin-based Corpus Mining

As UScorecos is based on sentence embeddings, we cannot use the prefetch and pruning algorithm for mining since it requires access to word-level representations. An alternative would be to just use cosine similarity for mining, but Artetxe and Schwenk (2019) show that this approach often retrieves badly aligned sentence pairs. Instead, we follow Artetxe and Schwenk (2019) and use a ratio margin function defined as

$$\mathrm{margin}(x, y) = \frac{\cos(x, y)}{\sum_{z \in \mathrm{NN}_k(x)} \frac{\cos(x, z)}{2k} + \sum_{z \in \mathrm{NN}_k(y)} \frac{\cos(y, z)}{2k}} \qquad (5)$$

where $\mathrm{NN}_k(x)$ and $\mathrm{NN}_k(y)$ are the $k$ nearest neighbors of the sentence embeddings $x$ and $y$ in the respective other language. Informally, this ratio margin function divides the cosine similarity of the nearest neighbor by the average similarity of the neighborhood.
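A minimal NumPy sketch of the ratio margin function in Equation 5, assuming L2-normalized sentence embeddings so that dot products equal cosine similarities; an exhaustive similarity matrix is used here for clarity, whereas a large-scale implementation would rely on approximate nearest-neighbor search.

```python
import numpy as np

def ratio_margin_scores(src_emb: np.ndarray, tgt_emb: np.ndarray, k: int = 4) -> np.ndarray:
    """Ratio margin scores (Eq. 5) between all source and target sentence embeddings.

    src_emb: (n, d), tgt_emb: (m, d); rows are assumed to be L2-normalised so
    that dot products equal cosine similarities.
    """
    cos = src_emb @ tgt_emb.T                              # (n, m) cosine similarities
    # Average similarity to the k nearest neighbours in the other language.
    nn_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)     # per source sentence
    nn_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)     # per target sentence
    return cos / ((nn_src[:, None] + nn_tgt[None, :]) / 2.0)

# Mining: keep, for each source sentence, the target with the highest margin score.
# best_match = ratio_margin_scores(src_emb, tgt_emb).argmax(axis=1)
```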

2.6 UScorewmd+cos

Inspired by SentSim, which combines word and sentence embeddings, we similarly ensemble UScorewmd and UScorecos. We refer to this final metric as UScore (UScorewmd+cos); it introduces two new weights $w_4$ and $w_5$:

$$\mathrm{UScore}_{\mathrm{wmd+cos}}(x, y) = w_4 \cdot \mathrm{UScore}_{\mathrm{wmd}}(x, y) + w_5 \cdot \mathrm{UScore}_{\mathrm{cos}}(x, y) \qquad (6)$$

3 Experiments

In this section, we evaluate all UScore variants at the segment level and compare them to TYPE-1/2/3 upper bounds. We do not evaluate at the system level because metrics there often perform very similarly and achieve very high correlations, making it difficult to determine the best metric (Mathur et al., 2020; Freitag et al., 2021).

3.1 Datasets

We use various datasets to evaluate the performance of our proposed metrics. Most of them consist of pairs of source sentences, machine-translated hypotheses, and human-annotated scores, allowing us to compute the correlation with human assessments using Pearson’s r correlation. We also evaluate our metrics as pseudo-parallel corpus mining tools, for which we report Precision at N (P@N) on parallel sentence matching, a standard evaluation measure in the parallel corpus mining field (Guo et al., 2018; Kvapilíková et al., 2020). The task is to search a set of shuffled parallel sentences to recover correct translation pairs.
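For illustration, a minimal sketch of how P@N can be computed on parallel sentence matching, assuming a precomputed similarity matrix over shuffled parallel sentences in which the gold translation of source i is target i.

```python
import numpy as np

def precision_at_n(scores: np.ndarray, n: int = 1) -> float:
    """P@N for parallel sentence matching.

    scores: (k, k) similarity matrix between k shuffled parallel sentences,
    where the gold translation of source i is target i. A pair counts as
    recovered if the gold target is among the n highest-scoring candidates.
    """
    top_n = np.argsort(-scores, axis=1)[:, :n]
    hits = (top_n == np.arange(len(scores))[:, None]).any(axis=1)
    return float(hits.mean())
```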

MT evaluation

In WMT-16, each language pair consists of tuples of source sentences from the news domain, machine-translated hypotheses, and reference translations. Each tuple was annotated with a direct assessment (DA) score, which quantifies the adequacy of the hypothesis given the reference translation. Following Zhao et al. (2020) and Song et al. (2021), we use these DA scores also to assess the adequacy of the hypothesis given the source. We also make use of the analogous dataset from the following year, which we refer to as WMT-17; compared to WMT-16, some language pairs and directions were changed. MLQE-PE has been used in the WMT 2020 Shared Task on Quality Estimation (Specia et al., 2020). MLQE-PE only provides source sentences and hypotheses for its language pairs, with no references. Each source sentence and hypothesis pair was annotated with cross-lingual direct assessment (CLDA) scores. In terms of annotation, Eval4NLP is very similar to MLQE-PE. However, it focuses on non-English-centric language directions, specifically de-zh and ru-de. WMT-MQM uses fine-grained error annotations from the Multidimensional Quality Metrics (MQM) framework (Freitag et al., 2021) for adequacy assessments. Here, MQM is used to structure all possible problems that may occur during MT into a hierarchy, evaluate them separately, and aggregate them into a single score using adjusted weightings. Like MLQE-PE and Eval4NLP, WMT-MQM also assigns scores based on source sentences and hypotheses.

Using ISO 639-1 codes, our datasets cover the language pairs: de-zh, ru-de, en-ru, en-zh, cs-en, de-en, en-de, et-en, fi-en, lv-en, ne-en, ro-en, ru-en, si-en, tr-en, zh-en.

Parallel sentence matching

To evaluate our metrics on parallel sentence matching, we use the News Commentary[3] dataset. It consists of parallel sentences crawled from economic and political data. We use News Commentary v15, as used for the WMT-20 Machine Translation of News Shared Task (Barrault et al., 2020).

[3] http://data.statmt.org/news-commentary

3.2 Preliminary Studies on de-en

We conduct preliminary experiments to gain an understanding of the properties of iterative techniques and the influence of individual parameters. For these experiments, we only use the de-en language direction of WMT-16 and News Commentary v15.

Monolingual Subspace Realignment

We explore whether the CLP and UMD remapping methods also work with word alignments extracted from pseudo-parallel sentence pairs. We use News Crawl for these experiments. Since large corpora tend to include low-quality data points, we follow Artetxe and Schwenk (2019) and Keung et al. (2021) and apply three simple filtering techniques, described in the appendix (§A.2). For mining, we randomly extract 40k monolingual sentences per language direction, apply the prefetch and pruning algorithm, and select the 2k sentence pairs with the highest similarity scores. This gives us the same number of sentences as were used for training the remapping matrices already provided by XMoverScore.

The results for UMD and CLP-based remapping on the de-en language direction can be seen in Figure 3.

Figure 3: Results of unsupervised subspace alignment for the de-en language direction on shuffled parallel data. Pearson’s r is computed on WMT-16 and P@1 is computed on News Commentary v15.

The figure contains two graphs, one for correlation with human judgments and one for precision on the parallel sentence matching task. Each graph illustrates model performance before remapping (depicted as Iteration 0) and after remapping one to five times. After one iteration of remapping, both UMD and CLP achieve substantial improvements in Pearson's r correlation. The improvement of CLP, however, is noticeably larger. For subsequent iterations, UMD seems to continue to improve slightly, whereas the correlations of CLP drop. This can be explained by the precision results: the P@1 of CLP drops with each iteration, meaning the quality of the mined pseudo-parallel data, and hence of the resulting remapping, decreases. UMD does not exhibit this problem. We conclude that UMD could be a more robust choice for metrics that should perform reasonably well on both tasks.

Pseudo References & Language Model

Next, we add a language model to the metric and investigate pseudo-parallel corpus mining to train an MT system for pseudo references. Since fine-tuning for MT is a very resource-intensive undertaking requiring many parallel sentence pairs (Barrault et al., 2020), especially compared to our subspace realignment experiments, we use considerably more training data. Tran et al. (2020) roughly use between 100k and 300k pseudo-parallel sentence pairs to train their final mBART model, so for fine-tuning an MT system, we increase the number of monolingual sentences used for mining by a factor of 100. This means we now have a pool of 4m sentences per language direction, again taken from News Crawl. We extract the top 5% pseudo-parallel sentence pairs and thus have 200k samples to train on. Our results on the de-en data of WMT-16 are reported in Figure 4, which is similar to an ablation study.

Figure 4: Heatmap illustrating the influence of a language model and a machine translation system on UScorewmd. We report segment-level Pearson's r for different values of $w_3$ (weight for pseudo references) and $w_2$ (weight for the language model) on the WMT-16 dataset. Note that the point $w_2 = w_3 = 0$ uses only the cross-lingual WMD component; see Equation 3.

On the x-axis, we vary the weight $w_3$ for UScorewmd with pseudo references, and on the y-axis, we explore different weights $w_2$ for the language model; the WMD weight $w_1$ is held fixed. Without a language model (i.e., $w_2 = 0$), the best results are achieved with a nonzero pseudo-reference weight. When combining both approaches, the overall best performance is achieved at intermediate values of both weights. The improvement when pseudo references and a language model are included in the metric is substantial, highlighting their effectiveness: e.g., we improve from 28% correlation with humans to 49% with the best weight combination, a relative improvement of 75%.

Contrastive Learning

For UScorecos, we additionally filter our retrieved pseudo-parallel data. Up to now, we aligned each source sentence with the best matching target sentence, which could lead to multiple source sentences being aligned to the same target sentence. While this was not problematic in our previous experiments, it could lead to complications for contrastive learning since the same positive sentences could also appear as in-batch negative instances. As a remedy, we discard all sentence pairs where the target sentence has already occurred before. Since the additional filtering means that we have fewer potential sentence pairs to choose from, we decided to only use the best 2.5% sentence pairs to train the models. As we again use 4m monolingual sentences per language, this means our training datasets consist of 100k sentence pairs. The results of UScorecos are shown in Figure 5.

Figure 5: Correlation and precision of UScorecos. The scores were computed on WMT-16 and News Commentary v15.

The P@1 scores seem to steadily improve over the previous best result every two training iterations. Beginning with the sixth iteration, the precision seems to converge.

Table 2 shows pseudo-parallel data mined with UScorecos and with UScorewmd after remapping once with UMD. The mined sentences are semantically similar but contain factuality errors (e.g., wrong place names or numbers in the hypotheses).

3.3 Other Languages

We now test our unsupervised metrics on other languages and datasets. For UScorewmd, we remap mBERT once with UMD and make use of a language model and pseudo references obtained from an MT system, using the best weight configuration identified in Section 3.2. For UScorecos, we train its sentence embedding model for six iterations. We also evaluate UScorewmd+cos. Based on the experiments in Section 3.2, we determined the best values of the ensembling weights $w_4$ and $w_5$.

Supervised Metrics (TYPE-1/2) WMT-16 WMT-17 MLQE-PE Eval4NLP WMT-MQM
BLEU 47.82 47.01 — — 19.76
MonoTransQuest 64.68 66.47 66.28 42.21 42.23
COMET-QE 65.76 68.77 49.97 40.00 45.89
Supervised Metrics (TYPE-3)
XMoverScore (UMD) 49.96 51.02 33.99 29.60 20.07
XMoverScore (CLP) 53.10 55.09 31.98 36.93 20.59
SentSim (BERTScore) 51.86 55.57 45.36 26.60 13.71
SentSim (WMD) 50.66 54.29 44.72 24.44 12.24
DistilScore 43.79 51.22 43.31 28.90 2.20
Fully-Unsupervised Metrics
UScorewmd 53.38 53.52 31.13 36.96 23.28
UScorecos 34.63 42.68 37.51 20.39 -0.47
UScorewmd+cos 55.13 57.55 40.60 39.87 19.35
Table 1: Segment-level Pearson’s r correlations with human judgments on all datasets, averaged over language directions.

The Pearson’s r correlations with human judgments averaged over language pairs are shown in Table 1 for all metrics (results on individual language pairs can be found in the Appendix A). For comparison purposes, we also present the results of the popular TYPE-1 metric BLEU, where possible, and the recent trained TYPE-2 metrics MonoTransQuest (Ranasinghe et al., 2020b, a) and COMET-QE (Rei et al., 2021). Finally, as more direct competitors, we compare to the TYPE-3 metrics XMoverScore and SentSim.

Expectedly, DistilScore, which uses parallel data, is always better than UScorecos, by 2-10 points in correlation. In contrast, UScorewmd is generally on par with XMoverScore, even though XMoverScore uses parallel data; the difference is that UScorewmd also leverages pseudo references, which XMoverScore does not. From Figure 4, we observe that the pseudo references can yield an improvement of 1-11 points in correlation (comparing the column with $w_3 = 0$ to the columns with $w_3 > 0$).

We beat reference-based TYPE-1 BLEU across the board. TYPE-2 metrics, which are fine-tuned on human scores, are generally the best. They exhibit 10+ points higher correlation than our metrics on four out of five datasets. Intriguingly, the only two language pairs where our metrics are on par are the non-English de-zh and ru-de from Eval4NLP. These languages are outside the training scope of the current TYPE-2 metrics and thus test their generalization abilities. For example, on ru-de our best metric outperforms MonoTransQuest by 5 points correlation and COMET-QE by 9 points (see Table 6 in the appendix). This indicates an interesting application scenario for TYPE-3 metrics as well as our class of metrics.

Surprisingly, our unsupervised metrics also outperform the TYPE-3 upper bounds on four out of five datasets. Compared to TYPE-3 competitors, our combined metric has the best overall results on WMT-16, WMT-17, and Eval4NLP. On WMT-MQM, UScorewmd alone has the highest correlation score. The drop in performance for the combined metric is caused by UScorecos, which on its own achieves astonishingly bad correlations. However, supervised DistilScore exhibits the same issues. Thus, this could be a general problem for metrics based on sentence embeddings on this dataset.

For MLQE-PE, the SentSim metrics perform best on average among TYPE-3 and our metrics (although our reproduced scores for this dataset differ noticeably from the authors' results because their original script read the human scores from an incorrect data column). Among our self-learned metrics, the combined variant again performs best on average, but is still 3-5 points below SentSim and DistilScore, even though it outperforms both XMoverScore variants by over 6 points. Interestingly, by itself, UScorecos works better than UScorewmd, unlike for the other datasets. Similarly, DistilScore clearly outperforms XMoverScore. One reason for this unusual behavior is the use of mBERT in UScorewmd: MLQE-PE contains sentences in Sinhala, a language not present in the training data of mBERT. Another explanation is the data collection scheme for ru-en, which uses different sources of parallel sentences. These sources mainly consist of colloquial data and Russian proverbs, which use rather unconventional grammar (Fomicheva et al., 2020a). This unconventional grammar apparently confuses the language model. Similarly, we believe that the MT system has problems translating colloquial sentences because it has been trained on news data where formal writing is used. When we exclude si-en and ru-en from MLQE-PE, UScorewmd+cos performs best, with a Pearson's r of 44.22 vs. 43.82 for SentSim (BERTScore).

In §A.3, we show that incorporating real parallel data (in addition to pseudo-parallel data) at an order of magnitude less than what SentSim uses allows us to outperform SentSim on MLQE-PE as well.

4 Discussion

Limitations of our metrics include (1) algorithmic inefficiency, (2) resource inefficiency, (3) the brittleness of unsupervised MT systems in certain situations, and (4) hyperparameters. (1) Some of the components of UScorewmd (mainly the MT system) lead to substantial computational overhead and make inference slow. To put this into perspective, XMoverScore and SentSim (BERTScore) take less than 30 seconds to score 1000 hypotheses on an Nvidia V100 Tensor Core GPU. UScorewmd, on the other hand, takes over 2.5 minutes. This algorithmic inefficiency trades off with our sample efficiency, by which we did not use any supervision signals. In future work, we aim to experiment with efficient MT architectures (e.g., distilled versions) to reduce computational costs.

(2) Similarly to XMoverScore, MonoTransQuest, or SentSim, our metrics use high-quality encoders such as BERT or GPT, which are not only memory- and inference-inefficient but also leverage large monolingual resources. Future work should thus not only investigate smaller BERT models but also models that leverage smaller amounts of monolingual resources.

(3) Concerning unsupervised MT, which we leverage via pseudo references: even though it may be less effective for truly low-resource languages (Marchisio et al., 2020), it remains a very active and fascinating field of research with a constant influx of more powerful solutions (Ranathunga et al., 2021; Sun et al., 2021).

(4) Even though we present our approach as fully unsupervised, it still has three tunable weights. Figure 4 shows that these may have a big influence on the outcomes. In this work, we set these hyperparameters with reference to one high-resource language pair (de-en) only, which is a very mild form of supervision. On the other hand, this may also mean that our metrics could perform much better if more suitable language-specific choices were made.

5 Related Work

All metrics presented in this work so far treat the MT model generating the hypotheses as a black box that is not otherwise involved in the scoring process. There also exists a recent line of work on so-called glass-box metrics, which actively incorporate the MT model under test into the scoring process (Fomicheva et al., 2020b, 2020). In particular, Fomicheva et al. (2020) explore whether the MT model under test can be used to generate additional hypotheses (Dreyer and Marcu, 2012). They then define various reference-based and reference-free metrics. A crucial difference to our metrics is the required availability of the original MT model, about which we are agnostic. The MT models used by Fomicheva et al. (2020) are all trained on parallel data, which makes their approach a supervised metric in our sense.

Other recent metrics that leverage the relationship between metrics and (MT) systems are Prism (Thompson and Post, 2020) and BARTScore (Yuan et al., 2021). We do not classify them as unsupervised, however, as Prism is trained from scratch on parallel data and BARTScore uses a BART model fine-tuned on labeled summarization or paraphrasing datasets.

There are also multilingual sentence embedding models which are highly relevant in our context. Kvapilíková et al. (2020), for example, fine-tune XLM-R with translation language modeling on synthetic data translated with an unsupervised MT system. Similar to our contrastive learning approach, the resulting embedding model is completely unsupervised. Important differences are that our sentence embedding model can be improved iteratively and does not rely on an MT system. We leave a comparison to future work.

Another relevant multilingual sentence embedding model is the supervised model LaBSE (Feng et al., 2020). The fine-tuning task of LaBSE consists of optimizing a so-called additive margin softmax loss (Yang et al., 2019) on parallel sentences. This is an instance of a contrastive training objective that shares some similarities with the contrastive loss of our UScorecos. A crucial difference, however, is the presence of a margin parameter. LaBSE achieves state-of-the-art performance on various bitext retrieval and corpus mining tasks but performs worse than comparable sentence embedding models on assessing semantic similarity. We suspect that this is due to the additional margin parameter, which may bias the assessed distance for similar sentences that are not perfect translations of each other.

Finally, the idea of fully unsupervised text generation systems originated in the MT community (Artetxe et al., 2018; Lample et al., 2018; Artetxe et al., 2019). Given the similarity of MT systems and evaluation metrics, designing fully unsupervised evaluation metrics is an apparent next step, which we take in this work.

6 Conclusion

In this work, we aimed for sample-efficient evaluation metrics that do not use any form of supervision signal. In addition, our novel metrics should be maximally effective, i.e., of high quality. To achieve this, we leveraged pseudo-parallel data obtained from fully unsupervised evaluation metrics in an iterative manner. We also exploited pseudo references from unsupervised MT systems as an alternative to original human references. We showed that such an approach can lead to substantial quality boosts when the right parameters are chosen for the individual components. Moreover, we showed that our approach is effective and can outperform three supervised upper bounds (making use of parallel data) on 4 out of 5 datasets included in our comparison.

In future work, we want to aim for algorithmic efficiency, include pseudo source texts as additional components (using the MT system in backward translation) and use the MT system to generate additional (better) pseudo-parallel data. We also think that our approach still has substantial room for improvement given that we selected hyperparameters based on one high-resource language pair (de-en) only. Thus, it will be particularly intriguing to explore weakly-supervised approaches which leverage minimal forms of supervision. It will also be interesting to explore other ways of inferring unsupervised metrics, e.g., adapting BERTScore (Zhang et al., 2020) or BARTScore (Yuan et al., 2021).

References

  • J. Albrecht and R. Hwa (2007) Regression for sentence-level MT evaluation with pseudo references. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, pp. 296–303. External Links: Link Cited by: §2.4.
  • A. Anastasopoulos and G. Neubig (2020) Should all cross-lingual embeddings speak English?. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 8658–8679. External Links: Link, Document Cited by: §1.
  • M. Artetxe, G. Labaka, E. Agirre, and K. Cho (2018) Unsupervised neural machine translation. In International Conference on Learning Representations, External Links: Link Cited by: §1, §5.
  • M. Artetxe, G. Labaka, and E. Agirre (2019) An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 194–203. External Links: Link, Document Cited by: §5.
  • M. Artetxe and H. Schwenk (2019) Margin-based parallel corpus mining with multilingual sentence embeddings. In ACL, Cited by: §1, §2.5, §3.2.
  • L. Barrault, M. Biesialska, O. Bojar, M. Costa-jussà, C. Federmann, Y. Graham, R. Grundkiewicz, B. Haddow, M. Huck, E. Joanis, T. Kocmi, P. Koehn, C. Lo, N. Ljubesic, C. Monz, M. Morishita, M. Nagata, T. Nakazawa, S. Pal, M. Post, and M. Zampieri (2020) Findings of the 2020 conference on machine translation (wmt20). In WMT@EMNLP, Cited by: §3.1, §3.2.
  • F. Böhm, Y. Gao, C. M. Meyer, O. Shapira, I. Dagan, and I. Gurevych (2019) Better rewards yield better summaries: learning to summarise without references. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3110–3120. External Links: Link, Document Cited by: §1.
  • S. Cao, N. Kitaev, and D. Klein (2020) Multilingual alignment of contextual word representations. In International Conference on Learning Representations, External Links: Link Cited by: §2.1.
  • P. Colombo, G. Staerman, C. Clavel, and P. Piantanida (2021) Automatic text evaluation through the lens of Wasserstein barycenters. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, pp. 10450–10466. External Links: Link Cited by: §1.
  • S. Dev and J. M. Phillips (2019) Attenuating bias in word vectors. In AISTATS, Cited by: §A.1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. External Links: Link, Document Cited by: §2.1.
  • Z. Dou and G. Neubig (2021) Word alignment by fine-tuning embeddings on parallel corpora. In EACL, Cited by: §2.4.
  • M. Dreyer and D. Marcu (2012) HyTER: meaning-equivalent semantics for translation evaluation. In NAACL, Cited by: §5.
  • S. Duwal, A. Manandhar, S. Maskey, and S. Hada (2019) Efforts in the development of an augmented english-nepali parallel corpus. Technical report Kathmandu University. Cited by: §A.3.
  • C. Dyer, V. Chahuneau, and N. A. Smith (2013) A simple, fast, and effective reparameterization of ibm model 2. In HLT-NAACL, Cited by: §A.1, §2.4.
  • F. Feng, Y. Yang, D. M. Cer, N. Arivazhagan, and W. Wang (2020) Language-agnostic bert sentence embedding. ArXiv abs/2007.01852. Cited by: §5.
  • M. Fomicheva, S. Sun, E. Fonseca, F. Blain, V. Chaudhary, F. Guzmán, N. Lopatina, L. Specia, and A. F. T. Martins (2020a) MLQE-pe: a multilingual quality estimation and post-editing dataset. ArXiv abs/2010.04480. Cited by: §3.3.
  • M. Fomicheva, S. Sun, L. Yankovskaya, F. Blain, F. Guzmán, M. Fishel, N. Aletras, V. Chaudhary, and L. Specia (2020b) Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics 8, pp. 539–555. Cited by: §5.
  • M. Fomicheva, P. Lertvittayakumjorn, W. Zhao, S. Eger, and Y. Gao (2021) The Eval4NLP shared task on explainable quality estimation: overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, Punta Cana, Dominican Republic, pp. 165–178. External Links: Link Cited by: §1.
  • M. Fomicheva, L. Specia, and F. Guzmán (2020) Multi-hypothesis machine translation evaluation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 1218–1232. External Links: Link, Document Cited by: §2.4, §5.
  • M. Freitag, G. Foster, D. Grangier, V. Ratnakar, Q. Tan, and W. Macherey (2021) Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation. Transactions of the Association for Computational Linguistics 9, pp. 1460–1474. External Links: ISSN 2307-387X, Document, Link, https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00437/1979261/tacl_a_00437.pdf Cited by: §3.1.
  • M. Freitag, R. Rei, N. Mathur, C. Lo, C. Stewart, G. Foster, A. Lavie, and O. Bojar (2021) Results of the WMT21 metrics shared task: evaluating metrics with expert-based human evaluations on TED and news domain. In Proceedings of the Sixth Conference on Machine Translation, Online, pp. 733–774. External Links: Link Cited by: §3.
  • T. Gao, X. Yao, and D. Chen (2021) SimCSE: simple contrastive learning of sentence embeddings. In EMNLP, Cited by: §2.5.
  • Y. Gao, W. Zhao, and S. Eger (2020) SUPERT: towards new frontiers in unsupervised evaluation metrics for multi-document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 1347–1354. External Links: Link, Document Cited by: §2.4.
  • M. Guo, Q. Shen, Y. Yang, H. Ge, D. M. Cer, G. Ábrego, K. Stevens, N. Constant, Y. Sung, B. Strope, and R. Kurzweil (2018) Effective parallel corpus mining using bilingual sentence embeddings. In WMT, Cited by: §3.1.
  • J. He, J. Gu, J. Shen, and M. Ranzato (2020) Revisiting self-training for neural sequence generation. In Proceedings of ICLR, External Links: Link Cited by: §A.3, §1.
  • A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov (2017) Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Valencia, Spain, pp. 427–431. External Links: Link Cited by: §A.2.
  • M. Kaster, W. Zhao, and S. Eger (2021) Global explainability of BERT-based evaluation metrics by disentangling along linguistic factors. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, pp. 8912–8925. External Links: Link Cited by: §2.3.
  • P. Keung, J. Salazar, Y. Lu, and N. A. Smith (2021) Unsupervised bitext mining and translation via self-trained contextual embeddings. Transactions of the Association for Computational Linguistics 8, pp. 828–841. Cited by: §3.2.
  • P. Koehn (2005) Europarl: a parallel corpus for statistical machine translation. In MT summit, Cited by: §A.1.
  • M. J. Kusner, Y. Sun, N. I. Kolkin, and K. Q. Weinberger (2015) From word embeddings to document distances. In ICML, Cited by: §2.1, §2.4.
  • I. Kvapilíková, M. Artetxe, G. Labaka, E. Agirre, and O. Bojar (2020) Unsupervised multilingual sentence embeddings for parallel corpus mining. In ACL, Cited by: §3.1, §5.
  • G. Lample, M. Ott, A. Conneau, L. Denoyer, and M. Ranzato (2018) Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 5039–5049. External Links: Link, Document Cited by: §1, §5.
  • C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Text Summarization Branches Out, Barcelona, Spain, pp. 74–81. External Links: Link Cited by: §1.
  • K. Marchisio, K. Duh, and P. Koehn (2020) When does unsupervised machine translation work?. In Proceedings of the Fifth Conference on Machine Translation, Online, pp. 571–583. External Links: Link Cited by: §4.
  • B. Marie, A. Fujita, and R. Rubino (2021) Scientific credibility of machine translation research: a meta-evaluation of 769 papers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, pp. 7297–7306. External Links: Link, Document Cited by: §1.
  • N. Mathur, T. Baldwin, and T. Cohn (2020) Tangled up in BLEU: reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 4984–4997. External Links: Link, Document Cited by: §1.
  • N. Mathur, J. Wei, M. Freitag, Q. Ma, and O. Bojar (2020) Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, Online, pp. 688–725. External Links: Link Cited by: §3.
  • T. Mikolov, Q. V. Le, and I. Sutskever (2013) Exploiting similarities among languages for machine translation. ArXiv abs/1309.4168. Cited by: §A.1.
  • K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pp. 311–318. External Links: Document, Link Cited by: §1.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. In OpenAI Blog, Cited by: §2.1.
  • T. Ranasinghe, C. Orasan, and R. Mitkov (2020a) TransQuest at wmt2020: sentence-level direct assessment. In Proceedings of the Fifth Conference on Machine Translation, Cited by: §3.3.
  • T. Ranasinghe, C. Orasan, and R. Mitkov (2020b) TransQuest: translation quality estimation with cross-lingual transformers. In Proceedings of the 28th International Conference on Computational Linguistics, Cited by: §3.3.
  • T. Ranasinghe, C. Orasan, and R. Mitkov (2021) An exploratory analysis of multilingual word-level quality estimation with cross-lingual transformers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Online, pp. 434–440. External Links: Document, Link Cited by: §1.
  • S. Ranathunga, E. A. Lee, M. P. Skenduli, R. Shekhar, M. Alam, and R. Kaur (2021) Neural machine translation for low-resource languages: a survey. External Links: 2106.15115 Cited by: §4.
  • R. Rei, A. C. Farinha, C. Zerva, D. van Stigt, C. Stewart, P. Ramos, T. Glushkova, A. F. T. Martins, and A. Lavie (2021) Are references really needed? unbabel-IST 2021 submission for the metrics shared task. In Proceedings of the Sixth Conference on Machine Translation, Online, pp. 1030–1040. External Links: Link Cited by: §3.3.
  • R. Rei, C. Stewart, A. C. Farinha, and A. Lavie (2020) COMET: a neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 2685–2702. External Links: Document, Link Cited by: §1.
  • N. Reimers and I. Gurevych (2020) Making monolingual sentence embeddings multilingual using knowledge distillation. In EMNLP, Cited by: §2.2, §2.
  • T. Schuster, O. Ram, R. Barzilay, and A. Globerson (2019) Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 1599–1613. External Links: Link, Document Cited by: §2.1.
  • H. Schwenk, V. Chaudhary, S. Sun, H. Gong, and F. Guzmán (2021) WikiMatrix: mining 135m parallel sentences in 1620 language pairs from wikipedia. In EACL, Cited by: §A.3.
  • T. Sellam, D. Das, and A. Parikh (2020) BLEURT: learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 7881–7892. External Links: Document, Link Cited by: §1.
  • Y. Song, J. Zhao, and L. Specia (2021) SentSim: crosslingual semantic evaluation of machine translation. In NAACL, Cited by: §1, §2.3, §2, §3.1.
  • L. Specia, F. Blain, M. Fomicheva, E. Fonseca, V. Chaudhary, F. Guzmán, and A. F. T. Martins (2020) Findings of the wmt 2020 shared task on quality estimation. In WMT@EMNLP, Cited by: §3.1.
  • H. Sun, R. Wang, M. Utiyama, B. Marie, K. Chen, E. Sumita, and T. Zhao (2021) Unsupervised neural machine translation for similar and distant language pairs: an empirical study. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) 20 (1), pp. 1–17. Cited by: §4.
  • B. Thompson and M. Post (2020) Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In EMNLP, Cited by: §5.
  • C. Tran, Y. Tang, X. Li, and J. Gu (2020) Cross-lingual retrieval for iterative self-supervised training. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.), Vol. 33, pp. 2207–2219. External Links: Link Cited by: §2.4, §2.4, §3.2.
  • C. Wei, K. Shen, Y. Chen, and T. Ma (2021) Theoretical analysis of self-training with deep networks on unlabeled data. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • C. Xing, D. Wang, C. Liu, and Y. Lin (2015) Normalized word embedding and orthogonal transform for bilingual word translation. In HLT-NAACL, Cited by: §A.1.
  • Y. Yang, G. Hernandez Abrego, S. Yuan, M. Guo, Q. Shen, D. Cer, Y. Sung, B. Strope, and R. Kurzweil (2019) Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 5370–5378. External Links: Document, Link Cited by: §5.
  • W. Yuan, G. Neubig, and P. Liu (2021) BARTScore: evaluating generated text as text generation. In Thirty-Fifth Conference on Neural Information Processing Systems, External Links: Link Cited by: §1, §1, §5, §6.
  • T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi (2020) BERTScore: evaluating text generation with bert. In International Conference on Learning Representations, External Links: Link Cited by: §1, §1, §2.3, §6.
  • W. Zhao, G. Glavaš, M. Peyrard, Y. Gao, R. West, and S. Eger (2020) On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In ACL, Cited by: §A.1, §A.1, §A.1, §1, §1, §1, §2.1, §2.1, §2, §3.1.
  • W. Zhao, M. Peyrard, F. Liu, Y. Gao, C. M. Meyer, and S. Eger (2019) MoverScore: text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 563–578. External Links: Document, Link Cited by: §1, §1.
  • W. Zhao, M. Strube, and S. Eger (2022) DiscoScore: evaluating text generation with bert and discourse coherence. External Links: 2201.11176 Cited by: §1.

Appendix A

A.1 CLP and UMD

Procrustes alignment: Mikolov et al. (2013) propose to compute a linear transformation matrix $W$ which can be used to map a vector $x$ of a source word into the target language subspace by computing $Wx$. The transformation can be computed by solving the problem

$$W^* = \arg\min_W \; \|WX - Y\|_F \qquad (7)$$

Here $X$ and $Y$ are matrices with embeddings of source and target words, respectively, where the corresponding columns come from parallel word pairs. XMoverScore constrains $W$ to be an orthogonal matrix, such that $W^\top W = I$, since this can lead to further improvements (Xing et al., 2015). Zhao et al. (2020) call this remapping Linear Cross-Lingual Projection remapping (CLP).
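A minimal NumPy sketch of the orthogonal Procrustes solution behind CLP; note that it uses a row-per-word convention (mapping x to xW), which is the transpose of the column-vector notation in Equation 7, and the variable names are illustrative.

```python
import numpy as np

def clp_remapping(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal Procrustes solution underlying CLP (cf. Eq. 7).

    X, Y: (num_pairs, dim) matrices whose i-th rows are the embeddings of the
    source and target word of the i-th (pseudo-)aligned word pair.
    Returns an orthogonal matrix W such that X @ W approximates Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Usage: remap all source-language embeddings into the target subspace.
# W = clp_remapping(aligned_src_embs, aligned_tgt_embs)
# remapped = source_vocab_embs @ W
```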

De-biasing: The second remapping method of XMoverScore is rooted in the removal of biases from word embeddings. Dev and Phillips (2019) explore a bias attenuation technique called Universal Language Mismatch-Direction (UMD). It involves a bias vector $v_B$, which is supposed to capture the bias direction. For each word embedding $w$, an updated word embedding $w'$ is computed by subtracting its projection onto $v_B$, as in

$$w' = w - \langle w, v_B \rangle \, v_B \qquad (8)$$

where $\langle \cdot, \cdot \rangle$ is the dot product. To obtain the bias vector $v_B$, Dev and Phillips (2019) use a set of word pairs that should be de-biased (e.g., man and woman). The differences of the embeddings of the words in each pair are then stacked to form a matrix $Q$, and the bias vector is its top left singular vector. Zhao et al. (2020) use the same approach for XMoverScore, but the set instead consists of parallel word pairs.
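A minimal NumPy sketch of UMD-style de-biasing: the mismatch direction is estimated from stacked difference vectors of aligned word pairs and then projected out of all embeddings as in Equation 8; with difference vectors stored as rows, the direction corresponds to the top right singular vector.

```python
import numpy as np

def umd_debias(embeddings: np.ndarray, src_pairs: np.ndarray, tgt_pairs: np.ndarray) -> np.ndarray:
    """UMD-style de-biasing (cf. Eq. 8).

    src_pairs, tgt_pairs: (num_pairs, dim) embeddings of (pseudo-)aligned word pairs
    used to estimate the language-mismatch direction.
    embeddings: (vocab, dim) word embeddings from which that direction is removed.
    """
    D = src_pairs - tgt_pairs                       # one difference vector per word pair
    # With difference vectors stored as rows, the mismatch direction is the top
    # right singular vector (the top left singular vector of the transposed matrix).
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    v_b = Vt[0] / np.linalg.norm(Vt[0])
    # Eq. 8: subtract each embedding's projection onto the bias direction.
    return embeddings - np.outer(embeddings @ v_b, v_b)
```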

Zhao et al. (2020) show that these remapping methods lead to substantial improvements of their XMoverScore metric (on average, up to 10 points in correlation). The required parallel word pairs were extracted from sentences of the EuroParl corpus (Koehn, 2005) using the fast-align (Dyer et al., 2013) word alignment tool. The best results were obtained when remapping on 2k parallel sentences.

Case Source Target
Top-WMD Uruguay belegt mit vier Punkten nur Platz Sieben. Russia was second with four gold and 13 medals.
Top-WMD Soweit lautet zumindest die Theorie. That, at least, is the theory.
Rnd-WMD Die USA stellen etwa 17.000 der insgesamt 47.000 ausländischen Soldaten in Afghanistan. Currently, there are about 170,000 U.S. troops in Iraq and 26,000 in Afghanistan.
Rnd-WMD “Das ist eine schwierige Situation”, sagte Kaczynski. “It seemed like a ridiculous situation,” Vanderjagt said.
Top-Cos Die Wahlen für ein neues Parlament sollen dann Anfang Januar stattfinden. Parliamentary elections are to be held by January.
Top-Cos Anzeichen für die Blauzungenkrankheit sind Fieber, Entzündungen und Blutungen an der Zunge der Tiere. Contact with the creatures can cause itching, rashes, conjunctivitis and, in some cases, breathing problems.
Rnd-Cos Riesen-Wirbel an der Universität Zagreb: An der wirtschaftlichen Fakultät und am Institut für Verkehrsstudien durchsuchen Polizisten die Büros von Dozenten. Those attending the Soil Forensics International Conference work in the fields of science, policing, forensic services as well as private industries.
Rnd-Cos Frankfurt soll WM-Finale der Frauen ausrichten The women’s tournament gets underway on Sunday.
Table 2: Pseudo-parallel data obtained via UScorewmd and UScorecos; top and random sentence pairs.
Supervised Metrics (TYPE-1/2) de-en en-ru ru-en ro-en cs-en fi-en tr-en
BLEU 45.39 55.08 46.33 47.09 53.80 39.92 47.15
MonoTransQuest 61.65 66.69 63.32 62.36 67.67 68.33 62.71
COMET-QE 65.73 71.91 69.71 66.40 67.98 64.83 53.73
Supervised Metrics (TYPE-3)
XMoverScore (UMD) 43.46 62.16 60.52 47.88 58.83 43.52 33.34
XMoverScore (CLP) 45.29 63.58 56.12 54.24 58.89 51.40 42.14
SentSim (BERTScore) 48.49 50.37 57.89 55.06 59.08 46.66 45.47
SentSim (WMD) 47.78 48.49 56.19 54.48 56.87 46.00 44.84
DistilScore 40.62 41.21 43.16 51.22 49.51 39.65 41.14
Fully-Unsupervised Metrics
UScorewmd 45.27 60.31 60.37 55.18 57.12 52.97 42.44
UScorecos 26.63 40.51 30.91 41.49 40.20 31.60 31.08
UScorewmd+cos 46.61 62.30 60.97 58.48 59.30 52.67 45.59
Table 3: Segment-level Pearson’s r correlations with human judgments on the WMT-16 dataset.
Supervised Metrics (TYPE-1/2) cs-en de-en fi-en lv-en ru-en tr-en zh-en
BLEU 41.22 41.29 56.48 39.28 45.99 53.06 51.75
MonoTransQuest 60.93 63.54 65.33 72.01 59.56 75.91 68.03
COMET-QE 68.99 69.34 72.83 64.75 69.06 68.43 68.01
Supervised Metrics (TYPE-3)
XMoverScore (UMD) 41.72 51.19 56.61 56.11 47.24 46.52 57.73
XMoverScore (CLP) 47.76 50.04 62.22 63.95 48.79 52.97 59.88
SentSim (BERTScore) 49.90 52.26 57.85 57.42 55.10 56.84 59.59
SentSim (WMD) 47.62 50.42 56.59 56.91 53.42 56.24 58.86
DistilScore 46.42 45.64 54.03 55.51 54.13 54.04 50.89
Fully-Unsupervised Metrics
UScorewmd 46.70 52.71 61.91 59.22 49.10 50.06 54.95
UScorecos 38.89 41.92 39.77 48.95 37.27 48.83 43.10
UScorewmd+cos 50.67 56.50 61.86 64.54 53.17 57.31 58.80
Table 4: Segment-level Pearson’s r correlations with human judgments on the WMT-17 dataset.
Supervised Metrics (TYPE-1/2) en-de en-zh ru-en ro-en et-en ne-en si-en
MonoTransQuest 41.85 45.76 76.76 88.81 73.19 75.70 61.90
COMET-QE 36.03 30.70 49.33 64.95 63.45 57.46 47.85
Supervised Metrics (TYPE-3)
XMoverScore (UMD) 16.56 16.48 28.07 65.83 53.95 38.23 18.81
XMoverScore (CLP) 25.59 20.25 20.31 57.34 58.46 25.15 16.74
SentSim (BERTScore) 6.15 22.23 47.30 78.55 55.09 57.09 51.14
SentSim (WMD) 3.86 22.62 47.46 77.72 54.60 57.00 49.79
DistilScore 12.96 28.68 45.34 76.57 51.16 46.73 41.76
Fully-Unsupervised Metrics
UScorewmd 24.53 21.58 16.66 60.30 54.83 28.62 11.38
UScorecos 13.62 13.27 39.09 66.50 41.75 46.83 41.49
UScorewmd+cos 24.67 21.63 27.83 71.48 57.62 45.70 35.25
Table 5: Segment-level Pearson’s r correlations with human judgments on the MLQE-PE dataset.
MQM-Newstest-2020 Eval4NLP-2021
Supervised Metrics (TYPE-1/2) en-de zh-en de-zh ru-de
BLEU 17.94 21.58 — —
MonoTransQuest 31.38 53.07 34.66 49.76
COMET-QE 40.94 50.84 33.51 46.49
Supervised Metrics (TYPE-3)
XMoverScore (UMD) 12.72 27.41 10.85 48.35
XMoverScore (CLP) 12.89 28.29 21.61 52.25
SentSim (BERTScore) 1.03 26.39 -7.71 60.91
SentSim (WMD) -0.23 24.70 -11.54 60.42
DistilScore -2.66 7.06 6.57 51.22
Fully-Unsupervised Metrics
UScorewmd 18.13 28.43 29.29 44.63
UScorecos -5.93 4.99 0.86 39.91
UScorewmd+cos 13.94 25.17 24.66 55.08
Table 6: Segment-level Pearson’s r correlations with human judgments on the WMT-MQM and Eval4NLP datasets.

A.2 Filtering

We first remove all sentences from each monolingual corpus for which the fastText language identification tool (Joulin et al., 2017) predicts a different language. We then filter all sentences which are shorter than 3 tokens or longer than 30 tokens. As the last step, we discard sentence pairs sharing substantial lexical overlap, which prevents degenerate alignments of, e.g., proper names. We remove all sentence pairs for which the Levenshtein distance detects an overlap of over 50%.
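A minimal sketch of these three filters, using fastText's lid.176 language-identification model and, as a simple stand-in for the Levenshtein-based overlap check, Python's difflib; the thresholds follow the description above, but the helper names are illustrative.

```python
import difflib
import fasttext  # pip install fasttext; lid.176.bin is fastText's language-ID model

lid_model = fasttext.load_model("lid.176.bin")

def keep_sentence(sentence: str, expected_lang: str) -> bool:
    """Language-ID and length filters applied to each monolingual sentence."""
    labels, _ = lid_model.predict(sentence.replace("\n", " "))
    if labels[0] != f"__label__{expected_lang}":
        return False
    num_tokens = len(sentence.split())
    return 3 <= num_tokens <= 30                     # keep sentences with 3-30 tokens

def keep_pair(src: str, tgt: str, max_overlap: float = 0.5) -> bool:
    """Discard pseudo-parallel pairs whose character overlap exceeds 50%."""
    return difflib.SequenceMatcher(None, src, tgt).ratio() <= max_overlap
```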

A.3 Fine-Tuning on Parallel Data

To examine whether and by how much we can further improve our metrics using forms of supervision, we experiment with a fine-tuning step on parallel sentences and treat self-learning on pseudo-parallel data as pre-training (He et al., 2020). We use the parallel data to fine-tune the contrastive sentence embeddings of UScorecos and the MT system of UScorewmd, which is responsible for generating pseudo references. Further, we also compute new remapping matrices for UScorewmd. Since CLP is superior to UMD when parallel data is used (see Section 3.2), we compute these remapping matrices using CLP instead of UMD. To assess how different amounts of parallel sentences affect performance, we fine-tune our metrics on 10k, 20k, 30k, and 200k parallel sentences. We use WikiMatrix (Schwenk et al., 2021) and the Nepali Translation Parallel Corpus (Duwal et al., 2019) to obtain parallel sentences.

0 10k 20k 30k 200k
Avg 40.60 42.95 44.42 45.17 46.57
Figure 6: Pearson's r correlations on MLQE-PE for UScorewmd+cos when fine-tuning on limited amounts of parallel data (0 corresponds to the fully unsupervised metric). We explore sample sizes of 10k, 20k, 30k, and 200k.

Pearson's r correlations with human judgments, averaged over language pairs, are shown in Figure 6; we focus on MLQE-PE, where our metrics performed worst. Overall, introducing parallel data into the training process consistently improves performance for the majority of language directions; more parallel data leads to better results. The relatively largest improvements are achieved for the si-en language direction, which is in accordance with our discussion above. When fine-tuning with 30k parallel sentences, the performance of our metrics is roughly on par with the SentSim variants (see Table 1). With 200k parallel sentences, our metrics clearly outperform SentSim, which uses millions of parallel sentences and NLI data as supervision signals.