Text Simplification transforms natural language from a complex to a simple format, with the aim not only of reaching wider audiences (Rello et al., 2013; De Belder and Moens, 2010; Aluisio et al., 2010; Inui et al., 2003) but also of serving as a preprocessing step in related tasks (Shardlow, 2014; Silveira and Branco, 2012).
Simplifications are achieved by using parallel datasets to train sequence-to-sequence text generation models (Nisioi et al., 2017) that make complex sentences easier to understand. Such datasets are typically produced by crowdsourcing (Xu et al., 2016; Alva-Manchego et al., 2020a) or by automatic alignment (Cao et al., 2020; Jiang et al., 2020). They are notoriously noisy, and models trained on them give poor results when evaluated by humans (Cooper and Shardlow, 2020). In this paper we add to the growing narrative around the evaluation of natural language generation (van der Lee et al., 2019; Caglayan et al., 2020; Pang, 2019), focusing on parallel text simplification datasets and how they can be improved.
Why do we need to re-evaluate TS resources?
In the last decade, TS research has relied on Wikipedia-based datasets (Zhang and Lapata, 2017; Xu et al., 2016; Jiang et al., 2020), despite their known limitations (Xu et al., 2015; Alva-Manchego et al., 2020a), such as questionable sentence pair alignments, inaccurate simplifications and a limited variety of simplification modifications. Apart from affecting the reliability of models trained on these datasets, their low quality undermines evaluation with automatic metrics that require gold-standard simplifications, such as SARI (Xu et al., 2016) and BLEU (Papineni et al., 2001).
Hence, evaluation data resources must be further explored and improved to achieve reliable evaluation scenarios. There is a growing body of evidence (Xu et al., 2015), including this work, showing that existing datasets do not contain accurate and well-constructed simplifications, significantly impeding the progress of the TS field.
Furthermore, well-known evaluation metrics such as BLEU are not suitable for simplification evaluation. According to previous research (Sulem et al., 2018), BLEU does not significantly correlate with simplicity (Xu et al., 2016), making it inappropriate for TS evaluation. Moreover, its correlation with grammaticality and meaning preservation is low or absent when performing syntactic simplification such as sentence splitting. Therefore, in most recent TS research, BLEU has not been considered a reliable evaluation metric. We use SARI as the preferred method for TS evaluation, which has also been used as the standard evaluation metric in all the corpora analysed in this research.
Our contributions include 1) the analysis of the most common TS corpora based on quantifying modifications used for simplification, evidencing their limitations and 2) an empirical study on TS models performance by using better-distributed datasets. We demonstrate that by improving the distribution of TS datasets, we can build TS models that gain a higher SARI score in our evaluation setting.
2 Related Work
The exploration of neural networks in TS started with the work of Nisioi et al. (2017), using the largest parallel simplification resource available at the time (Hwang et al., 2015). Subsequent approaches include adversarial training (Surya et al., 2019), a pointer-copy mechanism (Guo et al., 2018), neural semantic encoders (Vu et al., 2018) and transformers supported by paraphrasing rules (Zhao et al., 2018).
Other successful approaches include the usage of control tokens to tune the level of simplification expected (Alva-Manchego et al., 2020a; Scarton and Specia, 2018) and the prediction of operations using parallel corpora (Alva-Manchego et al., 2017; Dong et al., 2020). The neural methods are trained mostly on Wikipedia-based sets, varying in size and improvements in the quality of the alignments.
Xu et al. (2015) carried out a systematic study on Wikipedia-based simplification resources, claiming Wikipedia is not a quality resource, based on the observed alignments and the type of simplifications. Alva-Manchego et al. (2020a) proposed a new dataset, performing a detailed analysis including edit distance and proportion of words that are deleted, inserted and reordered, and evaluation metrics performance for their proposed corpus.
Chasing the state-of-the-art is rife in NLP (Hou et al., 2019), and no less so in TS, where a SARI score is too often considered the main quality indicator. However, recent work has shown that these metrics are unreliable (Caglayan et al., 2020), and gains in performance according to them may not deliver improvements in simplification performance when the text is presented to an end user.
Figure 1: Comparison of TS datasets with respect to the number of edit operations between the original and simplified sentences. X-axis: token edit distance normalised by sentence length; Y-axis: probability density of the change percentage between complex and simple sentence pairs.
3 Simplification Datasets: Exploration
3.1 Data and Methods
In the initial exploration of TS datasets, we investigated the training, test and validation subsets (when available) of the following: WikiSmall and WikiLarge (Zhang and Lapata, 2017), TurkCorpus (Xu et al., 2015), MSD dataset (Cao et al., 2020), ASSET (Alva-Manchego et al., 2020a) and WikiManual (Jiang et al., 2020). For the WikiManual dataset, we only considered sentences labelled as “aligned”.
We computed the number of changes between the original and simplified sentences through the token edit distance. Traditionally, edit distance quantifies character-level changes from one character string to another (additions, deletions and replacements). In this work, we calculated the token-based edit distance by adapting the Wagner–Fischer algorithm (Wagner and Fischer, 1974) to determine changes at the token level. We preprocessed our sentences by lowercasing them prior to this analysis. To make the results comparable across sentences, we divide the number of changes by the length of the original sentence and obtain values between 0% (no changes) and 100% (completely different sentence).
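The token-level adaptation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the whitespace tokeniser are our own assumptions.

```python
def token_edit_distance(src_tokens, tgt_tokens):
    """Wagner-Fischer dynamic programme run over tokens instead of characters."""
    m, n = len(src_tokens), len(tgt_tokens)
    # dp[i][j] = edits needed to turn the first i source tokens
    # into the first j target tokens
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all remaining source tokens
    for j in range(n + 1):
        dp[0][j] = j  # insert all remaining target tokens
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if src_tokens[i - 1] == tgt_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # replacement or match
    return dp[m][n]


def change_percentage(complex_sent, simple_sent):
    """Edit distance normalised by the length of the original sentence.
    Note: if the simplification is much longer than the original,
    this ratio can exceed 100%."""
    src = complex_sent.lower().split()
    tgt = simple_sent.lower().split()
    return 100.0 * token_edit_distance(src, tgt) / len(src)
```

For example, `change_percentage("The cat sat on the mat", "the cat sat")` yields 50.0, since three deletions are needed over six source tokens.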
In addition to the token-based edit operation experiments, we analysed the difference in sentence length between complex and simple variants, the number of edit operations of each type (INSERT, DELETE and REPLACE), and redundant operations such as deletions and insertions over the same text piece within a sentence (which we define as the MOVE operation). Given our objective of showing how different split configurations affect TS model performance, we present the percentage of edit operations, the most informative of these analyses, for the most representative datasets.
3.2 Edit Distance Distribution
Except for the recent work of Alva-Manchego et al. (2020b), there has been little work on new TS datasets. Most prior datasets are derived by aligning English and Simple English Wikipedia, for example WikiSmall and WikiLarge (Zhang and Lapata, 2017).
In Figure 1 we can see that the edit distance distribution of the splits in the selected datasets is not even. By comparing the test and development subsets in WikiSmall (Figure 1(a)) we can see differences in the number of modifications involved in simplification. Moreover, the WikiLarge dataset (Figure 1(b)) shows a complete divergence of the test subset. Additionally, it is possible to notice a significant number of unaligned or noisy cases, between 80% and 100% change, in the WikiLarge training and validation subsets (Figure 1(b)).
We manually checked a sample of these cases and confirmed they were poor-quality simplifications, including incorrect alignments. The simplification pairs (complex/simple) were sorted by their edit distances and then manually inspected to determine an approximate heuristic for detecting noisy sentences. Since many of these alignments were of very poor quality, it was easy to determine a threshold that removed a significant number of noisy cases without dramatically reducing the size of the dataset.
Datasets such as TurkCorpus (Xu et al., 2015) are widely used for evaluation, and their operations mostly consist of lexical simplification (Alva-Manchego et al., 2020a). We can see this behaviour in Figure 1(c), where most edits involve a small percentage of the tokens: a large proportion of the sample cases fall between 0% (no change) and 40%.
In the search for better evaluation resources, TurkCorpus was improved with the development of ASSET (Alva-Manchego et al., 2020a), which includes more heterogeneous modifications. As we can see in Figure 1(e), the data are more evenly distributed than in Figure 1(c).
Recently proposed datasets, such as WikiManual (Jiang et al., 2020), shown in Figure 1(f), have an approximately consistent distribution, and their simplifications are less conservative. Based on a visual inspection of the uppermost values of the distribution (80%), we can tell that often most of the information in the original sentence is removed or the target simplification does not accurately express the original meaning.
The MSD dataset (Cao et al., 2020) is a domain-specific dataset, developed for style transfer in the health domain. In the style transfer setting, the simplifications are aggressive (i.e., not limited to individual words), to promote the detection of a difference between one style (expert language) and another (lay language). Figure 1(d) shows how its change-percentage distribution differs dramatically from the other datasets, placing most of the results at the right side of the distribution.
Among TS datasets, it is important to mention Newsela (Xu et al., 2015), whose raw text was produced by professional writers and is likely of higher quality than other TS datasets. Unfortunately, it is not aligned at the sentence level by default, and its usage and distribution are limited by a restrictive data agreement, which is why we have not included it in our analysis.
3.3 KL Divergence
To quantify the differences between subsets, we computed the Kullback–Leibler (KL) divergence (Kullback and Leibler, 1951) between their edit-distance distributions. Specifically, we compared the distribution of the test set to the development and training sets for WikiSmall, WikiLarge, WikiManual, TurkCorpus and the ASSET Corpus (when available). We did not include the MSD dataset since it only has a test set.
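One way such a comparison can be computed is sketched below. This is our own minimal illustration: the histogram binning and the smoothing constant are assumptions, as the paper does not specify how the divergence was estimated.

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=20, eps=1e-10):
    """KL(P || Q) between two sets of normalised edit distances (0-100%),
    estimated from histograms with add-epsilon smoothing to avoid log(0)."""
    edges = np.linspace(0, 100, bins + 1)
    p, _ = np.histogram(p_samples, bins=edges)
    q, _ = np.histogram(q_samples, bins=edges)
    # normalise counts to probabilities, then smooth
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```

A divergence of 0 indicates identical binned distributions; larger values indicate a greater mismatch between, for example, a test subset and the training subset.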
We performed randomised permutation tests (Morgan, 2006) to confirm the statistical significance of our results. Each dataset was joined together and split randomly for 100,000 iterations. We then computed the p-value as the percentage of random splits that result in a KL value equal to or higher than the one observed in the data. Based on the p-value, we can decide whether the null hypothesis (i.e. that the original splits are truly random) can be accepted. We reject the hypothesis for p-values lower than 0.05. In Table 1 we show the computed KL divergences and p-values. The p-values below 0.05 for WikiManual and WikiLarge confirm that these datasets do not follow a truly random distribution.
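The permutation test can be sketched as follows. This is an illustrative implementation under our own naming; `kl_fn` stands in for whatever divergence estimate is used.

```python
import random

def permutation_test(split_a, split_b, kl_fn, iterations=100_000, seed=0):
    """p-value: fraction of random re-splits of the pooled data whose
    divergence is at least as large as the one observed between the
    original splits."""
    observed = kl_fn(split_a, split_b)
    rng = random.Random(seed)
    pooled = list(split_a) + list(split_b)
    n_a = len(split_a)
    hits = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        # re-split the pooled data at the original subset boundary
        if kl_fn(pooled[:n_a], pooled[n_a:]) >= observed:
            hits += 1
    return hits / iterations
```

A returned p-value below 0.05 leads us to reject the hypothesis that the original split is truly random.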
4 Simplification Datasets: Experiments
We carried out the following experiments to evaluate the variability in performance of TS models caused by the issues described above in the Wikipedia-based data.
4.1 Data and Methods
For the proposed experiments, we used the EditNTS model, a programmer-interpreter model (Dong et al., 2020). Although the original code was published, its implementation required minor modifications to run in our setting. The modifications performed, the experimental subsets and the source code are documented on GitHub (https://github.com/lmvasque/ts-explore). We selected the EditNTS model due to its competitive performance on both the WikiSmall and WikiLarge datasets (https://github.com/sebastianruder/NLP-progress/blob/master/english/simplification.md). Hence, we consider this model a suitable candidate for evaluating the different limitations of TS datasets. In future work, we will consider testing our assumptions with additional metrics and models.
In relation to TS datasets, we trained our models on the training and development subsets of WikiLarge and WikiSmall, which are widely used in most TS research. In addition, these datasets have train, development and test sets, which is essential for retraining and testing the model with new split configurations. The model was first trained with the original splits, and then with the following variations:
Randomised split: as explained in Section 3.3, the original WikiLarge split does not have an even distribution of edit-distance pairs between subsets. For this experiment, we resampled two of our datasets (WikiSmall and WikiLarge). For each dataset, we joined all subsets together and performed a new random split.
Refined and randomised split: we created subsets that minimise the impact of poor alignments. These alignments were selected by edit distance, and the subsets were then randomised as above. We presume that the high-distance cases correspond to noisy and misaligned sentences. For both WikiSmall and WikiLarge, we reran our experiments removing the worst 5% and 2% of alignments.
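The refinement procedure can be sketched as below. The score function, split ratios and seed are illustrative assumptions of ours; the paper does not restate the original split proportions.

```python
import random

def refine_and_resplit(pairs, score_fn, keep_fraction=0.95,
                       ratios=(0.8, 0.1, 0.1), seed=42):
    """Drop the alignments with the highest change percentage (presumed
    noisy), then draw a fresh train/dev/test split from the remainder."""
    # pairs: list of (complex_sentence, simple_sentence) tuples;
    # score_fn returns the normalised edit distance of a pair
    ranked = sorted(pairs, key=lambda p: score_fn(p[0], p[1]))
    kept = ranked[: int(len(ranked) * keep_fraction)]  # e.g. drop worst 5%
    rng = random.Random(seed)
    rng.shuffle(kept)
    n_train = int(len(kept) * ratios[0])
    n_dev = int(len(kept) * ratios[1])
    return (kept[:n_train],
            kept[n_train:n_train + n_dev],
            kept[n_train + n_dev:])
```

Setting `keep_fraction` to 0.95 or 0.98 corresponds to the 95% and 98% refined splits evaluated in the experiments.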
Finally, we evaluated the models by using the test subsets of external datasets, including: TurkCorpus, ASSET and WikiManual.
Figure 2 shows the results for WikiSmall. We can see a minor decrease in SARI score with the random splits, suggesting that the noisy alignments became equally present in all subsets, rather than the best cases being reserved for training. On the other hand, when the noisy cases are removed from the datasets, the increase in model performance is clear.
Likewise, we show the WikiLarge results in Figure 3. When the data is randomly distributed, we obtain better performance than with the original splits. This is consistent with WikiLarge having the largest discrepancy according to our KL-divergence measurements, as shown in Section 3.3. We also found that the 95% split behaved similarly to WikiLarge Random, while the 98% dataset gave a similar performance to the original splits for ASSET and TurkCorpus (ASSET and TurkCorpus results are averaged over their multiple reference scores).
We can also note that, although there is a performance difference between WikiSmall Random and WikiSmall 95%, in WikiLarge the same splits give quite similar results. We believe these discrepancies are related to the size and distribution of the training sets. The WikiLarge subset is three times bigger than WikiSmall in the number of simple/complex pairs. Also, WikiLarge has a higher KL divergence (0.46) than WikiSmall (0.06), which means that WikiLarge could benefit more from a random redistribution than WikiSmall, resulting in higher performance on WikiLarge. Further differences may be caused by the procedures used to make the training/test splits in the original research, which were not described in the accompanying publications.
Using randomised permutation testing, we have confirmed that the SARI difference between the models based on the original split and our best alternative (95% refined) is statistically significant (p < 0.05) for each configuration discussed above.
In this study, we have shown the limitations of TS datasets and the variation in performance across different split configurations. However, existing evidence cannot determine which split is the most suitable, especially since this could depend on the specific scenario or target audience (e.g., modelling data similar to “real world” applications).
Also, we have measured our results using SARI, not only because it is the standard evaluation metric in TS but also because there is no better automatic alternative for measuring simplicity. We use SARI as a way to expose and quantify the limitations of state-of-the-art TS datasets. The increase in SARI scores should be interpreted as variability in the relative quality of the output simplifications. By relative we mean that there is a change in simplicity gain, but we cannot state that the simplification is at its best quality, since the metric itself has weaknesses.
In this paper, we have shown 1) the statistical limitations of TS datasets, and 2) the relevance of subset distribution for building more robust models. To our knowledge, distribution-based analysis of TS datasets has not been considered before. We hope that the exposure of these limitations kicks off a discussion in the TS community on whether we are heading in the correct direction regarding evaluation resources in TS, and more widely in NLG. The creation of new resources is expensive and complex; however, we have shown that current resources can be refined, motivating future studies in the field of TS.
We would like to thank Nhung T.H. Nguyen and Jake Vasilakes for their valuable discussions and comments. Laura Vásquez-Rodríguez’s work was funded by the Kilburn Scholarship from the University of Manchester. Piotr Przybyła’s work was supported by the Polish National Agency for Academic Exchange through a Polish Returns grant number PPN/PPO/2018/1/00006.
- Aluisio et al. (2010) Sandra Aluisio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability assessment for text simplification. Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1–9.
- Alva-Manchego et al. (2017) Fernando Alva-Manchego, Joachim Bingel, Gustavo H. Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning How to Simplify From Explicit Labeling of Complex-Simplified Text Pairs. Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 295–305.
- Alva-Manchego et al. (2020a) Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia. 2020a. ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations. arXiv.
- Alva-Manchego et al. (2020b) Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia. 2020b. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4668–4679, Online. Association for Computational Linguistics.
- Caglayan et al. (2020) Ozan Caglayan, Pranava Madhyastha, and Lucia Specia. 2020. Curious case of language generation evaluation metrics: A cautionary tale. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2322–2328, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cao et al. (2020) Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen. In arXiv, pages 1061–1071. Association for Computational Linguistics (ACL).
- Cooper and Shardlow (2020) Michael Cooper and Matthew Shardlow. 2020. CombiNMT: An exploration into neural text simplification models. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5588–5594, Marseille, France. European Language Resources Association.
- De Belder and Moens (2010) Jan De Belder and Marie-Francine Moens. 2010. Text Simplification for Children. Proceedings of the SIGIR Workshop on Accessible Search Systems, pages 19–26.
- Dong et al. (2020) Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2020. EditNTS: A neural programmer-interpreter model for sentence simplification through explicit editing. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 3393–3402. Association for Computational Linguistics (ACL).
- Guo et al. (2018) Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Dynamic Multi-Level Multi-Task Learning for Sentence Simplification. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), pages 462–476.
- Hou et al. (2019) Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, and Debasis Ganguly. 2019. Identification of tasks, datasets, evaluation metrics, and numeric scores for scientific leaderboards construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5203–5213, Florence, Italy. Association for Computational Linguistics.
- Hwang et al. (2015) William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from standard Wikipedia to simple Wikipedia. In NAACL HLT 2015 - 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, pages 211–217. Association for Computational Linguistics (ACL).
- Inui et al. (2003) Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text Simplification for Reading Assistance: A Project Note. In Proceedings of the Second International Workshop on Paraphrasing - Volume 16, PARAPHRASE ’03, pages 9–16, USA. Association for Computational Linguistics (ACL).
- Jiang et al. (2020) Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF Model for Sentence Alignment in Text Simplification. In arXiv, pages 7943–7960. arXiv.
- Kullback and Leibler (1951) S. Kullback and R. A. Leibler. 1951. On Information and Sufficiency. The Annals of Mathematical Statistics, 22(1):79–86.
- van der Lee et al. (2019) Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pages 355–368, Tokyo, Japan. Association for Computational Linguistics.
- Morgan (2006) William Morgan. 2006. Statistical Hypothesis Tests for NLP or: Approximate Randomization for Fun and Profit.
- Nisioi et al. (2017) Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), volume 2, pages 85–91. Association for Computational Linguistics (ACL).
- Pang (2019) Richard Yuanzhe Pang. 2019. The Daunting Task of Real-World Textual Style Transfer Auto-Evaluation. arXiv.
- Papineni et al. (2001) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. ACL, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics(July):311–318.
- Rello et al. (2013) Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Horacio Saggion. 2013. Simplify or help? Text simplification strategies for people with dyslexia. In W4A 2013 - International Cross-Disciplinary Conference on Web Accessibility.
- Scarton and Specia (2018) Carolina Scarton and Lucia Specia. 2018. Learning simplifications for specific target audiences. In ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), volume 2, pages 712–718, Stroudsburg, PA, USA. Association for Computational Linguistics.
- Shardlow (2014) Matthew Shardlow. 2014. A Survey of Automated Text Simplification. International Journal of Advanced Computer Science and Applications, 4(1).
- Silveira and Branco (2012) Sara Botelho Silveira and António Branco. 2012. Enhancing multi-document summaries with sentence simplification. Proceedings of the 2012 International Conference on Artificial Intelligence, ICAI 2012, volume 2, pages 742–748.
- Sulem et al. (2018) Elior Sulem, Omri Abend, and Ari Rappoport. 2018. BLEU is Not Suitable for the Evaluation of Text Simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738–744, Stroudsburg, PA, USA. Association for Computational Linguistics.
- Surya et al. (2019) Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain, and Karthik Sankaranarayanan. 2019. Unsupervised Neural Text Simplification. ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 2058–2068.
- Vu et al. (2018) Tu Vu, Baotian Hu, Tsendsuren Munkhdalai, and Hong Yu. 2018. Sentence simplification with memory-augmented neural networks. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, volume 2, pages 79–85. Association for Computational Linguistics (ACL).
- Wagner and Fischer (1974) Robert A. Wagner and Michael J. Fischer. 1974. The String-to-String Correction Problem. Journal of the ACM (JACM), 21(1):168–173.
- Xu et al. (2015) Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in Current Text Simplification Research: New Data Can Help. Transactions of the Association for Computational Linguistics, 3:283–297.
- Xu et al. (2016) Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing Statistical Machine Translation for Text Simplification. Transactions of the Association for Computational Linguistics, 4:401–415.
- Zhang and Lapata (2017) Xingxing Zhang and Mirella Lapata. 2017. Sentence Simplification with Deep Reinforcement Learning. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings, pages 584–594. Association for Computational Linguistics (ACL).
- Zhao et al. (2018) Sanqiang Zhao, Rui Meng, Daqing He, Saptono Andi, and Parmanto Bambang. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, pages 3164–3173. Association for Computational Linguistics.