
BERT4SO: Neural Sentence Ordering by Fine-tuning BERT

Sentence ordering aims to arrange the sentences of a given text in the correct order. Recent work frames it as a ranking problem and applies deep neural networks to it. In this work, we propose a new method, named BERT4SO, by fine-tuning BERT for sentence ordering. We concatenate all sentences and compute their representations by using multiple special tokens and carefully designed segment (interval) embeddings. The tokens across multiple sentences can attend to each other which greatly enhances their interactions. We also propose a margin-based listwise ranking loss based on ListMLE to facilitate the optimization process. Experimental results on five benchmark datasets demonstrate the effectiveness of our proposed method.





1 Introduction

Sentence ordering is the task of arranging sentences into an order that maximizes the coherence of the text (Barzilay and Lapata, 2008). This task has been widely studied due to its significance in downstream tasks, such as ordering concepts in concept-to-text generation (Konstas and Lapata, 2012, 2013), information from multiple documents in extractive multi-document summarization (Barzilay et al., 2002; Nallapati et al., 2017), and events in story generation (Fan et al., 2019; Zhu et al., 2020).

Early studies on sentence ordering generally use handcrafted linguistic features to model document structure (Lapata, 2003; Barzilay and Lee, 2004; Barzilay and Lapata, 2008), which limits the applicability of these systems. Recent works apply neural networks to model text coherence and solve the sentence ordering task. Typical methods are based on the pointer network (Vinyals et al., 2015), which uses attention as a pointer to successively select a member of the input sequence as the output. These methods (Gong et al., 2016; Cui et al., 2018; Wang and Wan, 2019; Yin et al., 2019) usually require sentence-by-sentence decoding to produce the reordered sentences. One drawback of such approaches is that the prediction at the current time step depends on the previous predictions, making it difficult to order a large set of sentences.
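The sequential-decoding bottleneck described above can be illustrated with a minimal sketch of pointer-style greedy decoding. Everything here is illustrative: `greedy_pointer_decode` and `toy_score` are our own stand-ins for the learned attention scorer in the cited models, not their actual implementations.

```python
# Illustrative sketch of pointer-network-style decoding for sentence ordering.
# `score(prev, cand)` stands in for the learned attention scorer.

def greedy_pointer_decode(sentences, score):
    """Successively pick the highest-scoring remaining sentence."""
    order, remaining = [], list(range(len(sentences)))
    prev = None  # stand-in for the decoder state: index of the last pick
    while remaining:
        best = max(remaining, key=lambda i: score(prev, i))
        order.append(best)       # the current pick...
        remaining.remove(best)
        prev = best              # ...conditions every later pick
    return order

# Toy scorer: prefers the sentence whose index follows the previous one.
def toy_score(prev, cand):
    if prev is None:
        return -cand                 # start with sentence 0
    return -abs(cand - (prev + 1))

print(greedy_pointer_decode(["a", "b", "c", "d"], toy_score))  # → [0, 1, 2, 3]
```

Because each pick conditions on the previous one, errors propagate and the loop cannot be parallelized over positions, which is what makes large sentence sets hard for these decoders.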

(2) When they arrived they saw some airplanes in the back of a truck.
(3) The kids had a hard time deciding what to ride first.
(1) The family got together to go to the fair.
(5) Finally they played the dart game.
(4) Then they played some games to win prizes.
Table 1: An example of unordered sentences from a document; the number in parentheses before each sentence indicates its correct position.

More recently, ranking-based frameworks and pre-trained language models have been applied to sentence ordering (Kumar et al., 2020; Prabhumoye et al., 2020). Different from sequential prediction, the ranking-based framework predicts a global ranking score for each sentence and computes the order by sorting the scores. Sentence ordering can also benefit from pre-trained language models (e.g., BERT, Devlin et al., 2019), which enhance sentence representations. Prabhumoye et al. (2020) used BERT to judge the relative order between sentence pairs and applied a topological sort algorithm to infer the entire order. This method achieves state-of-the-art performance, but at a high computational cost because it enumerates all sentence pairs. Kumar et al. (2020) proposed a method in which each sentence is encoded by BERT and the interaction between sentences is captured by a transformer over the sentence representations. The limitation is that separately encoded sentences cannot take cross-sentence interactions between tokens into account, yet clues about sentence order are often revealed when tokens across sentences can attend to each other. Taking the fourth sentence in Table 1 as an example, “they” usually contributes little information to an isolated sentence representation, but when connected with “kids” in the third sentence, it becomes a strong indication of the sentence order.

In this work, we propose a new structure that captures cross-sentence interactions between tokens for Sentence Ordering by fine-tuning BERT (hence the name BERT4SO), and design a new listwise objective function accordingly. Instead of encoding each sentence separately, we concatenate all sentences into a long sequence and leverage multiple [CLS] tokens to represent the sentences. Every token from any sentence can attend to all others, so their interaction information can be captured. Extensive experiments on five benchmark datasets demonstrate the effectiveness of our design. Besides, to further facilitate the optimization of our method, we propose a margin-based ListMLE, which improves our method on small datasets.

Figure 1: The structure of our method.

2 Methodology

Assume that we have a set of $n$ sentences $\{s_1, \dots, s_n\}$ presented in a random order $o = (o_1, \dots, o_n)$; our task is to find the correct order $o^\ast = (o_1^\ast, \dots, o_n^\ast)$ for these sentences. Following existing work (Kumar et al., 2020; Prabhumoye et al., 2020; Zhu et al., 2021), this task is framed as a ranking problem, where the model is trained to predict a score $\mathrm{score}(s_i)$ for each sentence $s_i$, and the global order is determined by sorting the scores.
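As a minimal sketch of this ranking formulation (with made-up scores standing in for model outputs), the global order is simply the argsort of the per-sentence scores:

```python
# Minimal sketch of the ranking formulation: the model assigns a scalar score
# to each sentence, and the predicted order is obtained by sorting
# (descending, so the first sentence gets the highest score).

def order_from_scores(scores):
    """Return sentence indices sorted by descending score."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Hypothetical scores for 4 shuffled sentences:
scores = [0.1, 2.3, -0.5, 1.2]
print(order_from_scores(scores))  # → [1, 3, 0, 2]
```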

2.1 Fine-tuning BERT for Sentence Ordering

BERT (Devlin et al., 2019) is a language model pre-trained on several large corpora. Fine-tuning BERT has been successful in many downstream tasks, such as question answering (Khashabi et al., 2020) and machine reading comprehension (Liu et al., 2019; Clark et al., 2019). Since BERT is pre-trained on tasks with only one or two sentences, it cannot be directly applied to sentence ordering with multiple sentences. Existing work (Kumar et al., 2020) alleviated this problem by encoding each sentence separately, but the interactions between tokens across sentences are then not well captured. In contrast, we propose to concatenate all sentences into a long sequence and use the two segment embeddings in BERT to indicate their intervals. In so doing, each token can attend to all others, so that the interactions among tokens can be captured. Moreover, after obtaining the contextual representations of sentences, we apply another multi-layer transformer to create sentence representations at the document level, which further captures the interactions among sentences. The final representations are used to calculate scores, and the global order is determined by sorting the scores.

Multiple Sentences Encoding As illustrated in Figure 1, we insert a [CLS] token before each sentence and a [SEP] token after each sentence, and then concatenate all sentences into one long sequence. In the original BERT model, a single [CLS] token is used to aggregate the features of a sentence or a sentence pair. We modify this by leveraging multiple [CLS] tokens to obtain representations for multiple sentences. Before feeding the sequence into BERT, three embeddings are summed to represent each token: (1) Token embeddings are provided by pre-trained BERT. (2) In vanilla BERT, two segment embeddings ($E_A$ and $E_B$) are used to distinguish two sentences. To deal with the multiple sentences of a document in our task, we alternate these two segment embeddings to segment the whole sequence; e.g., for a document with three sentences $(s_1, s_2, s_3)$, the segment embeddings will be $(E_A, E_B, E_A)$. (3) Position embeddings indicate the position of each token. The sum of the three embeddings is used as input to the BERT model. We then use the output at each [CLS] position as the representation of the corresponding sentence.
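The input construction described above can be sketched as follows. This is a simplified illustration: tokenization is simulated by whitespace splitting, whereas a real implementation would use BERT's WordPiece tokenizer (e.g., HuggingFace's BertTokenizer), and the function name is ours.

```python
# Sketch of the input construction: insert [CLS] before and [SEP] after every
# sentence, concatenate, and alternate the two segment ids per sentence
# (0 standing for E_A, 1 for E_B).

def build_inputs(sentences):
    tokens, segment_ids, cls_positions = [], [], []
    for idx, sent in enumerate(sentences):
        seg = idx % 2                      # alternate E_A / E_B per sentence
        cls_positions.append(len(tokens))  # remember where each [CLS] sits
        sent_tokens = ["[CLS]"] + sent.split() + ["[SEP]"]
        tokens.extend(sent_tokens)
        segment_ids.extend([seg] * len(sent_tokens))
    return tokens, segment_ids, cls_positions

tokens, segs, cls_pos = build_inputs(
    ["the family got together", "they saw airplanes"]
)
print(tokens[:6])  # → ['[CLS]', 'the', 'family', 'got', 'together', '[SEP]']
print(cls_pos)     # → [0, 6]
```

The outputs at the positions recorded in `cls_pos` would then serve as the per-sentence representations.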

Sentence Ordering After obtaining the sentence representations, we stack several transformer layers as a document encoder to further enhance the interactions between sentences and capture document-level features. Different from the sentence encoder, the input to the document encoder is the sentence representations, so it can capture features from a more global perspective. Finally, for each sentence $s_i$, we compute the final score $\mathrm{score}(s_i)$ with a decoder consisting of two multi-layer perceptrons (MLPs) applied to the sentence representations produced by the document encoder.

2.2 Margin-based ListMLE

ListMLE is a listwise ranking loss based on the relative order between each sentence and its following sentences.¹ It has been shown to be more effective than pointwise or pairwise losses for optimizing transformer-based sentence ordering methods (Kumar et al., 2020). Inspired by the idea of margin maximization in pairwise ranking (Joachims, 2002), we incorporate a margin into the listwise loss in order to avoid underfitting and achieve a better convergence rate (Wang et al., 2018). For a document with $n$ unordered sentences whose correct order is $(s_{o_1^\ast}, \dots, s_{o_n^\ast})$, we propose the following margin-based listwise loss:

¹The computation of ListMLE is given in the Appendix.



$$P(s_{o_i^\ast}) = \frac{\exp(\mathrm{score}(s_{o_i^\ast}))}{\sum_{j=i}^{n} \exp(\mathrm{score}(s_{o_j^\ast}))} \quad (1)$$

$$\mathcal{L} = \sum_{i=1}^{n} \Big[ -\log P(s_{o_i^\ast}) + \sum_{j=i+1}^{n} \max\big(0,\ \delta - P(s_{o_i^\ast}) + P(s_{o_j^\ast})\big) \Big] \quad (2)$$

where $\delta$ is a margin hyperparameter. Eq. 1 is the normalized score of each sentence, and Eq. 2 aims at enlarging the margin of the correctly ordered sentence in the $i$-th position while lowering the scores of the other sentences.
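A pure-Python sketch may clarify the idea. The margin term below is one plausible reading of the description in this section (penalize any later sentence whose normalized score comes within a margin of the current ground-truth sentence's); the exact formulation may differ from the original paper's, and the function name is ours.

```python
import math

# Sketch of a margin-based ListMLE loss. `scores_in_correct_order` holds the
# model's scalar scores arranged in the ground-truth order.

def margin_listmle(scores_in_correct_order, delta=1.0):
    n = len(scores_in_correct_order)
    loss = 0.0
    for i in range(n):
        tail = scores_in_correct_order[i:]
        z = sum(math.exp(s) for s in tail)
        probs = [math.exp(s) / z for s in tail]  # normalized scores over the tail
        loss += -math.log(probs[0])              # standard ListMLE term
        # margin term: push the other sentences' normalized scores below the
        # current ground-truth sentence's by at least delta
        loss += sum(max(0.0, delta - probs[0] + p) for p in probs[1:])
    return loss

# The loss should be lower when scores agree with the correct order:
good = margin_listmle([3.0, 2.0, 1.0, 0.0])
bad = margin_listmle([0.0, 1.0, 2.0, 3.0])
print(good < bad)  # → True
```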

3 Experiments

3.1 Datasets and Evaluation Metrics

We conduct experiments on five datasets.²

²The download links can be found in the original paper. The detailed statistics of the datasets and the implementation details of our model are presented in the Appendix.

NeurIPS/AAN/NSF abstracts (Logeswaran et al., 2018). These datasets consist of abstracts from NeurIPS papers, ACL papers, and NSF research award papers, comprising 3,259, 12,157, and 127,865 samples, respectively. The data are split into training, validation, and test sets according to publication year.

SIND captions Huang et al. (2016). This is a visual story dataset with 50,200 stories. Each story contains five sentences. It is split into training, validation, and test set with the ratio of 8:1:1.

ROCStory Mostafazadeh et al. (2016). It is a commonsense story dataset with 98,161 stories. Each story comprises five sentences. We make an 8:1:1 random split on the dataset to get the training, validation and test set.

We use Kendall's τ and Perfect Match Ratio (PMR) as the evaluation metrics; both are commonly used in previous work (Gong et al., 2016; Logeswaran et al., 2018; Kumar et al., 2020).

Kendall's Tau (τ): one of the most frequently used metrics for text coherence evaluation (Lapata, 2006; Logeswaran et al., 2018). It measures how much a predicted ranking agrees with the ground truth.

PMR: the percentage of samples for which the entire order of the sequence is correctly predicted (Chen et al., 2016).
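Both metrics are straightforward to implement. The sketches below assume predicted and gold orders are given as permutations of sentence indices; the function names are ours.

```python
# Sketch implementations of the two evaluation metrics.

def kendall_tau(pred, gold):
    """Kendall's tau: 1 - 2 * (#discordant pairs) / (n choose 2)."""
    pos = {s: i for i, s in enumerate(gold)}
    seq = [pos[s] for s in pred]          # predicted order in gold positions
    n = len(seq)
    discordant = sum(
        1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j]
    )
    return 1.0 - 4.0 * discordant / (n * (n - 1))

def pmr(preds, golds):
    """Perfect Match Ratio: fraction of documents ordered entirely correctly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

print(kendall_tau([0, 1, 2, 3], [0, 1, 2, 3]))  # → 1.0 (perfect agreement)
print(kendall_tau([3, 2, 1, 0], [0, 1, 2, 3]))  # → -1.0 (fully reversed)
print(pmr([[0, 1], [1, 0]], [[0, 1], [0, 1]]))  # → 0.5
```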

Model                        NeurIPS          AAN              NSF              SIND             ROCStory
                             τ      PMR       τ      PMR       τ      PMR       τ      PMR       τ      PMR
CNN + PtrNet                 0.6976 19.36     0.6700 28.75     0.4460  5.95     0.4197  9.50     0.6538 27.06
LSTM + PtrNet                0.7373 20.95     0.7394 38.30     0.5460 10.68     0.4833 12.96     0.6787 28.24
Variant-LSTM + PtrNet        0.7258 22.02     0.7521 40.67     0.5544 10.97     0.4878 13.57     0.6852 30.28
ATTOrderNet                  0.7466 21.22     0.7493 40.71     0.5494 10.48     0.4823 12.27     0.7011 34.32
HierarchicalATTNet           0.7008 19.63     0.6956 30.29     0.5073  8.12     0.4814 11.01     0.6873 31.73
SE-Graph                     0.7370 24.63     0.7616 41.63     0.5602 10.94     0.4804 12.58     0.6852 31.36
ATTOrderNet + TwoLoss        0.7357 23.63     0.7531 41.59     0.4918  9.39     0.4952 14.09     0.7302 40.24
RankTxNet                    0.7684 26.12     0.7744 38.84     0.4899  6.81     0.5528 14.80     0.7333 30.19
B-TSort                      0.7884 30.59     0.8064 48.08     0.4813  7.88     0.5632 17.35     0.7941 48.06
BERT4SO                      0.7778 30.70     0.8076 45.41     0.6379 13.00     0.5916 19.07     0.8487 55.65
BERT4SO with ListMLE         0.7516 24.13     0.8045 44.42     0.6344 12.78     0.5998 18.83     0.8468 55.04
Table 2: Results (Kendall's τ and PMR) on five benchmark datasets. Baselines are run with the provided source code where available, and implemented by ourselves otherwise; all numbers are from our own runs. Improvements of our method are significant in a t-test at two significance levels.

3.2 Baseline Models

Various Pointer Network based Methods: CNN/LSTM + PtrNet (Gong et al., 2016), Variant-LSTM + PtrNet (Logeswaran et al., 2018), ATTOrderNet (Cui et al., 2018), HierarchicalATTNet (Wang and Wan, 2019), SE-Graph (Yin et al., 2019), and ATTOrderNet + TwoLoss (Yin et al., 2020) all adopt a CNN/RNN encoder to obtain representations of the input sentences and employ a pointer network as the decoder to predict the order.

RankTxNet Kumar et al. (2020) applies BERT and transformers to order the sentences. Different from ours, this model encodes each sentence separately and is trained by the ListMLE loss.

B-TSort (Prabhumoye et al., 2020)³ is the state-of-the-art model for sentence ordering. It applies BERT to judge the relative order between each pair of sentences and builds a graph based on these relative orderings; the global order is then inferred by a topological sort algorithm on the graph.

³Note that the results of B-TSort are slightly worse than those reported in the original paper, because the provided source code does not shuffle the sentence order on the test set, which artificially improves the results.
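The pairwise-prediction-plus-topological-sort inference can be sketched as follows, with a stub comparator standing in for the fine-tuned BERT pair classifier (the function names and the stub are ours):

```python
from collections import defaultdict, deque

# Sketch of B-TSort-style inference: a pairwise predictor decides which of two
# sentences comes first, a directed graph is built over all O(n^2) pairs, and
# a topological sort yields the global order.

def topo_order(n, comes_before):
    """comes_before(i, j) -> True if sentence i should precede sentence j."""
    indeg = [0] * n
    adj = defaultdict(list)
    for i in range(n):
        for j in range(i + 1, n):      # enumerating all pairs: the costly part
            a, b = (i, j) if comes_before(i, j) else (j, i)
            adj[a].append(b)
            indeg[b] += 1
    queue = deque(k for k in range(n) if indeg[k] == 0)
    order = []
    while queue:                       # Kahn's algorithm
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

# Stub predictor that happens to know the true order (2, 0, 1):
true_pos = {2: 0, 0: 1, 1: 2}
print(topo_order(3, lambda i, j: true_pos[i] < true_pos[j]))  # → [2, 0, 1]
```

A consistent comparator yields an acyclic graph, as here; real pairwise predictions may be inconsistent (cyclic), which inference must tolerate in practice.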

3.3 Experimental Results

Table 2 shows the results of all models on the five datasets. We have the following observations:

(1) BERT4SO significantly outperforms all baselines on NSF, SIND, and ROCStory, and achieves results comparable to the previous best method B-TSort on NeurIPS and AAN. Specifically, on ROCStory, BERT4SO outperforms B-TSort by around 5.5% τ and 7.6% PMR. This result clearly demonstrates the effectiveness and wide applicability of our proposed method. (2) RankTxNet, B-TSort, and BERT4SO all apply BERT to encode sentences, and they achieve better performance than the RNN-based models. This demonstrates the superiority of applying BERT to sentence ordering. Furthermore, BERT4SO achieves the best performance, indicating that it leverages BERT for sentence ordering better than a naive application. (3) Compared with RankTxNet, BERT4SO concatenates all sentences so that they can attend to each other at the token level. The higher performance confirms that this design better captures sentence relationships and token-level interactions, which are beneficial to sentence ordering. (4) B-TSort performs similarly to BERT4SO on NeurIPS and AAN, showing its effectiveness on small datasets. B-TSort enumerates all sentence pairs and estimates their relative order; it thus leverages exhaustive pairwise ordering information for the whole set, but this process is very expensive and difficult to apply when documents contain many sentences. Indeed, the training time of B-TSort is extremely long (substantially longer than ours on both NSF and SIND).

Dataset      Original           Ensemble
             τ       PMR        τ       PMR
NeurIPS      0.7778  30.70      0.7916  30.85
AAN          0.8076  45.41      0.8265  48.62
NSF          0.6379  13.00      0.5884  11.65
Table 3: Results of BERT4SO when trained on the combined training sets of NeurIPS, AAN and NSF (Ensemble), compared with training on each dataset alone (Original).

Discussion To evaluate our proposed margin-based ListMLE, we also train BERT4SO with the traditional ListMLE. The results are shown in Table 2. We observe that both loss functions work well on the three large datasets (i.e., NSF, SIND, and ROCStory), but BERT4SO with the traditional ListMLE cannot achieve good results on the very small dataset NeurIPS. We speculate that the improvements stem from the second term of the margin-based ListMLE (Eq. 2), which adds more constraints that lower the scores of sentences other than the current ground truth, thus making more effective use of each sample.

We also find that the improvements of BERT4SO are limited on small datasets. To alleviate this problem, inspired by a recent study (Liu et al., 2020), we combine the training sets of the three similar datasets NeurIPS, AAN, and NSF, and test the model on each test set separately. The results in Table 3 clearly show that training on the combined dataset brings improvements on both NeurIPS and AAN, because NSF provides a large amount of additional training data. This confirms that BERT4SO performs well when provided with sufficient training data. On the other hand, the performance on NSF drops when it is combined with the NeurIPS and AAN data. We believe that in this case the additional datasets contribute not more useful training examples, but more noise.

4 Conclusion and Future Work

In this work, we proposed a new method for sentence ordering based on fine-tuning BERT and a modified ListMLE loss. Our proposed structure greatly enhances cross-sentence interactions and thus obtains improvements on five benchmark datasets. The newly proposed loss function is shown to be helpful for optimizing our method on small datasets. We also found that the performance on small datasets can be improved by combining similar datasets. In the future, we will investigate other methods, such as self-supervised learning, to improve our model on small datasets.


Appendix A ListMLE

ListMLE (Xia et al., 2008) is a surrogate for the 0-1 loss on the perfect order. Given a corpus of documents, consider a document with $n$ unordered sentences whose correct order is $(s_{o_1^\ast}, \dots, s_{o_n^\ast})$. ListMLE is then computed as:

$$\mathcal{L}_{\text{ListMLE}} = -\sum_{i=1}^{n} \log \frac{\exp(\mathrm{score}(s_{o_i^\ast}))}{\sum_{j=i}^{n} \exp(\mathrm{score}(s_{o_j^\ast}))}$$

and summed over all documents in the corpus. With ListMLE, the model learns to assign the highest score to the first sentence and the lowest score to the last one.

Datasets Max. Avg. Train Val. Test
NeurIPS ab. 512 181.73 2,448 409 402
AAN ab. 1,030 134.95 8,569 962 2,626
NSF ab. 2,923 263.62 96,017 10,185 21,573
SIND captions 288 58.61 40,155 4,990 5,055
ROCStory 100 52.84 78,529 9,816 9,816
Table 4: The statistics of all datasets. Max. and Avg. stand for the maximum and average number of tokens per document.

Appendix B Statistics of Datasets

The statistics of all datasets are shown in Table 4.

Appendix C Implementation Details

Our model is implemented with PyTorch (Paszke et al., 2019) and HuggingFace's Transformers (Wolf et al., 2019). We train it on a TITAN V GPU with 12GB memory. We test {1, 2, 3} Transformer layers for the document encoder and choose two layers due to its best performance on the validation set. The hidden size of the decoder is 200. The AdamW optimizer (Loshchilov and Hutter, 2019) is applied for training. The learning rate is 5e-5 for the sentence encoder (BERT) and the document encoder (multi-layer transformers), and 5e-3 for the MLPs. The batch size is 32. The model is trained for 5 epochs, and 20% of all training steps are used for learning rate warm-up. The margin hyperparameter δ is tuned in {0.25, 0.5, 0.75, 1} and set to 1. All models are selected according to their performance (i.e., the sum of the τ score and the PMR score) on the validation set.