Human acceptability judgements for extractive sentence compression

02/01/2019 ∙ by Abram Handler, et al. ∙ University of Massachusetts Amherst 0

Recent approaches to English-language sentence compression rely on parallel corpora consisting of sentence-compression pairs. However, a sentence may be shortened in many different ways, which each might be suited to the needs of a particular application. Therefore, in this work, we collect and model crowdsourced judgements of the acceptability of many possible sentence shortenings. We then show how a model of such judgements can be used to support a flexible approach to the compression task. We release our model and dataset for future work.


1 Introduction

In natural language processing, sentence compression refers to the task of automatically shortening a longer sentence Knight and Marcu (2000); Clarke and Lapata (2008); Filippova et al. (2015). Traditional approaches attempt to create compressions which maintain readability, retain the most important information from the source sentence and achieve some fixed or flexible rate of compression Napoles et al. (2011).

However, in practice, applications which make use of sentence compression techniques will require dramatically different strategies for balancing these three competing goals: a journalist might want abbreviated quotes from politicians, while a bus commuter might want movie review snippets on their mobile phone. Each such application presents different readability requirements, different definitions of importance and different brevity constraints.

Thus, while recent research into extractive sentence compression often assumes a single, best shortening of a sentence Napoles et al. (2011), in this work we argue that any compression which matches the readability, informativeness and brevity requirements of a given application is a plausible shortening. Practical compression methods will need to identify the best possible shortening for a given application, not just recover the “gold standard” compression.

Sentence. Pakistan launched a search for its missing ambassador to Afghanistan on Tuesday, a day after he disappeared in a Taliban area.
Headline. Pakistan searches for missing ambassador.
“Gold” compression. Pakistan launched a search for its missing ambassador.
Alternate 1. Pakistan launched a search for its missing ambassador to Afghanistan on Tuesday. (Acceptability = -1.367, Brevity = 84 characters max., Importance = 1)
Alternate 2. Pakistan launched search Tuesday. (Acceptability = -6.144, Brevity = 59 characters max., Importance = 0)
Table 1: A sentence, headline and “gold compression” from a standard sentence compression dataset Filippova and Altun (2013), along with two alternate compressions constructed with a system supervised with human acceptability judgements (§6). The alternate compressions reflect different Brevity requirements and different adherence to an application-specific Importance criterion. In this case, the brevity requirement is expressed with a hard maximum character constraint and the importance criterion is expressed with a binary score, indicating if a sentence includes a query term “Afghanistan”. We use our metric (§6) to measure the Acceptability of each compression; a higher score indicates a compression is more likely to be well-formed. Alternate 2 is neither entirely garbled nor perfectly well-formed, reflecting the gradient-based nature of acceptability (§4).

Traditional supervision for the compression task does not offer an obvious method for achieving this application-specific objective. Standard supervision consists of individual sentences paired with “gold standard” compressions Filippova and Altun (2013): offering just one reasonable shortening for each sentence in the corpus, from among the many plausible compressions. Table 1 demonstrates this point in detail. Therefore, in this work, we:

  • Collect and release111http://slanglab.cs.umass.edu/compression a large, crowdsourced dataset of human acceptability judgements Sprouse (2011), specifically tailored for the sentence compression task (§4). Acceptability judgments are native speakers’ self-reported perceptions of the well-formedness of a sentence Sprouse and Schütze (2014).

  • Present and evaluate a model of these acceptability judgements (§5).

  • Use this model to define an Acceptability function which predicts the well-formedness of a shortening (§6), for use in application-specific compression systems.

2 Related work

Researchers have been studying extractive sentence compression for nearly two decades Knight and Marcu (2000); Clarke and Lapata (2008); Filippova et al. (2015). Recent approaches are often based on a large compression corpus,222https://github.com/google-research-datasets/sentence-compression which was automatically constructed by using news headlines to identify “gold standard” shortenings Filippova and Altun (2013). State-of-the-art models trained on this dataset Filippova et al. (2015); Andor et al. (2016); Wang et al. (2017) can reproduce gold compressions (i.e. perfect token-for-token match) with accuracy higher than 30%.

However, because a sentence may be compressed in many ways (Table 1), this work introduces human acceptability judgements as a new and more flexible form of supervision for the sentence compression task. Our approach is thus closely connected to research which seeks to model human judgements of the well-formedness of a sentence Heilman et al. (2014); Sprouse and Schütze (2014); Lau et al. (2017); Warstadt et al. (2018). Unlike such studies, our work is strictly concerned with human perceptions of shortened sentences.333We compare our model to Warstadt et al. (2018) in §5. Our work also solicits human judgements of shortenings from naturally-occurring news text, instead of sentences drawn from syntax textbooks Sprouse and Almeida (2017); Warstadt et al. (2018) or created via automatic translation Lau et al. (2017).

We note that our effort focuses strictly on anticipating the well-formedness of extractive compressions, rather than identifying compressions which contradict or distort the meaning of the original sentence. Identifying which compressions do not modify the meaning of source sentences is closely connected to the unsolved textual entailment problem, a recent area of focus in computational semantics Bowman et al. (2015); Pavlick and Callison-Burch (2016); McCoy and Linzen (2018). In the future, we hope to apply this evolving research to the compression task. Some current compression methods use simple hand-written rules to guard against changes in meaning Clarke and Lapata (2008), or syntactic mistakes Jing and McKeown (2000).

Finally, following much prior work, this study approaches sentence compression as a purely extractive task. Closely related work on abstractive compression Cohn and Lapata (2008); Rush et al. (2015); Mallinson et al. (2018) and sentence simplification Zhu et al. (2010); Xu et al. (2015) seeks to shorten sentences via paraphrases or reordering of words. Despite superficial similarity, extractive methods typically use different datasets, different evaluation metrics and different modeling techniques.

3 Compression via subtree deletion

Any sentence compression technique requires a framework for generating possible shortenings of a sentence, s. We generate compressions with a subtree deletion approach, based on prior work Knight and Marcu (2000); Filippova and Strube (2008); Filippova and Altun (2013). To generate a single compression (from among all possible compressions) we begin with a dependency parse of s.444We use Universal Dependency Nivre et al. (2016) trees (v1), parsed using CoreNLP Manning et al. (2014). Then, at each timestep, we prune a single subtree from the parse. After T subtrees are removed from the parse (one at a time, over T timesteps) the remaining vertexes are linearized in their original order. Formally, pruning a subtree refers to removing a vertex and all of its descendants from a dependency tree; pruning singleton subtrees (one vertex) is permitted.
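The prune-and-linearize procedure can be sketched as follows. This is a minimal illustration with our own assumed data structures, not the paper's code: the parse is encoded as a list of head indices, with -1 marking the root.

```python
# A minimal sketch of prune-based compression. heads[i] is the parent
# of token i in the dependency tree; the root has head -1.

def descendants(heads, v):
    """Return the set containing v and every token it dominates."""
    keep = {v}
    changed = True
    while changed:
        changed = False
        for i, h in enumerate(heads):
            if h in keep and i not in keep:
                keep.add(i)
                changed = True
    return keep

def prune(tokens, heads, v):
    """Delete the subtree rooted at v, then linearize the surviving
    tokens in their original order."""
    removed = descendants(heads, v)
    return [t for i, t in enumerate(tokens) if i not in removed]

tokens = ["Pakistan", "launched", "a", "search", "on", "Tuesday"]
heads = [1, -1, 3, 1, 5, 1]  # e.g. "a" attaches to "search"

print(" ".join(prune(tokens, heads, 5)))  # prints "Pakistan launched a search"
```

Pruning the vertex for "Tuesday" also removes its dependent "on", matching the definition of subtree deletion above.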

We find that it is possible to construct 88.2% of the gold compressions in the training set of a standard corpus Filippova and Altun (2013) by only pruning subtrees. Therefore, we only examine prune-based compression in this work.555Extracting nested subclauses from source sentences is not possible with prune-only methods, because the root node of the compression must be the same as the root node of the original sentence. We plan to address this in future work.

4 Methodology and Dataset

Our data collection methodology follows extensive research into human judgements of linguistic well-formedness Sprouse and Schütze (2014). Such work has shown that non-expert participants offer consistent judgements of natural and unnatural sounding sentences with high test-retest reliability Langsford et al. (2018), across different data collection techniques Bader and Häussler (2010); Sprouse and Almeida (2017). We apply this research to sentence compression, with confidence that our results reflect genuine human perceptions of shortened sentences because:

  1. We carefully screen out many workers who offer judgements that violate well-known properties of English syntax, such as workers who approve deletion of objects from obligatory transitive verbs.

  2. We observe that workers approve and disapprove of classes of sentences that English speakers would categorize as “grammatical” and “ungrammatical,” respectively. For instance, workers rarely endorse deletion of nominal subjects, and often endorse deletion of temporal modifiers.

  3. Annotator agreement in our dataset is similar to agreement in prior, comprehensive studies of acceptability judgments (§4.4).

The appendix details our screening procedures, and discusses classes of accepted and rejected compressions in our dataset.

4.1 Measuring well-formedness

Our study adopts a standard distinction between acceptability and grammaticality Chomsky (1965); Schütze (1996). Grammaticality is a binary and theoretical notion, used to characterize whether a sentence is or is not generable under a grammatical model. Acceptability is a measurement of an individual’s perception of the well-formedness of a sentence. Empirical studies have shown acceptability to be a gradient-based phenomenon, affected by a range of factors including plausibility, syntactic well-formedness, and frequency Sprouse and Schütze (2014). Based on this work, we expect that workers will have graded (not binary) perceptions of the well-formedness of compressions shown in our task.

Although acceptability is gradient-based, we nevertheless measure worker perceptions by collecting binary judgements of well-formedness. Earlier studies Bader and Häussler (2010); Sprouse and Almeida (2017); Langsford et al. (2018) have shown that such binary measurements of acceptability correlate strongly with explicitly gradient collection techniques, such as Likert scales (Figure 1). We chose to collect binary judgements instead of graded judgements because (1) binary judgements avoid ambiguity in how participants interpret a gradient scale, (2) binary judgements allowed us to write clear screener questions to block unreliable crowdworkers from the task and (3) binary judgements allowed us to apply binary logistic modeling to directly predict observable worker behavior.

Figure 1: Binary judgements and graded (Likert) judgements from Sprouse and Almeida (2017) for the slightly-awkward sentence, “They suspected and we believed Peter would visit the hospital”. Bader and Häussler (2010) describe correlations between such measurement techniques.

4.2 Data collection prompt

We show crowdworkers on Figure Eight666https://www.figure-eight.com/. a naturally-occurring sentence, along with a proposed compression of that same sentence, generated by executing a single prune operation on the sentence’s dependency tree. We then ask a binary question: can the longer sentence be shortened to the shorter sentence? Our prompt is shown in Figure 2. We instruct workers to say yes if the shorter sentence “sounds good” or “sounds like something a person would say,” following verbiage for the acceptability task Sprouse and Schütze (2014).

Because we designed our task to follow typical acceptability prompts, we expect that workers completing the task evaluated the well-formedness of each compressed sentence, and then answered yes if they deemed it acceptable.

We instruct workers to say yes if a compression sounds good, even if it changes the meaning of a sentence. While practitioners will need to identify shortenings which are both syntactically well-formed and which do not modify the meaning of a sentence, this work focuses strictly on identifying well-formed compressions. In the future, we plan to apply active research in semantics (§2) to identify disqualifying changes in meaning.

Figure 2: Prompt to collect human judgements of acceptability for sentence compression. Workers are instructed to answer yes if the shorter sentence “sounds good” or “sounds like something a person would say.”

4.3 Dataset details

We generate 10,128 sentence–compression pairs from a freely-distributable corpus of web news Zheleva (2017). Each source sentence s is chosen at random, and each compression is produced by a single, randomly-chosen prune operation on s. Our data thus reflects the natural distribution of dependency types in the corpus.

judgements (train)          6010 (4522 sents.)
judgements (test)           640 (486 sents.)
class balance               64.2% / 35.8% (no/yes)
overall compression rate    μ = 0.867, σ = 0.174
Table 2: Corpus statistics.

We present each pair to 3 or more workers,777FigureEight will sometimes solicit additional judgements automatically. then conservatively exclude many judgements from workers who are revealed to be inattentive or careless (see appendix), in order to be certain that worker disagreements in our dataset reflect genuine perceptions of well-formedness. We then divide filtered data into a training and test set by sentence–compression pair, so that our model does not use a train-time judgement about a pair from one worker to predict a test-time judgement about the same pair from another worker. Table 2 presents dataset statistics.888Note that we use a character-based Filippova et al. (2015) rather than token-based Napoles et al. (2011) definition of compression rate. Our organization does not require institutional approval for crowdsourcing.

4.4 Inter-annotator agreement

There are at least two sources of inter-annotator disagreement which could arise in our data. First, in cases when a compression is neither entirely garbled nor perfectly well formed, previous empirical studies Sprouse and Almeida (2017); Langsford et al. (2018) suggest that annotators will likely disagree. (See Figure 1 and §4.1). Second, we suspect that different individuals set different thresholds on how acceptable a sentence must be before they give a “yes” response: a compression that one person rates as acceptable might be rated by the next as unacceptable, even if they have the same impression of the compression’s acceptability. Such between-annotator variability might represent a form of response bias, which is common in psychological experiments Macmillan and Creelman (1990, 2004). We attempt to control for such bias by including a worker ID feature in our model (§5.1).

To evaluate the extent of such disagreement, and to compare with other work, we measure inter-annotator agreement using Fleiss’ kappa Fleiss (1971), computing κ on the entire filtered dataset.999See appendix: details of Fleiss’ κ for crowdsourcing. We observe a similar rate of agreement in a comprehensive study of acceptability judgements Sprouse and Almeida (2017).101010We compute this number using publicly released data from the YN study. Our κ is lower than is typical in standard annotation paradigms in NLP, which often attempt to assign instances to hard classes, rather than measure graded phenomena.
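For reference, Fleiss' κ can be computed from per-item category counts as in this sketch. This is the standard formula; it assumes every item receives the same number of judgements, which holds only approximately for crowdsourced data (see the appendix note referenced in the text).

```python
# Standard Fleiss' kappa from per-item category counts.

def fleiss_kappa(counts):
    """counts: list of rows, one per item; each row gives the number of
    judgements per category, e.g. [n_no, n_yes] for a binary task."""
    n = sum(counts[0])                     # judgements per item
    N = len(counts)
    # observed agreement, averaged over items
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # chance agreement from the marginal category proportions
    k = len(counts[0])
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(x * x for x in p)
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement on two items yields kappa = 1.0
assert abs(fleiss_kappa([[3, 0], [0, 3]]) - 1.0) < 1e-9
```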

5 Intrinsic Task: Modeling single operations

We create a model to predict if a given worker will judge that a given single-operation compression is acceptable. We say that y = 1 if worker w answers yes that s can be shortened to c. We then model p(y | s, c, w) using binary logistic regression, where x(s, c, w) is a feature vector reflecting the nature of the edit which produces c from s, as observed by worker w.
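This modeling setup can be illustrated with a toy sketch. It is not the authors' implementation: the features below are invented indicators ([bias, is_temporal_modifier, is_nominal_subject]), and the trainer is a plain gradient-descent logistic regression.

```python
import math

# Toy sketch: binary logistic regression over hand-built feature
# vectors, one per (sentence, compression, worker) judgement.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logreg(X, y, lr=0.5, steps=2000):
    """Batch gradient descent on the logistic loss; returns weights."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j, xj in enumerate(xi):
                grad[j] += err * xj
        w = [wj - lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Invented judgements: workers endorse temporal-modifier prunes and
# reject nominal-subject prunes (features: [bias, tmod, nsubj]).
X = [[1, 1, 0], [1, 1, 0], [1, 0, 1], [1, 0, 1]]
y = [1, 1, 0, 0]

w = fit_logreg(X, y)
p_tmod = sigmoid(w[0] + w[1])    # predicted endorsement of a tmod prune
p_nsubj = sigmoid(w[0] + w[2])   # predicted endorsement of an nsubj prune
assert p_tmod > 0.9 and p_nsubj < 0.1
```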

5.1 Model Features

The major features in our model are: language model features, dependency type features, worker ID features, features reflecting properties of the edit from s to c, and interaction features. We discuss each below.

Language model features. Our model builds upon earlier work examining the relationship between language modeling and acceptability judgements. In a prior study, Lau et al. (2017) define several functions which normalize predictions from a language model by token length and by word choice, then test which functions best align with human acceptability judgements. We use their Norm LP function in our model, defined as:

Norm LP(s) = -log P_m(s) / log P_u(s)    (1)

where s is a sentence, P_m(s) is the probability of s given by a language model and P_u(s) is the unigram probability of the words in s.

We use Norm LP as a part of two features in our approach. One real-valued feature records the probability of a compression c computed by Norm LP(c). Another binary feature computes whether Norm LP(c) - Norm LP(s) > 0. The test set performance of these language model (LM) features is shown in Table 3. The appendix further describes our implementation of Norm LP.
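A minimal sketch of the Norm LP computation, assuming per-token log probabilities are already available from a language model and a unigram model; all numeric values below are invented for illustration.

```python
# Sketch of the Norm LP score of Lau et al. (2017).

def norm_lp(lm_logprobs, unigram_logprobs):
    """Norm LP(s) = -log P_m(s) / log P_u(s), where each log probability
    is the sum of per-token log probabilities."""
    return -sum(lm_logprobs) / sum(unigram_logprobs)

src_lm, src_uni = [-4.0, -3.0, -5.0], [-6.0, -5.0, -7.0]   # source s
cmp_lm, cmp_uni = [-4.0, -3.5], [-6.0, -5.0]               # compression c

# The binary feature from the text: is the compression's Norm LP score
# higher than the source's?
binary_feature = norm_lp(cmp_lm, cmp_uni) - norm_lp(src_lm, src_uni) > 0
```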

Dependency type features. We use the dependency type governing the subtree pruned from s to predict the acceptability of c. This is because workers are more likely to endorse deletion of certain dependency types. For instance, workers will often endorse deletion of temporal modifiers, and often reject deletion of nominal subjects.

Worker ID features. We also include a feature indicating which worker in our study submitted a particular judgement for a given pair. We include this feature because we observe that different workers have greater or lesser tolerance for more and less awkward compressions.111111More formally, we define a given worker’s deletion endorsement rate as the number of times a worker answers yes, divided by their total judgements. We observe a roughly normal distribution of worker deletion endorsement rates across the dataset.

Including the worker ID feature allows our model to partially account for an individual worker’s judgement based on their overall endorsement threshold, and partially account for a worker’s judgement based on the linguistic properties of the edit. The feature thus controls for variability in each worker’s baseline propensity to answer yes. Because real applications will not have access to worker-specific information, we do not use the worker ID feature in evaluating our model and dataset for use in practical compression systems (§6). All workers in the test set submit judgements in the training set.

Model                        Accuracy   Fleiss’ κ   ROC AUC
CoLA                         0.622      -0.210      0.590
language model (LM)          0.623      -0.232      0.583
+ dependencies               0.664      0.124       0.646
+ worker ID                  0.695      0.232       0.746
full                         0.742      0.400       0.807
- dependencies               0.731      0.368       0.797
- worker ID                  0.667      0.170       0.691
worker–worker agreement      0.636      0.270
Sprouse and Almeida (2017)              0.323
Table 3: Test set accuracy, Fleiss’ κ and ROC AUC scores for six models trained on the single-prune dataset (§4), as well as scores for a model trained on the CoLA dataset Warstadt et al. (2018). Accuracy and κ are computed with a hard classification threshold of p = 0.5; ROC AUC evaluates ranking. The simplest model uses only language modeling (LM) features. We add dependency type (+ dependencies) and worker ID (+ worker ID) information to this simple model. We also remove dependency information (- dependencies) and worker information (- worker ID) from the full model. The full model achieves the highest test set AUC; we use bootstrap p-values (§5.2) to estimate the probability that the full model’s gain over each smaller AUC score occurs by chance. We also compute κ for each model by calculating the observed and pairwise agreement rates Fleiss (1971) for judgements submitted by the crowdworker and “judgements” submitted by the model. Models which can account for worker effects achieve higher accuracies than the observed agreement rate among workers (0.636), leading to higher κ than for worker–worker pairs.

Edit property features. We also include several features which register properties of an edit, such as features which indicate if an operation removes tokens from the start of the sentence, removes tokens from the end of a sentence, or removes tokens which follow a punctuation mark. We include a feature that indicates if a given operation breaks a collocation (e.g. “He hit a home run”). The appendix details our collocation-detection technique.
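These edit properties are straightforward to compute from token positions. A hypothetical sketch follows (the feature names and punctuation set are our own, and collocation detection is omitted):

```python
# Hypothetical extraction of edit-property features, given the token
# indices removed by a prune operation.

def edit_features(tokens, removed):
    removed = set(removed)
    return {
        "removes_start": 0 in removed,
        "removes_end": (len(tokens) - 1) in removed,
        # does any removed token directly follow a punctuation mark?
        "follows_punct": any(i > 0 and tokens[i - 1] in {",", ";", ":"}
                             for i in removed),
    }

toks = ["Pakistan", "launched", "a", "search", "on", "Tuesday"]
assert edit_features(toks, {4, 5}) == {
    "removes_start": False, "removes_end": True, "follows_punct": False}
```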

Interaction features. Finally, we include seventeen interaction features formed by crossing edit property features with particular dependency types. For instance, we include a feature which records if a prune of a given dependency type removes a token following a punctuation mark.

5.2 Evaluation

We compare our model of individual worker judgements to simpler approaches which use fewer features (Table 3), including an approach which uses only language model information Lau et al. (2017) to predict acceptability. We compute the test set accuracy of each approach in predicting binary judgements from individual workers, which allows for comparison with agreement rates between workers. However, because acceptability is a gradient-based phenomenon (§4), we also evaluate without an explicit decision threshold via the area under the receiver operating characteristic curve (ROC AUC), which measures the extent to which an approach ranks good deletions over bad deletions. ROC AUC thus measures how well predicted probabilities of binary positive judgements correlate with human perceptions of well-formedness. Other work which solicits gradient-based judgements instead of binary judgements evaluates with Pearson correlation Lau et al. (2017); ROC AUC is a close variant of the Kendall ranking correlation (Newson, 2002). Our full model achieves a higher AUC than approaches that remove features from the model. We use bootstrap sampling Berg-Kirkpatrick et al. (2012) to test the significance of AUC gains (Table 3); p-values reflect the probability that the difference in AUC between the full model and the simpler model occurs by chance.
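A sketch of this evaluation machinery: a pairwise ROC AUC and a paired bootstrap test in the spirit of Berg-Kirkpatrick et al. (2012). This is our own simplified formulation, not the authors' code, and the scores in the usage example are invented.

```python
import random

def roc_auc(scores, labels):
    """Probability that a random positive-labeled item outranks a random
    negative-labeled item; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_pvalue(scores_a, scores_b, labels, B=500, seed=0):
    """Paired bootstrap: how often does a resampled AUC gain exceed
    twice the observed gain of system a over system b?"""
    rng = random.Random(seed)
    n = len(labels)
    delta = roc_auc(scores_a, labels) - roc_auc(scores_b, labels)
    exceed = valid = 0
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        la = [labels[i] for i in idx]
        if not 0 < sum(la) < n:
            continue  # skip degenerate resamples with only one class
        valid += 1
        d = (roc_auc([scores_a[i] for i in idx], la)
             - roc_auc([scores_b[i] for i in idx], la))
        if d > 2 * delta:
            exceed += 1
    return exceed / max(valid, 1)

p = bootstrap_pvalue([0.9, 0.8, 0.7, 0.3, 0.2, 0.1],
                     [0.6, 0.2, 0.7, 0.8, 0.1, 0.5],
                     [1, 1, 1, 0, 0, 0], B=200, seed=1)
```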

The probability that two workers, chosen at random, will agree that a given sentence in the test set may be shortened to a given compression is 63.6%. We hypothesize that the full model’s accuracy of 74.2% is higher than the observed agreement rate between workers because the full model is better able to predict whether an individual worker will endorse an individual deletion.

We also compare our full model to a baseline neural acceptability predictor trained on CoLA Warstadt et al. (2018): a corpus of grammatical and ungrammatical sentences drawn from syntax textbooks. Using a pretrained model, we predict the probability that each source sentence s and each compression c is well formed, denoted CoLA(s) and CoLA(c). We use these predictions to define four features: CoLA(c), log CoLA(c), CoLA(c) - CoLA(s), and log CoLA(c) - log CoLA(s). We show the performance of this model in Table 3. CoLA’s performance on extractive compression warrants future examination: large corpora designed for neural methods sometimes contain limitations which are not initially understood Chen et al. (2016); Jia and Liang (2017); Gururangan et al. (2018).

6 Extrinsic Task: Modeling multi-operation compression

In this work, we argue that a single sentence may be compressed in many ways; a “gold standard” compression is just one of the exponentially many possible shortenings of a sentence. Instead of mimicking gold standard training data, we argue that compression systems should support application-specific Brevity constraints and Importance criteria: stock traders and stylists will have different definitions of “important” content in fashion blogs, and compressions for a desktop will be longer than compressions for a phone.

Nevertheless, in many application settings, compression systems will try to show users well-formed shortenings. Thus, in the remainder of this study, we examine how our transition-based sentence compressor (§3), supervised with human acceptability judgements (§5), may be used to provide Acceptability scores which align with human perceptions of well-formedness.121212Early approaches to sentence compression used language models Clarke and Lapata (2008) or corpus statistics Filippova and Strube (2008) to generate “readability” scores. Newer neural approaches Filippova et al. (2015) rely on implicit definitions of well-formedness encoded in training data. Such scores could be used as a component of many different practical sentence compression systems, including a method described in §6.3.

6.1 Acceptability scores

We consider any function which maps a compression to some real number reflecting its well-formedness to be an Acceptability score. In §5, we present a model which attempts to predict which operations on well-formed sentences return reasonable compressions. If we execute a chain of such operations, and assume that each operation’s effect on acceptability is independent, we can model the probability that T prune operations will result in an acceptable compression as the product of the per-operation endorsement probabilities, which is equal to the chance that a person will endorse each of the deletions. We test this model with a function, A_all, that expresses the (log) probability that all operations are acceptable:

A_all(c) = sum_{t=1}^{T} log p̂(y = 1 | x_t)    (2)

where each x_t are features reflecting the nature of the prune operation which shortens s_{t-1} to s_t in the chain of T operations, and where p̂ is the predicted probability of deletion endorsement under our full model. (Because no worker observes the deletion, we do not use the worker ID feature in predicting deletion endorsement.)

The sum of log probabilities in A_all reflects the fact that any operation on a well-formed sentence carries inherent risk: modifying a sentence’s dependency tree may result in a compression which is not acceptable. The more operations executed, the greater the chance of generating a garbled compression. We use this intuition to define a simpler alternative, A_ops = -T, where T is the number of prune operations used to create the compression. We also examine a function A_min, the score of the least acceptable single operation in the chain, which represents our observation that a single operation with a low chance of endorsement will often create a garbled compression. We compare these functions to A_LM, which is equal to the probability of the compression under a language model, normalized by sentence length. (We use the Norm LP formula defined in §5. Language model predictions have been shown to correlate with the well-formedness of a sentence Lau et al. (2017); Kann et al. (2018).) Finally, we test a function A_CoLA, which is equal to the predicted probability of well-formedness of the compression from a pretrained acceptability predictor (§5).
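The candidate scores can be sketched directly from a chain of per-operation endorsement probabilities; the probabilities below are invented and the function names are ours.

```python
import math

# Sketch of the candidate Acceptability scores over a chain of prune
# operations, given the model's predicted endorsement probability for
# each operation.

def acc_all(op_probs):
    """Log probability that every operation is acceptable."""
    return sum(math.log(p) for p in op_probs)

def acc_min(op_probs):
    """Score of the least acceptable single operation."""
    return min(math.log(p) for p in op_probs)

def acc_ops(op_probs):
    """Negated operation count: more prunes, more risk."""
    return -len(op_probs)

probs = [0.9, 0.7, 0.4]  # three prunes; the last one is risky
assert acc_ops(probs) == -3
assert abs(acc_min(probs) - math.log(0.4)) < 1e-12
assert abs(acc_all(probs) - sum(map(math.log, probs))) < 1e-12
```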

6.2 Experiment

To evaluate each Acceptability function, we collect a small dataset of multi-prune compressions.131313Rather than single-prune compressions (§4). We draw 1000 initial candidate sentences from a standard compression corpus Filippova and Altun (2013), and then remove sentences which are shorter than 100 characters to create a sample of 958 sentences. We then compress each of the sentences in the sample by executing prune operations in succession until the character length of the remaining sentence is less than a randomly sampled integer between 50 and 100. This creates an evaluation dataset with a (roughly) uniform distribution by character length.

To generate each compression, we use a sampling method which allows us to explore a wide range of well-formed and garbled shortenings, without generating too many obviously terrible compressions.141414Many of the exponentially many possible compressions of a single sentence will be garbled or nonsensical. Pruning even a single subtree at random from an acceptable sentence (§5) destroys acceptability more than 60% of the time. Concretely, we sample each prune operation in each chain in proportion to p̂, a model’s prediction (§5) that the edit will be judged acceptable. This means that we delete vertex v and its descendants with probability p̂(c_v)/Z, where c_v is the compression formed by pruning the subtree rooted at v and Z is the sum of the endorsement probabilities of all possible compressions.151515In the behavioral sciences, this method of choosing actions is sometimes called probability matching Vulkan (2000).

We show each sentence in the evaluation dataset to 3 annotators, using our acceptability prompt (Figure 2). This creates a final evaluation set consisting of 2,388 judgements of 940 multi-prune compressions, after we implement the judgement filtering process described in the appendix. We compute Fleiss’ κ = 0.099 for the evaluation dataset.

For all defined Acceptability functions, we measure AUC against binary worker deletion endorsements (yes or no judgements) in the evaluation dataset to determine the quality of the ranking produced by each Acceptability function (§5.2). The function which scores a compression by the probability that all of its prune operations are acceptable, integrating information from a language model as well as information about the grammatical details of the process which creates c from s, achieves the highest AUC on the evaluation set, best correlating with human judgements of well-formedness.

Function   Description                  ROC AUC
A_CoLA     CoLA pretrained              .510
A_LM       Language model               .557
A_ops      Number of operations × -1    .580
A_min      Least acceptable operation   .581
A_all      All operations acceptable    .591
Table 4: ROC AUC for several Acceptability functions for the multi-operation compression task. The A_all model achieves a gain of .034 in AUC over the A_LM model.

6.3 One sentence, many compressions

This work argues that there is no single best way to compress a sentence. We demonstrate this idea by examining some of the exponentially many possible compressions of the sentence shown below. (This same sentence is also shown in Table 1.)

Sentence. Pakistan launched a search for its missing ambassador to Afghanistan on Tuesday, a day after he disappeared in a Taliban area.
“Gold” compression. Pakistan launched a search for its missing ambassador
Alternate 1. Pakistan launched a search for its missing ambassador to Afghanistan on Tuesday
Alternate 2. Pakistan launched search Tuesday
Table 5: A sentence and a “gold” compression from a standard corpus Filippova and Altun (2013), along with two alternate compressions (alternates 1 and 2).

We generate an initial list of 1000 possible compressions of this 126-character sentence via the procedure defined in §6.2, and we score the Acceptability of each compression with the best-performing function from §6.2. In this instance, we define Brevity to be the maximum character length of a compression and Importance to be a binary function returning 1 only if the compression includes the query term, “Afghanistan”. (Practical compression systems would also need to check for changes in meaning resulting from deletion (§2), but we leave this step for future work.)
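A hypothetical application-side selection step, combining a hard Brevity constraint and a binary Importance criterion with precomputed Acceptability scores. All candidate strings and scores below are illustrative, not output of the paper's system.

```python
# Filter candidates by a maximum character length and a query term,
# then return the highest-Acceptability survivor.

def best_compression(candidates, max_chars, query):
    """candidates: list of (text, acceptability) pairs."""
    ok = [(score, text) for text, score in candidates
          if len(text) <= max_chars and query in text]
    return max(ok)[1] if ok else None

candidates = [
    ("Pakistan launched a search for its missing ambassador to Afghanistan on Tuesday", -1.367),
    ("Pakistan launched search Tuesday", -6.144),
    ("Pakistan launched a search for its missing ambassador", -0.9),
]

print(best_compression(candidates, 84, "Afghanistan"))
```

With an 84-character budget and the query "Afghanistan", only the first candidate survives the filters; tightening the budget or changing the query selects a different compression (or none), illustrating the application-specific nature of the task.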

Figure 3: 554 possible compressions of a single sentence (Table 5), displayed by Acceptability score, Brevity constraint and Importance criterion. The “gold standard” compression is shown with a large red square, along with alternate 1 (large triangle) and alternate 2 (large circle).

Following deduplication steps described in the appendix, we generate a final list of 554 different possible compressions of the sentence. We plot each of the 554 compressions in Figure 3, which shows many possible shortenings with high scores. The “gold standard” compression is just one arbitrary shortening of a sentence.

7 Conclusion and future work

Our effort suggests areas for future work. To begin, our study is strictly concerned with grammatical and not semantic acceptability. In the future, we hope to apply active work from semantics (§2) to identify meaning-changing compressions. We also plan to add support for additional operations such as paraphrasing or forming compressions from nested subclauses. In addition, we plan to develop improved models of multi-prune compression, and to apply our Acceptability scores in loss functions for neural compression techniques.

8 Appendix

8.1 Crowdsourcing details

We paid workers 5 cents per judgement and only opened our task to US-based workers with a level-2 designation on Figure Eight.

Following standard practice on the crowdsourcing platform, we used test questions to screen out careless workers Snow et al. (2008). All workers began our job with a quiz mode of 10 screener questions, and then saw one screener question in every subsequent page of 10 judgements. Workers who failed more than 80% of test questions were screened out from the task. Screener questions were indistinguishable from our regular collection prompt.

We wrote test screener questions based on established understanding of English syntax, to avoid biasing results with our own subjective judgements of acceptability. For example, linguists have extensively examined which English verbs require objects and which verbs do not require objects via corpus-based, elicitation-based and eye-tracking methods Gahl et al. (2004); Staub et al. (2006). We used this work to write screener questions which check that workers answer no for operations that prune direct objects of obligatory transitive verbs. Similarly, we wrote screener questions which check that workers answer no to deletions which split a verb and a known obligatory particle in a multiword expression Baldwin and Kim (2010), or remove determiners before singular count nouns (Huddleston and Pullum, 2002, p. 354). For the multi-prune dataset (§6.2), we added test questions which confirmed that workers approved of well-formed, gold standard compressions from a standard corpus Filippova and Altun (2013).

We also include screener questions which check if a worker is paying attention, along with several poll questions which ask workers if they grew up speaking English (workers are instructed there is no right answer to questions about language background, so there is no incentive to answer dishonestly). We ignore judgements from known non-native speakers and known inattentive workers in downstream analysis; we also exclude 1663 suspected fraudulent judgements from 17 IP addresses associated with multiple worker IDs. We defined rules for filtering the dataset before examining the test set, to ensure that filtering decisions did not influence test-set results. We release screener questions and task instructions along with crowdsourced data for this work.
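The filtering rules amount to a simple conjunction of exclusion criteria per judgement. A sketch, with invented field names and records:

```python
# Each record is one crowdsourced judgement; fields are illustrative.
judgements = [
    {"worker": "w1", "native": True,  "attentive": True},
    {"worker": "w2", "native": False, "attentive": True},   # non-native speaker
    {"worker": "w3", "native": True,  "attentive": False},  # inattentive
    {"worker": "w4", "native": True,  "attentive": True},   # suspected fraud
]
fraud_workers = {"w4"}  # e.g. worker IDs sharing an IP address

# Keep a judgement only if it fails none of the exclusion criteria.
kept = [
    j for j in judgements
    if j["native"] and j["attentive"] and j["worker"] not in fraud_workers
]
print([j["worker"] for j in kept])
```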

8.2 Per-dependency deletion endorsements

Breaking out worker responses by dependency type provides additional validation for our data collection approach. We observe that workers are unlikely to endorse deletion of dependency types which create compressions that English speakers would deem “ungrammatical,” and likely to endorse deletions which speakers would deem “grammatical.”

For example, in UD, the fixed relation is most commonly used to link two or more function words that obligatorily occur together (e.g. because of, due to, as well as). Since deleting a fixed dependent amounts to suppressing a critical closed-class item, it is not surprising that, overall, workers assented to deleting fixed dependents in only 9.5% of cases. Similarly, the low deletion endorsement for the cop relation (15.6%) is consistent with the grammatical rules of mainstream varieties of American English, which generally require an overt copula in copular constructions (though not all dialects require overt copulas; Green, 2002).

On the other hand, we found that optional pre-conjunction operators such as both or either (Larson, 1985) were almost always considered removable (80.0% deletion endorsement). Workers also endorsed the deletion of temporal adverbs such as tomorrow or the day after next 78.9% of the time, which is sensible as temporal adverbs are typically considered adjuncts.
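The per-dependency breakdown is a straightforward tally of endorsements over judgements, grouped by relation label. A sketch with invented labels and counts (not the paper's data):

```python
from collections import defaultdict

# (relation label, worker endorsed deletion?) pairs; illustrative only.
judgements = [
    ("fixed", False), ("fixed", False), ("fixed", False), ("fixed", True),
    ("advmod", True), ("advmod", True), ("advmod", True), ("advmod", False),
]

tallies = defaultdict(lambda: [0, 0])  # relation -> [endorsed, total]
for rel, endorsed in judgements:
    tallies[rel][0] += int(endorsed)
    tallies[rel][1] += 1

# Deletion endorsement rate per dependency type.
rates = {rel: yes / total for rel, (yes, total) in tallies.items()}
print(rates)
```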

Since these response patterns generally align with well-established grammatical generalizations Huddleston and Pullum (2002), they serve to validate our data collection approach.

8.3 Experimental details

We report additional details regarding several of the experiments in the paper, presented in the order in which experiments appear.

Fleiss’ κ. Fleiss’ original metric Fleiss (1971) assumes that each judged item is judged by exactly the same number of raters. However, our data filtering procedures create a dataset with a variable number of raters per sentence-compression pair. (This is common in crowdsourcing.) We thus calculate the observed agreement rate for an individual item (P_i, in Fleiss’ notation) by computing the pairwise agreement rate among all raters for that item. We ignore cases where only one rater judged a given pair, which occurs for 73.2% of pairs.
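The per-item pairwise agreement rate can be computed as follows (a minimal sketch; the labels shown are invented):

```python
from itertools import combinations

def item_agreement(labels):
    """Pairwise agreement rate among all raters of one item
    (the P_i term in Fleiss' notation); requires >= 2 raters."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Three raters, two of whom agree: 1 of 3 rater pairs match.
print(item_agreement(["yes", "yes", "no"]))
```

Items with a single rater have no rater pairs, which is why such cases are ignored rather than scored.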

Tuning and implementation. We implement our model with scikit-learn Pedregosa et al. (2011) using L2 regularization. We tune the inverse regularization constant to optimize ROC AUC in 5-fold cross-validation over the training set, after testing a grid of candidate values. We do not include a bias term. All other settings are set to default values.
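A minimal sketch of this tuning setup on synthetic data (the grid shown is illustrative, not the one used in our experiments):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data standing in for our features.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)

grid = {"C": [0.01, 0.1, 1.0, 10.0]}  # candidate inverse regularization constants
search = GridSearchCV(
    # L2 penalty, no bias term, as in the setup described above.
    LogisticRegression(penalty="l2", fit_intercept=False, solver="liblinear"),
    grid,
    scoring="roc_auc",
    cv=5,
)
search.fit(X, y)
print(search.best_params_["C"])
```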

NormLP. Following Lau et al. (2017), we use the Norm LP function to normalize output from a language model to predict grammaticality. Our Norm LP function uses predictions from a 3-gram language model trained on English Gigaword Parker et al. (2011) and implemented with KenLM Heafield (2011). Lau et al. (2017) report identical performance for the Norm LP function using 3-gram and 4-gram models.
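Concretely, Norm LP as defined by Lau et al. (2017) divides the negated model log probability of a sentence by its unigram log probability. A sketch (the log probabilities below are invented; in our experiments they come from the KenLM 3-gram model):

```python
def norm_lp(logprob_model, logprob_unigram):
    """Norm LP (Lau et al., 2017): -log P_model(s) / log P_unigram(s).
    Both arguments are (negative) log probabilities of the same sentence."""
    return -logprob_model / logprob_unigram

# Hypothetical scores: a sentence the model finds more probable than
# its unigrams alone receives a higher (less negative) Norm LP.
print(norm_lp(-20.0, -30.0))
```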

Lau et al. (2017) found that another function, SLOR, performed as well as Norm LP in predicting human judgements. We found that Norm LP achieved higher AUC than SLOR in 5-fold cross-validation experiments with the training set. (Kann et al. (2018) also examine SLOR for automatic fluency evaluation.)

Collocations. Our model includes a binary feature indicating whether an edit breaks a collocation. We identify collocations by computing the offsets (signed token distances) between words (Manning and Schütze, 1999, ch. 5.2) in English Gigaword Parker et al. (2011). If the variance in token distance between two words is less than 2 and the mean token distance between the words is less than 1.5, we deem the words a collocation. We identify 647 total edits (across train and test sets) which break a collocation; only 11 of these edits are for fixed relations. Examples include: “forget about it”, “kind of” and “as well”.
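This offset-based test can be sketched as follows. The thresholds follow the criterion above; the offset lists are invented, and taking the absolute mean is one reading of the "mean token distance" criterion:

```python
from statistics import mean, pvariance

def is_collocation(offsets, var_max=2.0, mean_max=1.5):
    """Offset-based collocation test: the signed token distances between
    two words must have low variance and a small mean magnitude."""
    return pvariance(offsets) < var_max and abs(mean(offsets)) < mean_max

# "kind of": "of" almost always directly follows "kind" (offset +1),
# so the offsets are tightly clustered -> collocation.
print(is_collocation([1, 1, 1, 1, 2]))

# Two words that co-occur at scattered distances -> not a collocation.
print(is_collocation([1, 5, -3, 8]))
```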

CoLA. All reported results for the CoLA model use the Real/Fake + LM (ELMo) baseline from Warstadt et al. (2018) (https://github.com/nyu-mll/CoLA-baselines). Across our entire dataset, the mean predicted acceptability of source sentences from the CoLA model is 0.867 (σ = 0.264), and the mean predicted acceptability of compressions is 0.740 (σ = 0.363). We hypothesize that compression scores have greater variance and a lower mean because only some compressions are well-formed.

Deduplication of possible compressions. In this work, we describe a method for generating multiple compressions from a single sentence. In our generation procedure, it is possible to randomly select the exact same sequence of operations multiple times. During these experiments, we remove any such duplicates from the initial list.

Additionally, in our compression framework, the sequence of operations which produces a given shortening is not unique. (For instance, it is possible to prune a leaf vertex with one operation and then prune its parent vertex with a second operation, or to remove both vertices at once via a single prune of the parent; each sequence returns the same compression.) In cases where different sequences of operations return the same compression, we keep the score of the highest-scoring sequence, which represents the best available path to the shortening.
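The deduplication step reduces to keeping, for each distinct compression string, the maximum score over all operation sequences that produce it. A sketch with invented candidates and scores:

```python
def deduplicate(candidates):
    """Map each distinct compression string to the score of its
    highest-scoring operation sequence (its best available path)."""
    best = {}
    for text, score in candidates:
        if text not in best or score > best[text]:
            best[text] = score
    return best

# Two different operation sequences yield the same string; scores invented.
paths = [
    ("Pakistan launched a search", 0.8),  # prune leaf, then parent
    ("Pakistan launched a search", 0.9),  # single prune of the parent
    ("Pakistan launched search Tuesday", 0.4),
]
print(deduplicate(paths))
```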

References

  • Andor et al. (2016) Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In ACL.
  • Bader and Häussler (2010) Markus Bader and Jana Häussler. 2010. Toward a model of grammaticality judgments. Journal of Linguistics, 46(2):273–330.
  • Baldwin and Kim (2010) Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. In Handbook of Natural Language Processing.
  • Berg-Kirkpatrick et al. (2012) Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
  • Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP.
  • Chen et al. (2016) Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In ACL.
  • Chomsky (1965) Noam Chomsky. 1965. Aspects of the theory of syntax. M.I.T. Press, Cambridge.
  • Clarke and Lapata (2008) James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31:399–429.
  • Cohn and Lapata (2008) Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1.
  • Filippova et al. (2015) Katja Filippova, Enrique Alfonseca, Carlos A Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with LSTMs. In EMNLP.
  • Filippova and Altun (2013) Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In EMNLP.
  • Filippova and Strube (2008) Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In Proceedings of the Fifth International Natural Language Generation Conference.
  • Fleiss (1971) Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.
  • Gahl et al. (2004) Susanne Gahl, Dan Jurafsky, and Douglas Roland. 2004. Verb subcategorization frequencies: American English corpus data, methodological studies, and cross-corpus comparisons. Behavior Research Methods, Instruments, & Computers, 36(3):432–443.
  • Green (2002) Lisa Green. 2002. African American English : A linguistic introduction. Cambridge University Press, Cambridge, U.K. New York.
  • Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In NAACL.
  • Heafield (2011) Kenneth Heafield. 2011. KenLM: faster and smaller language model queries. In EMNLP Sixth Workshop on Statistical Machine Translation, Edinburgh, Scotland, United Kingdom.
  • Heilman et al. (2014) Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In ACL.
  • Huddleston and Pullum (2002) Rodney Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press.
  • Jia and Liang (2017) Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP, Copenhagen, Denmark.
  • Jing and McKeown (2000) Hongyan Jing and Kathleen R McKeown. 2000. Cut and paste based text summarization. In NAACL.
  • Kann et al. (2018) Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-level fluency evaluation: References help, but can be spared! In CoNLL 2018.
  • Knight and Marcu (2000) Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization - step one: Sentence compression. In AAAI/IAAI.
  • Langsford et al. (2018) Steven Langsford, Amy Perfors, Andrew T Hendrickson, Lauren A Kennedy, and Danielle J Navarro. 2018. Quantifying sentence acceptability measures: Reliability, bias, and variability. Glossa: a journal of general linguistics, 3(1).
  • Larson (1985) Richard K Larson. 1985. On the syntax of disjunction scope. Natural Language & Linguistic Theory, 3(2):217–264.
  • Lau et al. (2017) Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41(5):1202–1241.
  • Macmillan and Creelman (1990) Neil A Macmillan and C Douglas Creelman. 1990. Response bias: Characteristics of detection theory, threshold theory, and "nonparametric" indexes. Psychological Bulletin, 107(3):401.
  • Macmillan and Creelman (2004) Neil A. Macmillan and C. Douglas Creelman. 2004. Detection theory: A user’s guide. Psychology Press.
  • Mallinson et al. (2018) Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2018. Sentence Compression for Arbitrary Languages via Multilingual Pivoting. In EMNLP 2018.
  • Manning and Schütze (1999) Christopher Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, Mass.
  • Manning et al. (2014) Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations.
  • McCoy and Linzen (2018) R. Thomas McCoy and Tal Linzen. 2018. Non-entailed subsequences as a challenge for natural language inference. CoRR. Version 1.
  • Napoles et al. (2011) Courtney Napoles, Benjamin Van Durme, and Chris Callison-Burch. 2011. Evaluating sentence compression: Pitfalls and suggested remedies. In Proceedings of the Workshop on Monolingual Text-To-Text Generation.
  • Newson (2002) Roger Newson. 2002. Parameters behind “nonparametric” statistics: Kendall’s tau, Somers’ D and median differences. The Stata Journal, 2(1):45–64.
  • Nivre et al. (2016) Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In LREC.
  • Parker et al. (2011) Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword fifth edition.
  • Pavlick and Callison-Burch (2016) Ellie Pavlick and Chris Callison-Burch. 2016. So-called non-subsective adjectives. In *SEM.
  • Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12:2825–2830.
  • Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In EMNLP.
  • Schütze (1996) Carson Schütze. 1996. The empirical base of linguistics: Grammaticality judgments and linguistic methodology. University of Chicago Press, Chicago, Il.
  • Snow et al. (2008) Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast—but is it good?: Evaluating non-expert annotations for natural language tasks. In EMNLP.
  • Sprouse and Schütze (2014) Jon Sprouse and Carson Schütze. 2014. Research Methods in Linguistics, chapter Judgment Data. Cambridge University Press, Cambridge, UK.
  • Sprouse (2011) Jon Sprouse. 2011. A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory. Behavior research methods, 43(1):155–167.
  • Sprouse and Almeida (2017) Jon Sprouse and Diogo Almeida. 2017. Design sensitivity and statistical power in acceptability judgment experiments. Glossa, 2(1):1.
  • Staub et al. (2006) Adrian Staub, Charles Clifton, and Lyn Frazier. 2006. Heavy np shift is the parser’s last resort: Evidence from eye movements. Journal of memory and language, 54(3):389–406.
  • Vulkan (2000) Nir Vulkan. 2000. An economist’s perspective on probability matching. Journal of economic surveys, 14(1):101–118.
  • Wang et al. (2017) Liangguo Wang, Jing Jiang, Hai Leong Chieu, Chen Hui Ong, Dandan Song, and Lejian Liao. 2017. Can syntax help? Improving an LSTM-based sentence compression model for new domains. In ACL.
  • Warstadt et al. (2018) Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2018. Neural network acceptability judgments. CoRR, abs/1805.12471v1.
  • Xu et al. (2015) Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. TACL, 3:283–297.
  • Zheleva (2017) Elena Zheleva. 2017. Vox media dataset. In KDD DS + J Workshop, Halifax, Canada.
  • Zhu et al. (2010) Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In COLING.