Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality

01/02/2018, by Reagan Mozer, et al.

How should one perform matching in observational studies when the units are text documents? The lack of randomized assignment of documents into treatment and control groups may lead to systematic differences between groups on high-dimensional and latent features of text such as topical content and sentiment. Standard balance metrics, used to measure the quality of a matching method, fail in this setting. We decompose text matching methods into two parts: (1) a text representation, and (2) a distance metric, and present a framework for measuring the quality of text matches experimentally using human subjects. We consider 28 potential methods, and find that representing text as term vectors and matching on cosine distance significantly outperform alternative representations and distance metrics. We apply our chosen method to a substantive debate in the study of media bias using a novel data set of front page news articles from thirteen news sources. Media bias is composed of topic selection bias and presentation bias; using our matching method to control for topic selection, we find that both components contribute significantly to media bias, though some news sources rely on one component more than the other.


1 Introduction

Recently, Roberts et al. (2018) introduced an approach for matching text documents in order to obtain causal estimates of substantive and policy-relevant quantities of interest. Matching is a statistical tool primarily used to facilitate estimation of treatment effects from observational data in the presence of confounding covariates (Rubin, 1973b; Rosenbaum, 2002; Rubin, 2006; Stuart, 2010). The principles behind matching can also be used to create sharp, targeted comparisons of units in order to, for example, create more principled rankings of hospitals (Silber et al., 2014). The core idea of matching is to find sets of units from distinct populations that are in all ways similar, other than some specific aspects of interest; one can then compare these remaining aspects across the populations of interest to ascertain differences foundational to these populations. In short, matching provides a strategy for making precise comparisons and performing principled investigations in observational studies.

Though widely used in practice, matching is typically applied in settings where both the covariates and outcomes are well-defined, low-dimensional quantities. Text is not such a setting. With text, standard contrasts of outcomes between groups may be distorted estimates of the contrasts of interest due to confounding by high-dimensional and possibly latent features of the text such as topical content or overall sentiment. How to best capture and adjust for these features is the core concern of this work. This article expands upon the work of Roberts et al. (2018) and explores best practices for the problem of matching documents within a corpus made up of distinct groups (e.g., a treatment and control group), where interest is in finding a collection of document pairs that are fundamentally “the same” along key dimensions of interest (in our first application, for example, we find newspaper articles that are about the same topics and stories). These matched documents can then be compared with respect to other aspects, either separate from the text, such as number of citations or online views, or features of the text itself, such as sentiment. In the case where the groups could be thought of as different versions of a treatment (e.g., documents that were censored vs. not, such as in Roberts et al., 2018), this allows for obtaining estimates of causal effects. In our first application, we use template matching (Silber et al., 2014) to compare news media organizations’ biases, beyond choices of which stories to cover, to engage with a running debate on partisan bias in the news media. Through template matching, we are able to identify a similar sample of news articles from each news source that allows for a more principled (though not necessarily causal) investigation of how news sources may differ systematically in terms of partisan favorability. We then illustrate the utility of text matching in a more traditional causal inference setting, namely, in an observational study evaluating the causal effects of a binary treatment. In our second application, we demonstrate how text data obtained from doctors' notes can be used to improve the covariate balance achieved between treatment and control groups and to strengthen the key assumptions required to make valid causal inferences in non-randomized studies.

This paper makes three contributions to guide researchers interested in this domain. Our first contribution is a structured deconstruction and discussion of the elements that constitute text matching. This formulation identifies a series of choices a researcher can make when performing text matching and presents an approach for thinking about how matching can be used in studies where the covariates, the outcome of interest, or both are defined by summary measures of text. In particular, we describe how the challenges to matching that arise from the unstructured and high-dimensional nature of text data can be addressed using two key steps: 1) imposing a finite, quantitative structure on the corpus (hereinafter referred to as a “representation”), and 2) constructing a measure of distance between documents based on this representation (a “distance metric”). The text representation operationalizes unstructured text documents as covariate vectors, and the distance metric captures the high-dimensional proximity of pairs of these covariate vectors. In this paper, we consider text representations constructed from n-gram vectors, topic loadings estimated using statistical topic models such as the Latent Dirichlet Allocation (LDA) model (Blei et al., 2003) or the Structural Topic Model (STM) (Roberts et al., 2016), propensity scores, and neural network-derived word embeddings; we also consider distance metrics based on the Euclidean, Mahalanobis, and cosine distances, as well as exact and coarsened exact distances, exposing important theoretical and practical limitations associated with each of these choices.

With a representation and a distance metric in hand, one can perform matching using existing procedures such as nearest neighbor matching (Rubin, 1973a), propensity score matching (PSM, Rosenbaum and Rubin, 1983), pair matching, coarsened exact matching (CEM, Iacus et al., 2012), or full matching (Rosenbaum, 1991) to identify subsamples of text documents where the comparison groups are more balanced (i.e., similar on baseline measures) than in the original full sample. Although implementation of these procedures is relatively straightforward, selecting a single representation and distance metric from among the many available options requires a number of important design decisions by the researcher. Our second contribution is to investigate these choices using a systematic multifactor human evaluation experiment to examine how different representations and distance metrics correspond to human judgment about document similarity. Our experiment explores the efficiency of each combination of choices for matching documents in order to identify the representations and distance metrics that dominate in terms of producing the largest number of matches for a given dataset without sacrificing match quality. Using the human ratings of match quality from our evaluation experiment as training data, we also develop a predictive model for estimating the quality of a given pair of matched documents based on machine measures of similarity. We then show that this model can be used to evaluate the performance of entirely new matching procedures.

The results from our human evaluation experiment suggest that for our corpus and application in identifying closely related news articles, cosine distance, a distance metric not commonly used in the matching literature but widely used in the information retrieval literature, dominates other metrics for predicting human judgment of match quality, especially when used in conjunction with a raw term-document matrix representation. Using these findings, our third contribution is twofold. First, we present a novel application of template matching (Silber et al., 2014) to the study of media bias, an increasingly timely debate. In particular, we decompose the partisan bias of news media organizations into topic selection bias (i.e., partisan bias in the types of newspaper articles that are published) and lexical presentation bias (i.e., partisan sentiment and language used within the articles). By first obtaining a representative sample of newspaper articles across thirteen news sources, and then using cosine matching to obtain approximately equivalent collections of articles from each of the individual news sources, we identify systematic differences between news media organizations in the partisan sentiment used when presenting similar content, while controlling for differences in the types of content that these sources tend to cover. Next, we apply text matching to make more precise inferences about the causal effects of a medical intervention in an observational study. A strategy commonly used in this setting is to match treated and control patients using observed quantitative covariates; however, this strategy will be insufficient if there are any confounding variables that are not observed. We illustrate this point by considering a study where doctors' notes may contain information about important confounding variables that are not captured by the quantitative covariates. We show that by matching patients both on observed covariates and features of the text, we can achieve better covariate balance between treatment and control groups and strengthen the assumption of no unmeasured confounding that is required to make valid causal claims about the treatment effect.

Our work builds on Roberts et al. (2018), the seminal paper in this literature, which introduces text matching and operationalizes the text data by using topic modeling coupled with propensity scores to generate a lower-dimensional representation of text to match on. They also present several applications that motivate the use of text matching to address confounding and describe several of the methodological challenges for matching that arise in these settings. Specifically, Roberts et al. (2018) discuss the limitations of direct propensity score matching and coarsened exact matching (CEM) on the raw text for matching with high-dimensional data and introduce Topical Inverse Regression Matching (TIRM), which uses structural topic modeling (STM) (Roberts et al., 2016) to generate a low-dimensional representation of a corpus and then applies CEM to generate matched samples of documents from distinct groups within the corpus. We extend this work by developing a general framework for both constructing and evaluating text matching methods. This allows us to consider a number of alternative matching methods not considered in Roberts et al. (2018), each characterized by one representation of the corpus and one distance metric. Within this framework, we also present a systematic approach for comparing different matching methods through our evaluation experiment, which identifies methods that can produce more matches and/or matches of higher quality than those produced by TIRM. Overall, we clarify that there is a tradeoff between match quality and the number of matches, although many methods perform well on neither dimension.

2 Background

2.1 Notation and problem setup

Consider a collection of N text documents, indexed by i = 1, …, N, where each document contains a sequence of terms. These documents could be any of a number of forms, such as news articles posted online, blog posts, or entire books, and each document in the dataset need not be of the same form. Together, these documents comprise a corpus, and the set of unique terms used across the corpus defines the vocabulary. Each term in the vocabulary is typically a unique, lowercase, alphanumeric token (i.e., a word, number, or punctuation mark), though the exact specification of terms may depend on design decisions by the analyst (e.g., one may choose to include as terms in the vocabulary all bigrams observed in the corpus in addition to all observed unigrams). Because the vocabulary is not fixed, documents are generally regarded as “unstructured” data in the sense that their dimension is not well-defined. To address this issue, we impose structure on the text through a representation, which maps each document to a finite, usually high-dimensional, quantitative space.

To make principled comparisons between groups of documents within the corpus, we borrow from the notation and principles of the Rubin Causal Model (RCM) (Holland, 1986). Under the RCM, each document i has an indicator for treatment assignment (i.e., group membership), T_i, which equals 1 for documents in the treatment group and 0 for documents in the control group. Interest focuses on estimating differences between these groups on an outcome variable, which takes value Y_i(1) if document i is in the treatment group and Y_i(0) if document i is in the control group. These outcomes may be separate from the text within each document (e.g., the number of times a document has been viewed online) or may be a feature of the text (e.g., the length of the document or level of positive sentiment within the document). (In the latter case, care must be taken to ensure the features of the representation used to define the covariates are suitably separated from features that define the potential outcomes; this issue is discussed further in Section 3 and in Appendix A.4.) For credible and precise causal inference, it is desirable to compare treated and control documents that are as similar as possible. However, in observational studies, treatment may not be randomly assigned, leading to systematic differences between treatment and control groups. Matching is a strategy that attempts to address this issue by identifying samples of treated and control documents that are comparable on covariates X_i in order to approximate random assignment of T_i (i.e., to satisfy (Y_i(0), Y_i(1)) ⊥ T_i | X_i) (Rosenbaum, 2002; Rubin, 2006). Under the key assumption of “selection on observables,” which states that all covariates that affect both treatment assignment and potential outcomes are observed and captured within X_i, comparisons of outcomes between matched samples can be used to obtain unbiased estimates of the quantities of interest (Rosenbaum, 2002).

Matching is usually conceived of as a method for pre-processing data in order to obtain causal estimates in an observational study context. The goal of performing matching in non-randomized studies is to identify a subset of the original data among which the treatment and control groups are similar enough to be considered “as-if” randomized. In studies with a clearly defined intervention, comparisons of these matched samples can then be used to make inferences about the causal effects of assignment to treatment versus control. For example, in our second application examining the effects of a medical intervention, matching allows us to identify a sample of treated and control units who are similar enough on pre-treatment variables such that any differences in outcomes between these groups can be plausibly attributed to the treatment. These tools can be used more broadly, however, to produce clearly defined comparisons of groups of units even when a particular intervention is not well-defined. For example, Silber et al. (2014) introduce template matching as a tool for comparing multiple hospitals that potentially serve different mixes of patients (e.g., some hospitals have a higher share of high-risk patients). The core idea is to compare like with like: by comparing hospitals along an effective “score card” of patients, we can see which hospitals are more effective, on average, given a canonical population. We focus on this broader conception of matching, recognizing that often with text there is no treatment that could, even in concept, be randomized. For example, a comparison of style between men and women could not easily be construed as a causal impact. Nevertheless, the framing and targeting of a controlled comparison inherent in a causal inference approach can still be useful in these contexts. This broader formulation of matching is used in our first application in Section 5 investigating different aspects of bias in newspaper media.

2.2 Promises and pitfalls of text matching

Matching methods generally consist of four steps: 1) define a measure of distance (or similarity) to determine whether one unit is a good match for another, 2) match units systematically across groups according to the chosen distance metric, 3) evaluate the quality of these matched samples in terms of their balance on observed covariates, possibly repeating the matching procedure until suitable balance is achieved, and 4) estimate treatment effects from these matched data (Stuart, 2010). Different choices at each step of this process produce an expansive range of possible configurations. For instance, there are distance metrics for scalar covariates (Rubin, 1973b), for multivariate covariates summarized through a univariate propensity score (Rosenbaum and Rubin, 1983, 1985), and for multivariate covariates compared directly using metrics such as the Mahalanobis distance (Rubin, 1978; Gu and Rosenbaum, 1993).

Similarly, there is a large and diverse literature on matching procedures (Rosenbaum, 2002; Rubin, 2006), and the choice of procedure depends on both substantive and methodological concerns. Some procedures match each unit in the treatment group to its one “closest” control unit and discard all unused controls (e.g., one-to-one matching with replacement), while other procedures allow treated units to be matched to multiple controls (e.g., ratio matching; Smith, 1997) and/or matching without replacement (e.g., optimal matching; Rosenbaum, 1989). Match quality is often evaluated with a number of diagnostics that formalize the notion of covariate balance, such as the standardized differences in means of each covariate (Rosenbaum and Rubin, 1985). Unfortunately, determinations of what constitutes “suitable” balance or match quality are often based on arbitrary criteria (Imai et al., 2008; Austin, 2009), and assessing whether a matching procedure has been successful can be quite difficult. That being said, once a suitable set of matches is obtained, one can then typically analyze the resulting matched data using classic methods appropriate for the type of data at hand. Stuart (2010) outlines a number of common analytical approaches.

The rich and high-dimensional nature of text data gives rise to a number of unique challenges for matching documents using the standard approach described above. From a causal inference perspective, many text corpora will exhibit a substantial lack of overlap, i.e., types of documents in one group that simply do not exist in other groups. This lack of overlap is exacerbated by the high-dimensional aspect of text: the richer the representation of text, the harder it is to find documents similar to a target document (D’Amour et al., 2017). This makes the many design decisions required to operationalize text for matching, such as defining a distance metric and implementing a matching procedure, especially challenging. Distance metrics must be defined over sparse, high-dimensional representations of text in a manner that captures the subtleties of language. If these representations are overly flexible, standard matching procedures can fail to identify good (or any) matches in this setting due to the curse of dimensionality.

Lack of overlap can come both from substantive differences (the documents are inherently different) and from aspects of the text representation that are not substantive (akin to overfitting the representation model). Under this view, all of the matching procedures discussed in this work can be thought of as carving out as many high quality matches as they can find, implicitly setting parts of the corpus aside in order to have good comparisons across groups. This is in effect isolating (Zubizarreta et al., 2014) a focused comparison within a larger context. In a causal context, this can shift the implied estimand of interest to only those units in the overlap region. For further discussion of the approaches commonly used to address overlap issues, see, for example, Fogarty et al. (2016); Dehejia and Wahba (2002); Stuart (2010).

In addition to these difficulties, the rich nature of text data also provides an opportunity in that it lends itself to more straightforward, intuitive assessments of match quality than are typically possible with quantitative data. Specifically, while it is difficult to interpret the quality of a matched pair of units using numerical diagnostics alone when the covariates are high-dimensional, the quality of a matched pair of text documents is generally intuitive to conceptualize. With text data, human readers can quickly synthesize the vast amount of information contained within the text and quantify match quality in a way that is directly interpretable. Thus, when performing matching with text data, final match quality can be established in a manner that aligns with human judgment about document similarity. This is a version of “thick description,” discussed in Rosenbaum (2010, pg. 322). This also allows for comparing different matching methods to each other in order to find methods that, potentially by using sparser representations of text or more structured distance measures, can simultaneously find more matched documents while maintaining a high degree of match quality.

3 A framework for matching with text data

When performing matching, different choices at each step of the process will typically interact in ways that affect both the quantity and quality of matches obtained. This can lead to different substantive inferences about the causal effects of interest. Therefore, it is important to consider the combination of choices as a whole in any application of matching. Although some guidelines and conventional wisdom have been developed to help researchers navigate these decisions, no best practices have yet been identified in general, let alone in settings with text data, where, in addition to the usual choices for matching, researchers must also consider how to operationalize the data. We extend the classic matching framework to accommodate text documents by first identifying an appropriate low-dimensional, quantitative representation of the corpus, then applying the usual steps for matching using this representation. Our framework applies in settings where summary measures of text are used to define the confounding covariates, the outcomes, or both. In particular, to match documents based on aspects of text, we propose the following procedure:

  1. Choose a representation of the text and define explicitly the features that will be considered covariates and those, if any, that will be considered outcomes, based on this representation. (There are additional steps required when both the covariates and the outcome are characterized by text; see Appendix A.4.)

  2. Define a distance metric to measure the similarity of two documents based on their generated covariate values.

  3. Implement a matching procedure to generate a matched sample of documents.

  4. Evaluate match quality across the matched documents, and potentially repeat Steps 1-3 until consistently high quality matches are achieved.

  5. Estimate the causal effects of interest using the final set of matched documents.

In the subsections below, we briefly outline the choices available in steps 1-3 of the above procedure. These should be familiar to those with experience in standard matching, as many of the choices are directly parallel to a standard matching procedure. Next, in Section 4, we present an approach for step 4 based on a human evaluation experiment. For more thorough and mathematically precise descriptions of these various methods, see Appendix A.

3.1 Text representations

The representation of a text document transforms an ordered list of words and punctuation into a vector of covariates, and is the most novel component required for matching with text. The most common text representation is a “bag-of-words,” containing unigrams and often bigrams, collated into a term-document matrix (TDM); the TDM may also be rescaled according to Term Frequency-Inverse Document Frequency (TF-IDF) weighting. Without additional processing, however, these vectors are typically very long; more parsimonious representations involve calculating a document’s factor loadings from unsupervised learning methods like factor analysis or Structural Topic Models (STM) (Roberts et al., 2016), or calculating a scalar propensity score for each document using the bag-of-words representation (Taddy, 2013). Finally, we also consider a Word2Vec representation (Mikolov et al., 2013), in which a neural network embeds words in a lower-dimensional space and a document’s representation is the weighted average of its word embeddings.

Each of these methods involves a number of tuning parameters. When using the bag-of-words representation, researchers often remove very common and very rare words at arbitrary thresholds, as these add little predictive power, or choose to weight terms by their inverse document frequency; these preprocessing decisions can be very important (Denny and Spirling, 2018). Topic models such as the STM are similarly sensitive to these preprocessing decisions (Fan et al., 2017) and also require prespecifying the number of topics and selecting covariates; the estimated topics themselves are often unstable. Word2Vec values depend on the dimensionality of the word vectors as well as the training data and the architecture of the neural network.
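To make these choices concrete, the sketch below builds a bounded term-document matrix and a TF-IDF-weighted variant with scikit-learn. It is a minimal illustration only: the toy documents, the unigram-plus-bigram setting, and the min_df/max_df bounds are our own assumptions rather than the exact preprocessing used in the paper.

```python
# Minimal sketch: bag-of-words and TF-IDF representations of a small corpus.
# The documents and the vocabulary bounds (min_df / max_df) are illustrative.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "officials continue the plane crash investigation off the coast",
    "plane crash investigation continues as officials search the coast",
    "senate debates a new public policy proposal on health care",
]

# Raw term-document matrix over unigrams and bigrams, dropping very rare
# and very common terms via frequency bounds.
tdm_vec = CountVectorizer(ngram_range=(1, 2), min_df=1, max_df=0.9)
tdm = tdm_vec.fit_transform(docs)            # sparse, documents x terms

# The same representation rescaled with TF-IDF weighting.
tfidf_vec = TfidfVectorizer(ngram_range=(1, 2), min_df=1, max_df=0.9)
tfidf = tfidf_vec.fit_transform(docs)

print(tdm.shape, tfidf.shape)
```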

3.2 Distance metrics

Having converted the corpus into covariate representations, the second challenge is comparing any two documents under the chosen representation to produce a measure of distance. The two main categories of distance metrics are exact (or coarsened exact) distances and continuous distances. Exact distances consider whether or not the documents are identical in their representation; if so, the documents are a match. Coarsened exact distance bins each variable in the representation, then identifies pairs of documents that share the same bins. If the representation in question is based on a TDM, these methods are likely to find only a small number of high quality matches, given the large number of covariates that all need to agree either exactly or within a bin. The alternatives to exact distance metrics are continuous distance metrics such as Euclidean distance, Mahalanobis distance, and cosine distance. In contrast to exact and coarsened exact metrics, which identify matches directly, these metrics produce scalar values capturing the similarity between two documents.
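As an illustration of the continuous metrics, the sketch below computes pairwise cosine and Euclidean distances over a small TF-IDF representation. The toy documents are assumptions, and any of the representations from Section 3.1 could be substituted for X.

```python
# Minimal sketch: continuous distances between documents under a chosen
# representation (here, illustrative TF-IDF vectors).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances, euclidean_distances

docs = [
    "officials continue the plane crash investigation",
    "plane crash investigation continues, officials say",
    "senate debates a new public policy proposal",
]
X = TfidfVectorizer().fit_transform(docs).toarray()

cos_d = cosine_distances(X)      # in [0, 1] for nonnegative term vectors
euc_d = euclidean_distances(X)

# Mahalanobis distance requires an invertible covariance matrix, which a
# sparse, high-dimensional TDM rarely yields; it is usually computed on a
# lower-dimensional representation such as topic loadings instead.
print(np.round(cos_d, 2))
print(np.round(euc_d, 2))
```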

3.3 Matching procedures

After choosing a representation and a distance metric, the choice of matching procedure often follows naturally, as is the case in standard matching analyses. Exact and coarsened exact distance metrics provide their own matching procedure, while continuous distance metrics require both a distance formula and a caliper specifying the maximum allowable distance at which two documents may be said to match. The caliper may be at odds with the desired number of matches, as some treated units may have no control units within the chosen caliper and may subsequently be “pruned” by many common matching procedures. Alternatively, researchers may allow any one treated unit to match multiple controls, or may choose a greedy matching algorithm, but these decisions are largely contextual and depend on the causal quantities under investigation.
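A minimal sketch of one caliper-based matching step is given below, assuming a simulated treated-by-control distance matrix. It uses greedy one-to-one matching without replacement, which is only one of the procedures mentioned above, and sets the caliper as a low quantile of the pairwise distances in the spirit of the implementation described in Section 4.

```python
# Minimal sketch: greedy one-to-one matching within a caliper.
# `dist` (treated x control) is simulated; the quantile-based caliper and
# the greedy procedure are illustrative choices, not the paper's method.
import numpy as np

def caliper_match(dist, caliper_quantile=0.001):
    caliper = np.quantile(dist, caliper_quantile)
    matches, used_controls = [], set()
    # Visit treated units in order of their best available distance.
    for t in np.argsort(dist.min(axis=1)):
        for c in np.argsort(dist[t]):
            if dist[t, c] > caliper:
                break                     # nothing within the caliper remains
            if c not in used_controls:
                matches.append((int(t), int(c), float(dist[t, c])))
                used_controls.add(c)
                break
    return caliper, matches

rng = np.random.default_rng(0)
dist = rng.random((100, 80))              # simulated pairwise distances
caliper, matches = caliper_match(dist, caliper_quantile=0.01)
print(len(matches), "matched pairs within caliper", round(float(caliper), 3))
```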

4 Experimental evaluation of text matching methods

In the previous section, we presented different forms of representations for text data and described a number of different metrics for defining distance using each type of representation. Any combination of these options could be used to perform matching. However, the quantity and quality of matches obtained depend heavily on the chosen representation and distance metric. For example, using a small caliper might lead to only a small number of nearly-exact matches, while a larger caliper might identify more matches at the expense of overall match quality. Alternatively, if CEM on an STM-based representation produces a large number of low-quality matches, applying the same procedure on a TDM-based representation may produce a smaller number of matches with more apparent similarities.

We investigate how this quantity versus quality trade-off manifests across different combinations of methods through an evaluation experiment performed with human subjects. Applying several variants of the matching procedure described in Section 3 to a common corpus, we explore how the quantity of matched pairs produced varies with different specifications of the representation and distance metric. Then, to evaluate how these choices affect the quality of matched pairs, we rely on evaluations of human coders.

In this study, we consider five distance metrics (Euclidean distance, Mahalanobis distance, cosine distance, distance in estimated propensity score, and coarsened exact distance), as well as 26 unique representations, including nine different TDM-based representations, 12 different STM-based representations, and five word embedding-based representations. Crossing these two factors produces 130 combinations, where each combination corresponds to a unique specification of the matching procedure described in Section 3. Among these combinations, 5 specifications are variants of the TIRM procedure developed in Roberts et al. (2018). Specifications of each of the procedures are provided in Appendix B.

To compare the different choices of representation and distance metric considered here, we apply each combination to a common corpus to produce a set of matched pairs corresponding to each of the 130 different specifications. Here, we use a corpus of news articles published from January 20, 2014 to May 9, 2015, representing the daily front matter content for each of two online news sources: Fox News (1,796 articles) and CNN (1,565 articles). The news source labels were used as the treatment indicator, with T_i = 1 for articles published by Fox News and T_i = 0 for articles published by CNN. To implement this matching procedure, we first calculate the distances between all possible pairs of treated and control units based on the specified representation and distance metric. Each treated unit is then matched to all control units whose distance from it falls within the specified caliper. (For each of the combinations that did not use the CEM metric, the caliper was calculated as the 0.1th quantile of the distribution of distances under that combination for all 1,796 × 1,565 = 2,810,740 possible pairs of articles.) Using this procedure, 13 of the original 130 specifications considered did not identify any matched pairs. The union of matched pairs identified by the remaining 117 procedures resulted in a final sample of 58,737 pairs of matched articles, with a total of 30,647 unique pairs.

Each procedure identified between 41 and 1,605 total pairs of matched articles, with an average of 502 pairs produced per matching procedure. These pairs covered between 69 and 2,942 unique articles within the corpus. Specifically, each procedure identified one or more matches for between 34 (2%) and 1,566 (87%) of the 1,796 unique articles published by Fox News and identified matches for between 20 (1%) and 1,376 (88%) of the 1,565 unique CNN articles.

Conversely, each of the 30,647 unique pairs of matched articles was identified, on average, by 1.91 of the 117 different procedures, with 6,910 (22.5%) of the unique pairs matched by between two and 55 of the 117 procedures and the remaining 23,737 pairs matched by only one procedure. We view the frequency of each unique pair within the sample of 58,737 pairs identified as a rough proxy for match quality because, ideally, when performing matching, the final sample of matched pairs identified will be robust to different choices of the distance metric or representation. Thus, we expect that matched pairs identified by multiple procedures will have higher subjective match quality than singleton pairs.

4.1 Measuring match quality

In standard applications of matching, if two units that are matched do not appear substantively similar, then any observed differences in outcomes may be due to poor match quality rather than the effect of treatment. Usual best practice is to calculate overall balance between the treatment and control groups, which is typically measured by the difference-in-means for all covariates of interest. If differences on all matched covariates are small in magnitude, then the samples are considered well-matched.

As previously discussed, in settings where the covariates are text data, these standard balance measures typically fail to capture meaningful differences in the text. Further, due to the curse of dimensionality in these settings, it is likely that at least some (and probably many) covariates will be unbalanced between treatment and control groups. Thus, to measure match quality we rely on a useful property of text: its ease of interpretability. A researcher evaluating two units that have been matched on, for example, demographic covariates, may be unable to verify the quality of a matched pair. However, human coders who are tasked with reading two matched text documents are amply capable of quantifying their subjective similarity. We leverage this property to measure match quality using an online survey of human respondents, where match quality is defined on a scale of zero (lowest quality) to ten (highest quality).

To obtain match quality ratings, we conducted a survey experiment using Amazon’s Mechanical Turk and DLABSS (Enos et al., 2016a). Respondents were first informed about the nature of the task and then given training on how to evaluate the similarity of two documents. (For training, participants were presented with a scoring rubric and instructed to use it as “a guide to help [them] determine the similarity of a pair of articles.” In the final component of training, participants were asked to read and score three pre-selected pairs of articles, chosen to represent pairings that we believe have match quality scores of zero, five, and ten, respectively. After scoring each training pair, participants were informed about the anticipated score for that pair and provided with an explanation of how that determination was made.) After completing training, participants were presented with a series of 11 paired newspaper articles, including an attention check and an anchoring question, and asked to assign a similarity rating to each pair. For each question, participants were instructed to read both articles in the pair and rate the articles’ similarity from zero to ten, where zero indicates that the articles are entirely unrelated and ten indicates that the articles cover the exact same event. Snapshots of the survey are presented in Appendix C.

We might be concerned that an online convenience sample may not be an ideal population for conducting this analysis, and that their perceptions of article similarity might differ from those of the overall population or of trained experts. To assess the reliability of this survey as an instrument for measuring document similarity, we leverage the fact that we performed two identical pilot surveys prior to the experiment using respondents from two distinct populations and found a high correlation between the average match quality scores obtained from each sample. Additional details about this assessment are provided in Appendix D. We note that these populations, MTurkers and DLABSS respondents, are both regularly used as coders to build training data sets for certain tasks in machine learning; the hallmark of these tasks is that they are easily and accurately performed by untrained human respondents. We argue that the task of identifying whether two articles discuss related stories falls squarely in this category, and our intercoder reliability test in Appendix D supports this argument. (For researchers interested in conducting their own text matching evaluation studies, we note that MTurk and DLABSS populations may not always be applicable, especially in contexts where domain expertise is required.)

In an ideal setting, for each unique matched pair identified using the procedure described above, we would obtain a sample of similarity ratings from multiple human coders. Aggregating these ratings across all pairs in a particular matched dataset would then allow us to estimate the average match quality corresponding to each of the 130 procedures considered, with the quality scores for the 13 procedures that identified no matches set to zero. Though this is possible in principle, to generate a single rating for each unique matched pair requires that a human coder read both documents and evaluate the overall similarity of the two articles. This can be an expensive and time-consuming task. Thus, in this study, it was not possible to obtain a sample of ratings for each of the 30,647 unique pairs.

Instead, we took a weighted sample of 500 unique pairs from the set of all pairs identified, where the sampling weights were calculated proportional to how often each pair was produced by each of 89 different combinations of methods. (We exclude eight methods based on combinations of the STM representation with CEM, as well as 20 word embedding-based methods that we added after we had already conducted the evaluation study. However, because there is considerable overlap in the matched pairs identified by different methods, i.e., many pairs are identified by several different combinations, the final sample of 500 contained between two and 135 pairs for 114 of the 130 methods ultimately considered.) To reduce the influence of spurious matches, the sampling weights for singleton pairs (i.e., pairs that were found by only one of the 89 original matching procedures) were downweighted by a factor of 5. (Note that this scheme intentionally induces selection bias into the sample by discouraging singleton pairs, which are expected to be of low quality, in favor of pairs identified by multiple matching procedures, where greater variation in match quality is expected. However, fixing these sampling weights a priori allows us to construct adjusted estimators of match quality that effectively remove this bias.) In addition to the 500 sampled pairs, we also included a sample of 20 randomly selected pairs. We expect these randomly selected pairs to have few similarities, so ratings obtained from these pairs can be used to establish a reference point for interpreting match quality scores. Each respondent’s set of nine randomly selected questions was drawn independently such that each pair would be evaluated by multiple respondents. Using this scheme, each of the 520 total pairs was evaluated by between six and ten different participants (average of 8.8). Question order was randomized, but the anchor was always the first question, and the attention check was always the fifth question.

We surveyed a total of 506 respondents. After removing responses from 71 participants who failed the attention check (the attention check consisted of two articles with very similar headlines but completely different article text; the text of one article stated that this question was an attention check and that the respondent should assign a score of zero, and participants who did not assign a score of zero on this question are regarded as having failed), all remaining ratings were used to calculate the average match quality for each of the 520 matched pairs evaluated, as well as for each of the 89 original combinations of methods considered in the evaluation, where the contribution of each sampled pair to the overall measure of quality for a particular combination of methods was weighted according to its sampling weight. This inferential procedure is described more formally in Appendix E.
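For intuition about this last step, here is a minimal sketch of a sampling-weighted estimate of average match quality for a single matching method. The pair identifiers, ratings, and weights are hypothetical, and the inverse-weighting shown is only a stand-in for the formal estimator given in the paper's Appendix E.

```python
# Minimal sketch: a sampling-weighted estimate of average match quality for
# one matching method. All values are hypothetical stand-ins for the survey
# data; the paper's formal estimator is described in its Appendix E.
import numpy as np

ratings = {101: 8.2, 102: 3.5, 207: 6.9}          # mean human score per sampled pair
sampling_weight = {101: 4.0, 102: 0.8, 207: 2.5}  # weight used when drawing the pair

def method_quality(pairs_found, ratings, sampling_weight):
    """Weighted average quality over the sampled pairs a method identified."""
    scored = [p for p in pairs_found if p in ratings]
    if not scored:
        return 0.0                                # methods with no matches score zero
    w = np.array([1.0 / sampling_weight[p] for p in scored])  # illustrative inverse weighting
    y = np.array([ratings[p] for p in scored])
    return float(np.sum(w * y) / np.sum(w))

print(round(method_quality([101, 207, 999], ratings, sampling_weight), 2))
```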

4.2 Results

4.2.1 Which automated measures are most predictive of human judgment about match quality?

Our primary research question concerns how unique combinations of text representation and distance metric contribute to the quantity and quality of obtained matches, in the interest of identifying an optimal combination of these choices in a given setting. We can estimate the quality of the 89 matching methods considered in the evaluation experiment using weighted averages of the scores across the 520 matched pairs evaluated by human coders. However, it is also of general interest to be able to evaluate new matching procedures (including 28 of the 31 methods that identified at least one match but were not considered in the evaluation experiment) without requiring additional human experimentation. We also want to maximize the precision of our estimates for the original 89 methods. To these ends, we examine whether we can predict human judgment about match quality based on the distance scores generated by each different combination of one representation and one distance metric. If the relationship between the calculated match distance and validated match quality is strong, then we may be confident that closely-matched documents, as rated under that metric, would pass a human-subjects validation study.

To evaluate the influence of each distance score on match quality, we take the pairwise distances between documents for each of the 520 matched pairs used in the evaluation experiment under different combinations of the representations and distance metrics described in Section 3. After excluding all CEM-based matching procedures, under which all pairwise distances are equal to zero or infinity by construction, all distances were combined into a dataset containing 104 distance values for each of the 520 matched pairs. Figure 1 gives four examples of how these distances correlate with human ratings of document similarity, along with the fitted regression line obtained from quadratic regressions of average match quality on distance. The strong correlation between the distance between a pair of documents and its human rating of match quality under these various combinations of representation and distance metric suggests that automated measures of match quality could be useful for predicting human judgment. Further, the strong relationship observed for the cosine distance metric calculated over a TDM-based representation provides additional evidence in favor of matching using this particular combination of methods. These findings also suggest that the increased efficiency achieved with TDM cosine matching is not attributable to the cosine distance metric alone, since the predictive power achieved using cosine distance on an STM-based representation is considerably lower than that based on a TDM-based representation.

Figure 1: Distance between documents and match quality based on the cosine distance measured over a TDM-based representation (top left) exhibit a stronger relationship than cosine distance measured over a STM-based representation (top right), and a much stronger relationship than the Mahalanobis distance measured over a TDM-based representation (bottom left) or a STM-based representation (bottom right).

To leverage the aggregate relationship between the various machine measures of similarity and match quality, we developed a model for predicting the quality of a matched pair of documents based on the 104 distance scores, which we then trained on the 520 pairs evaluated in our survey experiment. For estimation, we use the LASSO (Tibshirani, 1996), implemented with ten-fold cross-validation as a gold-standard method for assessing out-of-sample predictive performance (Kohavi et al., 1995). Here, for each of the 520 pairs, the outcome was defined as the average of the ratings received for that pair across the human coders, and the covariates were the 104 distance measures. We also included quadratic terms in the model, resulting in a total of 2 × 104 = 208 terms. Of these, the final model obtained from cross-validation selected 20 terms with non-zero coefficients. However, our results suggest that the majority of the predictive power of this model comes from two terms: cosine distance over the full, unweighted term-document matrix and cosine distance over an STM with 100 topics. Figure 2 shows the in-sample predictive performance of the model. To evaluate the sensitivity of this model to the chosen regularization scheme, we performed a similar analysis using ridge regression and found only a negligible difference in predictive performance (correlation of 88.8% between predicted and observed quality).

Figure 2: Predicted quality estimated using ten-fold cross-validation on a LASSO regression has a correlation of 0.88 with observed quality, indicating high out-of-sample predictive accuracy.
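A minimal sketch of this kind of quality-prediction model is shown below, using scikit-learn's cross-validated LASSO with squared distance terms. The distance matrix D and the quality vector are random stand-ins for the 520 evaluated pairs, and the standardization step is our own assumption rather than a detail reported in the paper.

```python
# Minimal sketch: predict human-rated match quality from machine distance
# scores with a cross-validated LASSO, including squared terms.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
D = rng.random((520, 104))                 # machine distance scores per pair (stand-in)
quality = rng.uniform(0, 10, size=520)     # average human rating per pair (stand-in)

X = np.hstack([D, D ** 2])                 # linear plus quadratic terms (208 total)
model = make_pipeline(StandardScaler(), LassoCV(cv=10, random_state=0))
model.fit(X, quality)

coef = model.named_steps["lassocv"].coef_
print("terms with non-zero coefficients:", int(np.sum(coef != 0)))
```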

The high predictive accuracy of our fitted model suggests that automated measures of similarity can be effectively used to evaluate new matched samples or entirely new matching procedures without requiring any additional human evaluation. (Since this model was trained on human evaluations of matched newspaper articles, extrapolating predictions may only be appropriate in settings with similar types of documents. However, our experimental framework for measuring match quality could be used to build a similar predictive model for text data in other contexts.) We can also use the model to enhance the precision of our estimates of match quality for the 89 methods considered in the evaluation experiment using survey adjustment methods.

4.2.2 Which methods make the best matching procedures?

To compare the performance of the final set of 130 matching procedures considered in our study, we use the predictive model described above to estimate the quality of all matched pairs that were not included in the evaluation experiment. Specifically, we use the trained model to predict the average match quality for each of the 30,647 unique matched pairs produced in our original sample of 58,737 matched pairs. These predicted scores are then used to calculate model-assisted estimates of average match quality for our original 89 methods and estimates of predicted average match quality for all other methods considered. In particular, by aggregating these estimates over all matched pairs found by a particular procedure, we can calculate the average match quality for each of the 130 matching procedures, where the average quality scores for the 13 procedures that identified no matches are set equal to zero. Figure 3 shows the relative performance of each of the 130 procedures in terms of average predicted match quality, with uncertainty intervals estimated using a parametric bootstrap (see Appendix E for details of this procedure).

Figure 3: Number of matches found versus average predicted match quality for the original 89 procedures considered in the evaluation experiment (black) and 41 additional procedures (green). Grey points indicate procedures with extreme reduction in information (e.g., procedures that match on only stop words). One procedure with many low quality pairs (C1) at coordinates (1605,1.43) is excluded from this plot. The dotted horizontal line shows the average match quality for the 20 randomly generated pairs.

The methods that generally produce the highest quality matches for our study are those based on cosine distance calculated over a TDM-based representation. The method that produces the most matches uses STM on ten topics with sufficient reduction and CEM in 2 bins and identifies 1,605 matched pairs. However, this method is among the lowest scoring methods in terms of quality, with a sample-adjusted average match quality of 1.43. Conversely, a procedure that uses STM on 30 topics with sufficient reduction and CEM in 3 bins appears to produce considerably higher quality matches, with an average match quality of 6.12, but identifies only 50 matched pairs. In comparison, a method that combines a bounded TDM with TF-IDF weighting and the cosine distance metric identified 582 matches with an average match quality of 7.65. This illustrates an important weakness of CEM: too few bins produce many low quality matches, while too many bins produce too few matches, even though they are high quality. While in many applications there may be a number of bins that produces a reasonable number of good quality matches, that is not the case in our setting. Here, two bins produce poor matches while three bins produce far too few. This tradeoff does not appear to be present for matching procedures using cosine distance with a TDM-based representation, which dominate in both number of matches found and overall quality of those matched pairs. In addition, the matching procedures based on this combination appear to be more robust to the various pre-processing decisions made when constructing the representation than procedures that use an alternative distance metric or representation.

Overall, our results indicate that matching on the full TDM produces both more and higher quality matches than matching on a vector of STM loadings when considering the content similarity of pairs of news articles. Moreover, TDM-based representations with cosine matching appear relatively robust to tuning parameters including the degree of bounding applied and the choice of weighting scheme. STM-based representations, on the other hand, appear to be somewhat sensitive to tuning parameters, with representations that include a large number of topics achieving higher average match quality than those constructed over a smaller number of topics. This result provides further support for the findings in Roberts et al. (2018). In that paper, the authors found that matching on more topics generally led to better results in terms of recovering pairs of nearly identical documents.

4.3 Guidance for practice and rules of thumb

Given these results, how should researchers approach text matching in practice? In this section we offer a set of guidelines and advice regarding the best methods identified within the context of our study, and speculate as to their generalizability.

Before performing text matching, researchers should first think carefully and perform power analyses to estimate their desired sample size. In studies that aim to make precise inferences about the effects of a treatment or group membership, it is desirable to use the largest sample size possible. However, in applications of text matching, researchers may often be forced to sacrifice a larger sample size in order to produce the best set of matches. This tradeoff between quantity and quality should be considered in advance and should inform the protocol for analyses of the matched data as well as the criteria for determining when a particular matched sample has achieved suitable balance. When in doubt, we recommend erring on the side of obtaining fewer matches with higher overall match quality in order to make the most reliable and conservative inferences. And while the specific criteria defining what constitutes suitable balance within a particular matched sample of documents will vary with context, we suggest researchers explore different procedures until obtaining a matched sample with average pair quality of at least 7 out of 10. This can typically be accomplished by adjusting the caliper used when matching to tune the quantity and quality of matched pairs identified. Overall, until the performance of different matching methods across different settings is better understood, we advocate conducting a human experiment on identified pairs using a variety of matching methods. These methods will be characterized by different quality versus quantity tradeoffs, and the researcher can then select which method is ideal.
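One way to operationalize this caliper tuning is sketched below: sweep a grid of caliper quantiles and record, for each, the number of admitted pairs and their estimated average quality (from human ratings or from a fitted quality model). The distance matrix and quality predictions here are simulated stand-ins rather than quantities from the paper.

```python
# Minimal sketch: sweep the caliper to trade off the number of matched
# pairs against their estimated quality. `dist` is a simulated
# (treated x control) distance matrix and `predicted_quality` a same-shaped
# array of pair-quality predictions (e.g., from a fitted quality model).
import numpy as np

def sweep_calipers(dist, predicted_quality, quantiles=(0.0005, 0.001, 0.005, 0.01)):
    results = []
    for q in quantiles:
        caliper = np.quantile(dist, q)
        within = dist <= caliper                  # pairs admitted at this caliper
        n_pairs = int(within.sum())
        avg_q = float(predicted_quality[within].mean()) if n_pairs else 0.0
        results.append((q, n_pairs, avg_q))
    return results

rng = np.random.default_rng(0)
dist = rng.random((200, 180))
predicted_quality = 10 * (1 - dist)               # toy rule: closer pairs score higher
for q, n, avg in sweep_calipers(dist, predicted_quality):
    print(f"quantile={q}: {n} pairs, avg predicted quality {avg:.1f}")
```

A researcher would then pick the largest sample whose estimated average quality clears the chosen target, such as the 7-out-of-10 threshold suggested above.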

Once the desired sample size and/or level of match quality has been established, the next step is to specify the representation and distance metric that will be used in the matching procedure. When choosing a representation, researchers need to consider what aspects of the text are confounding the outcome. For example, in our evaluation study that used matched pairs of news articles from Fox News and CNN, we were interested in identifying pairs of stories that were about the same general topic (e.g., plane crashes versus public policy) and that also utilized the same set of keywords (e.g., “AirAsia” or “Obama”). When the objective is to identify exact or nearly exact matches, we recommend using text representations that retain as much information in the text as possible. In particular, documents that are matched using the entire term-vector will typically be similar with regards to both topical content and usage of keywords, while documents matched using topic proportions may only be topically similar. To calculate the distances between pairs of documents, we suggest using cosine distance as the default metric, since it produces the most consistent, high quality matches in our evaluation experiment and appears relatively robust to the specification of representation. However, when performing text matching on large corpora, it may be preferable to use representations of lower dimension, for example, topic model loadings.

Because text matching may often require a number of design decisions on the part of the analyst, we want to emphasize the importance of using sensitivity analyses and performing robustness checks before making inferences and drawing substantive conclusions using matched documents. We demonstrate a series of sensitivity analyses and how they can be used to evaluate the strength of our applied results in Appendix F.

Finally, for future experimental evaluations based on human judgment of document similarity, it is vital to consider what aspect or characteristic of the text might be confounding the treatment and the outcome when designing the experiment. In our applied examples described in Section 5, we seek to control for both the subject matter and key details within matched documents; as such, we task respondents with matching on subject matter in our experimental evaluation. However, we note that it is the burden of the researcher to devise a human evaluation instrument which captures as much as possible of the confounding characteristics of the text in the given context.

5 Applications

5.1 Decomposing media bias

While American pundits and political figures continue to accuse major media organizations of “liberal bias,” after nearly two decades of research on the issue, scholars have yet to come to a consensus about how to measure bias, let alone determine its direction. A fundamental challenge in this domain is how to disentangle the component of bias relating to how a story is covered, often referred to as “presentation bias” (Groseclose and Milyo, 2005; Gentzkow and Shapiro, 2006; Ho et al., 2008; Gentzkow and Shapiro, 2010; Groeling, 2013), from the component relating to what is covered, also known as “selection bias” (Groeling, 2013) or “topic selection.” Thus, systematic comparisons of how stories are covered by different news sources (e.g., comparing the level of positive sentiment expressed in the article) may be biased by differences in the content being compared. We present a new approach for addressing this issue by using text matching to control for selection bias.

We analyze a corpus consisting of articles published during 2013 by each of 13 popular online news outlets. (The original data included 15 news sources, but BBC and The Chicago Tribune are excluded from this analysis due to insufficient sample sizes for these sources.) This data was collected and analyzed in Budak et al. (2016). The news sources analyzed here consist of Breitbart, CNN, Daily Kos, Fox News, Huffington Post, The Los Angeles Times, NBC News, The New York Times, Reuters, USA Today, The Wall Street Journal, The Washington Post, and Yahoo. In addition to the text of each article, the data include labels indicating each article’s primary and secondary topics, where these topics were chosen from a set of 15 possible topics by human coders in a separate evaluation experiment performed by Budak et al. (2016). The data also include two human-coded outcomes that measure the ideological position of each article on a 5-point Likert scale. Specifically, human workers tasked with reading and evaluating the articles were asked “on a scale of 1-5, how much does this article favor the Republican party?” and, similarly, “on a scale of 1-5, how much does this article favor the Democratic party?”

To perform matching on these data, we use the optimal combination of methods identified through our evaluation experiment: cosine matching on a bounded TDM. (Since the outcomes of interest in this analysis are human-coded measures of favorability toward Democrats and Republicans, we limit the vocabulary of the TDM to include only nouns and verbs to avoid matching on aspects of language that may be highly correlated with these outcomes.) Because in this example we have a multi-valued treatment with 13 levels, each representing a different news source, we follow the template matching procedure described in Silber et al. (2014) to obtain matched samples of 150 articles across all treatment groups. (To implement the template matching procedure, we first generate a template sample of articles chosen, from among 500 candidate samples of this size, to be the most representative of the corpus in terms of the distribution of primary topics. Once this template is chosen, for each treatment level, i.e., news source, we perform optimal pair matching within primary topics to identify a sample of 150 articles from that source that most closely match the template sample with regard to cosine distance calculated over the TDM. Iterating through each of the 13 target sources produces the final sample of matched articles.) In summary, the template matching procedure first finds a representative set of stories across the entire corpus, and uses that template to find a sample of similar articles within each source that collectively cover this canonical set of topics. This allows us to identify a set of articles sampled from each source that are similar to the same template and therefore similar to each other.
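The sketch below illustrates the template-matching logic in a simplified form. It uses greedy nearest-neighbor matching within primary topics in place of optimal pair matching, and the `articles` list (dicts with "source" and "topic" keys) and TF-IDF matrix `X` are hypothetical inputs; it is not the implementation used in our analysis.

```python
# Simplified sketch of template matching: (1) choose, from several candidate
# samples, the one whose primary-topic distribution best matches the corpus;
# (2) pair each template article with its nearest same-topic article from a
# given source by cosine distance.
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

def topic_distribution(rows, articles, topics):
    counts = np.array([sum(articles[r]["topic"] == t for r in rows) for t in topics])
    return counts / counts.sum()

def choose_template(articles, topics, size=150, n_candidates=500, seed=0):
    rng = np.random.default_rng(seed)
    corpus_dist = topic_distribution(range(len(articles)), articles, topics)
    best, best_gap = None, np.inf
    for _ in range(n_candidates):
        cand = rng.choice(len(articles), size=size, replace=False)
        gap = np.abs(topic_distribution(cand, articles, topics) - corpus_dist).sum()
        if gap < best_gap:
            best, best_gap = cand, gap
    return best  # row indices of the most representative candidate sample

def match_source_to_template(template_rows, source_rows, articles, X):
    pairs = []
    for t in template_rows:
        # Only articles from the source with the same primary topic are eligible.
        same_topic = [s for s in source_rows if articles[s]["topic"] == articles[t]["topic"]]
        if not same_topic:
            continue
        d = cosine_distances(X[[t]], X[same_topic]).ravel()
        pairs.append((t, same_topic[int(d.argmin())]))
    return pairs
```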

Before matching, our estimate of a news source's average favorability is a measure of overall bias, which includes biases imposed through differential selection of content to publish as well as biases imposed through the language and specific terms used when covering the same content. Estimates after matching control for selection bias, so that any remaining differences can be attributed to presentation bias. The difference between estimates of average favorability before matching (overall bias) and after matching (presentation bias) therefore represents the magnitude of the selection bias imposed by each source. Large differences between pre- and post-matching estimates indicate a stronger influence of topic selection bias relative to presentation bias.

Figure 4 shows the average favorability toward Democrats (blue) and Republicans (red) for each news source overall, and the average favorability among the template-matched documents. Arrows begin at the average score before matching and terminate at the average score after matching. The length of each arrow can be interpreted as the estimated magnitude of the bias of that source that is attributable to differences in selection. We performed a series of sensitivity checks to assess how our results and subsequent conclusions may change when using different specifications of the matching procedure and/or different choices of template sample. Details of these sensitivity analyses are provided in Appendix F.

Figure 4: Estimates of average favorability toward Democrats (blue) and Republicans (red) for each source both before and after matching.

When evaluating favorability toward Democrats, we note that most news sources demonstrate partisan neutrality at around 3, indicating an average response of "neither favorable nor unfavorable." Before matching, the Wall Street Journal (WSJ), the Washington Post, Fox News, and Breitbart all show significant biases against Democrats. After matching, all news sources appear less biased than before except CNN, NBC, and the Los Angeles Times, which become moderately less favorable toward Democrats. The New York Times and Daily Kos change after matching from moderately biased to effectively unbiased; for these sources, the biases toward Democrats present within the unmatched samples could be entirely attributable to differences in topic selection. This suggests that these sources impose partisan biases toward Democrats by selecting stories that are inherently biased toward Democrats. On the other hand, the Wall Street Journal, Fox News, and Breitbart change little after matching, which suggests that any biases toward Democrats imposed by these sources are explained primarily by differences in presentation of content. That is, these sources may impose their own partisan biases toward Democrats by using biased language when presenting canonical stories, rather than by choosing to cover stories that are inherently favorable toward Democrats.

We find parallel results when examining average positive sentiment toward Republicans. Most sources are relatively unbiased except Daily Kos, which is strongly anti-Republican both before and after matching. After matching, we find that the New York Times, Wall Street Journal, USA Today, Reuters, and CNN become more unfavorable to Republicans. Here, the estimates of favorability after matching also indicate that Fox News and Breitbart are much closer to neutrality when covering a canonical set of stories, while Daily Kos’s negative bias toward Republicans reduces little when controlling for selection biases. The Los Angeles Times’ bias diminishes by approximately half after matching.

5.2 Improving covariate balance in observational studies

In our second application, we demonstrate how text matching can be used to strengthen inferences in observational studies with text data. Specifically, we show that text matching can be used to control for confounders measured by features of the text that would otherwise be missed using traditional matching schemes.

We use a subset of the data first presented in Feng et al. (2018), who conducted an observational study designed to investigate the causal impact of bedside transthoracic echocardiography (TTE), a tool used to create pictures of the heart, on the outcomes of adult patients in critical care who are diagnosed with sepsis. The data were obtained from the Medical Information Mart for Intensive Care (MIMIC) database (Johnson et al., 2016) on 2,427 patients seen with a diagnosis of sepsis in the medical and surgical intensive care units of a large university teaching hospital located in Boston, Massachusetts. Within this sample, the treatment group consists of 1,245 patients who received a TTE during their stay in the ICU (defined by time stamps corresponding to times of admission and discharge) and the control group comprises 1,192 patients who did not receive a TTE during this time. For each patient we observe a vector of pre-treatment covariates including demographic data, lab measurements, and other clinical variables. In addition to these numerical data, each patient is also associated with a text document containing progress notes written by physicians and nursing staff as well as written evaluations from specialists at the time of ICU admission. (Because the amount of text data observed for each patient is highly variable, we limit the document length in our analyses to the first 500 words of each document, where the text is ordered chronologically based on corresponding time stamps.) The primary outcome in this study was 28-day mortality from the time of ICU admission.

Because the treatment in this study was not randomly assigned to patients, it is possible that patients in the treatment and control groups may differ systematically in ways that affect both their assignment to treatment versus control and their 28-day mortality. For instance, patients who are in critical condition when admitted into the ICU may die before treatment with a TTE has been considered. Similarly, patients whose health conditions quickly improve after admission may be just as quickly discharged. Therefore, in order to obtain unbiased estimates of the effects of TTE on patient mortality, it is important to identify and appropriately adjust for any potentially confounding variables such as health severity at the time of admission.

We apply two different matching procedures to these data: one that matches patients only on the numerical data and ignores the text, and one that matches patients using both the numerical and text data. In the first procedure, following Feng et al. (2018), we match treated and control units using nearest-neighbor matching on estimated propensity scores, where the estimated propensity scores are calculated by fitting a logistic regression of the indicator for treatment assignment (receipt of TTE) on the observed numerical covariates. We enforce a propensity score caliper equal to 0.1 standard deviations of the estimated distribution, which discards any treated units for whom the nearest control unit is not within a suitable distance. In the second procedure, we perform nearest-neighbor text matching within propensity score calipers. Here, each treated unit is matched to its nearest control based on the cosine distance calculated over a bounded TDM (bounded to exclude extremely rare and extremely frequent terms, defined operationally as terms that appear in fewer than four or in more than 1,000 documents within this corpus), where treated units whose nearest control is outside the specified caliper are discarded. Intuitively, this procedure works by first reducing the space of possible treated-control pairings in a way that ensures adequate balance on the numerical covariates. Performing text matching within this space then produces matched samples that are similar with respect to all observed covariates, including the original observed covariates and any variables that were not recorded during the study but can be estimated by summary measures of the text.
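A minimal sketch of this two-stage logic follows, assuming hypothetical inputs `ps_t`/`ps_c` (estimated propensity scores) and `X_t`/`X_c` (bounded-TDM rows) for treated and control patients; the caliper here is taken as 0.1 pooled standard deviations of the scores, which approximates the rule described above.

```python
# Nearest-neighbor text matching within propensity score calipers:
# a treated unit may only be paired with controls whose propensity score lies
# within the caliper, and among those it takes the control with the smallest
# cosine distance on the bounded TDM.
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

def caliper_text_match(ps_t, ps_c, X_t, X_c, caliper_sd=0.1):
    caliper = caliper_sd * np.std(np.concatenate([ps_t, ps_c]))
    D = cosine_distances(X_t, X_c)
    matches = {}
    for i in range(len(ps_t)):
        eligible = np.where(np.abs(ps_c - ps_t[i]) <= caliper)[0]
        if eligible.size == 0:
            continue                      # treated unit discarded
        j = eligible[D[i, eligible].argmin()]
        matches[i] = int(j)
    return matches
```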

Figure 5 shows the covariate balance between treatment and control groups on both quantitative and text-based covariates before matching, after propensity score matching (PSM) on the numeric covariates alone, and after cosine matching on the TDM within propensity score calipers, and Table 1 summarizes the survival rates in the treatment and control groups within each sample. Each of the six text-based covariates is calculated using summary measures based on word counts from the patient-level text documents: the variables lasix, respiratory, cardiology, and critical are binary variables indicating whether any terms with these root terms were used in the text associated with each patient; the procedure variable captures the number of references to medical procedures observed for each patient; and document length is defined as the number of words observed for each patient. These variables represent potentially confounding variables that may be captured within the text.

Figure 5: Standardized differences in means between treatment and control groups on 26 numerical covariates and 6 text-based covariates (denoted by *) before matching (gray), after propensity score matching (red), and after text matching (blue).
Procedure          Treatment Survival   Control Survival   Difference
Before Matching    72.5%                71.4%              1.1%
PSM                72.5%                69.6%              2.9%
Text Matching      72.5%                68.4%              4.1%
Table 1: Survival rates for treatment and control groups and estimated treatment effects before and after propensity score matching (PSM) and text matching within propensity score calipers.

These results highlight the importance of conditioning on all available data when making inferences from observational data. While PSM is able to adequately balance the numerical covariates and some of the text-based covariates, it fails to sufficiently adjust for differences between treatment and control groups on a number of potential confounders captured by the text. For instance, both the unmatched data and the matched sample generated using PSM have large imbalances between treatment and control groups on references to Lasix, a medication commonly used to treat congestive heart failure. In the unmatched sample, only 10% of treated units have documents containing references to this medication, compared to 28% of control units. Matching on the estimated propensity scores reduces this imbalance only slightly, while cosine matching within propensity score calipers considerably improves the balance achieved between treatment groups on this variable. Incorporating the text data into the matching procedure leads to similar improvements in balance for the other five text-based variables while also maintaining suitable overall balance on the numerical covariates. Moreover, the matched sample identified using text matching shows the largest estimated treatment effect, indicating that TTE may be even more effective at reducing patient mortality than previous results have suggested (Feng et al., 2018).

6 Discussion

In this paper we have made three primary contributions. First, we have provided guidance for constructing different text matching methods and evaluating the match quality of pairs of documents identified using such methods. Second, we empirically evaluated a series of candidate text matching procedures constructed using this framework along with the methods developed in Roberts et al. (2018). Third, we have applied our methods to a data set of news media in order to engage with a long-standing theoretical debate in political science about the composition of bias in news, and to an observational study comparing the effectiveness of a medical intervention.

Text matching is widely applicable in the social sciences. Roberts et al. (2018) show how text matching can produce causal estimates in applications such as international religious conflict, government-backed internet censorship, and gender bias in academic publishing. We believe the guidance for best practices presented in this paper will help expand the scope and usability of text matching even further and will facilitate investigation of text data across a wide variety of disciplines. For instance, the methods described here could improve state-of-the-art techniques for plagiarism detection and text reuse, which are in active use in political science. By identifying bills similar to an original legislative proposal, our method could improve upon work tracking the spread of policy through state legislatures (Kroeger, 2016); and by comparing social media posts to a source article, our method could detect the dispersion of false news topics through a social network. Second, our matching distances may be used to construct networks of lexical similarity, for instance of news sources, politicians, or national constitutions. Finally, the matching distances we consider could themselves resolve measurement problems in cases where lexical divergence is the quantity of interest, for example when studying ideological polarization using text data (Peterson and Spirling, 2018).

We urge, however, that researchers consider how similar their use cases are to ours when extrapolating from our results. While we find consistent evidence that our methods produce high quality results across our applied examples, we cannot conclusively state that cosine distance and TDM representations are the best choices for any particular example. We encourage future researchers to conduct their own evaluation studies, especially when the dimension of textual similarity of interest is something other than content similarity, for example stylistic, topical, tonal, or semantic similarity. We hope such future evaluations, in connection with this one, will advance our collective understanding of best practices in this important domain.

References

  • Austin (2009) Austin, P. C. (2009). Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Statistics in Medicine 28(25), 3083–3107.
  • Bengio et al. (2003) Bengio, Y., R. Ducharme, P. Vincent, and C. Jauvin (2003). A neural probabilistic language model. Journal of Machine Learning Research 3, 1137–1155.
  • Blei (2012) Blei, D. M. (2012). Probabilistic topic models. Communications of the ACM 55(4), 77–84.
  • Blei et al. (2003) Blei, D. M., A. Y. Ng, and M. I. Jordan (2003). Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022.
  • Budak et al. (2016) Budak, C., S. Goel, and J. M. Rao (2016). Fair and balanced? Quantifying media bias through crowdsourced content analysis. Public Opinion Quarterly 80, 250–271.
  • Dai et al. (2015) Dai, A. M., C. Olah, and Q. V. Le (2015). Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998.
  • D’Amour et al. (2017) D’Amour, A., P. Ding, A. Feller, L. Lei, and J. Sekhon (2017). Overlap in observational studies with high-dimensional covariates. arXiv preprint arXiv:1711.02582.
  • Dehejia and Wahba (2002) Dehejia, R. H. and S. Wahba (2002). Propensity score-matching methods for nonexperimental causal studies. Review of Economics and Statistics 84(1), 151–161.
  • Denny and Spirling (2018) Denny, M. J. and A. Spirling (2018). Text preprocessing for unsupervised learning: why it matters, when it misleads, and what to do about it. Political Analysis, 1–22.
  • Egami et al. (2017) Egami, N., C. J. Fong, J. Grimmer, M. E. Roberts, and B. M. Stewart (2017). How to make causal inferences using texts. arXiv preprint.
  • Enos et al. (2016a) Enos, R. D., M. Hill, and A. M. Strange (2016a). Online volunteer laboratories for social science research.
  • Enos et al. (2016b) Enos, R. D., M. Hill, and A. M. Strange (2016b). Voluntary digital laboratories for experimental social science: The harvard digital lab for the social sciences. Working Paper.
  • Fan et al. (2017) Fan, A., F. Doshi-Velez, and L. Miratrix (2017). Promoting domain-specific terms in topic models with informative priors. arXiv preprint arXiv:1701.03227.
  • Feng et al. (2018) Feng, M., J. McSparron, D. T. Kien, D. Stone, D. Roberts, R. Schwartzstein, A. Vieillard-Baron, and L. A. Celi (2018). When more is not less: A robust framework to evaluate the value of a diagnostic test in critical care. Submitted.
  • Fogarty et al. (2016) Fogarty, C. B., M. E. Mikkelsen, D. F. Gaieski, and D. S. Small (2016). Discrete optimization for interpretable study populations and randomization inference in an observational study of severe sepsis mortality. Journal of the American Statistical Association 111(514), 447–458.
  • Gentzkow and Shapiro (2006) Gentzkow, M. and J. M. Shapiro (2006). Media bias and reputation. Journal of Political Economy 114(2), 280–316.
  • Gentzkow and Shapiro (2010) Gentzkow, M. and J. M. Shapiro (2010). What drives media slant? Evidence from US daily newspapers. Econometrica 78(1), 35–71.
  • Groeling (2013) Groeling, T. (2013). Media bias by the numbers: Challenges and opportunities in the empirical study of partisan news. Annual Review of Political Science 16.
  • Groseclose and Milyo (2005) Groseclose, T. and J. Milyo (2005). A measure of media bias. The Quarterly Journal of Economics 120(4), 1191–1237.
  • Gu and Rosenbaum (1993) Gu, X. S. and P. R. Rosenbaum (1993). Comparison of multivariate matching methods: Structures, distances, and algorithms. Journal of Computational and Graphical Statistics 2(4), 405–420.
  • Ho et al. (2008) Ho, D. E., K. M. Quinn, et al. (2008). Measuring explicit political positions of media. Quarterly Journal of Political Science 3(4), 353–377.
  • Holland (1986) Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association 81(396), 945–960.
  • Iacus et al. (2012) Iacus, S. M., G. King, G. Porro, and J. N. Katz (2012). Causal inference without balance checking: Coarsened exact matching. Political Analysis, 1–24.
  • Imai et al. (2008) Imai, K., G. King, and E. A. Stuart (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society: Series A 171(2), 481–502.
  • Johnson et al. (2016) Johnson, A. E., T. J. Pollard, L. Shen, H. L. Li-wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark (2016). MIMIC-III, a freely accessible critical care database. Scientific Data 3, 160035.
  • Kohavi et al. (1995) Kohavi, R. et al. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI, Volume 14, pp. 1137–1145. Montreal, Canada.
  • Kroeger (2016) Kroeger, M. A. (2016). Plagiarizing policy: Model legislation in state legislatures. Princeton typescript.
  • Kusner et al. (2015) Kusner, M., Y. Sun, N. Kolkin, and K. Weinberger (2015). From word embeddings to document distances. In International Conference on Machine Learning, pp. 957–966.
  • Le and Mikolov (2014) Le, Q. and T. Mikolov (2014). Distributed representations of sentences and documents. In International Conference on Machine Learning, pp. 1188–1196.
  • Mikolov et al. (2013) Mikolov, T., I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119.
  • Pennington et al. (2014) Pennington, J., R. Socher, and C. Manning (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
  • Peterson and Spirling (2018) Peterson, A. and A. Spirling (2018). Classification accuracy as a substantive quantity of interest: Measuring polarization in westminster systems. Political Analysis 26(1), 120–128.
  • Roberts et al. (2016) Roberts, M. E., B. M. Stewart, and E. M. Airoldi (2016). A model of text for experimentation in the social sciences. Journal of the American Statistical Association 111(515), 988–1003.
  • Roberts et al. (2018) Roberts, M. E., B. M. Stewart, and R. A. Nielsen (2018). Adjusting for confounding with text matching. arXiv preprint.
  • Roberts et al. (2016) Roberts, M. E., B. M. Stewart, and D. Tingley (2016). Navigating the local modes of big data. Computational Social Science 51.
  • Rosenbaum (1989) Rosenbaum, P. R. (1989). Optimal matching for observational studies. Journal of the American Statistical Association 84(408), 1024–1032.
  • Rosenbaum (1991) Rosenbaum, P. R. (1991). A characterization of optimal designs for observational studies. Journal of the Royal Statistical Society. Series B (Methodological), 597–610.
  • Rosenbaum (2002) Rosenbaum, P. R. (2002). Observational studies. In Observational Studies, pp. 1–17. Springer.
  • Rosenbaum (2010) Rosenbaum, P. R. (2010). Design of observational studies. Springer.
  • Rosenbaum and Rubin (1983) Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1), 41–55.
  • Rosenbaum and Rubin (1985) Rosenbaum, P. R. and D. B. Rubin (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician 39(1), 33–38.
  • Rubin (1973a) Rubin, D. B. (1973a). Matching to remove bias in observational studies. Biometrics, 159–183.
  • Rubin (1973b) Rubin, D. B. (1973b). The use of matched sampling and regression adjustment to remove bias in observational studies. Biometrics, 185–203.
  • Rubin (1978) Rubin, D. B. (1978). Bias reduction using Mahalanobis metric matching. ETS Research Report Series 1978(2).
  • Rubin (2006) Rubin, D. B. (2006). Matched sampling for causal effects. Cambridge University Press.
  • Rubin (2007) Rubin, D. B. (2007). The design versus the analysis of observational studies for causal effects: parallels with the design of randomized trials. Statistics in Medicine 26(1), 20–36.
  • Salton (1991) Salton, G. (1991). Developments in automatic text retrieval. Science, 974–980.
  • Salton and McGill (1986) Salton, G. and M. J. McGill (1986). Introduction to modern information retrieval. McGraw-Hill, Inc.
  • Sarndal et al. (2003) Sarndal, C.-E., B. Swensson, and J. Wretman (2003). Model assisted survey sampling. Springer.
  • Silber et al. (2014) Silber, J. H., P. R. Rosenbaum, R. N. Ross, J. M. Ludwig, W. Wang, B. A. Niknam, N. Mukherjee, P. A. Saynisch, O. Even-Shoshan, R. R. Kelz, et al. (2014). Template matching for auditing hospital cost and quality. Health Services Research 49(5), 1446–1474.
  • Smith (1997) Smith, H. L. (1997). Matching with multiple controls to estimate treatment effects in observational studies. Sociological Methodology 27(1), 325–353.
  • Stuart (2010) Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science 25(1).
  • Taddy (2013) Taddy, M. (2013). Multinomial inverse regression for text analysis. Journal of the American Statistical Association 108(503), 755–770.
  • Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 267–288.
  • Zubizarreta et al. (2014) Zubizarreta, J. R., D. S. Small, and P. R. Rosenbaum (2014). Isolation in the construction of natural experiments. The Annals of Applied Statistics, 2096–2121.

Appendix A Text representations and distance metrics

In Section 3 we describe a framework for text matching involving choosing both a text representation and a distance metric; we then briefly outline the options for each. Here we expand that discussion with mathematical formality and greater precision.

a.1 Choosing a representation

To operationalize documents for matching and subsequently performing statistical analyses, we must first represent the corpus in a structured, quantitative form. There are two important properties to consider when constructing a representation for text with the goal of matching. First, the chosen representation should be sufficiently low-dimensional such that it is practical to define and calculate distances between documents. If a representation contains thousands of covariates, calculating even a simple measure of distance may be computationally challenging or may suffer from the curse of dimensionality. Second, the chosen representation should be meaningful; that is, it should capture sufficient information about the corpus so that matches obtained based on this representation will be similar in some clear and interpretable way. For example, many common data reduction techniques such as principal components analysis (PCA) are often used in text analysis to build a low-dimensional representation of the corpus (e.g., by representing each document by its estimated value of the first and second principal component); however, if this representation is not directly interpretable (which is often the case, particularly in applications of PCA), then matches obtained using this representation may have no interpretable similarities. In the same way, we could construct a nonsensical representation, perhaps one that simply counts the number of punctuation marks in each document, and match based on this representation. Resulting inferences on the treatment effect would control for confounding due to differential use of punctuation, but this would likely not be useful in practice. Thus, the selection on observables assumption needs to plausibly hold conditional on the representation, not the raw text. In summary, matching is only a useful tool for comparing groups of text documents when the representation defines covariates that contain useful information about systematic differences between the groups. In general, it is not useful to control for potential confounders that we cannot understand.

In this paper, we explore three common and meaningful representations: the term-document matrix (TDM), which favors retaining more information about the text at the cost of dimensionality, statistical topic models, which favor dimension reduction at the potential cost of information, and neural network embeddings, which fall somewhere in between. There are a number of alternative text representations that could also be used to perform matching within our framework, including other representations based on neural networks (Bengio et al., 2003) or those constructed using document embeddings (Le and Mikolov, 2014; Dai et al., 2015), but these are left as a topic for future research.

a.1.1 Representations based on the term-document matrix

Perhaps the simplest way to represent a text corpus is as a TDM. Under the common “bag-of-words” assumption, the TDM considers two documents identical if they use the same terms with the same frequency, regardless of the ordering of the terms (Salton and McGill, 1986). When matching documents, it is intuitive that documents that use the same set of terms at similar rates should be considered similar, so the TDM provides a natural construction for representing text with the goal of matching. However, the dimensionality of the TDM poses a challenge for performing matching.

In particular, when the vocabulary is defined over unigrams alone, text corpora typically contain vocabularies with tens of thousands of terms. Thus, when using a TDM to represent the corpus, defining a distance metric is made difficult due to the number of covariates that must be considered. Another main challenge that arises when using the TDM to represent the corpus is that many of the terms within the vocabulary may actually contain little information relevant for determining document similarity. For example, some high-frequency terms (e.g., the, and), commonly referred to as “stopwords” in the text analysis literature, that are used with similar rates of frequency across all documents in the corpus are typically not useful for measuring document similarity. Rare terms that are used in only a very small number of documents within the corpus may have similarly low utility in the matching process. These terms only contribute to increasing the dimensionality of the TDM without providing much additional information about the documents.

To address this issue, one technique that can be used to reduce the influence of stopwords and other low-information terms is to rescale the columns of the TDM using a scheme such as TF-IDF scoring (Salton, 1991). Through rescaling, the TDM can be reconstructed to better capture term usage within each document relative to usage across the corpus as a whole. Proceeding with matching using a weighted TDM could alleviate the stopword problem to some extent, but does not fully address the issue of dimensionality. A second step one might take is to simply drop high-frequency, low-information stopwords and rare terms from the representation. This can be accomplished by subsetting the vocabulary over which the TDM is defined so as to include, for example, only those terms that appear in at least 1% and no more than 50% of documents in the corpus. Subsetting the vocabulary in this way considerably reduces the dimension of the resulting TDM, and mitigates the influence of low-information terms when matching. While reducing the vocabulary in this way can provide drastic reductions in the dimension of the resulting TDM, it should be noted that in large corpora, a bounded TDM may still have a dimension in the tens of thousands, putting us in the setting of high-dimensional matching, a setting known to be difficult (Roberts et al., 2018).
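A small sketch of these two steps, bounding the vocabulary by document frequency and then applying TF-IDF weights, is shown below using a toy corpus; the specific bounds used in a real application would follow the rules of thumb above (e.g., terms appearing in at least 1% and at most 50% of documents).

```python
# Bound the vocabulary by document frequency, then reweight with TF-IDF.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = ["the senate passed the health care bill",
          "the court ruled on the health law",
          "a plane crashed shortly after takeoff"]

# min_df=1 only because this toy corpus is tiny; with a real corpus one might
# use, e.g., min_df=4. max_df=0.5 drops terms in more than half of documents.
counts = CountVectorizer(min_df=1, max_df=0.5).fit_transform(corpus)
tdm_tfidf = TfidfTransformer().fit_transform(counts)   # rescaled, bounded TDM
print(tdm_tfidf.shape)
```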

a.1.2 Representations based on statistical topic models

An alternative representation for text, popular in the text analysis literature, is based on statistical topic models (Blei, 2012), e.g., LDA (Blei et al., 2003) and STM (Roberts et al., 2016). The main argument for matching using a topic-model-based representation of text is that document similarity can adequately be determined by comparing targeted aspects of the text rather than by comparing the use of specific terms. That is, topic-model-based representations imply that two documents are similar if they cover a fixed number of topics at the same rates.

The LDA framework posits $K$ distinct topics over a corpus, where each topic is a mixture over the vocabulary defined by the corpus and each document $i$ in the corpus is represented as a mixture over topics, denoted $\theta_i = (\theta_{i1}, \ldots, \theta_{iK})$, where $\theta_{ik}$ is the proportion of document $i$ that can be attributed to topic $k$. Because these proportions are unknown, they must be estimated using a fitted topic model as $\hat{\theta}_i$. The STM formulation of topic models is an extension of the LDA model that reparameterizes the prior distributions in order to allow observed covariates to influence both topical prevalence (the topic proportions, $\theta_i$) and topical content (how the vocabulary characterizes each topic). Roberts et al. (2018) advocate using the STM formulation rather than the classical LDA model in applications of matching because it allows us to leverage information about how treatment assignment may relate to the use of specific terms within topics to generate more precise estimates of the topic proportions that will be used for matching. (For estimation of an STM in applications of matching, a model is specified for the corpus using the treatment indicator as a covariate for both topical prevalence and topical content. Specification of the topical prevalence covariate allows the treatment and control groups to have different estimated distributions of the rates of coverage of each topic. Similarly, specification of the content covariate implies that the same topic may have different estimated distributions of words depending on treatment assignment. After fitting an STM with this specification, the vector of fitted topic proportions, $\hat{\theta}_i$, is re-estimated for all control documents as if they were treated. This important step is implemented to ensure that topics are comparable between treatment and control groups, and is consistent with inference on a causal estimand defined as the ATT.)

The main challenge that arises when matching using a representation built from a topic model, based on LDA, STM, or any of a number of related variants, is the instability of the estimated topic proportions. Consistent estimation of topic proportions is notoriously difficult due to issues with multimodality of topic models and gives rise to a number of issues for applications of matching in practice (Roberts et al., 2016). Matching on estimated topic proportions also requires the assumption that all of the information that is relevant for determining document similarity is completely captured within the estimated topics. If an estimated topic model does not reflect meaningful themes within the text, then matching on estimated topic proportions may not be useful for generating samples of documents that appear similar when evaluated manually by a human reader. This strong assumption is less necessary when matching using a representation built from the TDM. Of course, when the assumption holds, topic models provide an efficient strategy for considerably reducing the dimension of the covariates while retaining all information that is relevant for matching. In contrast to the tens of thousands of covariates typically defined using a representation based on the TDM, representations built using topic models typically contain no more than a few hundred covariates at most.

a.1.3 Representations based on neural network embeddings

Mikolov et al. (2013) introduce a neural network architecture to embed each word in an $n$-dimensional space based on its usage and the words that commonly surround it. This architecture has proven remarkably powerful, with many intriguing properties. For example, it performs very well in a series of "linguistic algebra" tasks, successfully solving analogy questions like "Japan" : "sushi" :: "Germany" : "bratwurst."

a.1.4 Propensity scores

When matching in settings with multiple covariates, a common technique is to first perform dimension reduction to project the multivariate covariates into a univariate space. A popular tool used for this purpose is the propensity score, defined as the probability of receiving treatment given the observed covariates (Rosenbaum and Rubin, 1983). Propensity scores summarize all of the covariates into one scalar, and matching is then performed by identifying groups of units with similar values of this score. In practice, propensity scores are generally not known to the researcher and must be estimated using the observed data. This is typically performed using logistic regression.

When applied to text, propensity scores can be used to further condense the information within a chosen higher-dimensional representation into a summary of only the information that is relevant for determining treatment assignment. We can then perform matching on this univariate summary. Here, we consider matching documents on estimated propensity scores using both TDM-based and STM-based representations. For STM-based representations, where the number of topics, $K$, is less than the number of documents in the corpus, $n$, standard techniques can be used to estimate propensity scores. In this paper, we use a simple logistic regression of the treatment indicator on the estimated document-level topic proportions to calculate the estimated propensity score for each document. For TDM-based representations, because the size of the vocabulary is typically much larger than the number of documents within a corpus, standard techniques cannot be employed. To overcome this issue, we use Multinomial Inverse Regression (MNIR; Taddy, 2013), which provides a novel estimation technique for performing logistic regression of phrase counts from the TDM onto the treatment indicator. After estimating this model, we can calculate a sufficient reduction score that, in principle, contains all the information from the TDM that is relevant for predicting treatment assignment. Performing a forward regression of the treatment indicator on this sufficient reduction score produces the desired propensity score estimates.
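For the STM-based case, the propensity score step reduces to a standard logistic regression of the treatment indicator on the estimated topic proportions. A minimal sketch with simulated topic proportions (illustrative data, not a fitted topic model):

```python
# Estimate propensity scores by logistic regression of the treatment
# indicator on (here, simulated) document-level topic proportions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k = 200, 10
theta = rng.dirichlet(alpha=np.ones(k), size=n)     # stand-in for estimated topic proportions
treat = rng.binomial(1, 1 / (1 + np.exp(-5 * (theta[:, 0] - theta[:, 1]))))

model = LogisticRegression(max_iter=1000).fit(theta, treat)
pscores = model.predict_proba(theta)[:, 1]          # one scalar score per document
print(pscores[:5])
```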

a.2 Design choices for representations

Here we discuss a number of design choices that are required for the different representations considered in our study.

TDM-based representations.

Each of the TDM-based representations is characterized by a bounding scheme, which determines the subset of the vocabulary included in the TDM, and a weighting scheme, which determines the numerical rule by which the entries of the TDM are measured. We consider standard term-frequency (TF) weighting, TF-IDF weighting, and L2-rescaled TF-IDF weighting. We also consider a number of different bounding schemes, including no bounding, schemes that eliminate high- and low-frequency terms, and schemes that consider only high- or low-frequency terms.

STM-based representations.

Each STM-based representation is characterized by a fixed number of topics ($K$ = 10, 30, 50, or 100) and takes one of three distinct forms: 1) the vector of estimated topic proportions ("S1"), 2) the vector of estimated topic proportions together with the SR score ("S2"), or 3) a coarsened version of the vector of estimated topic proportions ("S3"). This coarsened representation is constructed using the following procedure. For each document, we first identify the three topics with the largest estimated topic proportions. We retain and standardize these three values and set all remaining topic proportions equal to 0, so that the resulting vector of coarsened topic proportions, $\tilde{\theta}_i$, contains only three non-zero elements. We then calculate the "focus" of each document, denoted $\phi_i$, a metric we define as the proportion of topical content that is explained by the three most prominent topics. Focus scores close to one indicate content that is highly concentrated on a small number of topics (e.g., a news article covering health care reform may have nearly 100% of its content focused on the topics of health and policy); conversely, focus scores close to zero indicate more general content covering a wide range of topics (e.g., a news article entitled "The ten events that shaped 2017" may have content spread evenly across ten or more distinct topics). To estimate this score for each document, we take the sum of the raw values of the three non-zero topic proportions identified above (i.e., $\hat{\phi}_i = \hat{\theta}_{i(K)} + \hat{\theta}_{i(K-1)} + \hat{\theta}_{i(K-2)}$, where $\hat{\theta}_{i(j)}$ is the $j$th order statistic of the vector $\hat{\theta}_i$). Appending this estimated focus score to the coarsened topic proportion vector produces the final $(K+1)$-dimensional representation.
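A short sketch of this coarsening procedure follows, under the reading that "standardize" means rescaling the three retained proportions to sum to one:

```python
# Coarsened STM representation ("S3"): keep the three largest topic
# proportions, rescale them to sum to one, zero out the rest, and append the
# "focus" score (the raw share of content in those three topics).
import numpy as np

def coarsen_with_focus(theta):
    """theta: length-K vector of estimated topic proportions for one document."""
    top3 = np.argsort(theta)[-3:]                 # indices of the 3 largest topics
    focus = theta[top3].sum()                     # share explained by the top 3 topics
    coarse = np.zeros_like(theta)
    coarse[top3] = theta[top3] / focus            # standardize the retained values
    return np.append(coarse, focus)               # (K+1)-dimensional representation

theta = np.array([0.05, 0.40, 0.02, 0.30, 0.03, 0.20])
print(coarsen_with_focus(theta))
```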

TIRM representations.

The TIRM procedure of Roberts et al. (2018) uses an STM-based representation with an additional representation based on document-level propensity scores estimated using the STM framework. These separate representations are then combined within the TIRM procedure using a CEM distance. Each variant of the TIRM procedure considered in this paper is characterized by a fixed number of topics and a set coarsening level (2 bins, 3 bins, or 4 bins).

Word Embedding representations.

Google and Stanford University have released a variety of pre-trained word embedding models. Google's GoogleNews model, in which each word vector has length 300 and was trained on a corpus of roughly 100 billion words drawn from Google News, is therefore especially well-suited to our analysis. We also consider several of Stanford's GloVe embeddings (Pennington et al., 2014); in particular, we employ their models with word vectors of length 50, 100, 200, and 300. For each of these five embeddings, we produce document-level vectors by taking the weighted average of all word vectors in a document (Kusner et al., 2015).
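A minimal sketch of this document-vector construction is shown below, with a tiny hypothetical embedding dictionary standing in for the GoogleNews or GloVe vectors:

```python
# Build a document vector as the average of its word vectors; repeated tokens
# are averaged in as often as they appear, giving a frequency-weighted mean.
import numpy as np

embeddings = {                      # hypothetical 4-dimensional word vectors
    "obama": np.array([0.2, -0.1, 0.5, 0.0]),
    "spoke": np.array([0.0, 0.3, -0.2, 0.1]),
    "health": np.array([0.4, 0.1, 0.0, -0.3]),
}

def doc_vector(tokens, embeddings):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:                    # no known tokens: return the zero vector
        return np.zeros(len(next(iter(embeddings.values()))))
    return np.mean(vecs, axis=0)

print(doc_vector("obama spoke about health".split(), embeddings))
```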

a.3 Defining a distance metric

After a representation is chosen, applying this representation to the corpus generates a finite set of numerical covariate values associated with each document (i.e., $\mathbf{x}_i$ denotes the covariates observed for document $i$, for all $i = 1, \ldots, n$). The next step in the matching procedure concerns how to use these covariate values to quantify the similarity between two documents. There are two main classes of distance metrics. Exact and coarsened exact distances regard distances as binary: the distance between two units is either zero or infinity, and two units are eligible to be matched only if the distance between them is equal to zero. Alternatively, continuous distance metrics define distance on a continuum, and matching typically proceeds by identifying pairs of units for whom the calculated distance is within some allowable threshold ("caliper").

a.3.1 Exact and coarsened exact distances

The exact distance is defined as:

$$D(i, j) = \begin{cases} 0 & \text{if } \mathbf{x}_i = \mathbf{x}_j, \\ \infty & \text{otherwise.} \end{cases}$$

Matching over this metric (exact matching) generates pairs of documents between treatment and control groups that match exactly on every covariate. Although this is the ideal, exact matching is typically not possible in practice with more than a few covariates. A more flexible metric can be defined by first coarsening the covariate values into "substantively indistinguishable" bins, then using exact distance within these bins (Iacus et al., 2012). For example, using a topic-model-based representation, one might define a coarsening rule such that documents will be matched if they share the same primary topic (i.e., if the topic with the maximum estimated topic proportion among the $K$ topics is the same for both documents). Roberts et al. (2018) advocate using CEM for matching documents based on a representation built using an STM, but, in principle, this technique can also be used with TDM-based representations. For example, one might coarsen the term counts of a TDM into binary values indicating whether each term in the vocabulary is used within each document. Though possible in principle, coarsening does not scale well with the dimension of the covariates and so may not be practical for matching with TDM-based representations. This type of distance specification may also create sensitivities in the matching procedure, since even minor changes in the coarsening rules can dramatically impact the resulting matched samples.

a.3.2 Continuous distances

Various continuous distance metrics can be used for matching, including linear distances based on the (estimated) propensity score or best linear discriminant (Rosenbaum and Rubin, 1983), multivariate metrics such as the Mahalanobis metric (Rubin, 1973a), or combined metrics, such as methods that match on the Mahalanobis metric within propensity score calipers (Rosenbaum and Rubin, 1985). When matching on covariates defined by text data, care must be taken to define a metric that appropriately captures the complexities of text. For instance, linear distance metrics such as Euclidean distance may fail to capture information about the relative importance of different covariates. To make this concrete, consider two pairs of documents containing the texts "obama spoke", "obama wrote" and "he spoke", "he wrote". Under a TDM-based representation, the Euclidean distances between the units in each of these pairs are equal; however, the first pair of documents is intuitively more similar than the second, since the term "obama" contains more information about the content of the documents than the term "he". Similarly, the Euclidean distance between the pair of documents "obama spoke", "obama obama" is equivalent to the distance between the pair "obama spoke", "he wrote", since under this metric distance increases linearly with differences in term frequencies. These issues also arise when using linear distance metrics with topic-model-based representations.

A metric that is less vulnerable to these complications is Mahalanobis distance, which defines the distance between documents $i$ and $j$ as $D(i, j) = \sqrt{(\mathbf{x}_i - \mathbf{x}_j)^\top \Sigma^{-1} (\mathbf{x}_i - \mathbf{x}_j)}$, where $\Sigma$ is the variance-covariance matrix of the covariates. This is essentially a normalized Euclidean distance, which weights covariates according to their relative influence on the total variation across all documents in the corpus. Calculating Mahalanobis distance is practical for lower-dimensional representations, but because the matrix inversion does not scale well with the dimension of $\Sigma$, it may not be computationally feasible for matching using larger, TDM-based representations.

An alternative metric, which can be efficiently computed using representations defined over thousands of covariates, is cosine distance. Cosine distance measures the cosine of the angle between two documents in a vector space:

$$D(i, j) = 1 - \frac{\mathbf{x}_i \cdot \mathbf{x}_j}{\lVert \mathbf{x}_i \rVert \, \lVert \mathbf{x}_j \rVert}.$$

Cosine distance is commonly used for determining text similarity in fields such as information retrieval and is an appealing choice for matching because, irrespective of the dimension of the representation, it captures interpretable overall differences in covariate values (e.g., a cosine distance of one corresponds to a 90 degree angle between documents, suggesting no similarity and no shared vocabulary). In general, the utility of a particular continuous distance metric will largely depend on the distribution that is induced on the covariates through the representation.

a.3.3 Calipers and combinations of metrics

When pruning treated units is acceptable, exact and coarsened exact matching methods have the desirable property that the balance achieved between matched samples is established a-priori. Treated units for which there is at least one exact or coarsened exact match in the control group are matched, and all other treated units are dropped. On the other hand, matching with a continuous distance metric requires tuning after distances have been calculated in order to bound the balance between matched samples. After the distances between all possible pairings of treated and control documents have been calculated, one chooses a caliper, $\delta$, such that any pair of units $i$ and $j$ with distance $D(i, j) > \delta$ cannot be matched. Here, when pruning treated units is acceptable, any treated units without at least one potential match are dropped. Calipers are typically specified according to a "rule of thumb" that sets $\delta$ equal to 0.25 or 0.5 times the standard deviation of the distribution of distance values over all possible pairs of treated and control units, but in some special cases the caliper can be chosen to reflect a more interpretable restriction. For example, using the cosine distance metric, one might choose a caliper to bound the maximum allowable angle between matched documents.
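A small sketch of this rule-of-thumb caliper applied to a treated-by-control distance matrix (with random distances standing in for real ones):

```python
# Set the caliper to 0.25 (or 0.5) times the standard deviation of all
# treated-control distances, then identify which treated units retain at
# least one eligible control.
import numpy as np

def apply_caliper(D, multiplier=0.25):
    """D: (n_treated x n_control) matrix of pairwise distances."""
    caliper = multiplier * D.std()
    eligible = D <= caliper                      # allowable pairings
    kept = np.where(eligible.any(axis=1))[0]     # treated units with >= 1 match
    return caliper, eligible, kept

D = np.random.default_rng(1).uniform(0, 1, size=(5, 8))
caliper, eligible, kept = apply_caliper(D)
print(caliper, kept)
```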

a.4 Text as covariates and outcomes

The procedure described in Section 3 is relatively straightforward to apply in studies where text enters the problem only through the covariates. However, in more complicated settings where both the covariates and one or more outcomes are defined by features of text, additional steps may be necessary to ensure these components are adequately separated.

In practice it is generally recommended that outcome data be removed from the dataset entirely before beginning the matching process to preclude even the appearance of “fishing,” whereby a researcher selects a matching procedure or a particular matched sample that leads to a desirable result (Rubin, 2007). However, this may not be possible when evaluating a text corpus, since both the covariates and outcome may often be latent features of the text (Egami et al., 2017). For instance, suppose we are interested in comparing the level of positive sentiment within articles based on the gender of the authors. One can imagine that news articles that report incidences of crime will typically reflect lower levels of positive sentiment than articles reporting on holiday activities, regardless of the gender of the reporter. Thus, we might like to match articles between male and female reporters based on their topical content and then compare matched samples on their systematic use of sentiment across a canonical set of topics. Here, we must extract both the set of covariates that will be used for matching (i.e., topical content) and the outcome (level of positive sentiment) from the same observed text. Because these different components may often be related, measuring both using the same data poses two important challenges for causal inference: first, it requires that the researcher use the observed data to posit a model on the “post-treatment” outcome, and, second, measurement of the covariates creates potential for fishing. In particular, suppose that positive sentiment is defined for each document as the number of times terms such as “happy” are used within that document (standardized by each document’s length). Suppose also that we use the entire vocabulary to measure covariate values for each document (e.g., using a statistical topic model). In this scenario, matching on topical content is likely to produce matches that have similar rates of usage of the term “happy” (in addition to having similar rates of usage of other terms), which may actually diminish our ability to detect differences in sentiment.

To address this issue, we recommend that researchers interested in inference in these settings define the covariates and outcome over a particular representation, or over a set of distinct representations, such that measurement of the outcome is unrelated to measurement of the covariate values. This can often be accomplished using standard text pre-processing techniques. For example, one might measure the covariates using a representation of the text defined over only nouns and, separately, measure outcome values using a representation defined over only adjectives. Or, continuing the previous example, one might divide the vocabulary into distinct subsets of terms, where one subset is used to measure topical content and the other is used to measure positive sentiment. In settings where the chosen representation of the text must be inferred from the observed data (e.g., topic-model-based representations), cross-validation techniques can also be employed, as described in Egami et al. (2017). For instance, one might randomly divide the corpus into a training set and a test set, where the training set is used to build a model for the representation, and this model is then applied to the test set to obtain covariate values that will be used in the matching procedure.
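One simple way to enforce this separation is to build the covariate and outcome measures from disjoint term lists; the lists in the sketch below are purely illustrative:

```python
# Measure matching covariates on one subset of the vocabulary (topic terms)
# and the outcome on a disjoint subset (sentiment terms).
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the happy crowd cheered the health care bill",
          "the grim report described the plane crash"]

topic_terms = ["health", "care", "bill", "plane", "crash"]       # covariates
sentiment_terms = ["happy", "grim", "cheered"]                   # outcome

covariates = CountVectorizer(vocabulary=topic_terms).fit_transform(corpus)
outcome = CountVectorizer(vocabulary=sentiment_terms).fit_transform(corpus)
print(covariates.toarray(), outcome.toarray(), sep="\n")
```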

Appendix B Index of representations evaluated

Type Name Description Dimension
TDM T1 TF Bounded from 4-1000 10726
T2 TF-IDF Bounded from 4-1000 10726
T3 TF-IDF Bounded from 4-100 9413
T4 TF-IDF Bounded from 4-10 4879
T5 TF-IDF Bounded from 10-500 6000
T6 TF-IDF Bounded from 500-1000 154
T7 L2 Rescaled TF-IDF Bounded from 4-1000 10726
T8 TF on unbounded TDM 34397
T9 TF-IDF on unbounded TDM 34397
STM S1-10 STM on 10 Topics 10
S2-10 10 Topics + estimated sufficient reduction 11
S3-10 10 Topics, top 3 topics + focus 11
S1-30 30 Topics 30
S2-30 30 Topics + estimated sufficient reduction 31
S3-30 30 Topics, top 3 topics + focus 31
S1-50 50 Topics 50
S2-50 50 Topics + estimated sufficient reduction 51
S3-50 50 Topics, top 3 topics + focus 51
S1-100 100 Topics 100
S2-100 100 Topics + estimated sufficient reduction 101
S3-100 100 Topics, top 3 topics + focus 101
Word2Vec W1 Word embedding of dimension 50 (GloVe) 50
W2 Word embedding of dimension 100 (GloVe) 100
W3 Word embedding of dimension 200 (GloVe) 200
W4 Word embedding of dimension 300 (GloVe) 300
W5 Word embedding of dimension 300 (GoogleNews) 300
Table 2: Specification of the 26 representations considered

Appendix C Survey used in human evaluation experiment

The figures below show snapshots of different components of the survey, including the survey landing page, the scoring rubric presented to participants, and an example of one of the three training tasks.

Figure 6: In the first component of the survey, participants were informed about the nature of the task.
Figure 7: In the second component of the survey, participants were presented with a scoring rubric to use as a guide for determining the similarity of a pair of documents.
Figure 8: In the first training task of the survey, participants were asked to read and score a pair of articles and were then informed that the anticipated score for this pair was zero.

Appendix D Sensitivity of match quality scores to the population of respondents

To determine the generalizability of the match quality ratings obtained from our survey experiment, we compare two identical pilot surveys using respondents from two distinct populations. The first pilot survey was administered through Mechanical Turk, and the second was administered through the Digital Laboratory for the Social Sciences (DLABSS; Enos et al., 2016b). For each survey, respondents were asked to read and evaluate ten paired articles, including one attention check and one anchoring question. Each respondent was randomly assigned to evaluate eight matched pairs from a sample of 200, where this pilot sample was generated using the same weighted sampling scheme described above. The figure below shows the average match quality scores for each of the 200 matched pairs evaluated, based on samples of 337 respondents from Mechanical Turk and 226 respondents from DLABSS. The large correlation between the average match quality scores from the two samples ($r$ = 0.88) suggests that our survey is a useful instrument for generating consistent average ratings of match quality across diverse populations of respondents. In particular, even though individual conceptions of match quality may differ across respondents, the average of these conceptions appears both to meaningfully separate the pairs of documents and to be stable across at least two different populations.

Figure 9: The strong linear relationship between the average match quality scores for 200 pairs of articles evaluated in two separate pilot studies (solid line) compared to a perfect fit (dotted line) suggests that the survey produces consistent results across samples, when averaged across multiple respondents.

Appendix E Technical details of measuring match quality and uncertainty estimation

e.1 Estimating match quality

Let $p = (i, j)$ denote a potential pairing of treatment and control documents, where $i$ is the index of the treated unit and $j$ is the index of the control unit; in our study, these are the Fox News and CNN articles used in the evaluation experiment. For matching procedure $m$, let $\mathcal{M}_m$ denote the set of matched pairs of articles identified using procedure $m$. The set of all unique pairs selected by any of the procedures considered in the evaluation experiment, denoted $\mathcal{M}$, is defined by the union of these subsets:

$$\mathcal{M} = \bigcup_{m} \mathcal{M}_m.$$

The frequency of each pair of $\mathcal{M}$, indexed by $p$, is

$$f_p = \sum_{m} \mathbb{1}\{p \in \mathcal{M}_m\},$$

where $\mathbb{1}\{p \in \mathcal{M}_m\}$ is an indicator variable taking value 1 if pair $p$ is identified using matching procedure $m$ and 0 otherwise.

To produce a representative sample of matched articles for evaluation, we take a weighted random sample of 500 pairs from $\mathcal{M}$, where the weights are roughly proportional to $f_p$. In our study, since singleton pairs ($f_p = 1$) comprise over 75% of the pairs in $\mathcal{M}$, we further downweight pairs with $f_p = 1$ by a factor of 5. Our overall sampling probabilities for pair $p$ are then

$$\pi_p \propto \begin{cases} f_p / 5 & \text{if } f_p = 1, \\ f_p & \text{if } f_p > 1. \end{cases}$$

This scheme intentionally induces selection bias into the sample by discouraging rare pairs, especially singleton pairs, which are expected to be low quality with little variability, in favor of pairs that are identified by multiple matching procedures. Because these sampling weights are fixed a-priori, we can construct adjusted estimates of match quality, using classic survey sampling methods (Sarndal et al., 2003), to remove this bias.

Let $\mathcal{S}$ denote the subset of 500 pairs selected in this random sample, and define $S_p$ as a binary random variable indicating that pair $p$ was selected and is in the sample. Due to the very high weights for some pairs, coupled with sampling without replacement, the probability of a pair being included in the sample was not directly proportional to its weight $w_p$. To calculate the true inclusion probabilities $\pi_p$, we simulated this sampling step 100,000 times and calculated the empirical chance of each pair being sampled, giving adjusted, true values of $\pi_p$ for the adjustments described above.
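A simple way to approximate these inclusion probabilities is direct simulation of the design (a sketch; the sampling design is as assumed in the sketches above):

```python
# Sketch: Monte Carlo approximation of the true inclusion probabilities pi_p
# under weighted sampling without replacement.
import numpy as np

def inclusion_probabilities(probs, n_sample, n_sims=100_000, seed=1):
    """probs: normalized sampling weights over all pairs in the frame."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(probs))
    for _ in range(n_sims):
        idx = rng.choice(len(probs), size=n_sample, replace=False, p=probs)
        counts[idx] += 1
    return counts / n_sims  # estimated pi_p for each pair
```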

For each element of $\mathcal{S}$ (each matched pair in the sample), we observe some number of similarity ratings, $n_p$. We estimate the match quality of each pair using the average of the observed ratings for that pair, $\bar{y}_p$. We then estimate the average match quality for matching procedure $m$ using a weighted average of these match quality estimates across the pairs contained in $\mathcal{M}_m \cap \mathcal{S}$, where the weight for each pair is the inverse of its probability of being sampled.

These estimates of procedure-level average match quality based on the sample of 500 pairs provide some insight into the relative performance of different procedures with respect to match quality. However, estimating the average match quality for each procedure using only a small proportion of its pairs (i.e., only those pairs in $\mathcal{M}_m \cap \mathcal{S}$) may lead to a considerable loss of precision. To address this issue, we developed our model for predicting the match quality of a pair of documents based on different machine measures of similarity, which we then use to construct estimators that are defined over the entire set of matched pairs $\mathcal{M}_m$. Specifically, we use a linear model trained on the pairs in $\mathcal{S}$ to calculate a predicted match quality, $\hat{y}_p$, for all $p \in \mathcal{P}$. Aggregating these predictions allows us to estimate match quality in a manner that captures information about the entire set of matched pairs identified by each procedure.

e.2 Uncertainty estimation

There are several sources of uncertainty in estimating the overall average quality of the different methods considered. Each matching method selected some number of pairs of documents as matches. For each method, we are interested in estimating the average quality of all the pairs selected by that method. Using classic survey sampling results we can estimate this average quality with the estimated quality of our sampled pairs.

However, due to our sampling strategy, which weighted pairs that were used by multiple methods more heavily, some of the methods we consider have very few sampled pairs, and so estimates of their overall quality could be quite unreliable. We can improve the precision of these estimates using a version of model-assisted estimation, which we describe next. Following this, we describe how we assess the uncertainty of all of our estimates.

There are three different estimates of method quality that we consider. First, for a method that selected $n$ pairs, indexed $p = 1, \dots, n$, our survey sampling estimate of quality is

$$\hat{Q} = \frac{1}{W} \sum_{p=1}^{n} \frac{S_p \, \bar{y}_p}{\pi_p}, \qquad W = \sum_{p=1}^{n} \frac{S_p}{\pi_p},$$

with $S_p$ an indicator of whether pair $p$ was sampled for evaluation, $\pi_p$ its sampling probability, and $W$ the normalizing constant. (In this section we describe evaluating a single method and so do not include subscripts for the method; we evaluated each method independently using those pairs in the set of evaluated pairs that were selected by that method.)
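In code, this survey-sampling estimate is a standard inverse-probability-weighted average (a sketch with illustrative variable names, not the authors' implementation):

```python
# Sketch: IPW (Hajek-style) estimate of a method's average match quality.
import numpy as np

def ipw_quality(sampled, ratings, pi):
    """sampled: boolean array over the method's pairs (pair was evaluated);
    ratings: average human rating per pair (may be NaN where not sampled);
    pi: inclusion probability per pair."""
    sampled = np.asarray(sampled, dtype=bool)
    pi = np.asarray(pi, dtype=float)
    y = np.nan_to_num(np.asarray(ratings, dtype=float))  # zeros where unsampled
    w = sampled / pi
    return float(np.sum(w * y) / np.sum(w))
```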

To improve the precision of our quality estimates, we used a model-assisted survey sampling approach. We first fit a sparse regression model to the 520 evaluated pairs (the 500 sampled matched pairs plus 20 random pairs included to give an overall baseline). We then, for each method, predicted the quality of each of its selected pairs. Finally, we calculate the model-adjusted quality as

$$\hat{Q}^{\text{adj}} = \frac{1}{n} \sum_{p=1}^{n} \hat{y}_p + \frac{1}{W} \sum_{p=1}^{n} \frac{S_p \,(\bar{y}_p - \hat{y}_p)}{\pi_p}.$$

Here $\hat{y}_p$ is the predicted quality and $\bar{y}_p$ is the measured quality (the average over those respondents who evaluated the pair). This is our second measure of method quality.

Finally, for methods that we did not initially identify for our human evaluation, we calculated a predicted quality based on our model:

$$\hat{Q}^{\text{pred}} = \frac{1}{n} \sum_{p=1}^{n} \hat{y}_p.$$

Even if these methods happened to include some pairs that were randomly selected for evaluation, we cannot use the survey-adjusted estimate, since pairs not in the sampling frame had no chance of selection. We do not split the sample into potentially sampled pairs and the remainder to create a hybrid estimator, as that additional precision is unnecessary here.
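The second and third measures can be sketched in the same style (the exact form of the model adjustment is our reading of the description above, so treat this as an illustration rather than the authors' code):

```python
# Sketch: model-assisted and purely model-based estimates of method quality,
# given predictions y_hat from a sparse regression of ratings on machine
# similarity measures.
import numpy as np

def model_adjusted_quality(sampled, ratings, pi, y_hat):
    """Mean prediction over all of the method's pairs, plus an IPW
    correction computed from the evaluated pairs."""
    sampled = np.asarray(sampled, dtype=bool)
    pi = np.asarray(pi, dtype=float)
    y = np.nan_to_num(np.asarray(ratings, dtype=float))
    y_hat = np.asarray(y_hat, dtype=float)
    w = sampled / pi
    correction = np.sum(w * (y - y_hat)) / np.sum(w)
    return float(y_hat.mean() + correction)

def predicted_quality(y_hat):
    """For methods with no evaluated pairs: average predicted quality only."""
    return float(np.mean(y_hat))
```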

Calculating Standard Errors.

We need uncertainty measures (standard errors) for our three different measures of quality. We build our strategy on the principles of a case-wise bootstrap, with some modifications. In particular, for those methods with very few (2-3) sampled pairs, estimating the variability of pair quality via a case-wise bootstrap is unreliable unless we pool or partially pool estimates of variability across the different methods.

To see this, consider a method with 4 sampled pairs: 1 with very high weight because it is a rare pair, and 3 with low weight because they were selected by most methods. Any bootstrap sample including the high-weight unit will give an average quality score close to that of the high-weight unit, which does not reflect the variability of scores we might see across other units of similar weight.

For the unadjusted quality scores, we first calculated an estimate of the standard deviation of scores within a given matching method. We did this by calculating the weighted standard deviation of scores for each of our methods and then taking the median of these values; we use the median to avoid the impact of the extreme standard deviations produced by the methods with small samples of pairs. Finally, we simulated the pair-sampling step followed by the scoring step: we first selected pairs using the original sampling strategy and then generated zero-centered pseudo-quality scores with this pooled standard deviation. We then calculated the pseudo-quality of each of our methods based on these scores. Our standard errors are the standard deviations of the generated pseudo-quality scores across simulations.
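A sketch of this simulation follows (the pooled standard deviation sd_pool and the sampling design from the earlier sketches are assumed inputs; variable names are illustrative):

```python
# Sketch: simulation-based standard error for one method's unadjusted quality,
# using zero-centered pseudo-quality scores drawn with a pooled SD.
import numpy as np

def simulated_se(probs, pi, n_sample, method_mask, sd_pool, n_sims=2000, seed=2):
    """method_mask: boolean array marking which pairs in the frame belong
    to the method being evaluated."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_sims):
        idx = rng.choice(len(probs), size=n_sample, replace=False, p=probs)
        sampled = np.zeros(len(probs), dtype=bool)
        sampled[idx] = True
        pseudo = rng.normal(0.0, sd_pool, size=len(probs))  # zero-centered scores
        keep = sampled & np.asarray(method_mask, dtype=bool)
        if not keep.any():
            continue  # the method had no evaluated pairs in this simulated draw
        w = keep / np.asarray(pi, dtype=float)
        estimates.append(np.sum(w * pseudo) / np.sum(w))
    return float(np.std(estimates, ddof=1))
```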

By comparison, we also conducted a simple case-wise bootstrap. Here we sampled the evaluated pairs with replacement and calculated each method's quality score using the bootstrap sample, obtaining standard errors from the standard deviation of the resulting values. Our parametric approach generally produced larger standard errors, reflecting both the overall conservatism of our approach and the aforementioned issue that the naive approach gives small standard errors for methods with few pairs, where high-weight pairs dominate the overall quality measure and thereby reduce the apparent variability of the estimates. We thus report our simulation-based standard errors.
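For comparison, the naive case-wise bootstrap looks roughly like this (again a sketch with illustrative names):

```python
# Sketch: case-wise bootstrap standard error for one method, resampling its
# evaluated pairs with replacement and recomputing the IPW quality each time.
import numpy as np

def bootstrap_se(sampled_ratings, sampled_pi, n_boot=2000, seed=3):
    rng = np.random.default_rng(seed)
    y = np.asarray(sampled_ratings, dtype=float)
    pi = np.asarray(sampled_pi, dtype=float)
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample evaluated pairs
        w = 1.0 / pi[idx]
        stats.append(np.sum(w * y[idx]) / np.sum(w))
    return float(np.std(stats, ddof=1))
```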

For the model-adjusted and predictive approaches, we case-wise bootstrapped the sample, re-fit the predictive model on the bootstrapped set (using the same regularization found in the initial estimation step), and then calculated the final quality scores. For the model-adjusted case, we were again concerned that methods with few sampled pairs would show too little variability, because a small number of high-weight units gives nearly the same model adjustment in each bootstrap iteration. We therefore generated synthetic residuals to capture how variable the model adjustment could be, drawing normally distributed noise with variance equal to the variance of the original residuals from our predictive model. These simulated residual-based standard errors were again conservative compared to the naive case-wise approach.

We note that the standard errors for the purely predictive estimates, obtained by examining how the predictions varied across bootstrap samples, were overly optimistic: they are standard errors for the predicted quality of the method rather than for the quality itself. We therefore use the simulated synthetic residuals from the unadjusted sample as a rough baseline for this uncertainty. (We cannot assess the true uncertainty due to a lack of data supporting the representativeness of the model's predicted quality scores for the pairs selected by the new methods.)

All of our methods capture the uncertainty in the pair-quality evaluation process, as the variability of the pairs' quality scores reflects both the measurement error and the structural variation of the pairs themselves. In our plots, we generally report the simulation-based standard errors for the model-adjusted estimates in order to be maximally conservative. As noted in the text, the model-adjusted quality scores were generally similar to the unadjusted scores, and the differences between the two had no impact on our overall findings.

Appendix F Sensitivity analyses for template matching

To evaluate the robustness of our findings, we performed a series of sensitivity checks to assess how our results and subsequent conclusions may change under different specifications of the matching procedure. Figure 10 shows the results produced by two alternative matching methods. These robustness checks highlight the importance of the specification of the matching procedure: weaker methods (i.e., methods that produce lower-quality matches) typically lead to weaker inferences. For example, template matching using the Mahalanobis distance metric on a vector of 100 topic proportions produces generally smaller changes in average favorability within each source before and after matching than the results shown in Figure 4. In comparison, a procedure that selects matches at random for each news source shows effectively no differences in average favorability after matching. The null results in this case provide further evidence in support of the claim that text matching is an effective strategy for reducing differences in the observed biases across news sources that are due to topic selection.

Figure 10: Estimates of average favorability toward Democrats (blue) and Republicans (red) for each source both before and after matching using Mahalanobis matching on an STM with 100 topics (left) and using a random matching procedure (right).

As a final robustness check of the results based on our template-matched sample, we performed the following consistency test. First, we randomly generated 10,000 sets of matched documents, each containing 150 articles from each news source. In each iteration of this random sampling and for each news source, we then calculated average favorability scores toward Democrats and Republicans comparable to those in the template-matched sample. Finally, we calculated the average change in favorability (in absolute value) before and after matching across all 13 sources. Figure F shows the observed average change in favorability based on the template matching procedure described in Section 5, compared to the sampling distribution of this test statistic. This test suggests that template matching removes a significant amount of the component of bias that is due to differences in topic selection (p = 0.004).
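The logic of this consistency test can be summarized as follows (a simplified sketch: the article-level favorability scores are assumed to be available, the Democrat and Republican scores are collapsed into a single change measure for brevity, and the direction of the p-value comparison is our reading of the test):

```python
# Sketch: randomization test comparing the observed post-matching change in
# favorability to the distribution obtained from random samples of articles.
import numpy as np

def consistency_test(favorability_by_source, observed_change,
                     n_per_source=150, n_iter=10_000, seed=4):
    """favorability_by_source: dict mapping source -> array of article-level
    favorability scores. Returns the p-value for the observed average
    absolute change in favorability after template matching."""
    rng = np.random.default_rng(seed)
    null_stats = np.empty(n_iter)
    for b in range(n_iter):
        changes = []
        for scores in favorability_by_source.values():
            scores = np.asarray(scores, dtype=float)
            draw = rng.choice(scores, size=n_per_source, replace=False)
            changes.append(abs(draw.mean() - scores.mean()))
        null_stats[b] = np.mean(changes)
    # proportion of random samples with a change at least as large as observed
    return float(np.mean(null_stats >= observed_change))
```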