Fairness-Preserving Text Summarization

As the amount of textual information grows rapidly, text summarization algorithms are increasingly being used to provide users a quick overview of the information content. Traditionally, summarization algorithms have been evaluated only based on how well they match human-written summaries (as measured by ROUGE scores). In this work, we propose to evaluate summarization algorithms from a completely new perspective. Considering that an extractive summarization algorithm selects a subset of the textual units in the input data for inclusion in the summary, we investigate whether this selection is fair or not. Specifically, if the data to be summarized come from (or cover) different socially salient groups (e.g., men or women, Caucasians or African-Americans), different political groups (Republicans or Democrats), or different news media sources, then we check whether the generated summaries fairly represent these different groups or sources. Our evaluation over several real-world datasets shows that existing summarization algorithms often represent the groups very differently compared to their distributions in the input data. More importantly, some groups are frequently under-represented in the generated summaries. To reduce such adverse impacts, we propose a novel fairness-preserving summarization algorithm 'FairSumm' which produces high-quality summaries while ensuring fairness. To our knowledge, this is the first attempt to produce fair summarization, and is likely to open up an interesting research direction.


1. Introduction

Recently, there has been an explosion in the amount of user-generated information on the Web. To help Web users deal with this information overload, text summarization algorithms are commonly used to provide a quick overview of the textual information. Recognizing the business opportunity, many startups have mushroomed recently to offer content summarization services. For example, Agolo (https://www.agolo.com/splash) provides a summarization platform to extract the most relevant information from both public and private documents. Aylien (https://aylien.com/text-api/summarization/) and Resoomer (https://resoomer.com/en/) present the relevant points and topics from a piece of text. Multiple smartphone apps (e.g., News360, InShorts) have also been launched to provide short summaries of news stories.

A large number of text summarization algorithms have been devised, including algorithms to summarize a single large document, as well as algorithms for summarizing a set of documents (e.g., a set of microblogs or tweets); interested readers can check (Allahyari et al., 2017) for a survey of summarization algorithms. Most of these summarization algorithms are extractive in nature, i.e., they form the summary by extracting some of the textual units in the input (Gupta and Lehal, 2010) (e.g., individual sentences in a document, or individual tweets in a set of tweets). Additionally, a few abstractive algorithms have also been devised, which attempt to generate natural language summaries (Allahyari et al., 2017). In this paper, we restrict our focus to the more prevalent extractive summarization.

Extractive summarization algorithms essentially perform a selection of a (small) subset of the textual units in the input, for inclusion in the summary, based on some measure of the relative quality or importance of the textual units. Traditionally, these algorithms are judged by how closely the algorithmic summary matches gold standard summaries that are usually written by human annotators. To this end, measures such as ROUGE scores are used to evaluate the goodness of algorithmic summaries (Lin, 2004). The underlying assumption behind this traditional evaluation criterion is that the data to be summarized are homogeneous, and hence the sole focus of summarization algorithms should be to identify summary-worthy information.

However, user-generated content constitutes a large chunk of information generated on the Web today, and such content comes from users belonging to different social groups. For example, on social media, different user groups (e.g., men and women, Republicans and Democrats) discuss socio-political issues, and it has been observed that different groups express very different opinions on the same topic or event (Chakraborty et al., 2017). Hence, while summarizing such heterogeneous user-generated data, one needs to check whether the summaries are properly representing the opinions of these different social groups. Since the textual units (e.g., tweets) that are included in the summary get much more exposure than the rest of the information (similar to how top-ranked search results get much more exposure than other documents (Zehlike et al., 2017; Biega et al., 2018)), if a particular group is under-represented in the summary, their opinion will get much less exposure than the opinion of other groups.

Therefore, in this paper, we propose to look at summarization algorithms from a completely new perspective, and investigate whether the selection of the textual units in the summary is fair, i.e., whether the generated summary fairly represents every social group in the input data. We experiment over three datasets of tweets generated by different user groups (men and women, pro-Republican and pro-Democratic users). We find that most existing summarization algorithms do not fairly represent the different groups in the generated summaries, even though the tweets written by these groups are of comparable textual quality. More worryingly, some groups are found to be systematically under-represented in the process. We by no means argue that existing algorithms cause such under-representation explicitly; rather, it is an inadvertent consequence of the metrics that the algorithms try to optimise.

To reduce such unfairness, we develop two novel fairness-preserving summarization algorithms which select highly relevant textual units for the summary while maintaining fairness in the process. One of our algorithms is based on constrained sub-modular optimization (where the fairness criteria are applied as constraints), and the other is based on fair ranking of textual units according to some goodness measure. Extensive evaluations show that our proposed algorithms generate summaries whose quality is comparable to that of state-of-the-art summarization algorithms (which often do not generate fair summaries), while being fair to different user groups.

In summary, we make the following contributions in this paper: (1) ours is one of the first attempts to consider the notion of fairness in summarization, and the first work on fair summarization of textual information; (2) we show that existing summarization algorithms often do not fairly represent the input data while summarizing content generated by different user groups; and (3) we propose two summarization algorithms that produce summaries which are both of good quality and fair. The algorithms can accommodate different fairness notions, including equal representation, proportional representation, and so on. We plan to make the implementation of our proposed fair summarization algorithms publicly available upon acceptance of the paper.

We believe that this work will be an important addition to the growing literature on incorporating fairness in algorithmic systems. Generation of fair summaries would not only benefit the end users of the summaries, but also many downstream applications that use the summaries of crowdsourced information, e.g., summary-based opinion classification and rating inference systems (Lloret et al., 2010).

2. Related Work

In this section, we discuss three strands of prior work that are relevant to this paper. First, we focus on textual summarization. Then we turn our attention to potential causes and cases of bias in applications on user-generated content. Finally, we relate the present work to prior works on algorithmic fairness.

Text Summarization: A large number of text summarization algorithms have been proposed in the literature; the reader can refer to (Gupta and Lehal, 2010; Allahyari et al., 2017) for surveys. While most classical summarization algorithms were unsupervised, recent years have seen the proliferation of many supervised neural network-based models for summarization; the reader can refer to (Dong, 2018) for a survey of neural summarization models. One of the most commonly used classes of summarization algorithms is centered around the popular TF-IDF model (Salton, 1989); different works have used TF-IDF based similarities for summarization (Radev et al., 2002; Alguliev et al., 2011). Additionally, there has been a series of works where summarization is treated as a sub-modular optimization problem (Lin and Bilmes, 2011; Badanidiyuru et al., 2014). Algorithm I, proposed in this work, is also based on a constrained sub-modular optimization framework, and uses the notion of TF-IDF similarity.

Bias in applications on user-generated content: Powerful computational resources, along with the enormous amount of data from social media sites, have driven a growing school of works that use a combination of machine learning, natural language processing, statistics and network science for decision making. In (Baeza-Yates, 2018), Baeza-Yates discusses how human perceptions and societal biases creep into social media, and how different algorithms fortify them. These observations raise questions about bias in the decisions derived from such analyses. Friedman and Nissenbaum broadly categorized these biases into three classes, and were essentially the first to propose a framework for a comprehensive understanding of these biases (Friedman and Nissenbaum, 1996). Several recent works have further investigated different biases (demographic, ranking, position biases, etc.) and their effects on online social media (Bonchi et al., 2017; Chakraborty et al., 2017; Dupret and Piwowarski, 2008). Our observations in the present work show that summaries generated by existing algorithms (which do not consider fairness) can be biased towards/against socially salient demographic groups.

Fairness in information filtering algorithms: Given that information filtering algorithms (search, recommendation, summarization algorithms) have far-reaching social and economic consequences in today’s world, fairness and anti-discrimination have been recent inclusions in the algorithm design perspective (Hajian et al., 2016; Dwork et al., 2012). There have been several recent works on defining and achieving different notions of fairness (Pedreshi et al., 2008; Hardt et al., 2016; Zafar et al., 2017; Zemel et al., 2013), as well as on removing existing unfairness from different methodologies (Hajian et al., 2014; Zemel et al., 2013). Different fairness-aware algorithms have been proposed to achieve group and/or individual fairness for tasks such as clustering (Chierichetti et al., 2017), classification (Zafar et al., 2017), ranking (Zehlike et al., 2017; Biega et al., 2018; Singh and Joachims, 2018; Wu et al., 2018), recommendation (Chakraborty et al., 2019; Burke et al., 2018) and sampling (Celis et al., 2016).

To the best of our knowledge, there has been only one prior work on fairness in summarization – Celis et al. recently proposed a methodology to obtain fair and diverse summaries (Celis et al., 2018). However, the authors apply their determinantal point process based algorithm on an image dataset and a categorical dataset (having several attributes), while ours is the first work on fair summarization of textual data. (A preliminary version of the present work was published as a poster, earlier than (Celis et al., 2018); the citation is not included to honor the double-blind review policy.)

3. Datasets Used

Since our focus in this paper is to understand the need for considering fairness while summarizing user-generated content, we consider datasets containing tweets posted by different groups of users, e.g., different gender groups – men and women, or groups of users with different political leanings. Specifically, we use the following three datasets throughout this paper.

(1) Claritin dataset: It has been found that users undergoing medication often post on social media about the consequences of using different drugs, especially highlighting the side-effects they endure (O‘Connor et al., 2014). Claritin (loratadine) is an anti-allergic drug that reduces the effects of the natural chemical histamine in the body, which can produce symptoms such as sneezing, itching, watery eyes and a runny nose. However, this drug also has some adverse effects on patients.

To understand the sentiments of people towards Claritin and the different side-effects caused by it, tweets posted by users about Claritin were collected, analyzed and later publicly released by ‘Figure Eight’ (erstwhile CrowdFlower). This dataset contains tweets in English about the effects of the drug. Each tweet is annotated with the gender of the user (male/female/unknown) posting it (claritin-dataset, 2013). Initial analyses of these tweets reveal that women mentioned more serious side effects of the drug (e.g., heart palpitations, shortness of breath, headaches) while men did not (claritin-dataset, 2013). From this dataset, we ignored those tweets for which the gender of the user is unknown. Finally, we have 4,037 tweets in total, of which 1,532 (38%) are written by men, and 2,505 (62%) by women.

(2) US-Election dataset: This dataset, provided by Darwish et al. (Darwish et al., 2017), contains English tweets posted during the 2016 US Presidential election. Each tweet is annotated as supporting or attacking one of the presidential candidates (Donald Trump or Hillary Clinton), or as being neutral or attacking both.

For simplicity, we grouped the tweets into three classes: (i) Pro-Republican: tweets which support Trump and / or attack Clinton, (ii) Pro-Democratic: tweets which support Clinton and / or attack Trump, and (iii) Neutral: tweets which are neutral or attack both candidates. After removing duplicates, we have 2,120 tweets, out of which 1,309 (62%) are Pro-Republican, 658 (31%) are Pro-Democratic, and the remaining 153 (7%) are Neutral.

(3) MeToo dataset: We collected a set of tweets related to the #MeToo movement in October 2018. We initially collected English tweets containing the hashtag #MeToo using the Twitter Search API (Sea, 2019), and removed duplicates to retain only distinct tweets. We asked three human annotators to examine the name and bio of the Twitter accounts which posted the tweets. The annotators observed three classes of tweets based on who posted them – tweets posted by male users, tweets posted by female users, and tweets posted by organizations (mainly news media agencies). Also, there were many tweets for which the annotators could not determine the type/gender of the user posting the tweet. For the purpose of this study, we decided to focus only on those tweets for which all the annotators were certain that they were written by men or women. In total, we had 488 such tweets, out of which 213 were written by men and 275 by women.

In summary, two of our datasets contain tweets posted by two social groups (men and women), while the other dataset contains three categories of tweets (pro-democratic, pro-republican and neutral tweets, presumably written by users having the corresponding political leaning).

Human-generated summaries for evaluation: The traditional way of evaluating the ‘goodness’ of a summary is to match it with one or more human-generated summaries (gold standard), and then compute ROUGE scores (Lin, 2004). ROUGE scores lie between 0 and 1, where a higher ROUGE score means a better algorithmic summary, i.e., one that has a higher level of ‘similarity’ with the gold standard summaries. Specifically, the similarity is computed in terms of common unigrams (in case of ROUGE-1) or common bigrams (in case of ROUGE-2) between the algorithmic summary and the human-generated summaries. For creating the gold standard summaries, we asked three human annotators to summarize the datasets. Each annotator is well-versed in the use of social media such as Twitter and is fluent in English; none is an author of this paper. The annotators were asked to generate extractive summaries independently, i.e., without consulting one another. We use these three human-generated summaries for the evaluation of algorithmically-generated summaries, by computing the average ROUGE-1 and ROUGE-2 Recall and F1 scores (Lin, 2004).
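To make this evaluation protocol concrete, the following minimal sketch (ours, not the official ROUGE toolkit; all function names and example strings are illustrative) approximates ROUGE-N recall of an algorithmic summary against multiple human-written references.

```python
# Illustrative approximation of ROUGE-N recall (not the official ROUGE package).
from collections import Counter

def ngrams(text, n):
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, references, n=1):
    """Average ROUGE-N recall of `candidate` w.r.t. a list of reference summaries."""
    cand = ngrams(candidate, n)
    recalls = []
    for ref in references:
        ref_counts = ngrams(ref, n)
        overlap = sum(min(count, cand[g]) for g, count in ref_counts.items())
        total = sum(ref_counts.values())
        recalls.append(overlap / total if total else 0.0)
    return sum(recalls) / len(recalls)

# Toy example with two hypothetical reference summaries
refs = ["the drug caused severe headaches", "many users reported headaches and drowsiness"]
print(rouge_n_recall("several users reported severe headaches", refs, n=1))
```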

4. Why do we need fair summaries?

Traditionally, summarization algorithms have only considered including (in the summary) those textual units (tweets, in our case) whose contents are most ‘summary-worthy’. In contrast, in this paper, we argue for giving a fair chance to textual units written by different social groups to appear in the summary. Before making this argument, two questions need to be investigated –
(1) Are the tweets written by different social groups of comparable textual quality? If not, someone may argue for discarding lower quality tweets generated by a specific user group.
(2) Do the tweets written by different social groups actually reflect different opinions? This question is important since, if the opinions of the different groups are not different, then it can be argued that selecting tweets of any group (for inclusion in the summary) is sufficient.
We attempt to answer these two questions in this section.

4.1. Are tweets written by different social groups of comparable quality?

We use three measures for estimating the textual quality of individual tweets. (i) First, the NAVA words (nouns, adjectives, verbs, adverbs) are known to be the most informative words in an English text (Miller, 1995). Hence we consider the count of NAVA words in a tweet as a measure of its textual quality. We consider two other measures of textual quality that are specific to the application of text summarization – (ii) ROUGE-1 precision and (iii) ROUGE-2 precision scores. Put simply, the ROUGE-1 (ROUGE-2) precision score of a tweet measures what fraction of the unigrams (bigrams) in the tweet appears in the gold standard summaries for the corresponding dataset (as described in Section 3). Thus, these scores specifically measure the utility of selecting a particular tweet for inclusion in the summary.
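As an illustration of the first measure, the sketch below (ours; the choice of NLTK as the part-of-speech tagger is an assumption, not something prescribed by the paper) counts the NAVA words in a tweet.

```python
# Counting NAVA words (nouns, adjectives, verbs, adverbs) with NLTK POS tags.
# Requires the 'punkt' and 'averaged_perceptron_tagger' NLTK resources.
import nltk

NAVA_TAG_PREFIXES = ("NN", "JJ", "VB", "RB")  # noun, adjective, verb, adverb tag families

def nava_count(tweet):
    tokens = nltk.word_tokenize(tweet)
    return sum(1 for _, tag in nltk.pos_tag(tokens) if tag.startswith(NAVA_TAG_PREFIXES))

print(nava_count("Claritin gives me terrible headaches every single morning"))
```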

For a particular dataset, we compare the distributions of the three scores – ROUGE-1 precision score, ROUGE-2 precision score, and count of NAVA words – for the subsets of tweets written by different user groups. In all cases, we found that ROUGE-1 precision scores and ROUGE-2 precision scores show similar trends; hence we report only the ROUGE-2 precision scores. Figure 1(a) and Figure 1(b) respectively compare the distributions of ROUGE-2 precision scores and NAVA word counts for the tweets written by male and female users in the MeToo dataset. We find that the distributions are very close to each other, implying that the tweets written by both groups are of comparable textual quality. Similarly, Figure 2 shows that, in the US-Election dataset, the pro-democratic, pro-republican and neutral tweets are of comparable textual quality. The textual quality of the tweets written by male and female users in the Claritin dataset is also very similar – both the mean number of NAVA words and the mean ROUGE-2 Precision scores are very close for the two groups (detailed results omitted for brevity). All these observations show that the textual quality is very similar across the different groups of tweets, in all three datasets.

Figure 1. Comparing textual quality of individual tweets of the two user groups in the MeToo dataset – distributions of (a) ROUGE-2 Precision scores and (b) count of NAVA words of individual tweets.
Figure 2. Comparing textual quality of individual tweets of the three groups in the US-Election dataset – distributions of (a) ROUGE-2 Precision scores and (b) count of NAVA words.
Tweets on #MeToo from male users:
- If a woman shares a #metoo without evidence, it‘s taken to be true coz it‘s a women‘s testimony, a man coming out with #HeToo story, people would be doubtful, & question the evidences, the intent & will never except the man as victim. #misandry must be understood. #SpeakUpMan
- Instead of arresting this women @CPMumbaiPolice taking common man coz its #MeToo #MeTooIndia #MeToo4Publicity This is why #FeminismIsCancer #feminismIsMisandry #CrimeByWomen
- Pain knows no gender. When it hurts, it hurts equally, whether its a man or woman. Why there is discrimination on Gender. Every person deserves dignified treatment and happy life. #MeToo #MeToo4Publicity
- When Settlement amount is the motive by falsely charging a man’ it’s called #MeToo Pls tk action on ppl filing #FakeCases & bring #GenderNeutralLaws #MeToo4publicity #MensCommission.

Tweets on #MeToo from female users:
- If a woman is unveiled it gives a man the right 2 demand sexual favors. When it comes 2 sexual harassment in Islamic Republic it is always your fault if U dont wear hijab. Women using camera to expose sexual harassment. #MyCameraIsMyWeapon is like #MeToo movement in Iran
- Whatever happens to you in your life, you always have the choice to rise above your challenges. Choose NOT to be a victim. #feminism #metoo
- ONLY 40 charges and thousands of cries for help. Too many are victim to #UberRape and their voices aren‘t being heard. #TimesUp #Metoo
- A long term solution would be the exact opposite of the two suggested here - gender sensitisation, not segregation so that exchange between different genders is normalised instead of being stigmatised further. #MeToo

Table 1. Example tweets containing the hashtags that are most frequently posted by male and female users in the MeToo dataset. Even though all tweets have high textual quality, the opinions expressed are quite diverse.

4.2. Do tweets written by different user groups reflect different opinions?

To answer this question, we asked our human annotators (those who prepared the gold standard summaries) to observe the tweets written by different user groups in the datasets. For all three datasets, the annotators observed that tweets posted by different social groups mostly contain very different information/opinion.

For instance, Table 1 shows some sample tweets written by male and female users in the MeToo dataset, along with some of the hashtags that are frequently posted by male and female users. We observe that most tweets written by women support the #MeToo movement, and give examples of relevant experiences of their own or of other women. On the other hand, many of the tweets written by male users point out undesirable side-effects of the movement, and call for gender equality.

Similarly, in the US-Election dataset, the pro-republican tweets criticize Hillary Clinton and/or support the policies of Donald Trump (e.g., ‘We must not let #CrookedHillary take her criminal scheme into the Oval Office. #DrainTheSwamp’), while the pro-democratic tweets have the opposite opinion (e.g., ‘Yes America. This is the election where Hillary’s cough gets more furious coverage than Trump asking people to shoot her #InterrogateTrump’). The neutral tweets either give only information (and no opinion), or criticize both Clinton and Trump. For the Claritin dataset as well, there is a large difference in opinion between the tweets written by male and female users, with the female users criticizing the drug much more than the male users (details omitted for brevity). Thus, it is clear that tweets posted by different social groups often reflect very different opinions.

4.3. Need for fairness in summarization

The fact that tweets written by different social groups are of very similar quality/merit implies that all groups should have ‘equality of opportunity’ (Roemer, 2009) for their opinions to be reflected in the summary. This fact, coupled with the diversity in opinion of the different groups, calls for a fair representation of the opinions of different groups in the summary. This is similar in spirit to the need for fairness in top crowdsourced recommendations (Chakraborty et al., 2019) or top search results (Biega et al., 2018). Since the tweets that get included in the summary are likely to get much more exposure than the rest of the information (just like how top search and recommendation results get much more exposure (Chakraborty et al., 2019; Biega et al., 2018)), under-representation of any of the social groups in the summary can severely suppress their opinion. These factors advocate the need for fair summaries when data generated by various social groups is being summarized.

5. Notions of Fair Summarization

Having established the need for fair summarization, we now define two fairness notions that are applicable in the context of summarization. Essentially, when the input data (e.g., tweets) are generated by users belonging to different social groups, we require the summaries to fairly represent these groups. Next, we consider two notions for fairness in representation.

5.1. Equal Representation

The notion of equality finds its roots in the field of morality and justice, which advocates for the redress of undeserved inequalities (e.g., inequalities of birth or due to natural endowment) (Rawls, 2009). Formal equality suggests that when two people or two groups of people have equal status in at least one normatively relevant aspect, they must be treated equally (Gosepath, 2011). In terms of selection, equal representation requires that the numbers of representatives from different classes of society having comparable relevance be equal.

In the context of user-generated content, we observed that different sections of society have different opinions on the same topic, either because of their gender or their ideological leaning (Babaei et al., 2018). However, if we consider the textual quality, i.e., their candidature for inclusion in the summary, then tweets from the different groups are comparable (as discussed in Section 4). Thus, under the notion of equal representation, a summarization algorithm is fair if the different groups generating the input data are represented equally in the output summary. Given the usefulness of summaries in many downstream applications, this notion of fairness ensures equal exposure to the opinions of different socially salient groups.

5.2. Proportional Representation

Often it may not be possible to equally represent different user groups in the summary, especially if the input data contains very different proportions from different groups. Hence, we consider another notion of fairness: Proportional Representation (also known as statistical parity (Luong et al., 2011)). Proportional representation requires that the representation of different groups in the selected set should be proportional to their distribution in the input data.

In certain scenarios such as hiring for jobs, relaxations of this notion are often used. For instance, the U.S. Equal Employment Opportunity Commission uses a variation of Proportional Representation to determine whether a company’s hiring policy is biased against (has any adverse impact on) a demographic group (Biddle, 2006). According to this policy, a particular class is under-represented in the selected set (or adversely impacted) if the fraction of selected people belonging to that class is less than 80% of the fraction of selected people from the class having the highest selection rate.

In the context of summarization, Proportional Representation requires that the proportion of content from different user groups in the summary should be the same as in the original input. A relaxed notion of proportional fairness is one which would ensure no adverse impact in the generated summary. In other words, no adverse impact requires that the fraction of textual units from any class that is selected in the summary should not be less than 80% of the fraction of selected units from the class having the highest selection rate (in the summary). These notions of fairness ensure that the probability of selecting an item is independent of which user group generated it.
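The helper below (ours; the group names, counts and dictionary layout are purely illustrative) makes these checks concrete: given per-group counts in the input and in a summary, it flags violations of equal representation, proportional representation, and the 80%-style ‘no adverse impact’ rule.

```python
# Illustrative checks for the three fairness notions discussed above.
def fairness_report(input_counts, summary_counts, impact_ratio=0.8):
    groups = list(input_counts)
    total_in = sum(input_counts.values())
    total_out = sum(summary_counts.values())
    # selection rate of a group = fraction of its units that made it into the summary
    rates = {g: summary_counts[g] / input_counts[g] for g in groups}
    best_rate = max(rates.values())
    return {
        g: {
            "violates_equal": summary_counts[g] < total_out / len(groups),
            "violates_proportional": summary_counts[g] < total_out * input_counts[g] / total_in,
            "adverse_impact": rates[g] < impact_ratio * best_rate,
        }
        for g in groups
    }

# Example with Claritin-like counts: 20 female vs 30 male tweets in a 50-tweet summary
print(fairness_report({"female": 2505, "male": 1532}, {"female": 20, "male": 30}))
```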

It should be noted that we are not advocating any particular notion of fairness as being better than the others in the context of summarization. We also note that different applications may require different types of fairness. Hence, in this work, we propose mechanisms that can accommodate different notions of fairness, including the ones stated above, and produce fair summaries accordingly.

6. Do existing algorithms produce fair summaries?

Having discussed the need for fair summarization, we now check whether existing algorithms generate fair summaries.

6.1. Summarization algorithms

We consider a set of well-known extractive summarization algorithms, that select a subset of the textual units (tweets) for inclusion in the summary. Some of the methods are unsupervised (the traditional methods) and some are recent supervised neural models.

Unsupervised summarization algorithms: We consider six well-known summarization algorithms. These algorithms generally estimate an importance score for each textual unit (sentence / tweet) in the input, and the textual units having the highest importance scores are selected to generate a summary of the desired length.
(1) ClusterRank (Garg et al., 2009), which clusters the textual units to form a cluster-graph, and uses graph algorithms (e.g., PageRank) to compute the importance of each unit.
(2) DSDR (He et al., 2012), which measures the relationship between the textual units using linear combinations and reconstructions, and generates the summary by minimizing the reconstruction error.
(3) LexRank (Erkan and Radev, 2004), which computes the importance of textual units using eigenvector centrality on a graph representation based on similarity of the units, where edges are placed depending on the intra-unit cosine similarity.
(4) LSA (Gong and Liu, 2001), which constructs a terms-by-units matrix, and estimates the importance of the textual units based on Singular Value Decomposition of the matrix.
(5) LUHN (Luhn, 1958), which derives a ‘significance factor’ for each textual unit based on occurrences and placements of frequent words within the unit.
(6) SumBasic (Nenkova and Vanderwende, 2005), which uses frequency-based selection of textual units, and reweights word probabilities to minimize redundancy.

Supervised neural summarization algorithms: With the increasing popularity of neural network-based models, the state-of-the-art techniques for summarization have shifted to data-driven supervised algorithms (Dong, 2018). We consider two recently proposed extractive neural summarization models (Nallapati et al., 2017):

(7) SummaRuNNer-RNN, a Recurrent Neural Network based sequence model that assigns a binary label to each textual unit – a label of 1 implies that the unit can be part of the summary. Each label has an associated confidence score. The summary is generated by picking the textual units labeled 1, in decreasing order of their confidence scores.

(8) SummaRuNNer-CNN is a variant of the above model where the sentences are fed to a two layer Convolutional Neural Network (CNN) architecture before using GRU-RNN in the third layer.

For both the SummaRuNNer models, the authors have made pretrained models available (https://github.com/hpzhao/SummaRuNNer), which are trained on the CNN/Daily Mail news articles corpus (https://github.com/deepmind/rc-data). We directly used the pretrained models for summarization.

6.2. Verifying if the summaries are fair

We applied the summarization algorithms stated above on the datasets described in Section 3, to obtain summaries of 50 tweets each. Table 2 shows the results of summarizing the Claritin dataset, while Table 3 and Table 4 show the results for the US-Election and MeToo datasets respectively. In all cases, shown are the numbers of tweets of the different classes in the whole dataset (first row) and in the summaries generated by the different summarization algorithms (subsequent rows), along with the average ROUGE-1 and ROUGE-2 Recall and F1 scores of the summaries.

We check whether the generated summaries are fair according to the fairness notions of equal representation, proportional representation and the principle of ‘no adverse impact’ (Biddle, 2006) (which were explained in Section 5). We find under-representation of particular groups of users in the summaries generated by many of the algorithms; these cases are marked in Table 2, Table 3 and Table 4, indicating where equal representation is violated, where proportional representation is violated, and (with the symbol #) where there is adverse impact. In particular, the minority groups are under-represented in most of the cases.

Method  #Female  #Male  ROUGE-1 Recall  ROUGE-1 F1  ROUGE-2 Recall  ROUGE-2 F1
Whole data  2,505 (62%)  1,532 (38%)  NA  NA  NA  NA
ClusterRank 33 0.437 0.495 0.161 0.183
DSDR 31 0.302 0.425 0.144 0.203
LexRank 34 0.296 0.393 0.114 0.160
LSA 35 0.515 0.504 0.151 0.147
LUHN 34 0.380 0.405 0.128 0.136
SumBasic 0.314 0.434 0.108 0.149
SummaRNN 33 0.342 0.375 0.126 0.147
SummaCNN 0.377 0.409 0.126 0.146
Table 2. Results of summarizing the Claritin dataset: Number of tweets posted by the two user groups, in the whole dataset and in summaries of 50 tweets generated by different algorithms. Also given are the ROUGE-1 and ROUGE-2 Recall and F1 scores of each summary. Marked cells indicate under-representation of a group according to the fairness notions of equal representation, proportional representation, and ‘adverse impact’ (Biddle, 2006), the last marked with #.
Method  #Pro-Rep  #Pro-Dem  #Neutral  ROUGE-1 Recall  ROUGE-1 F1  ROUGE-2 Recall  ROUGE-2 F1
Whole data  1,309 (62%)  658 (31%)  153 (7%)  NA  NA  NA  NA
ClusterRank 32 0.247 0.349 0.061 0.086
DSDR 19 0.215 0.331 0.067 0.104
LexRank 20 0.252 0.367 0.078 0.114
LSA 20# 0.311 0.404 0.083 0.108
LUHN 34 0.281 0.375 0.085 0.113
SumBasic 23 0.200 0.311 0.051 0.080
SummaRNN 34 0.347 0.436 0.120 0.160
SummaCNN 32 17 0.337 0.423 0.108 0.145
Table 3. Results of summarizing the US-Election dataset: Number of tweets of the three groups in the whole data and in summaries of 50 tweets generated by different algorithms. Marked cells denote under-representation of the corresponding group, as in Table 2.
Method  #Female  #Male  ROUGE-1 Recall  ROUGE-1 F1  ROUGE-2 Recall  ROUGE-2 F1
Whole data  275 (56.3%)  213 (43.7%)  NA  NA  NA  NA
ClusterRank 26 0.550 0.560 0.216 0.223
DSDR 32 0.233 0.358 0.092 0.141
LexRank 34 0.285 0.414 0.105 0.153
LSA 30 0.511 0.534 0.175 0.183
LUHN 28 0.520 0.522 0.219 0.184
SumBasic 0.464 0.499 0.216 0.229
SummaRNN 27 0.622 0.636 0.385 0.394
SummaCNN 27 0.622 0.636 0.385 0.394
Table 4. Results of summarizing the MeToo dataset: Number of tweets of the two classes, in the whole dataset and in summaries of 50 tweets generated by different algorithms. Marked cells denote under-representation of the corresponding group, as in Table 2.

We repeated the experiments for summaries of lengths other than 50 tweets as well (details omitted due to lack of space). We observed several cases where the same algorithm includes very different proportions of tweets of the various groups when generating summaries of different lengths.

Thus, there is no guarantee of fairness in the summaries generated by the existing summarization algorithms – one or more groups are often under-represented in the summaries, even though the quality of the textual units (tweets) written by different groups is quite similar (as was shown in Section 4). Having established the need for fairness-preserving summarization algorithms, we now proceed to propose two such algorithms in the next section.

7. Proposed Algorithms for Fair Summarization

This section describes our proposed algorithms for fair summarization. We describe two algorithms: the first based on constrained optimization, and the second based on fair ranking.

7.1. Algorithm I for fair summarization: FairSumm

This algorithm, named FairSumm, treats summarization as a constrained optimization problem of an objective function. The objective function is designed so that optimizing it is likely to result in a good quality summary, while the fairness requirements are applied as constraints which must be obeyed during the optimization process. We now describe the algorithm.

Some notations: Let V denote the set of textual units (e.g., tweets) that is to be summarized. Our goal is to find a subset S ⊆ V such that |S| ≤ k, where k (an integer) is the desired length of the summary (specified as an input).

7.1.1. Formulating summarization as an optimization problem:

We need an objective function for the task of extractive summarization, optimizing which is likely to lead to a good summary. Following the formulation by Lin et al. (Lin and Bilmes, 2011), we consider two important aspects of an extractive text summarization algorithm, viz. Coverage and Diversity reward, described as follows.

Coverage: Coverage refers to the amount of information covered in the summary S. Clearly, the summary cannot contain the information in all the textual units. We consider the summary S to cover the information contained in a particular textual unit i if either S contains i, or S contains another textual unit that is very similar to i. Here we assume a notion of similarity sim(i, j) between two textual units i and j, which can be measured in various ways. Thus, coverage is measured by a function, say L, whose generic form can be

(1)   L(S) = Σ_{i∈V} Σ_{j∈S} sim(i, j)

Thus, L(S) measures the overall similarity of the textual units included in the summary S with all the textual units in the input collection V.

Diversity reward: The purpose of this aspect is to avoid redundancy and reward diverse information in the summary. Usually, it is seen that the input set of textual units can be partitioned into groups, where each group contains textual units that are very similar to each other. A popular way of ensuring diversity in a summary is to partition the input set into such groups, and then select a representative element from each group (Dutta et al., 2018).

Specifically, let us consider that the set V of textual units is partitioned into K groups. Let P_1, P_2, …, P_K comprise a partition of V. That is, V = P_1 ∪ P_2 ∪ … ∪ P_K (V is formed by the union of all the P_k) and P_i ∩ P_j = ∅ (P_i and P_j have no element in common) for all i ≠ j. For instance, the partitioning can be achieved by clustering the set V using any clustering algorithm (e.g., k-means), based on the similarity of items as measured by sim(i, j).

Then, to reduce redundancy and increase diversity in the summary, including textual units from different partitions needs to be rewarded. Let the associated function for diversity reward be denoted by R. A generic formulation of R is

(2)   R(S) = Σ_{k=1}^{K} √( Σ_{j ∈ P_k ∩ S} r_j )

where r_j is a suitable function that estimates the importance of adding the textual unit j to the summary. The function r_j is called a ‘singleton reward function’ since it estimates the reward of adding the singleton element j to the summary S. One possible way to define this function is:

(3)   r_j = (1 / |V|) Σ_{i∈V} sim(i, j)

which measures the average similarity of j to the other textual units in V.

Justifying the functional forms of Coverage and Diversity Reward: We now explain the significance of the functional form of L in Equation 1 and of R in Equation 2. We give only an intuitive explanation here; more mathematical details are given in the Supplementary Information accompanying the paper.

The functions L and R are designed to be ‘monotonic non-decreasing submodular’ functions (or ‘monotone submodular’ functions), since such functions are easier to optimize. A monotonic non-decreasing function is one that does not decrease as the set over which it is evaluated grows. A submodular function has the property of diminishing returns, which intuitively means that as the set over which it is evaluated grows, the marginal increment of the function decreases.

L is monotone submodular. L is monotone since coverage increases (or at least does not decrease) with the addition of a new textual unit to the summary. At the same time, L is submodular since the increase in L is larger when a unit is added to a shorter summary than when it is added to a longer summary.

R is also a monotone submodular function. The diversity of a summary increases considerably only during the initial growth of the set (when new, ‘novel’ elements are added to the summary) and stabilizes later on, which prevents the incorporation of similar elements (redundancy) in the summary. R rewards diversity since there is more benefit in selecting a textual unit from a partition (cluster) that does not yet have any of its elements included in the summary. As soon as any one element from a cluster P_k is included in the summary, the other elements of P_k start giving diminishing gains, due to the square root in Equation 2.

Combining Coverage and Diversity reward: While constructing a summary, both coverage and diversity are important. Maximizing only coverage may lead to a lack of diversity in the resulting summary, and vice versa. So, we define our objective function for summarization as follows:

(4)   F(S) = λ1 · L(S) + λ2 · R(S)

where λ1, λ2 ≥ 0 are the weights given to coverage and diversity respectively.

Our proposed fairness-preserving summarization algorithm will maximize F while respecting some fairness constraints. Note that F is monotone submodular since it is a non-negative linear combination of the two monotone submodular functions L and R. We have chosen F such that it is monotone submodular, since there exist standard algorithms to efficiently optimize such functions (as explained later in the section).
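The sketch below (written in our own notation; `sim` is a precomputed similarity matrix over the input units and `partition` maps each unit index to its cluster, both of which are assumptions about how the inputs are represented) shows how L(S), R(S) and the combined objective F(S) = λ1·L(S) + λ2·R(S) could be computed.

```python
# Illustrative implementations of the coverage term L(S), the diversity reward R(S)
# and the combined objective F(S) described above.
import math

def coverage(S, sim):
    # L(S): overall similarity of the selected units to every unit in the input
    return sum(sim[i][j] for i in range(len(sim)) for j in S)

def singleton_reward(j, sim):
    # r_j: average similarity of unit j to all units in the input
    return sum(sim[i][j] for i in range(len(sim))) / len(sim)

def diversity_reward(S, sim, partition):
    # R(S): sum over clusters of the square root of the selected singleton rewards
    per_cluster = {}
    for j in S:
        per_cluster[partition[j]] = per_cluster.get(partition[j], 0.0) + singleton_reward(j, sim)
    return sum(math.sqrt(v) for v in per_cluster.values())

def objective(S, sim, partition, lam1=1.0, lam2=1.0):
    # F(S) = lam1 * L(S) + lam2 * R(S)
    return lam1 * coverage(S, sim) + lam2 * diversity_reward(S, sim, partition)
```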

7.1.2. Proposed fair summarization scheme:

Our proposed scheme is based on the concept of matroids, which are typically used to generalize the notion of linear independence in matrices (Oxley, 2006). Specifically, we utilize a special type of matroid, called a partition matroid. We give here a brief, intuitive description of our method. More details can be found in the Supplementary Information.

Brief background on matroids and related topics: Mathematically speaking, a matroid is a pair M = (Z, I), defined over a finite set Z (called the ground set) and a family I of subsets of Z (called the independent sets), that satisfies the following three properties:

  1. ∅ (the empty set) ∈ I.

  2. If Y ∈ I and X ⊆ Y, then X ∈ I.

  3. If X ∈ I, Y ∈ I and |Y| > |X|, then there exists e ∈ Y \ X such that X ∪ {e} ∈ I.

Condition (1) simply means that I contains the empty set, i.e., the empty set is independent. Condition (2) means that every subset of an independent set is also independent. Condition (3) means that if X is independent and there exists a larger independent set Y, then X can be extended to a larger independent set by adding an element that is in Y but not in X (see http://www-math.mit.edu/~goemans/18433S09/matroid-notes.pdf).

Partition matroids refer to a special type of matroid where the ground set Z is partitioned into disjoint subsets Z_1, Z_2, …, Z_m for some m, and I = {S : S ⊆ Z and |S ∩ Z_i| ≤ n_i for all i = 1, 2, …, m} for some given parameters n_1, n_2, …, n_m. Thus, every S ∈ I is a subset of Z that contains at most n_i items from the partition Z_i (for all i), and I is the family of all such subsets.
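For concreteness, the partition-matroid independence test |S ∩ Z_i| ≤ n_i amounts to a simple counting check; the few lines below are our own illustration, with `part_of` mapping each item to its part and `caps` giving the per-part limits.

```python
# Independence test for a partition matroid: no part may exceed its cap.
from collections import Counter

def is_independent(S, part_of, caps):
    counts = Counter(part_of[x] for x in S)
    return all(counts[part] <= cap for part, cap in caps.items())

# Example: at most 2 items from part 'A' and at most 1 from part 'B'
print(is_independent({1, 2, 5}, {1: "A", 2: "A", 5: "B"}, {"A": 2, "B": 1}))  # True
```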

Consider that we have a set of control variables (e.g., ‘gender’, ‘political leaning’). Each item in the ground set has a particular value for each control variable. Also consider that a control variable c takes a certain number of distinct values; e.g., the control variable ‘gender’ takes the two distinct values ‘male’ and ‘female’, while the control variable ‘political leaning’ takes the three values ‘Democrat’, ‘Republican’ and ‘Neutral’.

For each control variable c, we can partition the ground set Z into disjoint subsets Z_1^c, Z_2^c, …, Z_{t_c}^c, each corresponding to a particular value of this control variable (where t_c is the number of distinct values c takes). We now define a partition matroid M_c = (Z, I_c) such that

I_c = {S : S ⊆ Z and |S ∩ Z_i^c| ≤ k_i^c, for all i = 1, 2, …, t_c}

for some given parameters k_1^c, k_2^c, …, k_{t_c}^c.

Now, for a given submodular objective function f, a submodular optimization under the partition matroid constraints corresponding to the control variables can be posed as follows:

(5)   maximize f(S)   subject to S ∈ I_c for every control variable c.

A prior work by Du et al. (Du et al., 2013) has established that this submodular optimization problem under the matroid constraints can be solved efficiently with provable guarantees (see (Du et al., 2013) for details).

Formulating the fair summarization problem: In the context of the fair summarization problem, the ground set is Z = V, the set of all textual units (sentences/tweets) which we look to summarize. The control variables are analogous to the sensitive attributes with respect to which fairness is to be ensured, such as ‘gender’ or ‘political leaning’. In this work, we consider only one sensitive attribute for a particular dataset (the gender of the user for the Claritin and MeToo datasets, and the political leaning for the US-Election dataset). Let the corresponding control variable be c, and let c take t distinct values (t = 2 for the Claritin and MeToo datasets, and t = 3 for the US-Election dataset). Note that the formulation can be extended to multiple sensitive attributes (control variables) as well.

Each textual unit in V is associated with a class, i.e., a particular value of the control variable c (e.g., a tweet is posted either by a male or by a female user). Let V_1, V_2, …, V_t (V_i ⊆ V for all i) be the disjoint subsets of the textual units from the t classes, each associated with a distinct value of c. We now define a partition matroid M = (V, I) in which V is partitioned into the disjoint subsets V_1, V_2, …, V_t and

I = {S : S ⊆ V and |S ∩ V_i| ≤ k_i, for i = 1, 2, …, t}

for some given parameters k_1, k_2, …, k_t. In other words, I will contain all the sets containing at most k_i textual units from V_i, for i = 1, 2, …, t.

Now we add the fairness constraints. Outside the purview of the matroid constraints, we maintain the restriction that the k_i's are chosen such that
(1) k_1 + k_2 + … + k_t = k (the desired length of the summary S), and
(2) a desired fairness criterion is maintained in S. For instance, if equal representation of all classes in the summary is desired, then k_i = k/t for all i (a small helper for choosing the k_i under the fairness notions of Section 5 is sketched below).
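A small helper along the following lines (ours; the largest-remainder rounding is just one reasonable choice) can be used to pick the per-class caps k_i under the two fairness notions of Section 5.

```python
# Choosing per-class caps k_i for the partition matroid under two fairness notions.
def class_caps(class_sizes, k, notion="equal"):
    classes = list(class_sizes)
    if notion == "equal":
        return {c: k // len(classes) for c in classes}   # leftover slots, if any, can be assigned arbitrarily
    # proportional representation: round with the largest-remainder method so caps sum to k
    total = sum(class_sizes.values())
    exact = {c: k * class_sizes[c] / total for c in classes}
    caps = {c: int(exact[c]) for c in classes}
    leftover = k - sum(caps.values())
    for c in sorted(classes, key=lambda g: exact[g] - caps[g], reverse=True)[:leftover]:
        caps[c] += 1
    return caps

# Example: proportional caps for a 50-tweet summary of the US-Election dataset
print(class_caps({"pro_rep": 1309, "pro_dem": 658, "neutral": 153}, 50, "proportional"))
```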

We now express our fairness-constrained summarization problem as follows:

(6)   maximize F(S)   subject to S ∈ I

where the objective function F is as stated in Equation 4. Given that F is a submodular function (as explained earlier in this section), the algorithm proposed by Du et al. (Du et al., 2013) is suitable for solving this constrained submodular optimization problem.

An example: Let us illustrate the formulation of the fair summarization problem with an example. Assume that we are applying the equal representation fairness notion over the MeToo dataset, and that we want a summary of k = 50 tweets. Then, the control variable c corresponds to the sensitive attribute ‘gender’, which takes t = 2 values (‘male’ and ‘female’) for this particular dataset. The set V of tweets will be partitioned into two disjoint subsets V_1 and V_2, which comprise the tweets posted by male and female users respectively. To enforce the equal representation fairness constraint, we set the parameters k_1 = 25 and k_2 = 25 (since we want an equal number of tweets from V_1 and V_2 in the summary). Thus, I contains all the possible sets S that contain at most 25 tweets written by male users and at most 25 tweets written by female users. Each such S is a valid summary (that satisfies the fairness constraints). Solving the optimization problem in Equation 6 will give us that summary S for which F(S) is maximum, i.e., for which the coverage and diversity reward are the highest.

1: Set d = max_{v∈V} F({v}).
2: Set ρ_i = d/(1+ε)^i for i = 0, 1, …, L, where L = ⌈log_{1+ε}(d/δ)⌉, and set ρ_{L+1} = 0.
3: Set S = ∅
4: for i = 0, 1, …, L, L+1 do
5:     for each v ∈ V \ S such that S ∪ {v} ∈ I do
6:         if F(S ∪ {v}) − F(S) ≥ ρ_i then
7:             Set S = S ∪ {v}
8:         end if
9:     end for
10: end for
11: Output S
Algorithm 1 : FairSumm (for fair summarization)

Algorithm for fair summarization: Algorithm 1 presents our method for solving this constrained submodular optimization problem, based on the algorithm developed by Du et al. (Du et al., 2013). The set S produced by Algorithm 1 is the solution of Equation 6. We now briefly describe the steps of Algorithm 1.

In Step 1, the maximum value of the objective function F that can be achieved by any single text unit v (∈ V) is calculated and stored in d. The purpose of this step is to compute the maximum value of F for a single text unit and to set a selection threshold (described shortly) with respect to this value. This step helps in the subsequent selection of textual units for the creation of the summary, which is stored in S. ρ_i (defined in Step 2) is the selection threshold at the i-th iteration; ρ_i is updated (decreased by division with a factor (1+ε)) for i = 0, 1, …, L. δ is a small constant whose role is detailed in Du et al. (Du et al., 2013), and ρ_{L+1} is set to zero. In Step 3, S (the set that will contain the summary) is initialized as an empty set. Note that S is supposed to be an independent set according to the definition of matroids given earlier in this section; by condition (1) in that definition, an empty set is independent. Step (4) iterates through the different values of the threshold ρ_i. Step (5) tests, for each text element v, whether S remains an independent set upon the inclusion of v. Only those v's are considered in this step whose inclusion expands S (already an independent set) to another independent set. Step (6) selects such a v (permitted by Step (5)) for inclusion in S if F(S ∪ {v}) − F(S) ≥ ρ_i, and this v is added to S in Step (7). That is, v is added to S if the increment of F due to its addition is not less than the threshold ρ_i. For i = 0, ρ_0 = d, i.e., the maximum value of F for any single v (∈ V); this means that a v which maximizes F is added to S first. Note that there can be multiple v's for which F is maximized; in that case, the tie is broken arbitrarily. The remaining v's may or may not be added to S based on the threshold value.

Another important point to note is that our chosen F (see Equation 4) is designed to maximize both coverage and diversity. So, even if multiple v's satisfy Step (5), they may not be added to S in Step (7) if they contain redundant information. The threshold is relaxed for the subsequent values of i to allow text elements producing relatively lower increments of F to be considered for possible inclusion in S. Setting ρ_{L+1} = 0 means that, at the final threshold, any text unit that does not decrease F may be added to S. This ensures that the coverage of the produced summary is not compromised while preserving diversity. This process (Steps (5) to (7)) is repeated for i = 0, 1, …, L, L+1, resulting in the final output S.

The reason for the efficiency of Algorithm 1 is that it does not exhaustively evaluate the objective over all possible candidate sets arising in its intermediate steps. The reduction in the number of steps is achieved mainly by decreasing the threshold ρ geometrically by a factor of (1+ε). In addition, multiple elements can be added to S for a single threshold, which also expedites the termination of the algorithm.
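The sketch below (ours) condenses Algorithm 1 into Python under the notation used above: `objective` plays the role of F, `independent` is the partition-matroid (fairness) test, and the threshold schedule decreases geometrically as in Du et al. (2013).

```python
# Decreasing-threshold greedy for monotone submodular maximization under a
# partition-matroid (fairness) constraint, following the structure of Algorithm 1.
import math

def fairsumm(V, objective, independent, k, eps=0.1, delta=1e-3):
    d = max(objective({v}) for v in V)                   # Step 1: best single-unit value
    L = max(0, math.ceil(math.log(d / delta, 1 + eps)))  # number of threshold levels
    thresholds = [d / (1 + eps) ** i for i in range(L + 1)] + [0.0]
    S = set()
    for rho in thresholds:                               # Step 4: thresholds decrease geometrically
        for v in V:
            if v in S or len(S) >= k:
                continue
            cand = S | {v}
            # Steps 5-7: keep v only if the fairness (matroid) constraint still holds
            # and the marginal gain of adding v clears the current threshold
            if independent(cand) and objective(cand) - objective(S) >= rho:
                S = cand
    return S
```

For instance, reusing the helpers sketched earlier, one could call fairsumm(range(len(sim)), lambda S: objective(S, sim, partition), lambda S: is_independent(S, class_of, class_caps(class_sizes, 50)), k=50), where class_of (a hypothetical mapping) assigns each tweet to its social group.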

7.2. Algorithm II for fair summarization

In this section, we discuss our second proposed mechanism for generating fair summaries, which can be used along with any existing summarization algorithm. Many summarization algorithms (including the ones stated in Section 6) generate an importance score for each textual unit in the input. The textual units are then ranked in decreasing order of this importance score, and the top-ranked units are selected to form the summary. Hence, if the ranked list of the textual units can be made fair (according to some fairness notion), then selecting the top k units from this fair ranked list can generate a fair summary.

Fairness in ranking systems is an important problem that has been addressed recently by several works (Zehlike et al., 2017; Biega et al., 2018). We adopt the fair-ranking methodology developed by Zehlike et al. (Zehlike et al., 2017) to generate fair summaries. The fair-ranking scheme in (Zehlike et al., 2017) considers a two-class setting, with a ‘majority class’ and a ‘minority class’ for which fairness has to be ensured by adhering to a ranked group fairness criterion. The proposed ranking algorithm (named FA*IR (Zehlike et al., 2017)) ensures that the proportion of candidates/items from the minority class in a ranked list never falls below a certain specified threshold. Specifically, two fairness criteria are ensured – selection utility, which means every selected item is more qualified than those not selected, and ordering utility, which means that for every pair of selected candidates, either the more qualified one is ranked above, or the difference in their qualifications is small (Zehlike et al., 2017).

We propose to use the algorithm in (Zehlike et al., 2017) for fair extractive text summarization as follows. Note that this scheme is only applicable to cases where there are two groups (e.g., the Claritin and MeToo datasets). We consider the group having the larger number of textual units (tweets) in the input data to be the majority class, while the group having fewer textual units in the input is considered the minority class.

Input and Parameter settings: The algorithm takes as input a set of textual units (to be summarized); the other input parameters taken by the algorithm in (Zehlike et al., 2017) – the qualification of each candidate, the expected size k of the ranking, the indicator for protected candidates, the minimum proportion p of protected candidates, and the adjusted significance level α – are set as follows.
Qualification of a candidate: In our summarization setting, this is the goodness value of a textual unit in the data to be summarized. We set this value to the importance score computed by some standard summarization algorithm (e.g., the ones discussed in Section 6) that ranks the text units by their importance scores.
Expected size (k) of the ranking: The expected number of textual units in the summary (50 in our experiments).
Indicator variable marking whether a candidate is protected: We consider the group having the smaller number of textual units in the input data to be the minority class. All tweets posted by the minority group are marked as ‘protected’.
Minimum proportion (p) of protected candidates: We set this value in the open interval (0, 1) (0 and 1 excluded) so that a particular notion of fairness is ensured in the summary. For instance, if we want equal representation of both classes in the summary, we set p = 0.5.
Adjusted significance level (α): We regulate this parameter in the open interval (0, 1).

Working of the algorithm: Two priority queues P0 (for the textual units of the majority class) and P1 (for the textual units of the minority class), each with capacity k, are initially set to empty. P0 and P1 are then populated with the majority and minority textual units respectively, prioritized by their goodness values. Next, a ranked group fairness table is created which gives the minimum number of minority textual units required at each rank, given the parameter setting. If this table determines that a textual unit from the minority class needs to be added to the summary (being generated), the algorithm adds the best element from P1 to the summary S; otherwise it adds the overall best textual unit to S. Thus a fair summary S of the desired length is generated, adhering to a particular notion of fairness (decided by the parameter setting).
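The following simplified sketch (ours, not the exact FA*IR procedure; in particular, the minimum number of protected units required at each rank is approximated by ⌊p · rank⌋ instead of the binomial ranked group fairness table) illustrates how a scored list of tweets can be merged into a fair summary of k units.

```python
# Simplified fair top-k selection inspired by the FA*IR-style merge described above.
import math

def fair_ranked_summary(scored_units, protected, k, p=0.5):
    """scored_units: list of (unit_id, importance_score); protected: set of minority unit_ids."""
    majority = sorted((u for u in scored_units if u[0] not in protected), key=lambda x: -x[1])
    minority = sorted((u for u in scored_units if u[0] in protected), key=lambda x: -x[1])
    summary, n_protected = [], 0
    while len(summary) < k and (majority or minority):
        required = math.floor(p * (len(summary) + 1))    # crude stand-in for the fairness table
        if n_protected < required and minority:
            unit = minority.pop(0)                       # must take a protected unit now
            n_protected += 1
        elif majority and (not minority or majority[0][1] >= minority[0][1]):
            unit = majority.pop(0)                       # otherwise take the overall best unit
        else:
            unit = minority.pop(0)
            n_protected += 1
        summary.append(unit[0])
    return summary
```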

Note that since the FA*IR algorithm provides fair ranking for two classes only (Zehlike et al., 2017), we look to apply this algorithm for summarization of data containing tweets from exactly two social groups (i.e., the Claritin and MeToo datasets only). It is an interesting future work to design a fair ranking algorithm for more than two classes, and then to use the algorithm for summarizing data from more than two social groups.

8. Experiments and Evaluation

We now experiment with different methodologies of generating fair summaries, over the three datasets described in Section 3.

8.1. Parameter settings of algorithms

The following parameter settings are used.

For all datasets, we generate summaries of 50 tweets.

Our first proposed algorithm (FairSumm) uses a similarity function to measure the similarity between two tweets. We experimented with the following two similarity functions (a minimal sketch of both measures follows below):
TFIDFsim – we compute TF-IDF scores for each word (unigram) in a dataset, and hence obtain a TF-IDF vector for each textual unit. The similarity between two tweets is computed as the cosine similarity between their TF-IDF vectors.
Embedsim – we obtain an embedding for each distinct word in a dataset, either by training Word2vec (Mikolov et al., 2013) on the dataset or by using pre-trained GloVe embeddings (Pennington et al., 2014). The embedding of a tweet is the mean embedding of all the words it contains, and the similarity between two tweets is the cosine similarity between their embeddings.
We found that the performance of the FairSumm algorithm is very similar for both similarity measures. Hence, we report results for the TFIDFsim measure.
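As an illustration, both measures can be computed as in the sketch below; the use of scikit-learn's TfidfVectorizer and cosine_similarity, the whitespace tokenization, and the word_vectors dictionary (e.g., loaded from GloVe or a Word2vec model trained on the data) are our own simplifying assumptions, not necessarily the exact implementation used in the experiments.

```python
# Illustrative computation of the two tweet-similarity measures.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_sim(tweets):
    """TFIDFsim: pairwise cosine similarity between TF-IDF vectors of tweets."""
    vectors = TfidfVectorizer().fit_transform(tweets)  # one row per tweet
    return cosine_similarity(vectors)                  # (n x n) similarity matrix

def embed_sim(tweets, word_vectors, dim):
    """Embedsim: pairwise cosine similarity between mean word embeddings of tweets.
    word_vectors: dict mapping a word to its dim-dimensional vector
    (e.g., from pre-trained GloVe or a Word2vec model trained on the dataset)."""
    def tweet_vector(tweet):
        vecs = [word_vectors[w] for w in tweet.split() if w in word_vectors]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    matrix = np.vstack([tweet_vector(t) for t in tweets])
    return cosine_similarity(matrix)
```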

For our second proposed algorithm (the one based on fair ranking), the value of the parameter α needs to be decided (see Section 7.2). We try different values of α in the interval (0, 1) using grid search, and finally use the value that obtains the best ROUGE scores on the Claritin and MeToo datasets (a sketch of this tuning loop is given below).
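A minimal sketch of this tuning loop follows; here fair_summary refers to the selection routine sketched in Section 7.2, rouge_f1 is a placeholder for any ROUGE-1 F1 implementation, and the grid of candidate values is purely illustrative.

```python
# Illustrative grid search over the adjusted significance level alpha.
# fair_summary: the selection routine sketched in Section 7.2 (passed in here);
# rouge_f1(candidate, reference): placeholder for any ROUGE-1 F1 implementation.
import numpy as np

def tune_alpha(units, gold_summary, k, p, fair_summary, rouge_f1, grid=None):
    """Return the alpha in (0, 1) whose fair summary scores best against the gold standard."""
    grid = grid if grid is not None else np.arange(0.05, 1.0, 0.05)
    best_alpha, best_score = None, float("-inf")
    for alpha in grid:
        selected = fair_summary(units, k, p, float(alpha))
        candidate = " ".join(text for text, _, _ in selected)
        score = rouge_f1(candidate, gold_summary)
        if score > best_score:
            best_alpha, best_score = float(alpha), score
    return best_alpha
```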

8.2. Baseline: Summarizing classes separately

We consider a simple baseline fair summarization algorithm. Suppose the textual units in the input belong to C classes and, to conform to a desired fairness notion, the summary should contain n_i units from class i (following the setup in Section 7). The simplest way to generate a fair summary is to summarize the textual units of each class i separately into a summary of length n_i, and then combine the class-wise summaries into the final summary of length n = n_1 + … + n_C. We refer to this method as ClasswiseSumm; specifically, we use our proposed FairSumm algorithm, without any fairness constraints, to summarize each class separately (a minimal sketch is given below). We compare the performance of this simple baseline with that of the two proposed fair summarization algorithms in the next section.
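A minimal sketch of this baseline follows; the function name classwise_summ and the summarize(units, length) signature are illustrative, with summarize standing in for FairSumm run without fairness constraints.

```python
# Illustrative ClasswiseSumm baseline: summarize each class separately and
# concatenate the class-wise summaries into the final summary.
def classwise_summ(units_by_class, length_by_class, summarize):
    """units_by_class:  dict mapping class label -> list of textual units
    length_by_class: dict mapping class label -> number of units that class
                     should contribute (set by the desired fairness notion)
    summarize:       base summarizer (e.g., FairSumm without fairness
                     constraints), called as summarize(units, length)"""
    final_summary = []
    for cls, units in units_by_class.items():
        final_summary.extend(summarize(units, length_by_class[cls]))
    return final_summary
```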

Method Nos. of tweets ROUGE-1 ROUGE-2
Female Male Recall F1 Recall F1
Whole data 2,505 (62%) 1,532 (38%)
Without any fairness constraint
FairSumm 37 13 0.548 0.545 0.172 0.171
Fairness: Equal representation
FairSumm 25 25 0.560 0.552 0.188 0.185
ClasswiseSumm 25 25 0.545 0.538 0.172 0.170
Fair-ClusRank 25 25 0.433 0.481 0.135 0.162
Fair-DSDR 25 25 0.285 0.400 0.139 0.206
Fair-LexRank 25 25 0.290 0.370 0.110 0.153
Fair-LSA 25 25 0.513 0.493 0.114 0.109
Fair-LUHN 25 25 0.415 0.429 0.114 0.118
Fair-SumBasic 25 25 0.314 0.436 0.111 0.154
Fair-SummaRNN 25 25 0.356 0.410 0.126 0.154
Fair-SummaCNN 25 25 0.356 0.410 0.126 0.154
Fairness: Proportional representation
FairSumm 31 19 0.572 0.568 0.206 0.202
ClasswiseSumm 31 19 0.550 0.541 0.180 0.173
Fair-ClusRank 31 19 0.439 0.483 0.133 0.159
Fair-DSDR 31 19 0.302 0.425 0.145 0.204
Fair-LexRank 31 19 0.312 0.406 0.115 0.160
Fair-LSA 31 19 0.502 0.487 0.118 0.115
Fair-LUHN 31 19 0.426 0.435 0.119 0.121
Fair-SumBasic 31 19 0.318 0.435 0.116 0.159
Fair-SummaRNN 31 19 0.340 0.394 0.120 0.147
Fair-SummaCNN 31 19 0.340 0.394 0.120 0.147
Table 5. Summarizing the Claritin dataset: Number of tweets written by the two user groups, in the whole dataset and in the summaries of 50 tweets generated by different algorithms. Also given are the ROUGE-1 and ROUGE-2 Recall and F1 scores of each summary.
Method Nos. of tweets ROUGE-1 ROUGE-2
Female Male Recall F1 Recall F1
Whole data 275 (56.3%) 213 (43.7%)
Without any fairness constraint
FairSumm 30 20 0.563 0.569 0.229 0.249
Fairness: Equal representation
FairSumm 25 25 0.616 0.613 0.285 0.296
ClasswiseSumm 25 25 0.587 0.569 0.189 0.196
Fair-ClusRank 25 25 0.499 0.532 0.186 0.198
Fair-DSDR 25 25 0.558 0.574 0.157 0.162
Fair-LexRank 25 25 0.511 0.564 0.209 0.230
Fair-LSA 25 25 0.556 0.541 0.196 0.191
Fair-LUHN 25 25 0.527 0.537 0.207 0.211
Fair-SumBasic 25 25 0.541 0.567 0.180 0.189
Fair-SummaRNN 25 25 0.623 0.629 0.371 0.375
Fair-SummaCNN 25 25 0.623 0.629 0.371 0.375
Fairness: Proportional representation
FairSumm 28 22 0.631 0.648 0.311 0.338
ClasswiseSumm 28 22 0.605 0.622 0.279 0.298
Fair-ClusRank 28 22 0.499 0.528 0.174 0.184
Fair-DSDR 28 22 0.565 0.577 0.168 0.172
Fair-LexRank 28 22 0.518 0.564 0.210 0.228
Fair-LSA 28 22 0.560 0.544 0.197 0.191
Fair-LUHN 28 22 0.533 0.541 0.213 0.216
Fair-SumBasic 28 22 0.546 0.569 0.190 0.198
Fair-SummaRNN 28 22 0.622 0.636 0.385 0.394
Fair-SummaCNN 28 22 0.621 0.636 0.385 0.394
Table 6. Summarizing the MeToo dataset: Number of tweets written by the two user groups, in the whole dataset and in the summaries of 50 tweets generated by different algorithms. Also given are the ROUGE-1 and ROUGE-2 Recall and F1 scores of each summary.

8.3. Results and Insights

We now describe the results of applying various fair summarization algorithms over the three datasets. Some sample summaries obtained by using various algorithms are given in the Supplementary Information.

To evaluate the quality of the summaries, we compute ROUGE-1 and ROUGE-2 Recall and F1 scores by matching the algorithmically generated summaries with the gold standard summaries (described in Section 3); an illustrative evaluation snippet is given below. Table 5 reports the results of summarizing the Claritin dataset. We compute summaries without any fairness constraint, and under the two fairness notions of equal representation and proportional representation (explained in Section 5). In each case, we state the number of tweets in the summary from the two user groups, and the ROUGE scores of the summary. Similarly, Table 6 and Table 7 report the results for the MeToo dataset and the US-Election dataset respectively.
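As one way to reproduce this evaluation, the snippet below uses the open-source rouge-score package; any ROUGE-1/ROUGE-2 implementation following Lin (2004) could be substituted.

```python
# Illustrative ROUGE-1 / ROUGE-2 evaluation of a generated summary against
# a gold standard summary, using the rouge-score package (pip install rouge-score).
from rouge_score import rouge_scorer

def evaluate_summary(generated, gold):
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
    scores = scorer.score(gold, generated)   # score(target, prediction)
    return {
        "ROUGE-1 Recall": scores["rouge1"].recall,
        "ROUGE-1 F1": scores["rouge1"].fmeasure,
        "ROUGE-2 Recall": scores["rouge2"].recall,
        "ROUGE-2 F1": scores["rouge2"].fmeasure,
    }
```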

Method Nos. of tweets ROUGE-1 ROUGE-2
Pro-Rep Pro-Dem Neutral Recall F1 Recall F1
Whole data 1,309 (62%) 658 (31%) 153 (7%)
Without any fairness constraint
FairSumm 34 12 4 0.359 0.460 0.074 0.091
Fairness: Equal representation
FairSumm 17 17 16 0.368 0.467 0.078 0.096
ClasswiseSumm 16 16 18 0.363 0.467 0.071 0.088
Fairness: Proportional representation
FairSumm 31 15 4 0.376 0.490 0.094 0.116
ClasswiseSumm 30 15 5 0.367 0.454 0.081 0.100
Fairness: No Adverse Impact
FairSumm 29 17 4 0.371 0.484 0.086 0.102
FairSumm 30 16 4 0.372 0.489 0.087 0.109
FairSumm 31 15 4 0.376 0.490 0.094 0.116
FairSumm 31 16 3 0.371 0.477 0.085 0.096
FairSumm 32 15 3 0.371 0.473 0.085 0.093
Table 7. Summarizing the US-Election dataset: Number of tweets from the three classes, in the whole dataset and in the summaries of 50 tweets generated by different algorithms. Also given are the ROUGE-1 and ROUGE-2 Recall and F1 scores of each summary.

The FairSumm algorithm (our first proposed algorithm) and the ClasswiseSumm baseline are executed over all three datasets. For the two-class Claritin and MeToo datasets, we also apply our second proposed methodology (stated in Section 7.2), where a fair ranking scheme is used for fair summarization (results in Table 5 and Table 6). Specifically, we apply the fair ranking scheme on top of the existing summarization algorithms described in Section 6, such as ClusterRank, LexRank, SummaRNN, SummaCNN, etc. The resulting fair summarization algorithms are denoted Fair-ClusRank, Fair-LexRank, Fair-SummaRNN, Fair-SummaCNN, and so on. Note that, for generating a fixed-length summary, the neural models use only the textual units labeled ‘1’ (i.e., selected for inclusion in the summary), ranked by their confidence scores. Hence, in the Fair-SummaRNN and Fair-SummaCNN methods, we consider the ranked list of only those textual units that are labeled ‘1’.

Insights from the results: We make the following observations from the results shown in Table 5, Table 6 and Table 7.

Summarizing different classes separately does not yield good summaries: Across all datasets, the proposed FairSumm achieves higher ROUGE scores than ClasswiseSumm under the same fairness notion. Note that the ClasswiseSumm approach applies the same FairSumm algorithm to each class separately. Hence, summarizing each class separately leads to relatively poor summaries compared to the proposed FairSumm methodology.

Proposed algorithms are generalizable to different fairness notions: Table 5, Table 6 and Table 7 demonstrate that both proposed algorithms generalize to various fairness notions. We show summaries conforming to equal representation and proportional representation for all three datasets. Additionally, Table 7 shows different summaries generated by FairSumm under the ‘no adverse impact’ fairness notion (such rows are omitted from the other tables for brevity).

In general, summaries conforming to proportional representation achieve higher ROUGE scores than summaries conforming to the other fairness notions, probably because the human annotators intuitively represent the different groups in the gold standard summaries in roughly the same proportions as they occur in the input data.

Ensuring fairness does not lead to much degradation in summary quality: For all three datasets, FairSumm with fairness constraints always achieves higher ROUGE scores than FairSumm without any fairness constraint. We can also compare Table 2 with Table 5 (both on the Claritin dataset) and Table 4 with Table 6 (both on the MeToo dataset) to see how the existing summarization algorithms (e.g., DSDR, LexRank, SummaRNN) perform without any fairness constraint and after their outputs are made fair using the methodology in Section 7.2. The performances in the two scenarios are comparable; in fact, in a few cases the ROUGE scores marginally improve after the summaries generated by an algorithm are made fair. Thus, making summaries fair does not lead to much degradation in summary quality (as measured by ROUGE scores).

Overall, the results show that the proposed fair summarization algorithms not only ensure various fairness notions in the summaries, but also generate summaries whose ROUGE scores are comparable to (or better than) those of many well-known summarization algorithms (which often do not generate fair summaries, as demonstrated in Section 6).

9. Conclusion

To our knowledge, this work is the first attempt to develop a fairness-preserving text summarization algorithm. Through experiments on several datasets of user-generated microblogs, we show that existing algorithms often produce summaries that are not fair, even though the text written by different social groups is of comparable quality. Note that we do not claim that any of the existing algorithms is intentionally biased towards or against any social group. Rather, since these algorithms optimize only for other metrics (e.g., the textual quality of the summary), the unfairness arises as a side effect.

We propose two algorithms that generate high-quality summaries conforming to various standard notions of fairness. The proposed algorithms address the concern that using an (inadvertently) ‘biased’ summarization algorithm can reduce the visibility of the voice/opinion of a certain social group in the summary. Moreover, downstream applications that use the summaries (e.g., for opinion classification and rating inference (Lloret et al., 2010)) would benefit from a fair summary. We believe that this work will open up interesting research problems on fair summarization, such as extending the notion of fairness to abstractive summaries, estimating user preferences for fair summaries in various applications, and so on.

Taking a broader view, fairness-preserving information filtering algorithms like the ones proposed in this work can be of significant societal importance. Today, social media sites are the gateway to information, and the algorithms these sites use (search, recommendation, sampling, summarization, etc.) are the gatekeepers: people see what these algorithms select for them. However, being optimized for relevance or quality alone, these algorithms lack any embedded sense of ethics or civic responsibility, and hence may not be well suited to curating information for a heterogeneous society. As a result, the flow of information in society is no longer balanced, and a lack of exposure to the perspectives of different social sub-groups can eventually fragment society. Hence, a sense of fairness in algorithms is the need of the hour. As discussed in Section 2, algorithms have recently begun to undergo a fairness revolution; this revolution must bring journalistic ethics to the Web, without which the smooth functioning of society seems improbable.

References

  • Sea (2019) 2019. Twitter Search API. https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html. (2019).
  • Alguliev et al. (2011) Rasim M Alguliev, Ramiz M Aliguliyev, Makrufa S Hajirahimova, and Chingiz A Mehdiyev. 2011. MCMR: Maximum coverage and minimum redundant text summarization model. Expert Systems with Applications 38, 12 (2011).
  • Allahyari et al. (2017) Mehdi Allahyari, Seyed Amin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, and Krys Kochut. 2017. Text Summarization Techniques: A Brief Survey. (2017). http://arxiv.org/abs/1707.02268
  • Babaei et al. (2018) Mahmoudreza Babaei, Juhi Kulshrestha, Abhijnan Chakraborty, Fabrício Benevenuto, Krishna P Gummadi, and Adrian Weller. 2018. Purple feed: Identifying high consensus news posts on social media. In Proceedings of AAAI/ACM AIES.
  • Badanidiyuru et al. (2014) Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. 2014. Streaming submodular maximization: Massive data summarization on the fly. In ACM KDD.
  • Baeza-Yates (2018) Ricardo Baeza-Yates. 2018. Bias on the web. Commun. ACM 61, 6 (2018), 54–61.
  • Biddle (2006) Dan Biddle. 2006. Adverse Impact and Test Validation: A Practitioner’s Guide to Valid and Defensible Employment Testing. Routledge.
  • Biega et al. (2018) Asia J. Biega, Krishna P. Gummadi, and Gerhard Weikum. 2018. Equity of Attention: Amortizing Individual Fairness in Rankings. In ACM SIGIR Conference.
  • Bonchi et al. (2017) Francesco Bonchi, Sara Hajian, Bud Mishra, and Daniele Ramazzotti. 2017. Exposing the probabilistic causal structure of discrimination. International Journal of Data Science and Analytics 3, 1 (2017).
  • Burke et al. (2018) Robin Burke, Nasim Sonboli, and Aldo Ordonez-Gauger. 2018. Balanced neighborhoods for multi-sided fairness in recommendation. In Conference on Fairness, Accountability and Transparency. 202–214.
  • Celis et al. (2018) Elisa L. Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, and Nisheeth K. Vishnoi. 2018. Fair and Diverse DPP-based Data Summarization. CoRR abs/1802.04023 (2018). http://arxiv.org/abs/1802.04023
  • Celis et al. (2016) L Elisa Celis, Amit Deshpande, Tarun Kathuria, and Nisheeth K Vishnoi. 2016. How to be Fair and Diverse? arXiv preprint arXiv:1610.07183 (2016).
  • Chakraborty et al. (2017) Abhijnan Chakraborty, Johnnatan Messias, Fabricio Benevenuto, Saptarshi Ghosh, Niloy Ganguly, and Krishna P Gummadi. 2017. Who makes trends? understanding demographic biases in crowdsourced recommendations.
  • Chakraborty et al. (2019) Abhijnan Chakraborty, Gourab K Patro, Niloy Ganguly, Krishna P Gummadi, and Patrick Loiseau. 2019. Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations. In ACM FAT.
  • Chierichetti et al. (2017) Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. 2017. Fair Clustering Through Fairlets. In NIPS.
  • claritin-dataset (2013) claritin-dataset 2013. Discovering Drug Side Effects with Crowdsourcing. (2013). https://www.crowdflower.com/discovering-drug-side-effects-with-crowdsourcing/.
  • Darwish et al. (2017) K. Darwish, W. Magdy, and Zanouda T. 2017. Trump vs. Hillary: What Went Viral During the 2016 US Presidential Election. In SocInfo.
  • Dong (2018) Yue Dong. 2018. A Survey on Neural Network-Based Summarization Methods. arXiv preprint arXiv:1804.04589 (2018).
  • Du et al. (2013) Nan Du, Yingyu Liang, Maria-Florina Balcan, and Le Song. 2013. Continuous-Time Influence Maximization for Multiple Items. CoRR abs/1312.2164 (2013). arXiv:1312.2164 http://arxiv.org/abs/1312.2164
  • Dupret and Piwowarski (2008) Georges E Dupret and Benjamin Piwowarski. 2008. A user browsing model to predict search engine click data from past observations.. In Proceedings of ACM SIGIR.
  • Dutta et al. (2018) Soumi Dutta, Vibhash Chandra, Kanav Mehra, Asit Kr. Das, Tanmoy Chakraborty, and Saptarshi Ghosh. 2018. Ensemble Algorithms for Microblog Summarization. IEEE Intelligent Systems 33, 3 (2018), 4–14.
  • Dwork et al. (2012) Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference. ACM, 214–226.
  • Erkan and Radev (2004) Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based Lexical Centrality As Salience in Text Summarization. J. Artif. Int. Res. 22, 1 (2004).
  • Friedman and Nissenbaum (1996) Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM TOIS (1996).
  • Garg et al. (2009) Nikhil Garg and others. 2009. Clusterrank: a graph based method for meeting summarization. In INTERSPEECH.
  • Gong and Liu (2001) Yihong Gong and Xin Liu. 2001. Generic Text Summarization Using Relevance Measure and Latent Semantic Analysis. In ACM SIGIR.
  • Gosepath (2011) Stefan Gosepath. 2011. Equality. In The Stanford Encyclopedia of Philosophy (spring 2011 ed.), Edward N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
  • Gupta and Lehal (2010) Vishal Gupta and Gurpreet Singh Lehal. 2010. A Survey of Text Summarization Extractive Techniques. IEEE Journal of Emerging Technologies in Web Intelligence 2, 3 (2010).
  • Hajian et al. (2016) Sara Hajian, Francesco Bonchi, and Carlos Castillo. 2016. Algorithmic bias: From discrimination discovery to fairness-aware data mining. In ACM KDD.
  • Hajian et al. (2014) Sara Hajian, Josep Domingo-Ferrer, and Oriol Farràs. 2014. Generalization-based privacy preservation and discrimination prevention in data publishing and mining. Data Mining and Knowledge Discovery 28 (2014).
  • Hardt et al. (2016) Moritz Hardt, Eric Price, Nati Srebro, and others. 2016. Equality of opportunity in supervised learning. In NIPS.
  • He et al. (2012) Zhanying He and others. 2012. Document Summarization Based on Data Reconstruction. In AAAI.
  • Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proc. Workshop on Text Summarization Branches Out, ACL.
  • Lin and Bilmes (2011) Hui Lin and Jeff Bilmes. 2011. A Class of Submodular Functions for Document Summarization. In ACL (HLT ’11).
  • Lloret et al. (2010) Elena Lloret, Horacio Saggion, and Manuel Palomar. 2010. Experiments on summary-based opinion classification. In NAACL HLT.
  • Luhn (1958) H. P. Luhn. 1958. The Automatic Creation of Literature Abstracts. IBM J. Res. Dev. 2, 2 (1958).
  • Luong et al. (2011) Binh Thanh Luong, Salvatore Ruggieri, and Franco Turini. 2011. k-NN as an implementation of situation testing for discrimination discovery and prevention. In ACM KDD.
  • Mikolov et al. (2013) T. Mikolov, W.T. Yih, and G. Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In NAACL HLT.
  • Miller (1995) George A. Miller. 1995. WordNet: A Lexical Database for English. Commun. ACM 38, 11 (1995), 39–41.
  • Nallapati et al. (2017) Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents.. In AAAI. 3075–3081.
  • Nenkova and Vanderwende (2005) Ani Nenkova and Lucy Vanderwende. 2005. The impact of frequency on summarization. Technical Report. Microsoft Research.
  • O’Connor et al. (2014) Karen O’Connor, Pranoti Pimpalkhute, Azadeh Nikfarjam, Rachel Ginn, Karen L Smith, and Graciela Gonzalez. 2014. Pharmacovigilance on Twitter? Mining tweets for adverse drug reactions. In AMIA Annual Symposium Proceedings.
  • Oxley (2006) James G. Oxley. 2006. Matroid Theory (Oxford Graduate Texts in Mathematics). Oxford University Press, Inc.
  • Pedreshi et al. (2008) Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In ACM KDD.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proc. EMNLP.
  • Radev et al. (2002) Dragomir R. Radev, Eduard Hovy, and Kathleen McKeown. 2002. Introduction to the Special Issue on Summarization. Comput. Linguist. 28, 4 (2002).
  • Rawls (2009) John Rawls. 2009. A theory of justice. Harvard university press.
  • Roemer (2009) John E Roemer. 2009. Equality of opportunity. Harvard University Press.
  • Salton (1989) Gerard Salton. 1989. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison-Wesley, Reading, MA.
  • Singh and Joachims (2018) Ashudeep Singh and Thorsten Joachims. 2018. Fairness of exposure in rankings. In Proceedings of ACM SIGKDD.
  • Wu et al. (2018) Yongkai Wu, Lu Zhang, and Xintao Wu. 2018. On Discrimination Discovery and Removal in Ranked Data using Causal Graph. In Proceedings of ACM SIGKDD.
  • Zafar et al. (2017) Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. 2017. Fairness Constraints: Mechanisms for Fair Classification. In AIStats.
  • Zehlike et al. (2017) Meike Zehlike, Francesco Bonchi, Carlos Castillo, Sara Hajian, Mohamed Megahed, and Ricardo Baeza-Yates. 2017. FA*IR: A Fair Top-k Ranking Algorithm. In ACM CIKM.
  • Zemel et al. (2013) Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In ICML.