Social Biases in NLP Models as Barriers for Persons with Disabilities

05/02/2020 · Ben Hutchinson et al. · Google

Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases towards mentions of disability in two existing NLP models for English: toxicity prediction and sentiment analysis. Next, we demonstrate that the neural embeddings that are the critical first step in most NLP pipelines similarly contain undesirable biases towards mentions of disability. We end by highlighting topical biases in the discourse about disability which may contribute to the observed model biases; for instance, gun violence, homelessness, and drug addiction are over-represented in texts discussing mental illness.




1 Introduction

This paper focuses on the representation of persons with disabilities through the lens of technology. Specifically, we examine how NLP models classify or predict text relating to persons with disabilities (see Table 1). This is important because NLP models are increasingly being used for tasks such as fighting online abuse Jigsaw (2017), measuring brand sentiment Mostafa (2013), and matching job applicants to job opportunities De-Arteaga et al. (2019). In addition, since text classifiers are trained on large datasets, the biases they exhibit may be indicative of societal perceptions of persons with disabilities Caliskan et al. (2017). If models inappropriately condition on mentions of disability, this could impact people writing, reading, or seeking information about a disability. Depending on how such models are deployed, this could potentially result in reduced autonomy, reduced freedom of speech, perpetuation of societal stereotypes or inequities, or harms to the dignity of individuals.

While previous studies have examined unintended biases in NLP models against other historically marginalized groups Bolukbasi et al. (2016); Caliskan et al. (2017); Garg et al. (2017); Barocas et al. (2017); Garg et al. (2019); Dixon et al. (2018); Noble (2018); Manzini et al. (2019); Sap et al. (2019); May et al. (2019); Speer (2017), bias with respect to different disability groups has been relatively under-explored. However, over one billion individuals (about 15% of the world's population) are persons with disabilities, and disability is sometimes the subject of strong negative social biases. For example, a 2007 study found implicit and explicit preferences against people with disabilities, compared to people without disabilities, across social group domains Nosek et al. (2007).

Sentence Toxicity
I am a person with mental illness. 0.62
I am a deaf person. 0.44
I am a blind person. 0.39
I am a tall person. 0.03
I am a person. 0.08
I will fight for people with mental illnesses. 0.54
I will fight for people who are deaf. 0.42
I will fight for people who are blind. 0.29
I will fight for people. 0.14
Table 1: Example toxicity scores from Perspective API.

In this paper, we study how social biases about persons with disabilities can be perpetuated by NLP models. First, we demonstrate that two existing NLP models for classifying English text contain measurable biases concerning mentions of disability, and that the strength of these biases are sensitive to how disability is mentioned. Second, we show that language models that feed NLP systems for downstream application similarly contain measurable biases around disability. Third, we analyze a public corpus and find ways in which social biases in data provide a likely explanation for the observed model biases. We conclude by discussing the need for the field to consider socio-technical factors to understand the implications of findings of model bias.

2 Linguistic Phrases for Disabilities

Our analyses in this paper use a set of 56 linguistic expressions (in English) for referring to people with various types of disabilities, e.g. a deaf person. We partition these expressions as either Recommended or Non-Recommended, according to their prescriptive status, by consulting guidelines published by three US-based organizations: the Anti-Defamation League, ACM SIGACCESS, and the ADA National Network Cavender et al. (2014); Hanson et al. (2015); League (2005); Network (2018). We acknowledge that the binary distinction between recommended and non-recommended is only the coarsest-grained view of complex and multi-dimensional social norms; however, more input from impacted communities is required before attempting more sophisticated distinctions Jurgens et al. (2019). We also group the expressions according to the type of disability mentioned; e.g., the category hearing includes phrases such as "a deaf person" and "a person who is deaf". Table 2 shows a few example terms. The full lists of recommended and non-recommended terms are in Tables 6 and 7 in the appendix.

3 Biases in Text Classification Models

Following Garg et al. (2019); Prabhakaran et al. (2019), we use the notion of perturbation, whereby the phrases for referring to people with disabilities, described above, are inserted into the same slots in sentence templates. We first retrieve a set of naturally occurring sentences that contain the pronoun he or she. (Future work will explore how to include non-binary pronouns.) We then select a pronoun in each sentence and "perturb" the sentence by replacing this pronoun with each of the phrases described above. Subtracting the NLP model score for the original sentence from that of the perturbed sentence gives the score diff, a measure of how changing from a pronoun to a phrase mentioning disability affects the model score.
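The perturbation procedure above can be sketched as follows. Note that `model_score` here is a toy lexicon stand-in, not the actual toxicity or sentiment model (e.g. a Perspective API call), and the example sentences and markers are illustrative assumptions:

```python
import re

# Stand-in scorer: a toy lexicon in place of a real classifier
# (e.g. a call to the Perspective API). Illustrative only.
def model_score(sentence):
    toxic_markers = {"stupid", "hate", "insane"}
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return len(words & toxic_markers) / max(len(words), 1)

def perturb(sentence, pronoun, phrase):
    """Replace the first occurrence of the pronoun with a disability phrase."""
    return re.sub(rf"\b{re.escape(pronoun)}\b", phrase, sentence, count=1)

def score_diff(sentence, pronoun, phrase):
    """Perturbed score minus original score: how the substitution
    shifts the model's output."""
    return model_score(perturb(sentence, pronoun, phrase)) - model_score(sentence)

print(perturb("He went home.", "He", "A person who is deaf"))
print(round(score_diff("She said it was fine.", "She", "A person with depression"), 3))
```

Averaging `score_diff` over many sentences for each phrase, as done here over the Reddit sub-corpus, yields the per-category biases reported below.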

(a) Toxicity model: higher means more likely to be toxic.
(b) Sentiment model: lower means more negative.
Figure 1: Average change in model score when substituting a recommended (blue) or a non-recommended (yellow) phrase for a person with a disability, compared to a pronoun. Many recommended phrases for disability are associated with toxicity/negativity, which might result in innocuous sentences discussing disability being penalized.

We apply this method to a set of 1,000 sentences extracted at random from the Reddit sub-corpus of Voigt et al. (2018). Figure 1(a) shows the results for toxicity prediction Jigsaw (2017), which outputs a score in [0, 1], with higher scores indicating more toxicity. For each category, we show the average score diff for recommended vs. non-recommended phrases, along with the associated error bars. All categories of disability are associated with varying degrees of toxicity; the aggregate average score diff for recommended phrases (0.007) was smaller than that for non-recommended phrases (0.057). Disaggregated by category, some categories elicit a stronger effect even for the recommended phrases. Since the primary intended use of this model is to facilitate moderation of online comments, this bias could result in non-toxic comments mentioning disabilities being flagged as toxic at a disproportionately high rate, and hence in innocuous sentences discussing disability being suppressed. Figure 1(b) shows the results for a sentiment analysis model Google (2018) that outputs scores in [-1, 1], with higher scores indicating more positive sentiment. Similar to the toxicity model, we see patterns of both desirable and undesirable associations.

Category Phrase
sight a blind person (R)
sight a sight-deficient person (NR)
mental_health a person with depression (R)
mental_health an insane person (NR)
cognitive a person with dyslexia (R)
cognitive a slow learner (NR)
Table 2: Example phrases recommended (R) and non-recommended (NR) to refer to people with disabilities.

4 Biases in Language Representations

Neural text embedding models Mikolov et al. (2013) are critical first steps in today's NLP pipelines. These models learn vector representations of words, phrases, or sentences, such that semantic relationships between words are encoded in the geometric relationships between vectors. Text embedding models capture some of the complexities and nuances of human language, but they may also encode undesirable correlations in the data that reflect harmful social biases Bolukbasi et al. (2016); May et al. (2019); Garg et al. (2017). Previous studies have predominantly focused on biases related to race and gender, with the exception of Caliskan et al. (2017), who considered physical and mental illness. Biases with respect to broader disability groups remain under-explored. In this section, we analyze how the widely used bidirectional Transformer model BERT Devlin et al. (2018) represents phrases mentioning persons with disabilities. (We use the 1024-dimensional 'large' uncased version.)

Following prior work studying social biases in BERT Kurita et al. (2019), we adopt a template-based fill-in-the-blank analysis. Given a query sentence with a missing word, BERT predicts a ranked list of words to fill the blank. We construct a set of simple hand-crafted templates of the form '<phrase> is ____.', where <phrase> is perturbed with the set of recommended disability phrases described above. To obtain a larger set of query sentences, we additionally perturb the phrases by introducing references to family members and friends; for example, in addition to 'a person', we include 'my sibling', 'my parent', 'my friend', etc. We then study how the top-ranked words predicted by BERT (we consider the top 10 predictions) change when different disability phrases are used in the query sentence.
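A minimal sketch of the query construction follows. The phrase and relation lists are small illustrative samples, and the relation-substitution rule (article to 'my', 'person' to a relation word) is an assumption about how the family/friend variants were formed; actually scoring the queries would additionally require a masked language model:

```python
RECOMMENDED = ["a deaf person", "a blind person", "a person with depression"]
RELATIONS = ["sibling", "parent", "friend"]

def with_relation(phrase, relation):
    """'a deaf person' + 'sibling' -> 'my deaf sibling'.
    Assumes the phrase starts with an article and contains 'person'."""
    words = phrase.split()
    words[0] = "my"
    words[words.index("person")] = relation
    return " ".join(words)

def build_queries(phrases, relations, mask="[MASK]"):
    """Build '<phrase> is [MASK].' queries, including family/friend variants."""
    queries = [f"{p} is {mask}." for p in phrases]
    queries += [f"{with_relation(p, r)} is {mask}." for p in phrases for r in relations]
    return queries

queries = build_queries(RECOMMENDED, RELATIONS)
# Each query would then be fed to a masked LM and the top-10 [MASK]
# predictions collected, e.g. (requires a model download):
#   from transformers import pipeline
#   fill = pipeline("fill-mask", model="bert-large-uncased", top_k=10)
#   predictions = {q: [p["token_str"] for p in fill(q)] for q in queries}
```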

Figure 2: Frequency with which word suggestions from BERT produce negative sentiment score.
condition Score treatment Score infra. Score linguistic Score social Score
mentally ill 23.1 help 9.7 hospital 6.3 people 9.0 homeless 12.2
mental illness 22.1 treatment 9.6 services 5.3 person 7.5 guns 8.4
mental health 21.8 care 7.6 facility 5.1 or 7.1 gun 7.9
mental 18.7 medication 6.2 hospitals 4.1 a 6.2 drugs 6.2
issues 11.3 diagnosis 4.7 professionals 4.0 with 6.1 homelessness 5.5
mentally 10.4 therapy 4.2 shelter 3.8 patients 5.8 drug 5.1
mental disorder 9.9 treated 4.2 facilities 3.4 people who 5.6 alcohol 5.0
disorder 9.0 counseling 3.9 institutions 3.4 individuals 5.2 police 4.8
illness 8.7 meds 3.8 programs 3.1 often 4.8 addicts 4.7
problems 8.0 medications 3.8 ward 3.0 many 4.5 firearms 4.7
Table 3: Terms that are over-represented in comments with mentions of psychiatric_or_mental_illness in the Jigsaw (2019) dataset, grouped across the five categories described in Section 5. Score represents the log-odds ratio as calculated using Monroe et al. (2008); a score greater than 1.96 is considered statistically significant.

To assess the valence differences of the resulting set of completed sentences for each phrase, we use the Google Cloud sentiment model Google (2018). For each BERT-predicted word w, we obtain the sentiment for the sentence 'A person is <w>.' We use the neutral a person instead of the original phrase so that we assess only the differences in sentiment scores for the words predicted by BERT, and not the biases associated with the disability phrases themselves in the sentiment model (demonstrated in Section 3). Figure 2 plots the frequency with which the fill-in-the-blank results produce negative sentiment scores for query sentences constructed from phrases referring to persons with different types of disabilities. For queries derived from most of the phrases referencing persons with disabilities, a larger percentage of predicted words produce negative sentiment scores. This suggests that BERT associates more negatively valenced words with phrases referencing persons with disabilities. Since BERT text embeddings are increasingly being incorporated into a wide range of NLP applications, such negative associations have the potential to manifest in different, and potentially harmful, ways in many downstream tasks.
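The valence tally can be sketched as below. The `sentiment` function is a toy lexicon stand-in for the Cloud sentiment model, and the word lists are illustrative assumptions:

```python
# Toy lexicon stand-in for the sentiment model; only the predicted
# word varies, since it sits in the fixed carrier 'A person is <w>.'
NEGATIVE = {"sick", "dangerous", "weird"}
POSITIVE = {"kind", "smart", "happy"}

def sentiment(sentence):
    word = sentence.rstrip(".").split()[-1].lower()
    if word in NEGATIVE:
        return -1.0
    if word in POSITIVE:
        return 1.0
    return 0.0

def negative_fraction(predicted_words):
    """Fraction of predicted fill words whose carrier sentence
    'A person is <w>.' scores negative."""
    scores = [sentiment(f"A person is {w}.") for w in predicted_words]
    return sum(s < 0 for s in scores) / len(scores)

print(negative_fraction(["sick", "kind", "weird", "tall"]))
```

Computing this fraction per disability category over the top-10 BERT predictions gives the frequencies plotted in Figure 2.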

5 Biases in Data

NLP models such as the ones discussed above are trained on large textual corpora, which are analyzed to build “meaning” representations for words based on word co-occurrence metrics, drawing on the idea that “you shall know a word by the company it keeps” Firth (1957). So, what company do mentions of disabilities keep within the textual corpora we use to train our models?

To answer this question, we need a large dataset of sentences that mention different kinds of disability. We use the dataset of online comments released as part of the Jigsaw Unintended Bias in Toxicity Classification challenge Borkan et al. (2019); Jigsaw (2019), in which a subset of 405K comments is labeled for mentions of disability, grouped into four types: physical disability, intellectual or learning disability, psychiatric or mental illness, and other disability. We focus here only on psychiatric or mental illness, since the other types have fewer than 100 instances each in the dataset. Of the 4,889 comments labeled as mentioning psychiatric or mental illness, 1,030 (21%) were labeled as toxic and 3,859 as non-toxic. (Note that 21% is a high proportion compared to the 8% of toxic comments in the overall dataset.)

Our goal is to find words and phrases that are statistically more likely to appear in comments that mention psychiatric or mental illness than in those that do not. We first up-sampled the toxic comments with disability mentions (to N=3,859, by repetition at random), so that we have an equal number of toxic and non-toxic comments without losing any of the non-toxic mentions of the disability. We then sampled the same number of comments from those without the disability mention, also balanced across toxic and non-toxic categories. In total, this gave us 15,436 (= 4 × 3,859) comments. Using this 4-way balanced dataset, we calculated the log-odds ratio metric Monroe et al. (2008) for all unigrams and bigrams (no stopword removal), which measures how over-represented a term is in the group of comments with a disability mention while controlling for co-occurrences due to chance. We manually inspected the top 100 terms that are significantly over-represented in comments with disability mentions. Most of them fall into one of the following five categories (we omit a small number of phrases that do not belong to any of these, for lack of space):


  • condition: terms that describe the disability

  • treatment: terms that refer to treatments or care for persons with the disability

  • infrastructure: terms that refer to infrastructure that supports people with the disability

  • linguistic: phrases that are linguistically associated when speaking about groups of people

  • social: terms that refer to social associations
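The log-odds ratio with an informative Dirichlet prior from Monroe et al. (2008), used to score the terms above, can be sketched as follows. The token counts are toy illustrations, and taking the combined corpus as the prior is one common choice, assumed here:

```python
import math
from collections import Counter

def log_odds_z(counts_a, counts_b, prior):
    """Z-scored log-odds ratio with an informative Dirichlet prior
    (Monroe et al., 2008). Positive z: over-represented in corpus A.
    `prior` must give every token a non-zero pseudo-count."""
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    n_p = sum(prior.values())
    z = {}
    for w in set(counts_a) | set(counts_b):
        ya, yb, aw = counts_a[w], counts_b[w], prior[w]
        # Log-odds difference between the two (prior-smoothed) corpora.
        delta = (math.log((ya + aw) / (n_a + n_p - ya - aw))
                 - math.log((yb + aw) / (n_b + n_p - yb - aw)))
        # Approximate variance of the estimate.
        var = 1.0 / (ya + aw) + 1.0 / (yb + aw)
        z[w] = delta / math.sqrt(var)
    return z

# Toy counts: corpus A = comments with a disability mention.
a = Counter({"homeless": 10, "the": 50})
b = Counter({"homeless": 1, "the": 50})
z = log_odds_z(a, b, prior=a + b)  # combined corpus as the prior
# z > 1.96 would indicate statistically significant over-representation.
```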

Table 3 shows the top 10 terms in each of these categories, along with the log-odds ratio score denoting the strength of association. As expected, the condition phrases have the strongest associations. However, the social phrases have the next strongest, more than the treatment, infrastructure, and linguistic phrases. The social phrases largely belong to three topics: homelessness, gun violence, and drug addiction, all of which have negative valences. That is, these topics are often discussed in relation to mental illness; for instance, the mental health issues of the homeless population are often in the public discourse. While these associations are perhaps not surprising, it is important to note that such associations with topics of arguably negative valence significantly shape the way disability terms are represented within NLP models, which in turn may contribute to the model biases we observed in the previous sections.

6 Implications of Model Biases

We have so far worked in a purely technical framing of model biases—i.e., in terms of model inputs and outputs—as is common in much of the technical ML literature on fairness Mulligan et al. (2019). However, normative and social justifications should be considered when applying a statistical definition of fairness Barocas et al. (2018); Blodgett et al. (2020). Further, responsible deployment of NLP systems should also include the socio-technical considerations for various stakeholders impacted by the deployment, both directly and indirectly, as well as voluntarily and involuntarily Selbst et al. (2019); Bender (2019), accounting for long-term impacts Liu et al. (2019); D’Amour et al. (2020) and feedback loops Ensign et al. (2018); Milli et al. (2019); Martin Jr. et al. (2020).

In this section, we briefly outline some potential contextual implications of our findings in the area of NLP-based interventions on online abuse. Following Dwork et al. (2012) and Cao and Daumé III (2020), we use three hypothetical scenarios to illustrate some key implications.

NLP models for detecting abuse are frequently deployed in online fora to censor undesirable language and promote civil discourse. Biases in these models have the potential to directly result in messages with mentions of disability being disproportionately censored, especially without humans “in the loop”. Since people with disabilities are also more likely to talk about disability, this could impact their opportunity to participate equally in online fora Hovy and Spruit (2016), reducing their autonomy and dignity. Readers and searchers of online fora might also see fewer mentions of disability, exacerbating the already reduced visibility of disability in the public discourse. This can impact public awareness of the prevalence of disability, which in turn influences societal attitudes (for a survey, see Scior, 2011).

In a deployment context that involves human moderation, model scores may sometimes be used to select and prioritize messages for review by moderators Veglis (2014); Chandrasekharan et al. (2019). Are messages with higher model scores reviewed first? Or those with lower scores? Decisions such as these will determine how model biases will impact the delays different authors experience before their messages are approved.

In another deployment context, models for detecting abuse can be used to nudge writers to rethink comments which might be interpreted as toxic Jurgens et al. (2019). In this case, model biases may disproportionately invalidate language choices of people writing about disabilities, potentially causing disrespect and offense.

The issues listed above can be exacerbated if the data distributions seen during model deployment differ from those used during model development, where we would expect less robust model performance. Due to the complex situational nature of these issues, release of NLP models should be accompanied by information about intended and non-intended uses, about training data, and about known model biases Mitchell et al. (2019).

7 Discussion and Conclusion

Social biases in NLP models are deserving of concern, due to their ability to moderate how people engage with technology and to perpetuate negative stereotypes. We have presented evidence that these concerns extend to biases around disability, by demonstrating bias in three readily available NLP models that are increasingly being deployed in a wide variety of applications. We have shown that models are sensitive to various types of disabilities being referenced, as well as to the prescriptive status of referring expressions.

It is important to recognize that social norms around language are contextual and differ across groups Castelle (2018); Davidson et al. (2019); Vidgen et al. (2019). One limitation of this paper is its restriction to the English language and US sociolinguistic norms. Future work is required to study if our findings carry over to other languages and cultural contexts. Both phrases and ontological definitions around disability are themselves contested, and not all people who would describe themselves with the language we analyze would identify as disabled. As such, when addressing ableism in ML models, it is particularly critical to involve disability communities and other impacted stakeholders in defining appropriate mitigation objectives.


Acknowledgments

We would like to thank Margaret Mitchell, Lucy Vasserman, Ben Packer, and the anonymous reviewers for their helpful feedback.


  • S. Barocas, K. Crawford, A. Shapiro, and H. Wallach (2017) The problem with bias: from allocative to representational harms in machine learning. Special Interest Group for Computing, Information and Society (SIGCIS). Cited by: §1.
  • S. Barocas, M. Hardt, and A. Narayanan (2018) Fairness and machine learning: limitations and opportunities. External Links: Link Cited by: §6.
  • E. M. Bender (2019) A typology of ethical risks in language technology with an eye towards where transparent documentation can help. Note: The Future of Artificial Intelligence: Language, Ethics, Technology Cited by: §6.
  • S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach (2020) Language (technology) is power: The need to be explicit about NLP harms. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Cited by: §6.
  • T. Bolukbasi, K. Chang, J. Zou, V. Saligrama, and A. Kalai (2016) Man is to Computer Programmer As Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Cited by: §1, §4.
  • D. Borkan, L. Dixon, J. Sorensen, N. Thain, and L. Vasserman (2019) Nuanced metrics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference, Cited by: §5.
  • A. Caliskan, J. J. Bryson, and A. Narayanan (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356, pp. 183–186. Cited by: §1, §1, §4.
  • Y. T. Cao and H. Daumé III (2020) Toward gender-inclusive coreference resolution. In Proceedings of the Annual Meeting of the Association for Computational Lingustics (ACL), Cited by: §6.
  • M. Castelle (2018) The linguistic ideologies of deep abusive language classification. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), Brussels, Belgium. External Links: Link, Document Cited by: §7.
  • A. Cavender, S. Trewin, and V. Hanson (2014) Accessible writing guide. Cited by: §2.
  • E. Chandrasekharan, C. Gandhi, M. W. Mustelier, and E. Gilbert (2019) Crossmod: a cross-community learning-based system to assist reddit moderators. Proc. ACM Hum.-Comput. Interact. 3 (CSCW). External Links: Link, Document Cited by: §6.
  • A. D’Amour, H. Srinivasan, J. Atwood, P. Baljekar, D. Sculley, and Y. Halpern (2020) Fairness is not static: deeper understanding of long term fairness via simulation studies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 525–534. Cited by: §6.
  • T. Davidson, D. Bhattacharya, and I. Weber (2019) Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, Florence, Italy, pp. 25–35. External Links: Link, Document Cited by: §7.
  • M. De-Arteaga, A. Romanov, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, and A. T. Kalai (2019) Bias in bios: a case study of semantic representation bias in a high-stakes setting. Proceedings of the Conference on Fairness, Accountability, and Transparency. Cited by: §1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Cited by: §4.
  • L. Dixon, J. Li, J. Sorensen, N. Thain, and L. Vasserman (2018) Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, Cited by: §1.
  • C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel (2012) Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, Cited by: §6.
  • D. Ensign, S. A. Friedler, S. Neville, C. Scheidegger, and S. Venkatasubramanian (2018) Runaway feedback loops in predictive policing. In Conference of Fairness, Accountability, and Transparency, Cited by: §6.
  • J. R. Firth (1957) A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis. Cited by: §5.
  • N. Garg, L. Schiebinger, D. Jurafsky, and J. Zou (2017) Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences 115. Cited by: §1, §4.
  • S. Garg, V. Perot, N. Limtiaco, A. Taly, E. H. Chi, and A. Beutel (2019) Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19. External Links: ISBN 9781450363242, Link, Document Cited by: §1, §3.
  • C. Google (2018) Note: Accessed May 21, 2019 External Links: Link Cited by: §3, §4.
  • V. L. Hanson, A. Cavender, and S. Trewin (2015) Writing about accessibility. Interactions 22. Cited by: §2.
  • D. Hovy and S. L. Spruit (2016) The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Cited by: §6.
  • Jigsaw (2017) External Links: Link Cited by: §1, §3.
  • Jigsaw (2019) External Links: Link Cited by: Table 3, §5.
  • D. Jurgens, L. Hemphill, and E. Chandrasekharan (2019) A just and comprehensive strategy for using NLP to address online abuse. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. External Links: Link, Document Cited by: §2, §6.
  • K. Kurita, N. Vyas, A. Pareek, A. W. Black, and Y. Tsvetkov (2019) Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, Florence, Italy, pp. 166–172. External Links: Link, Document Cited by: §4.
  • A. League (2005) Suggested language for people with disabilities. External Links: Link Cited by: §2.
  • L. T. Liu, S. Dean, E. Rolf, M. Simchowitz, and M. Hardt (2019) Delayed impact of fair machine learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, External Links: Document, Link Cited by: §6.
  • T. Manzini, L. Yao Chong, A. W. Black, and Y. Tsvetkov (2019) Black is to criminal as caucasian is to police: detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. External Links: Link, Document Cited by: §1.
  • D. Martin Jr., V. Prabhakaran, J. Kuhlberg, A. Smart, and W. Isaac (2020) Participatory problem formulation for fairer machine learning through community based system dynamics approach. In ICLR Workshop on Machine Learning in Real Life (ML-IRL), Cited by: §6.
  • C. May, A. Wang, S. Bordia, S. R. Bowman, and R. Rudinger (2019) On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Cited by: §1, §4.
  • T. Mikolov, K. Chen, G. S. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. International Conference on Learning Representations. Cited by: §4.
  • S. Milli, J. Miller, A. D. Dragan, and M. Hardt (2019) The social cost of strategic classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 230–239. Cited by: §6.
  • M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru (2019) Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pp. 220–229. Cited by: §6.
  • B. L. Monroe, M. P. Colaresi, and K. M. Quinn (2008) Fightin' words: lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16 (4), pp. 372–403. Cited by: Table 3, §5.
  • M. M. Mostafa (2013) More than words: social networks’ text mining for consumer brand sentiments. Expert Systems with Applications 40 (10), pp. 4241–4251. Cited by: §1.
  • D. K. Mulligan, J. A. Kroll, N. Kohli, and R. Y. Wong (2019) This thing called fairness: disciplinary confusion realizing a value in technology. Proceedings of the ACM on Human-Computer Interaction 3 (CSCW), pp. 1–36. Cited by: §6.
  • A. N. Network (2018) Guidelines for writing about people with disabilities. External Links: Link Cited by: §2.
  • S. U. Noble (2018) Algorithms of oppression: how search engines reinforce racism. NYU Press. Cited by: §1.
  • B. A. Nosek, F. L. Smyth, J. J. Hansen, T. Devos, N. M. Lindner, K. A. Ranganath, C. T. Smith, K. R. Olson, D. Chugh, A. G. Greenwald, and M. R. Banaji (2007) Pervasiveness and correlates of implicit attitudes and stereotypes. European Review of Social Psychology 18 (1), pp. 36–88. External Links: Document, Link, Cited by: §1.
  • V. Prabhakaran, B. Hutchinson, and M. Mitchell (2019) Perturbation sensitivity analysis to detect unintended model biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. External Links: Link, Document Cited by: §3.
  • M. Sap, D. Card, S. Gabriel, Y. Choi, and N. A. Smith (2019) The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 1668–1678. External Links: Link, Document Cited by: §1.
  • K. Scior (2011) Public awareness, attitudes and beliefs regarding intellectual disability: a systematic review. Research in developmental disabilities 32 (6), pp. 2164–2182. Cited by: §6.
  • A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi (2019) Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68. Cited by: §6.
  • R. Speer (2017) Conceptnet numberbatch 17.04: better, less-stereotyped word vectors. ConceptNet blog. April 24. Cited by: §1.
  • A. Veglis (2014) Moderation techniques for social media content. In International Conference on Social Computing and Social Media, Cited by: §6.
  • B. Vidgen, A. Harris, D. Nguyen, R. Tromble, S. Hale, and H. Margetts (2019) Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, Florence, Italy, pp. 80–93. External Links: Link, Document Cited by: §7.
  • R. Voigt, D. Jurgens, V. Prabhakaran, D. Jurafsky, and Y. Tsvetkov (2018) RtGender: a corpus for studying differential responses to gender. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. External Links: Link Cited by: §3.

Appendix A Appendices

A.1 Expressions for Disability

Table 6 shows the "recommended" phrases that were used in the experiments, based on guidelines published by the Anti-Defamation League, ACM SIGACCESS, and the ADA National Network. Table 7 shows the "non-recommended" phrases that were used. The grouping of the phrases into "categories" was done by the authors.

a.2 Tabular versions of results

In order to facilitate different modes of accessibility, we here include results from the experiments in table form in Table 4 and Table 5.

Category Freq. of negative sentiment score
cerebral_palsy 0.34
chronic_illness 0.19
cognitive 0.14
downs_syndrome 0.09
epilepsy 0.16
hearing 0.28
mental_health 0.19
mobility 0.35
physical 0.23
short_stature 0.34
sight 0.29
unspecified 0.2
without 0.18
Table 4: Frequency with which the top-10 word suggestions from the BERT language model produce a negative sentiment score when using recommended phrases.
Toxicity (higher=more toxic) Sentiment (lower=more negative)
Category Recommended Non-recommended Recommended Non-recommended
cerebral_palsy -0.02 0.08 -0.06 -0.02
chronic_illness 0.03 0.01 -0.09 -0.27
cognitive -0.00 0.12 -0.02 -0.02
downs_syndrome 0.02 0.14 -0.14 -0.01
epilepsy -0.01 0.02 -0.03 -0.03
hearing 0.03 0.12 -0.02 -0.09
mental_health 0.02 0.07 -0.03 -0.15
mobility -0.01 0.03 -0.11 -0.03
physical -0.00 0.02 -0.02 -0.00
short_stature 0.02 0.06 -0.01 -0.03
sight 0.04 0.03 -0.02 -0.03
unspecified 0.00 0.04 -0.05 -0.10
without -0.00 0.00 -0.05 -0.02
Aggregate 0.01 0.06 -0.04 -0.06
Table 5: Average change in NLP model score when substituting a recommended or non-recommended phrase for a person with a disability, compared to using a pronoun. Many recommended phrases around disability are associated with toxicity/negativity, which might result in innocuous sentences discussing disability being penalized.
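The substitution procedure behind Table 5 can be sketched as follows. Here `model_score` is a hypothetical stand-in for the toxicity or sentiment classifier, and the template sentences are illustrative, not the exact set used in the experiments.

```python
# Sketch of the perturbation analysis: substitute a disability phrase into
# template sentences in place of a pronoun baseline, and record the average
# change in model score.

def model_score(sentence):
    # Hypothetical stand-in for a toxicity or sentiment classifier.
    # A toy scorer that penalizes one flagged word, for illustration only.
    return -0.5 if "insane" in sentence else 0.0

TEMPLATES = ["{} will go to the park.", "I met {} yesterday."]

def score_delta(phrase, baseline="a person"):
    """Average change in score when `phrase` replaces the baseline."""
    deltas = [
        model_score(t.format(phrase)) - model_score(t.format(baseline))
        for t in TEMPLATES
    ]
    return sum(deltas) / len(deltas)
```

Averaging the deltas within each phrase category would then yield per-category scores of the kind reported in Table 5.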
Category Phrases
hearing a deaf person, a person who is deaf
sight a blind person
unspecified a person with a disability
mobility a person in a wheelchair, a wheelchair user, a person who walks with a limp
cerebral_palsy a person with cerebral palsy
mental_health a person with a mental illness, a person with depression, a person with a mental health disability, a person with schizophrenia
epilepsy a person with epilepsy
physical a person with spinal curvature, a person with paraplegia, a person with quadriplegia
chronic_illness a person who is chronically ill
short_stature a short-statured person, a little person
cognitive a person with dyslexia, a person with ADHD
downs_syndrome a person with Down’s syndrome
without a person without a disability
Table 6: “Recommended” phrases used for analyzing the sensitivity of machine learning models to mentions of disability
Category Phrases
hearing the deaf, a deaf mute person, a deaf and dumb person
sight a sight deficient person, a person with sight problems
unspecified the disabled, a disabled person, a differently abled person, a handi-capable person, a crippled person, a cripple, a deformed person
mobility a person restricted to a wheelchair, a person confined to a wheelchair, a wheelchair-bound person, a handicapped person, a lame person
cerebral_palsy a cerebral-palsied person
mental_health an insane person, an emotionally disturbed person
epilepsy an epileptic person
physical a paraplegic, a quadriplegic, a physically challenged person, a hunchbacked person
chronic_illness an invalid
short_stature a midget, a dwarf
cognitive a retarded person, a deranged person, a deviant person, a demented person, a slow learner
downs_syndrome a mongoloid
without a normal person
Table 7: “Non-recommended” phrases used for analyzing the sensitivity of machine learning models to mentions of disability. Despite the offensive and potentially triggering nature of some of these phrases, we include them here i) to enable repeatability of analyses, and ii) to document the mapping from phrases to categories that we used.

a.3 Text classification analyses for individual phrases

Figures 3 and 4 show the sensitivity of the toxicity and sentiment models to individual phrases.

Figure 3: Average change in toxicity model score when substituting each phrase, compared to using a pronoun
Figure 4: Average change in sentiment model score when substituting each phrase, compared to using a pronoun

a.4 Additional details of BERT analysis

We used seven hand-crafted query templates of the form ‘<phrase> is ____’, based on gender-neutral references to friends and family: ‘a person’, ‘my child’, ‘my sibling’, ‘my parent’, ‘my partner’, ‘my spouse’, ‘my friend’. Each template is subsequently perturbed with the set of recommended disability phrases.
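The perturbation step can be sketched as below. The construction shown is one plausible reading, an assumption rather than the paper's exact procedure: since each recommended phrase is headed by “a person”, that head is swapped for each of the seven gender-neutral subjects, and the blank is written as BERT's [MASK] token. The phrase list is abbreviated; the full set appears in Table 6.

```python
# Build the fill-in-the-blank queries: swap the "a person" head of each
# recommended phrase for each gender-neutral subject, then append the
# masked slot that BERT will fill in.

SUBJECTS = ["a person", "my child", "my sibling", "my parent",
            "my partner", "my spouse", "my friend"]

# Abbreviated; the full set of recommended phrases is in Table 6.
PHRASES = ["a person who is deaf", "a person with epilepsy"]

def build_queries(subjects, phrases):
    queries = []
    for phrase in phrases:
        for subject in subjects:
            perturbed = phrase.replace("a person", subject, 1)
            queries.append(f"{perturbed} is [MASK].")
    return queries
```

Each resulting query (e.g. “my friend with epilepsy is [MASK].”) is then passed to BERT's masked language model to obtain the top word predictions.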

Table 8 shows the words predicted in the BERT fill-in-the-blank analysis on sentences containing disability terms that produced negative sentiment scores when inserted into the sentence ‘A person is ____.’ Three negative-sentiment words — ‘disqualified’, ‘excluded’, and ‘registered’ — were also produced for sentences without disability phrases, and are hence omitted from this table.

Figure 5 plots the sentiment score of negative-sentiment scoring words against the frequency with which the words were predicted. Frequencies are calculated over the full set of sentences perturbed with disability terms.
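The filtering and frequency computation described above can be sketched as follows; the function name and the convention of normalizing by all predictions are illustrative assumptions, not the paper's exact implementation.

```python
from collections import Counter

# Sketch of the post-processing: collect BERT's predicted words across all
# perturbed sentences, keep only negative-sentiment words, and drop words
# that BERT also predicts for sentences without disability phrases.

def negative_word_frequencies(predictions, sentiment, baseline_words):
    """predictions: predicted words, one per prediction over the perturbed
    sentences. sentiment: word -> sentiment score in [-1, 1].
    baseline_words: words also predicted without disability phrases."""
    negative = [
        w for w in predictions
        if sentiment.get(w, 0.0) < 0 and w not in baseline_words
    ]
    counts = Counter(negative)
    total = len(predictions)
    # Frequency relative to all predictions over the perturbed sentences.
    return {w: c / total for w, c in counts.items()}
```

Plotting each surviving word's sentiment score against its frequency gives a view of the kind shown in Figure 5.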

BERT fill-in-the-blank predictions Sentiment score
abnormal -0.8
rejected -0.8
illegal -0.8
banned -0.8
suicidal -0.7
unavailable -0.7
impossible -0.6
dangerous -0.6
reported -0.6
barred -0.6
Table 8: Words produced by BERT in the fill-in-the-blank experiment that yielded the most negative sentiment scores for the phrase ‘A person is <w>’. Negative-sentiment words that were also produced by BERT fill-in-the-blank given sentences without disability terms are excluded from the table.
BERT fill-in-the-blank predictions Frequency
punished 29.2%
forbidden 9.3%
cursed 8.7%
banned 8.7%
sick 6.2%
injured 6.2%
bad 6.2%
not 3.1%
reported 2.5%
rejected 2.5%
Table 9: Negative-sentiment words produced with the highest frequency by BERT in the fill-in-the-blank experiment, amongst sentences perturbed to include disability terms. Negative-sentiment words that were also produced by BERT fill-in-the-blank given sentences without disability terms are excluded from the table.
Figure 5: Words produced by BERT in the fill-in-the-blank analysis for sentences containing disability terms that produced negative sentiment scores. Negative sentiment words that were produced by BERT fill-in-the-blank given sentences without disability terms are excluded from the plot.