A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics

07/02/2018
by   Roel Dobbe, et al.

Machine learning (ML) is increasingly deployed in real world contexts, supplying actionable insights and forming the basis of automated decision-making systems. While issues resulting from biases pre-existing in training data have been at the center of the fairness debate, these systems are also affected by technical and emergent biases, which often arise as context-specific artifacts of implementation. This position paper interprets technical bias as an epistemological problem and emergent bias as a dynamical feedback phenomenon. In order to stimulate debate on how to change machine learning practice to effectively address these issues, we explore this broader view on bias, stress the need to reflect on epistemology, and point to value-sensitive design methodologies to revisit the design and implementation process of automated decision-making systems.

1 Introduction

Data-driven decision-making is rapidly being introduced in high-stakes social domains such as medical clinics, criminal justice, and public infrastructure. The proliferation of biases in these systems leads to new forms of erroneous decision-making, causing disparate treatment or outcomes across populations (Barocas & Selbst, 2016). The ML community is working hard to understand and mitigate the unintended and harmful behavior that may emerge from poor design of real-world automated decision-making systems (Amodei et al., 2016). While many technical tools are being proposed to mitigate these errors, there is insufficient understanding of how the machine learning design and deployment practice can safeguard critical human values such as safety or fairness. The AI Now Institute identifies “a deep need for interdisciplinary, socially aware work that integrates the long history of bias research from the social sciences and humanities into the field of AI research” (Campolo et al., 2017).

How can ML practitioners, often lacking consistent language to go beyond technical descriptions and solutions to “well-defined problems,” engage with fundamentally human aspects in a manner that is constructive rather than dismissive or reductive? And how may other disciplines help to enrich the practice? In this paper, we argue that practitioners and researchers need to take a step back and adopt a broader and more holistic view on bias than currently advocated in many classrooms and professional fora. Our discussion emphasizes the need to reflect on questions of epistemology and underlines the importance of dynamical behavior in data-driven decision-making. We do not provide full-fledged answers to the problems presented, but point to methodologies in value-sensitive design and self-reflection to contend more effectively with issues of fairness, accountability, and transparency throughout the design and implementation process of automated decision-making systems.

2 A Broader View On Bias

Most literature addressing issues of fairness in ML has focused on the ways in which models can inherit pre-existing biases from training data. Limiting ourselves to these biases is problematic in two ways. Firstly, it narrows our focus to how these biases lead to allocative harm: a primarily economic view of how systems allocate or withhold an opportunity or resource, such as being granted a loan or being held in prison. In her NIPS 2017 keynote, Kate Crawford made the case that at the root of all forms of allocative harm are biases that cause representational harm. This perspective requires us to move beyond biases in the data set and “think about the role of ML in harmful representations of human identity,” and how these biases “reinforce the subordination of groups along the lines of identity” and “affect how groups or individuals are understood socially,” thereby also contributing to harmful attitudes and cultural beliefs in the longer term (Crawford, 2017). It is fair to say that representation issues have been largely neglected by the ML community, potentially because they are hard to formalize and track.

Responsible representation requires analyses beyond scrutinizing a training set, including questioning how sensitive attributes might be represented by different features and classes of models and what governance is needed to complement the model. Additionally, while ML systems are increasingly implemented to provide “actionable insights” and guide decisions in the real world, the core methods still fail to effectively address the inherently dynamic nature of interactions between the automated decision-making process and the environment or individuals that are acted upon. This is particularly true in contexts where observations or human responses (such as clicks and likes) are fed back along the way to update the algorithm’s parameters, allowing biases to be further reinforced and amplified.

The tendency of ML-based decision-making systems to formalize and reinforce socially sensitive phenomena necessitates a broader taxonomy of biases that includes risks beyond those pre-existing in the data. As argued by Friedman and Nissenbaum in the nascent days of value-sensitive design methodologies, two other sources of bias naturally occur when designing and employing computer systems, namely technical bias and emergent bias (Friedman, 1996; Friedman & Nissenbaum, 1996).

While understanding pre-existing bias has lent itself reasonably well to statistical approaches for understanding a given data set, technical and emergent bias require engaging with the domain of application and the ways in which the algorithm is used and integrated. For automated decision-making tools to be responsibly integrated in any context, it is critical that designers (1) assess technical bias by reflecting on their epistemology and understanding the values of users and stakeholders, and (2) assess emergent bias by studying the feedback mechanisms that create intimate, ever-evolving coupling between algorithms and the environment they act upon.

3 Technical Bias Is About Epistemology

Friedman describes a source of technical bias as “the attempt to make human constructs amenable to computers - when, for example, we quantify the qualitative, make discrete the continuous, or formalize the nonformal” (Friedman, 1996). This form of bias originates from all the tools used in the process of turning data into a model that can make predictions. While technical bias is domain-specific, we identify four sources in the machine learning pipeline.

Firstly, both collected and existing data are at some point measured and transformed into a computer-readable scale. Depending on the objects measured, each variable may have a different scale, such as nominal, ordinal, interval, or ratio. Consider for example Netflix’s decision to let viewers rate movies with “likes” instead of a 1-5 star rating. With this change, movie ratings moved from an ordinal scale (a number score in which order matters, but the interval between scores does not) to a nominal scale (mutually exclusive labels: you like a movie or you don’t). While the nominal scale might make it easier for viewers to rate movies, it affects how viewers are represented and what content gets recommended by the ML system. Such choices can produce measurement bias, and careful consideration is necessary to understand their effects on system outcomes (Hardt & Barocas, 2017).
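As a minimal sketch of this loss of information (the ratings and the helper to_binary_like are hypothetical, not Netflix data or code), consider how two quite different viewers become indistinguishable once the ordinal scale is collapsed:

# Hypothetical example: collapsing an ordinal 1-5 star scale to a nominal like/no-like scale.
star_ratings = {
    "viewer_a": [5, 5, 4, 5, 5],   # enthusiastic fan
    "viewer_b": [4, 4, 4, 4, 4],   # consistently lukewarm-positive
}

def to_binary_like(stars, threshold=4):
    # Map each 1-5 star score to a like (True) / no-like (False) label.
    return [s >= threshold for s in stars]

for viewer, stars in star_ratings.items():
    print(viewer, to_binary_like(stars))
# Both viewers now emit identical signals: the intensity of preference that the
# ordinal scale preserves is discarded by the nominal encoding, which changes
# how viewers are represented downstream.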

Secondly, based on the gathered data and available domain knowledge, practitioners engineer features and select model classes. Features can be the available data attributes, transformations thereof based on knowledge and hypotheses, or generated in an automated fashion. Since each feature can be regarded as a model of attributes of the system or population under study, it is relevant to ask how representative it is as a proxy and why it may be predictive of the outcome. Models are used to make predictions based on features. A model class, with its parameters, should be selected based on the complexity of the phenomenon in question and the amount and quality of the available data. Is the individual or object that is subject to the decision easily reduced to numbers or equations to begin with? What information in the data is inherently lost by virtue of the mapping having a limited complexity? The process of representation, abstraction and compression can be collectively described as inducing modeling bias. ML can be seen as a compression problem in which complex phenomena are stored as a pattern in a finite-dimensional parameter space. From an information-theoretic perspective, modeling bias influences the extent to which distortion can be minimized when reconstructing a phenomenon from a compressed or sampled version of the original (Cover & Thomas, 2012; Dobbe et al., 2017).
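To make the compression view concrete, the following sketch (synthetic data, NumPy assumed; not an example from the paper) fits model classes of increasing capacity to the same nonlinear phenomenon and reports how faithfully each finite parameterization can reconstruct it:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
phenomenon = np.sin(2 * np.pi * x)                    # the "true" phenomenon
y = phenomenon + 0.05 * rng.standard_normal(x.size)   # noisy observations

for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)                 # compress the data into degree + 1 parameters
    reconstruction = np.polyval(coeffs, x)
    distortion = np.mean((reconstruction - phenomenon) ** 2)
    print(f"degree {degree}: {degree + 1} parameters, distortion {distortion:.4f}")
# A low-capacity model class cannot represent the phenomenon no matter how much
# data is available: the information lost is a property of the chosen mapping.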

Thirdly, label data is used to represent the output of the model. Training labels may be the actual outcome for historical cases, or some discretized or proxy version in cases where the actual outcomes cannot be measured or exactly quantified. Consider for example the use of arrest records to predict crime, rather than whether a crime was actually committed. How representative are such records of real crime across all subpopulations? What core information do they miss for representing the intended classes? And what bias lies hidden in them? We propose to refer to such issues as label bias.
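A minimal simulation of how a proxy label can diverge from the quantity of interest (all rates are hypothetical and chosen only for illustration):

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_offense_rate = 0.05                              # identical across both groups by construction
detection_rate = {"group_a": 0.2, "group_b": 0.6}     # group_b is policed more heavily

for group, detection in detection_rate.items():
    offended = rng.random(n) < true_offense_rate      # the unobserved quantity of interest
    arrested = offended & (rng.random(n) < detection) # the recorded proxy label
    print(group, "true rate:", round(offended.mean(), 4), "arrest rate:", round(arrested.mean(), 4))
# A model trained on arrest labels will score group_b as roughly three times
# riskier, even though the underlying behavior is identical by construction.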

Lastly, given a certain parameterization and training data, a model is trained and tuned to optimize certain objectives. At this stage, various metrics may inform the model builder on where to tweak the model. Do we minimize the number of false positives or false negatives? In recidivism prediction, a false positive may be someone who incorrectly gets sentenced to prison, whereas a false negative poses a threat to safety by failing to recognize a high-threat individual. There are inherent trade-offs between prioritizing equal prediction accuracy across groups and prioritizing an equal likelihood of false positives and negatives across groups (Chouldechova, 2016; Kleinberg et al., 2017). Technical definitions of fairness are motivated by different metrics, illuminating the inherent ambiguity and context-dependence of such issues. For a given context, what is the right balance? And who gets to decide? We coin the term optimization bias for the effects of these trade-offs.
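The following sketch (toy labels and scores, NumPy assumed) computes per-group accuracy and error rates for a single decision threshold, illustrating the kind of trade-off these metrics expose:

import numpy as np

def rates(y_true, y_pred):
    # Accuracy, false positive rate and false negative rate for one group.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / y_true.size,
        "fpr": fp / max(fp + tn, 1),
        "fnr": fn / max(fn + tp, 1),
    }

# Hypothetical labels and risk scores for two groups with different base rates.
y_a, s_a = np.array([1, 1, 0, 0, 0, 0]), np.array([0.9, 0.6, 0.7, 0.3, 0.2, 0.1])
y_b, s_b = np.array([1, 1, 1, 1, 0, 0]), np.array([0.8, 0.7, 0.6, 0.4, 0.5, 0.2])

threshold = 0.5
print("group_a:", rates(y_a, (s_a >= threshold).astype(int)))
print("group_b:", rates(y_b, (s_b >= threshold).astype(int)))
# With unequal base rates, the same threshold yields different accuracy, FPR and
# FNR across the two groups; deciding which disparity to reduce is a value judgment.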

The many questions posed above illuminate the range of places in the machine learning design process where issues of epistemology arise: they require justification and often value judgment. Our theory of knowledge and the way we formalize and solve problems determines how we represent and understand sensitive phenomena. How do we represent phenomena in ways that are deemed correct? What evidence is needed in order to justify an action or decision? What are legitimate classes or outcomes of a model? And how do we deal with inherent trade-offs of fairness? These challenges are deeply context-specific, often ethical, and challenge us to understand our epistemology and that of the domain we are working in.

The detrimental effects of overlooking these questions in practice are obvious in high-stakes domains, such as predictive policing and sentencing, where the decision to treat crime as a prediction problem reduces the perceived autonomy of individuals, treating them as fated either to commit a crime or to act within the law. Barabas et al. argue that rather than prediction, “machine learning models should be used to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime” (Barabas et al., 2018). This is a strong message with many challenges, but it points in the right direction: in these contexts, machine learning models should facilitate rather than replace the critical eye of the human expert. It forces practitioners and researchers to be humble and to reflect on how our own skills and tools may benefit or hurt an existing decision-making process.

4 Emergent Bias Is About Dynamics

Complementing pre-existing and technical biases, “emergent bias arises only in a context of use by real users […] as a result of a change in societal knowledge, user population, or cultural values.” (Friedman, 1996). Recently, convincing examples of emergent bias have surfaced in contexts where ML is used to automate or mediate human decisions. In predictive policing, where discovered crime data (e.g., arrest records) are used to predict the location of new crimes and determine police deployment, runaway feedback loops can cause increasing surveillance of particular neighborhoods regardless of the true crime rate (Ensign et al., 2018), leading to over-policing of “high-risk” individuals (Stroud, 2016). In optimizing for attention, recommendation systems may have a tendency to turn towards the extreme and radical (Tufekci, 2018). When machine learning systems are unleashed in feedback with society, they may be more accurately described as reinforcement learning systems, performing feedback control (Recht, 2018). Therefore, a decision-making system has its own dynamics, which can be modified by feedback, potentially causing bias to accrue over time. To conceptualize these ideas at a high level, we adopt the system formulation depicted in Figure 1.

Figure 1: A Simple Feedback Model

The machine learning system acts on the environment through decisions, control actions, or interventions. From the environment, the decision maker considers observations, historical data, measurements and responses, conceivably updating its model in order to steer the environment in a beneficial direction. For example, in the case of predictive policing, ‘the environment’ describes a city and its citizens, and ‘the decision maker’ is the police department, which determines where to send police patrols or invest in social interventions.
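To make the loop concrete, here is a deliberately stylized simulation in the spirit of the runaway-feedback analyses cited above (all numbers are hypothetical, and this is a simplification rather than the model of Ensign et al.):

import numpy as np

rng = np.random.default_rng(2)
true_crime_rate = np.array([0.10, 0.10])   # two neighborhoods, identical by construction
discovered = np.array([5.0, 4.0])          # slightly asymmetric historical records
patrol_log = []

for day in range(365):
    # Deploy the single patrol to the neighborhood with more recorded incidents.
    target = int(np.argmax(discovered))
    patrol_log.append(target)
    # Crime is only *discovered* where the patrol is sent.
    discovered[target] += rng.binomial(100, true_crime_rate[target])

print("days patrolling neighborhood 0:", patrol_log.count(0), "of", len(patrol_log))
print("recorded incidents:", discovered)
# Because incidents are only recorded where patrols go, the neighborhood with a
# marginally larger record receives every patrol, and its record grows without
# bound while the other neighborhood's stays frozen: a runaway feedback loop.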

The dynamical perspective offered by the conception of a feedback model allows for a focus on interactions, which can add clarity to debates over key issues like fairness and algorithmic accountability. Situations with completely different fairness interpretations may have identical static observational metrics (properties of the joint distribution of input, model and output), and thus a causal or dynamic model is necessary to distinguish them (Hardt et al., 2016). On the other hand, a one-step feedback model, incorporating temporal indicators of well-being for individuals affected by decisions, offers a way of comparing competing definitions of fairness (Liu et al., 2018). Similarly, calls for “interpretability” and proposed solutions often omit key operative words: interpretable to whom? And for what purpose? (Kohli et al., 2018). The dynamic viewpoint adds clarity to these questions by focusing on the causes and effects of decision-making systems, and situating interpretability in context.
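A toy one-step sketch of this idea follows (invented score dynamics, NumPy assumed; it is not the model of Liu et al., but illustrates the kind of comparison such models enable, e.g., between selection rules derived from different fairness criteria):

import numpy as np

rng = np.random.default_rng(3)
# Hypothetical credit scores for one group; repayment probability rises with score.
scores = rng.uniform(300, 850, size=10_000)
repay_prob = (scores - 300) / 550

def one_step_outcome(approve_mask):
    # Average score change across the whole group after one lending round:
    # +40 on repayment, -80 on default, 0 if not approved.
    repaid = rng.random(scores.size) < repay_prob
    delta = np.where(repaid, 40.0, -80.0) * approve_mask
    return delta.mean()

lenient = scores > 450     # approve most applicants
strict = scores > 650      # approve only high scores
print("lenient policy, mean score change:", round(one_step_outcome(lenient), 2))
print("strict policy, mean score change:", round(one_step_outcome(strict), 2))
# A static acceptance-rate comparison favors the lenient policy, but the one-step
# model shows it can lower average well-being when many marginal approvals default.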

Beyond providing a more realistic and workable frame for thinking about bias and related issues, the feedback system perspective may also allow inspiration to be drawn from areas of Systems Theory that have traditionally studied feedback and dynamics. For instance, the field of System Identification uses statistical methods to build mathematical models of dynamical systems from measured data, often in order to control dynamical systems with strict safety requirements, such as airplanes or electric power systems (Ljung, 1998; Åström & Murray, 2010). Inspiration may be drawn from the rich literature on closed-loop identification, which considers the identification of models from data gathered during operation, while the same model is also used to safeguard the system (Van den Hof, 1998). That said, modeling socio-technical systems is more challenging than modeling engineered systems. The complexity of the modeled phenomena, the role of unmodeled phenomena such as external economic factors, and slower temporal dynamics all pose barriers to directly applying existing engineering principles.
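For readers unfamiliar with system identification, a minimal open-loop example follows (a hypothetical first-order plant, NumPy assumed); real settings, and especially the closed-loop case discussed above, are considerably harder:

import numpy as np

rng = np.random.default_rng(4)
# Simulate a first-order system x[t+1] = a*x[t] + b*u[t] + noise (hypothetical plant).
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(500)                 # excitation input
x = np.zeros(501)
for t in range(500):
    x[t + 1] = a_true * x[t] + b_true * u[t] + 0.01 * rng.standard_normal()

# Least-squares identification of (a, b) from measured input-output data.
regressors = np.column_stack([x[:-1], u])
a_hat, b_hat = np.linalg.lstsq(regressors, x[1:], rcond=None)[0]
print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")  # close to 0.9 and 0.5
# Identification quality depends on how the input excites the system; in closed
# loop, the controller shapes that input, which is part of what complicates
# identifying decision-making systems that act on society.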

5 Our Positionality Shapes Our Epistemology

As ML practitioners and researchers, we are wired to analyze challenges in ways that abstract, formalize and reduce complexity. It is natural for us to think rigorously about the technical roots of biases in the systems we design, and to propose techno-fixes to prevent negative impact from their proliferation. However, it is of crucial importance to acknowledge that the methods and approaches we use to reduce, formalize, and gather feedback from experiments are themselves inherent sources of bias. Epistemologies differ tremendously from application to application and ultimately shape the way a decision-maker justifies decisions and affects individuals. Technology intimately touches and embodies the values deemed critical in the intended decision-making system. As such, we need to go beyond our formal tools and analyses to engage with others and reflect on our epistemology. In doing so, we aim to determine responsible ways in which technology can help put values into practice, and to understand its fundamental limitations.

With a plethora of issues surfacing, it is easy either to consider banning ML altogether, or to dismiss requests to fundamentally revisit its role in enabling data-driven decision-making in sensitive environments. Instead, we propose three principles to nourish debate on the middle ground:

1: Do fairness forensics (Crawford, 2017). Keep track of biases in an open and transparent way and engage in constructive dialogue with domain experts, to understand proven ways of formalizing complex phenomena and to breed awareness about how bias works and when/where users should be cautious.

2: Acknowledge that your positionality shapes your epistemology (Takacs, 2003). Our personal backgrounds, the training we received, and the people we represent or interact with all have an impact on how we look at and formalize problems. As ML practitioners, we should set aside time and energy for critical self-reflection, to identify our own biases and blind spots, to foster communication with the groups affected by the systems we design, and to understand where we should enrich our epistemology with other viewpoints.

3: Perform value-sensitive design. Determine what values are relevant in building a decision-making system and how they might be embodied or challenged in the design and implementation by engaging with users and other affected stakeholders (Van den Hoven, 2007; Friedman et al., 2013).

As Takacs describes it, the benefits of self-reflection go well beyond arriving at the “best solution” to a complex problem (Takacs, 2003, 2002). “This means learning to listen with open minds and hearts, learning to respect different ways of knowing the world borne of different identities and experiences, and learning to examine and re-examine one’s own worldviews. […] When we constantly engage to understand how our positionality biases our epistemology, we greet the world with respect, interact with others to explore and cherish their differences, and live life with a fuller sense of self as part of a web of community.”

As machine learning systems rapidly change how we gather information and shape our decisions and worldviews in ways we cannot fully anticipate, self-reflection and awareness of our epistemology become ever more important for machine learning practitioners and researchers to ensure that automated decision-making systems contribute in beneficial and sustainable ways.

Acknowledgements

We thank Moritz Hardt and Ben Recht for helpful comments and suggestions. This work is funded by a Tech for Social Good Grant from CITRIS and the Banatao Institute at UC Berkeley.

References