On the Fairness of 'Fake' Data in Legal AI

09/10/2020
by   Lauren Boswell, et al.
The University of Sydney

The economics of smaller budgets and larger case numbers necessitates the use of AI in legal proceedings. We examine the concept of disparate impact and how biases in the training data lead to the search for fairer AI. This paper seeks to begin the discourse on what such an implementation would actually look like, with a criticism of pre-processing methods in a legal context. We outline how pre-processing is used to correct biased data and then examine the legal implications of effectively changing cases in order to achieve a fairer outcome, including the black box problem and the slow encroachment on legal precedent. Finally, we present recommendations on how to avoid the pitfalls of pre-processed data with methods that either modify the classifier or correct the output in the final step.


1 Introduction

Artificial intelligence utilized in the legal system, and governance generally, is no novel concept in 2020. The economics of smaller budgets and larger case numbers means that it is more and more tempting for governments to look to automated solutions for decision-making processes [Zavrnik2020]. Some states in the United States have deployed AI to calculate risk-assessments in the criminal justice system [epic]. Australia has its sights on moving towards almost complete automatic processing of low-risk visa applications [visa]. And where AI is not implemented, government agencies are developing umbrella ethical infrastructures for its imminent arrival [NZAI, EUAI, CSIRO, UNESCO].

But championing efficiency and cost-reduction through the implementation of AI is not without its shortfalls. The well-documented case of racial bias in risk-assessment algorithms used to guide judges when determining bail and sentencing in the United States sounded the alarm that if AI is to take on a quasi-judicial role in the legal system it must be responsible, and it must be fair [doyle_barabas_dinakar_2019, propublica].

It seems that there is an institutional demand that where AI is used in government, it must be transparent, explainable, and fair [NZAI, EUAI, CSIRO, UNESCO]. But the cross-over of AI from proprietary use to governmental implementation is still nascent, so the ethical discussion is just that: a discussion. Arguably, this paper is also guilty. Conversely, in the machine learning community the discourse revolves around satisfying various statistical definitions of fairness, many of which are competing or mutually exclusive. The practical considerations of AI implementation at a societal level are generally relegated to an afterthought behind the empirical results. The axioms of statistical fairness and the axioms of legal fairness seemingly exist in two separate vacuums, unaware of each other.

This paper seeks to begin the discussion of what the practical implementation of fair AI in the legal system looks like. In particular, we examine a popular class of methods that achieve fairness by modifying the original data. These methods have appeared in the most elite machine learning venues in recent years and continue to propagate the discourse on fair AI [Calmon2017, Kamiran2009, johndrow2019]. Our intention is to investigate these methods and examine whether they are even compatible with principles such as the rule of law. First, we introduce the concept of disparate impact and explain why pre-processing methods are needed. Then we explain two of the most influential methods, data massaging and optimised pre-processing, before briefly highlighting some of the more recent developments using SMOTE and optimal transport theory. Finally, we aim to balance these pre-processing methods against common law values in a general context.

2 Disparate Impact

The motivation for fair machine learning stems from disparate treatment and disparate impact. Disparate treatment refers to the explicit discrimination of a protected class. Far more insidious is disparate impact, which occurs when a protected class is discriminated against indirectly [Feldman2015, 1]. When decisions are made by an algorithm, especially a black box, these kinds of biases can be difficult to uncover. Unfortunately, the data mining process itself is fraught with a number of traps that inevitably lead to disparate impact. Issues in data collection, data labelling and proxy variables [Barocas2016, 677-693] can all cause members of a protected class to suffer unintentional discrimination.
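To make the notion concrete, the toy sketch below computes the disparate impact ratio that is commonly used to quantify indirect discrimination. The numbers are invented for illustration; values below the familiar four-fifths threshold are usually treated as a red flag.

```python
# A toy illustration (assumed numbers, not data from any cited study) of how
# disparate impact is commonly quantified: the ratio of favourable-outcome
# rates between the unprivileged and privileged groups.
import numpy as np

outcomes = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])   # 1 = favourable decision
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = unprivileged group

rate_unpriv = outcomes[protected == 1].mean()
rate_priv = outcomes[protected == 0].mean()
print(f"disparate impact ratio: {rate_unpriv / rate_priv:.2f}")  # below 0.8 raises concern
```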

For example, predictive policing can be a source of biased training data, as it satisfies the notion of a self-fulfilling prophecy [dunkelaufairness, 5]. As certain areas are targeted for more policing, more crime is found in these areas. As a result, features associated with these areas become more prevalent in the data, which in turn informs the algorithm of where crime is to be found. This cycle amplifies prejudicial practices and can even serve as historical justification for future discrimination.

Proxy variables are particularly difficult as they introduce systematic bias even when the protected features in question are removed. For example, an algorithm that is not permitted to use race as a variable in its decision but is permitted to use a zip code still introduces disparate impact, because while it removes obvious racial bias, it fails to appreciate how a zip code stratifies socio-demographic groups and therefore implicitly reintroduces the protected race variable back into the data [Prince2020]. More generally, proxies occur when the criteria for making a genuine decision also happen to be indicative of class membership [Barocas2016, 691].

The predictive policing example also highlights the issue of proxy variables. For example, the Los Angeles Police Department uses a point system to identify the worst offenders [selbst2017disparate, 138]. The LAPD adds a point to a person's profile per police contact, leading to the very same feedback loop, particularly in minority neighbourhoods. While neighbourhood is not directly a legally protected variable, it serves as a proxy for racial and socio-demographic groups [dunkelaufairness, 5], meaning that any algorithm can implicitly learn that a person of color is more likely to be a criminal [selbst2017disparate, 138].
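A toy experiment makes the proxy problem tangible: even after the protected attribute is dropped, a model trained on the proxy alone can recover it. The synthetic data and the 90% correlation below are assumptions made purely for illustration.

```python
# A toy illustration (synthetic data, not from the paper) of the proxy problem:
# after dropping race, a model trained on zip code alone can still recover it,
# so decisions based on zip code can reintroduce the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
race = rng.integers(0, 2, size=1000)
# Zip code is strongly stratified by race in this synthetic example (90% of the time).
zip_code = np.where(rng.random(1000) < 0.9, race, 1 - race)

proxy_model = LogisticRegression().fit(zip_code.reshape(-1, 1), race)
print("accuracy of recovering race from zip code alone:",
      proxy_model.score(zip_code.reshape(-1, 1), race))
```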

These issues, particularly proxy variables, motivate the need for more sophisticated methods to handle disparate impact. The idea of pre-processing is to remove the underlying biases from the training data itself. This repaired training data is then used to train the machine learning model, resulting in a fair classifier.

3 Pre-processing

In this context, we refer to pre-processing as the modification of the training data before it is used to train a predictive model. The motivation for such a step is simple: given that the initial data set may already be biased, why not repair the data by removing this bias and then train the model on the modified, repaired data set? Below we outline two of the most cited pre-processing methods, published at NIPS [Calmon2017] and ICCCM [Kamiran2009], that combat these implicit biases. Although a fuller review of the literature can be found in the surveys of friedler2019 and dunkelaufairness, we will attempt to make this technical jargon more accessible to a broader audience by introducing fictitious, rudimentary examples to explain the algorithmic yields, in the spirit of algorithmic transparency. These examples seek to highlight the tension between group fairness (the probability of being assigned the favourable outcome should be equal across the privileged and unprivileged groups), individual fairness (similar cases should be treated similarly), and compatibility with the common law.

3.1 Massaging

The early work of [Kamiran2009] seeks to massage the training data to create an unbiased data set by flipping the labels of certain data points in order to satisfy group fairness. The authors find that group fairness is improved with minimal impact on accuracy.

Input : Data, Class Label, Protected Attribute
1  Rank each sample by how likely it is to belong to the positive class;
2  Create a descending ordered list, pr, of the samples that are in the protected class and whose true label is negative;
3  Create an ascending ordered list, dem, of the samples that are not in the protected class and whose true label is positive;
4  Calculate M, the minimum number of modifications needed;
5  while fewer than M modifications have been made do
6      Flip the label of the top object in pr; flip the label of the top object in dem; remove the top elements from both pr and dem; increment the modification counter;
7  end while
Output : Massaged data
Algorithm 1: High-level algorithm of Kamiran2009
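A minimal Python sketch of how such a massaging step could be implemented is given below. This is not the authors' code: the column names, the use of a logistic regression as the ranker, and the formula for M are assumptions made for illustration.

```python
# A minimal sketch (not the authors' implementation) of massaging in the
# spirit of Kamiran2009: rank records with a preliminary classifier, then flip
# the labels of the most "promotable" protected-group negatives and the most
# "demotable" unprotected-group positives until the positive rates match.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def massage(df: pd.DataFrame, features: list,
            protected: str = "protected", label: str = "label") -> pd.DataFrame:
    """Return a copy of df with labels flipped to equalise positive rates."""
    df = df.copy()

    # Step 1: rank every record by its predicted probability of the positive class.
    ranker = LogisticRegression(max_iter=1000).fit(df[features], df[label])
    score = ranker.predict_proba(df[features])[:, 1]

    # Step 2: promotion candidates: protected-group negatives, best scores first.
    prot_neg = (df[protected] == 1) & (df[label] == 0)
    pr = df.index[prot_neg][np.argsort(-score[prot_neg])]

    # Step 3: demotion candidates: unprotected-group positives, worst scores first.
    unprot_pos = (df[protected] == 0) & (df[label] == 1)
    dem = df.index[unprot_pos][np.argsort(score[unprot_pos])]

    # Step 4: M = number of flips needed so both groups share the same positive rate.
    n_prot, n_unprot = (df[protected] == 1).sum(), (df[protected] == 0).sum()
    gap = df.loc[df[protected] == 0, label].mean() - df.loc[df[protected] == 1, label].mean()
    M = max(int(np.ceil(gap * n_prot * n_unprot / len(df))), 0)

    # Step 5: flip the labels of the top M candidates in each list.
    df.loc[pr[:M], label] = 1
    df.loc[dem[:M], label] = 0
    return df
```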

In a legal context, this algorithm is effectively relabelling the data. To illustrate massaging, we introduce our running example of a new case to be classified, Plato, who is not a member of a protected class.

In trying to satisfy the principle of group fairness, individual fairness is paradoxically violated. Aristotelian equality tells us that like cases should be treated alike. In the original data, Plato could have been compared to a set of cases, S, that were classified with the positive outcome. However, after the data is massaged, the same set of cases, S, have their labels switched to the negative outcome. As a result, Plato suffers the negative classification.

3.2 Optimised Pre-processing

More recently, Calmon2017 formulated an optimisation to transform the dataset $\mathcal{D} = (X, Y, D)$ into $\hat{\mathcal{D}} = (\hat{X}, \hat{Y}, D)$, where $X$ is the feature vector, $Y$ is the label vector and $D$ is the vector of protected features, using a randomised mapping $p_{\hat{X},\hat{Y} \mid X,Y,D}$. This is done by solving the following problem:

$$\min_{p_{\hat{X},\hat{Y} \mid X,Y,D}} \ \Delta\big(p_{\hat{X},\hat{Y}},\ p_{X,Y}\big)$$

subject to

$$J\big(p_{\hat{Y} \mid D}(y \mid d),\ p_{\hat{Y}}(y)\big) \le \epsilon \quad \text{for all } d, y,$$

$$\mathbb{E}\big[\,\delta\big((X,Y),(\hat{X},\hat{Y})\big) \mid D = d, X = x, Y = y\,\big] \le c \quad \text{for all } d, x, y,$$

$$p_{\hat{X},\hat{Y} \mid X,Y,D} \ \text{is a valid distribution.}$$

Here $\Delta$ is a dissimilarity measure between distributions to be minimised; $J$ is an arbitrary distance measure enforcing the constraint that the distributions over outcomes, conditioned on the protected attributes, should be as similar to each other as possible; and $\delta$ is a measure of distortion enforcing the constraint that any individual's information should not change much. This kind of formulation is unique as it addresses the problem of disparate impact as an optimisation problem and acknowledges the trade-off between different kinds of fairness.
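To make the formulation concrete, the sketch below is our own toy reconstruction of this style of optimisation using cvxpy, not the authors' implementation. The binary toy distribution, the single two-sided discrimination constraint, the distortion metric, and the thresholds eps and c_max are all illustrative assumptions.

```python
# A toy reconstruction (not the authors' code) of the optimised pre-processing
# idea: learn a randomised mapping p(x_hat, y_hat | x, y, d) that keeps the
# transformed data close to the original while limiting both group-level
# discrimination and per-record distortion.
import itertools
import numpy as np
import cvxpy as cp

X_vals, Y_vals, D_vals = [0, 1], [0, 1], [0, 1]   # binarised feature, outcome, protected attribute

# Toy empirical joint distribution p(x, y, d), assumed for illustration.
rng = np.random.default_rng(0)
keys = list(itertools.product(X_vals, Y_vals, D_vals))
p_xyd = dict(zip(keys, rng.dirichlet(np.ones(len(keys)))))

pairs = list(itertools.product(X_vals, Y_vals))   # possible (x_hat, y_hat) values
# One row of the randomised mapping per original (x, y, d) combination.
T = {k: cp.Variable(len(pairs), nonneg=True) for k in keys}
constraints = [cp.sum(T[k]) == 1 for k in keys]   # each row is a valid distribution

# Transformed marginal p(x_hat, y_hat) and the original p(x, y).
p_hat = sum(p_xyd[k] * T[k] for k in keys)
p_orig = np.array([sum(p_xyd[(x, y, d)] for d in D_vals) for (x, y) in pairs])

# Objective: keep the transformed distribution close to the original.
objective = cp.Minimize(cp.norm1(p_hat - p_orig))

# Discrimination control: P(y_hat = 1 | d) should be similar across groups.
def p_pos_given(d):
    mass = sum(p_xyd[(x, y, d)] for x in X_vals for y in Y_vals)
    fav = sum(p_xyd[(x, y, d)] * T[(x, y, d)][i]
              for x in X_vals for y in Y_vals
              for i, (xh, yh) in enumerate(pairs) if yh == 1)
    return fav / mass

eps = 0.05                                        # assumed fairness threshold
constraints.append(cp.abs(p_pos_given(0) - p_pos_given(1)) <= eps)

# Distortion control: on average, an individual's record must not change much.
def distortion(x, y, xh, yh):
    return abs(x - xh) + 2 * abs(y - yh)          # toy metric; flipping the outcome costs more

c_max = 1.0                                       # assumed (loose) distortion budget
for (x, y, d), row in T.items():
    expected = sum(row[i] * distortion(x, y, xh, yh) for i, (xh, yh) in enumerate(pairs))
    constraints.append(expected <= c_max)

cp.Problem(objective, constraints).solve()
print("minimal distributional change achieved:", round(float(objective.value), 4))
```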

The key issue here is that $c$ is a parameter for controlling distortion. This means that for any value of $c$, it is permissible to have small perturbations in the new transformed data set, which is then used to train the model. It is also unclear how to tune $c$. In practice, it seems that an implementor of this method could set its value arbitrarily, without any legal oversight.

In optimised pre-processing, Plato is a member of a protected class. Suppose Plato's case turns on a particular set of facts, such as a number of previous convictions. Again, in trying to satisfy group fairness, individual fairness is violated. In the original data $\mathcal{D}$, Plato could have been compared to a set of cases, $S$, that had similar counts of previous convictions. Recall that $X$ is the vector of facts of the cases, $Y$ is the vector of outcomes of the cases, and $D$ is the vector of protected features. The pre-processing is applied to remove the influence of $D$ from the data to combat disparate impact. This creates a new randomised dataset $\hat{\mathcal{D}}$. After the data is pre-processed, Plato would now be compared to a new set of cases which have been randomly generated (albeit with statistical justification, to satisfy the above optimisation). When $\hat{\mathcal{D}}$ is used to train the algorithm, it means that these modified facts, like the number of previous convictions, were used to determine the outcome of Plato's case. These facts never existed outside of the algorithm. Furthermore, the choice of the value of $c$ can greatly affect the outcomes of cases if not selected carefully, and the fact that the authors offer no guidance on how to choose it is concerning.

3.3 Other Methods

Pre-processing biased data continues to be a popular area in the machine learning community, particularly using optimal transport theory. pmlr-v97-gordaliza19a propose a method to remap the training data $X$ to a modified $\hat{X}$ such that the conditional distribution of the features is the same across different values of the protected attribute. In similar work, johndrow2019 also use transport theory to transform the original dataset such that all information about a protected variable is removed, achieving pairwise independence between each feature and the protected variable. More recently, Wang2019 aim to improve a classifier by learning the counterfactual distributions in the data and then perturbing the data to minimise disparate impact on non-baseline groups; for example, if the feature is the number of prior arrests and the protected variable is race, a pairwise adjustment would result in a race-adjusted measure of the number of prior arrests [Wang2019, 8]. For Plato, this raises the same problem as in Section 3.2.

In other recent work, simply generating pseudo-instances of minority classes using SMOTE has also been shown to give promising results [iosifidis2018dealing]. For Plato, this would mean potentially being compared to cases that are entirely synthetic, both the facts and the outcomes, exacerbating the problems from both Sections 3.1 and 3.2.
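For completeness, a brief sketch of SMOTE-style oversampling using the imbalanced-learn library is shown below on invented data. The synthetic minority cases are interpolations between existing ones, so none of them corresponds to a real matter.

```python
# A brief sketch (toy data, not from the cited work) of SMOTE-style
# oversampling: synthetic minority instances are generated by interpolating
# between real ones, so the added "cases" never actually occurred.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                   # toy case features
y = np.r_[np.ones(90, dtype=int), np.zeros(10, dtype=int)]      # imbalanced outcomes

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("original class counts:", np.bincount(y), "resampled:", np.bincount(y_res))
```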

Further summaries of pre-processing methods can be found in the works of friedler2019 and dunkelaufairness.

4 Legal & Social Considerations

In preparing an argument against pre-processing, it was difficult to find relevant jurisprudence, because the idea of repairing data, i.e. changing the facts or decisions of cases extra-judicially, is so absurd that it has not yet been treated as a concept in need of a theoretical framework. Abdul Paliwala, of the University of Warwick, is critical of the inadequate jurisprudence in the age of AI, stating that

"Without a proper awareness of key issues, it is possible that these systems will either replicate past failures or result in systems which though successful in a technical sense produce results which not advance the need for proper legal development"[paliwala_2016].

The authors of this paper tend to agree, but will advance a very rudimentary legal critique of pre-processing methods, which, as the jurisprudence in this area progresses, might later become a more focussed discussion of why pre-processing is counter-intuitive to current notions of legal ethics.

One of the most obvious criticisms of the pre-processing methods discussed above is whether it is even legal for machine learning practitioners to repair the data used in AIs. In common law jurisdictions, it is antithetical to so much as contemplate the idea that once a matter is decided, it is liable to be changed without proceeding through the proper mechanisms of appeal. So much so that it goes against a Diceyan conception of the rule of law, which is a set of principles on how governments should govern [jowell_oliver]. If previous decisions are potentially altered by an AI to achieve empirical fairness, people accessing the judicial system might struggle to understand the case they must meet, because decisions are effectively altered at random in an endless pursuit of statistically perceived fairness. There is then no certainty of the law [jowell_oliver], because the repaired data could hypothetically be the reason a case is decided in the negative wherever AIs are heavily relied upon within the judicial decision, be it as a sole decision-maker or as a supplementary tool for judges.

While AIs have the ability to satisfy the 'efficiency' aim of the rule of law, this appears to come at the cost of accountability. Where machine learning practitioners possess the ability to perturb legal data to accomplish a perceived measure of fairness, they do so in a language not understood by the body politic. Where machine learning practitioners, for all intents and purposes, quasi-make the rules by repairing data, they should be subjected to the scrutiny of those whose fidelity is paramount [jowell_oliver, 11]. These solutions must not be implemented in obscurity or without inquiry, or they will directly contravene the rule of law.

Adrian Zuckerman, Emeritus Professor of Civil Procedure at the University of Oxford, argues that "AI decision-making may lead to an ever-widening gulf between machine law and human conceptions of justice and morality, to the point where legal institutions cease to command loyalty and legitimacy" [zuckermann_2020]. The authors of this paper agree. If pre-processing methods are utilised in AIs making decisions, they have the power to make legal change. By this we do not imply that there will be instantaneous, chaotic reversals of long-venerated precedent. Rather, the repaired data on which the AI has been trained will become part of the infamous 'black box', and it will be nearly impossible to know, when a decision-making AI is presented with a matter for decision, whether the turning point of the AI's outcome was the repaired data or the original data. We posit that the repaired data will create incremental, perhaps imperceptible changes to precedent at first. And as AIs advance and humans become more trustful of them, the judicial system is likely to defer to an AI's judgment, acquiescing either knowingly or unknowingly to a shift from the common law's axiom of 'fairness as morality' to an empirical axiom.

Finally, we have concerns over how such pre-processing methods might be received in the public discourse. Fake news already competes with reality in the current zeitgeist. Cognitive dissonance paves the way for charged emotional appeals and attacks on feelings and beliefs [kaplan_2019]. In an era where new AI innovations such as deep fakes and language models are capable of exacerbating the problem [Caldwell2020], we believe that pre-processing methods may get caught up in the hysteria. The very notion of deciding fates or affecting rights based on 'fake' data is a novel 'Black Mirror-esque' idea, and such stories are primed to be sensationalised, causing even more distrust in the legal machinery.

5 Recommendations

The old adage 'just because you can, doesn't mean you should' might be best followed in this context, but in the age of AI it is unrealistic to preclude AI from advancing into the legal system. We do not think fair AI is impossible in a legal context; we are just hesitant to recommend pre-processing methods as the means to a fair end. Methods such as in-processing and post-processing offer useful, more transparent alternatives for achieving fairer AI.

In-processing refers to the technique of modifying the algorithm so that it can handle a biased dataset. In particular, we endorse methods that leave the training data unchanged and instead opt to use regularisation. In this way, it becomes explicit that the algorithm is penalised for taking protected features into account. For example, kaishima2012 modify a logistic regression by adding a penalty to the loss function in order to reduce discrimination learned by the model, which "enforces a classifier's independence from sensitive information" [kaishima2012, 15]. Another interesting implementation of in-processing is an adversarial neural network [beutel2017data]. Here, two neural networks are used with a shared hidden layer. One, A, learns to predict the outcome; the other, B, learns to predict the value of the protected variable. By maximising the objective function of A and minimising the objective function of B, the algorithm is discouraged from using the protected variables. Furthermore, the formulation of the objective function makes the trade-off between accuracy and fairness explicit.
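To illustrate the regularisation idea, the sketch below trains a logistic regression whose loss carries an additional fairness penalty. This is not kaishima2012's exact prejudice remover: the covariance-based penalty, the gradient-descent loop and all parameter values are our own simplification, chosen to show how the weight lam makes the accuracy-fairness trade-off explicit.

```python
# A simplified sketch of in-processing via regularisation: a logistic
# regression whose loss is augmented with a penalty on the (squared)
# covariance between the model's scores and the protected attribute.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, d, lam=1.0, lr=0.1, epochs=2000):
    """X: feature matrix, y: labels in {0,1}, d: protected attribute in {0,1} (numpy arrays)."""
    w = np.zeros(X.shape[1])
    d_centered = d - d.mean()
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Gradient of the standard log-loss.
        grad_loss = X.T @ (p - y) / n
        # Gradient of the fairness penalty lam * cov(score, d)^2.
        cov = d_centered @ p / n
        grad_fair = 2 * cov * (X.T @ (d_centered * p * (1 - p))) / n
        w -= lr * (grad_loss + lam * grad_fair)
    return w
```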

Post-processing methods take a possibly biased classifier, trained on a possibly biased dataset, and correct its output. kamiran2012decision design a method that operates on cases close to the decision boundary of an algorithm. Simply put, if a case is close to the boundary and belongs to the unprivileged class, it receives the positive outcome (and vice versa for the privileged class). In the same paper, kamiran2012decision also propose an ensemble-based approach in which a number of classifiers are used to make a decision. If all classifiers unanimously agree, then the decision is final. If at least one classifier disagrees, an unprivileged case receives the positive outcome while a privileged case receives the unfavourable outcome.
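The boundary-based correction can be sketched in a few lines. This is our own simplification of the reject-option idea rather than the method as published; the band width theta and the group encoding are assumptions.

```python
# A minimal sketch of boundary-based post-processing: predictions whose
# scores fall in an uncertainty band around the decision boundary are
# overridden in favour of the unprivileged group; all other predictions
# from the underlying classifier are left unchanged.
import numpy as np

def reject_option_predict(scores, d, theta=0.1):
    """scores: P(positive) from any trained classifier; d: 1 = unprivileged group."""
    pred = (scores >= 0.5).astype(int)
    in_band = np.abs(scores - 0.5) <= theta      # cases close to the boundary
    pred[in_band & (d == 1)] = 1                 # unprivileged -> favourable outcome
    pred[in_band & (d == 0)] = 0                 # privileged -> unfavourable outcome
    return pred
```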

In our opinion, both in-processing and post-processing offer reasonable alternatives to pre-processing. This is because these methods do not modify the data and hence do not contribute to incremental, perhaps imperceptible changes to precedent. In the case of in-processing, the regularisation term is a more explicit formulation for enforcing the classifier's avoidance of protected features. Post-processing offers a safety net that is able to catch problematic cases and, given careful consideration, make transparent corrections.

The differences between pre-processing, in-processing and post-processing methods are subtle, and oftentimes methods are amalgamated into a fairness soup, adding to the black box problem. More legal and technical scrutiny is needed, which is beyond the scope of our discussion. However, we believe that careful choices of in-processing and post-processing are worthy areas of investigation in the pursuit of fairer AI.

Given that the economics dictate that AI play a larger role in judicial proceedings, we also have a set of general recommendations. We believe that AI should only be used in instances of positive rights, such as making decisions on the granting of licenses or travel visas. And if an AI is employed to make decisions that might affect a party to a claim, it should be constrained to small civil claims where the detriment to a party is nominal.

But if AI is used to make decisions judicially or extra-judicially, there should be express legislation implementing these AIs and their functions. We strongly believe that this issue should be accessible to the body politic, which would require thorough public consultation and positive engagement between legislators and voters to raise awareness around the issues of AI utilised in the legal system.

In this vein, we propose a possible system in which a decision-maker can be augmented by the AI. First, someone’s matter would come before the court or decision-making body. The person must consent to the use of an AI for decision-making; if they do not, the decision will be made by a human.

If the person consents to the AI and the matter receives a positive outcome, by whatever algorithm is implemented, the outcome stands. If the case receives the negative outcome, it is re-evaluated by a human. The key here is that the re-evaluator must not know that the matter has received the negative outcome from the algorithm, to avoid any implicit bias that might be imputed from knowledge of the AI's decision. This can be achieved by simply adding the matter back into the pool of people who did not consent. By carefully tuning the number of cases 'allowed' to be decided by the AI (some fraction of the total number of matters), it is possible not to overload the human re-evaluators with negative cases from the algorithm, and a blind re-evaluation becomes possible.
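An illustrative rendering of this routing logic is sketched below. It is not a specification from this paper: the function names, the consent structure and the ai_fraction parameter are assumptions.

```python
# An illustrative sketch of the proposed consent-based routing: consenting
# matters may go to the AI, positive AI outcomes stand, and negative AI
# outcomes are silently returned to the human queue so the re-evaluator
# cannot tell they were already rejected by the algorithm.
import random

def route_matters(matters, consents, ai_decide, ai_fraction=0.5, seed=0):
    """matters: list of matter IDs; consents: dict ID -> bool; ai_decide(ID) -> bool."""
    rng = random.Random(seed)
    consenting = [m for m in matters if consents[m]]
    # Only a tuned fraction of consenting matters is sent to the AI, so the
    # human queue is not swamped by the algorithm's negative outcomes.
    ai_pool = set(rng.sample(consenting, int(ai_fraction * len(consenting))))

    ai_positive = []
    human_queue = [m for m in matters if m not in ai_pool]
    for m in ai_pool:
        if ai_decide(m):
            ai_positive.append(m)        # positive AI outcome stands
        else:
            human_queue.append(m)        # returned for blind re-evaluation
    rng.shuffle(human_queue)             # re-evaluator cannot tell which came from the AI
    return ai_positive, human_queue
```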

6 Concluding Remarks

It is unlikely that AI will replace the justices of the highest courts. AI may never have the 'intelligence' to piece together the ontologies of the legal system, or to understand the contentious balance between individualised justice and Benthamite utilitarian public welfare considerations, sufficiently to serve its needs. But it is obtuse to assume that machine learning practitioners will not be up for the challenge. And where governments are looking towards leaner budgets and technology proprietors are tasked with that tall order, lawyers and policy makers must be able to keep pace. It may be impractical to envisage a new generation of legal tech gurus and data science lawyers, but the two professions must not remain uninformed of each other if they purport to influence the future of legal AI. It is imperative that the legal profession mobilise to figure out the questions to ask when it comes to legal AI, and we hope that with this paper we make these issues more accessible to both the legal and machine learning communities.