Machine Decisions and Human Consequences

11/16/2018
by Teresa Scantamburlo, et al.

As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings.

The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms used to control access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of the performance of classifiers, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and the fact that the term 'bias' in machine learning has a different meaning to its common usage.

An example of a real-world 'classifier', the Harm Assessment Risk Tool (HART), is examined through identification of its technical features: the classification method, the training data and the test data, the features and the labels, validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy, (b) fairness and equality before the law, (c) transparency and accountability, and (d) informational privacy and freedom of expression. This demonstrates how the system's technical features have important normative dimensions that bear directly on the extent to which it can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
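The terminology the abstract relies on (features, labels, training data, test data, performance measures) can be made concrete with a toy sketch. The example below uses a deliberately simple 1-nearest-neighbour rule on invented data; it is not HART's actual model or data, only an illustration of how a classifier learns a mapping from feature vectors to labels and is then evaluated on held-out cases.

```python
# Toy illustration of classifier terminology: features, labels,
# training data, test data, and a performance measure (accuracy).
# A minimal sketch (1-nearest-neighbour), NOT the actual HART model.

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(train_X, train_y, x):
    """Assign x the label of its nearest training example."""
    nearest = min(range(len(train_X)), key=lambda i: euclidean(train_X[i], x))
    return train_y[nearest]

# Training data: feature vectors (hypothetical attributes, e.g. age and
# number of prior offences) paired with labels (0 = low risk, 1 = high risk).
# All values are invented for illustration only.
train_X = [(22, 0), (45, 1), (19, 4), (60, 0), (30, 6)]
train_y = [0, 0, 1, 0, 1]

# Test data: held-out cases never seen during training, used to
# estimate how well the learned rule generalises.
test_X = [(21, 5), (50, 0)]
test_y = [1, 0]

preds = [predict(train_X, train_y, x) for x in test_X]
accuracy = sum(p == t for p, t in zip(preds, test_y)) / len(test_y)
print(preds, accuracy)
```

Note that the classifier here exploits pure correlation: nothing in the distance computation encodes *why* a feature should relate to risk, which is one reason the abstract stresses that such systems operate on correlation rather than causation.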

