Some HCI Priorities for GDPR-Compliant Machine Learning

03/16/2018
by Michael Veale, et al.

In this short paper, we consider the roles of HCI in enabling better governance of consequential machine learning systems, using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR), a law that involves heavy interaction between people and systems. Focussing on the areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing.

