Harms from Increasingly Agentic Algorithmic Systems

02/20/2023
by   Alan Chan, et al.

Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to mitigate the serious harms of these systems, particularly those disproportionately affecting marginalized communities. Despite these ongoing harms, new systems are being developed and deployed that threaten to perpetuate the same harms and create novel ones. In response, the FATE community has emphasized the importance of anticipating harms. Our work focuses on the anticipation of harms from increasingly agentic systems. Rather than defining agency as a binary property, we identify four key characteristics which, particularly in combination, tend to increase the agency of a given algorithmic system: underspecification, directness of impact, goal-directedness, and long-term planning. We also discuss important harms that arise from increasing agency, notably systemic and/or long-range impacts, often on marginalized stakeholders. We emphasize that recognizing the agency of algorithmic systems does not absolve or shift human responsibility for algorithmic harms; rather, we use the term agency to highlight the increasingly evident fact that ML systems are not fully under human control. Our work explores increasingly agentic algorithmic systems in three parts. First, we explain the notion of an increase in agency for algorithmic systems in the context of diverse perspectives on agency across disciplines. Second, we argue for the need to anticipate harms from increasingly agentic systems. Third, we discuss important harms from increasingly agentic systems and ways forward for addressing them. We conclude by reflecting on the implications of our work for anticipating algorithmic harms from emerging systems.


