Deep Learning and Ethics

05/24/2023
by Travis LaCroix, et al.

This article appears as Chapter 21 of Prince (2023), Understanding Deep Learning; a complete draft of the textbook is available at http://udlbook.com. The chapter considers potential harms arising from the design and use of AI systems, including algorithmic bias, lack of explainability, data privacy violations, militarization, fraud, and environmental concerns. The aim is not to provide advice on being more ethical, but to express ideas and start conversations in key areas that have received attention in philosophy, political science, and the broader social sciences.

Related research

Deep Learning for Political Science (05/13/2020)
Political science, and social science in general, have traditionally bee...

Queering the ethics of AI (08/25/2023)
This book chapter delves into the pressing need to "queer" the ethics of...

Philosophical Foundations of GeoAI: Exploring Sustainability, Diversity, and Bias in GeoAI and Spatial Data Science (03/27/2023)
This chapter presents some of the fundamental assumptions and principles...

Grounding Explainability Within the Context of Global South in XAI (05/13/2022)
In this position paper, we propose building a broader and deeper underst...

Artificial Intelligence and Structural Injustice: Foundations for Equity, Values, and Responsibility (05/05/2022)
This chapter argues for a structural injustice approach to the governanc...

Introducing Political Ecology of Creative-Ai (12/21/2022)
This chapter introduces the perspective of political ecology to the appl...

Redistricting Algorithms (11/18/2020)
Why not have a computer just draw a map? This is something you hear a lo...
