Mapping for accessibility: A case study of ethics in data science for social good
Ethics in the emerging world of data science are often discussed through cautionary tales about the dire consequences of missteps taken by high-profile companies or organizations. We take a different approach by foregrounding the ways that ethics are implicated in the day-to-day work of data science, focusing on instances in which data scientists recognize, grapple with, and conscientiously respond to ethical challenges. This paper presents a case study of ethical dilemmas that arose in a "data science for social good" (DSSG) project focused on improving navigation for people with limited mobility. We describe how this particular DSSG team responded to those dilemmas, and how those responses gave rise to still more dilemmas. While the details of the case discussed here are unique, the ethical dilemmas they illuminate can commonly be found across many DSSG projects. These include: the risk of exacerbating disparities; the thorniness of algorithmic accountability; the evolving opportunities for mischief presented by new technologies; the subjective and value-laden interpretations at the heart of any data-intensive project; the potential for data to amplify or mute particular voices; the possibility of privacy violations; and the folly of technological solutionism. Based on our tracing of the team's responses to these dilemmas, we distill lessons for an ethical data science practice that can be applied more generally across DSSG projects. Specifically, this case experience highlights the importance of: 1) setting the scene early on for ethical thinking; 2) recognizing ethical decision-making as an emergent phenomenon intertwined with the quotidian work of data science for social good; and 3) approaching ethical thinking as a thoughtful and intentional balancing of priorities rather than a binary differentiation between right and wrong.