
Proposing an Interactive Audit Pipeline for Visual Privacy Research

by   Jasmine DeHart, et al.
The University of Oklahoma
Carnegie Mellon University

In an ideal world, deployed machine learning models would enhance our society. We hope that those models will provide unbiased and ethical decisions that benefit everyone. However, this is not always the case: issues arise during data preparation and throughout the steps leading to a model's deployment. The continued use of biased datasets and processes will harm communities and increase the cost of fixing the problem later. In this work, we walk through the decisions that a researcher should consider before, during, and after a system's deployment to understand the broader impacts of their research on the community. Throughout this paper, we discuss fairness, privacy, and ownership issues in the machine learning pipeline; assert the need for a responsible human-over-the-loop methodology to bring accountability into the machine learning pipeline; and, finally, reflect on the need to explore research agendas that have harmful societal impacts. We examine visual privacy research and draw lessons that apply broadly to artificial intelligence. Our goal is to systematically analyze the machine learning pipeline for visual privacy and bias issues. We hope to raise awareness among stakeholders (e.g., researchers, modelers, corporations) of how these issues propagate through the pipeline's various machine learning phases.



