Interpreting Social Respect: A Normative Lens for ML Models

08/01/2019
by Ben Hutchinson, et al.

Machine learning is often viewed as an inherently value-neutral process: statistical tendencies in the training inputs are "simply" used to generalize to new examples. However, when models affect social systems such as interactions between humans, the patterns they learn have normative implications. It is important to ask not only "what patterns exist in the data?" but also "how do we want our system to impact people?" In particular, because minority and marginalized members of society are often statistically underrepresented in data sets, models may have an undesirable disparate impact on these groups. Objectives of social equity and distributive justice therefore require that we develop tools for both identifying and interpreting the harms introduced by models.
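One common way to identify such disparate impact is to compare a model's favorable-outcome rates across demographic groups. The sketch below is illustrative only (it is not a method from the paper, and the group names and toy predictions are hypothetical); it computes the disparate impact ratio, which the "80% rule" from employment law flags when it falls below 0.8.

```python
# Illustrative sketch (not from the paper): comparing positive-prediction
# rates across groups to flag potential disparate impact.

def positive_rate(preds):
    """Fraction of favorable (1) predictions in a list of 0/1 outcomes."""
    return sum(preds) / len(preds)

def disparate_impact_ratio(preds_minority, preds_majority):
    """Ratio of the minority group's favorable-outcome rate to the
    majority group's. Values well below 1.0 suggest the model
    disadvantages the minority group; under 0.8 triggers the 80% rule."""
    return positive_rate(preds_minority) / positive_rate(preds_majority)

# Hypothetical toy predictions: 1 = favorable outcome, 0 = unfavorable.
majority_preds = [1, 1, 1, 0, 1, 1, 0, 1]   # favorable rate 6/8 = 0.75
minority_preds = [1, 0, 0, 1, 0, 0, 0, 0]   # favorable rate 2/8 = 0.25

ratio = disparate_impact_ratio(minority_preds, majority_preds)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33
```

A ratio this far below 0.8 would warrant interpreting *why* the model produces the gap, which is where the normative questions raised above come in.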

