Audit, Don't Explain – Recommendations Based on a Socio-Technical Understanding of ML-Based Systems

07/21/2021
by Hendrik Heuer, et al.

In this position paper, I provide a socio-technical perspective on machine learning-based systems and explain why systematic audits may be preferable to explainable AI systems. I make concrete recommendations for how institutions governed by public law, akin to the German TÜV and Stiftung Warentest, can ensure that ML systems operate in the interest of the public.


Related research

09/01/2021
The Role of Explainability in Assuring Safety of Machine Learning in Healthcare
Established approaches to assuring safety-critical systems and software ...

05/10/2022
Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory
Understanding how ML models work is a prerequisite for responsibly desig...

04/04/2023
Socio-economic landscape of digital transformation public NLP systems: A critical review
The current wave of digital transformation has spurred digitisation refo...

07/10/2019
On Designing Machine Learning Models for Malicious Network Traffic Classification
Machine learning (ML) started to become widely deployed in cyber securit...

12/05/2018
On Testing Machine Learning Programs
Nowadays, we are witnessing a wide adoption of Machine learning (ML) mod...

10/08/2019
TraffickCam: Explainable Image Matching For Sex Trafficking Investigations
Investigations of sex trafficking sometimes have access to photographs o...

05/10/2023
Technical Understanding from IML Hands-on Experience: A Study through a Public Event for Science Museum Visitors
While AI technology is becoming increasingly prevalent in our daily live...
