
Audit, Don't Explain – Recommendations Based on a Socio-Technical Understanding of ML-Based Systems

07/21/2021
by Hendrik Heuer, et al.

In this position paper, I provide a socio-technical perspective on machine learning-based systems. I also explain why systematic audits may be preferable to explainable AI systems. I make concrete recommendations for how institutions governed by public law, akin to the German TÜV and Stiftung Warentest, can ensure that ML systems operate in the interest of the public.
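To make the contrast concrete: an external audit only requires query access to a model's predictions, not access to its internals or an explanation of its reasoning. The following Python sketch is purely illustrative and not from the paper; the name audit_model, the group labels, and the toy data are all hypothetical. It shows the minimal interface such a black-box audit needs: a prediction function plus labeled samples annotated with a group identifier, from which the auditor compares error rates and positive-prediction rates across groups.

    from collections import defaultdict

    def audit_model(predict, samples):
        """Black-box audit: compare error rates and positive-prediction
        rates across groups, using only query access to `predict`."""
        stats = defaultdict(lambda: {"n": 0, "errors": 0, "positives": 0})
        for features, truth, group in samples:
            pred = predict(features)  # the auditor never inspects internals
            s = stats[group]
            s["n"] += 1
            s["errors"] += int(pred != truth)
            s["positives"] += int(pred == 1)
        return {
            group: {"error_rate": s["errors"] / s["n"],
                    "positive_rate": s["positives"] / s["n"]}
            for group, s in stats.items()
        }

    if __name__ == "__main__":
        # Toy stand-in for the audited black box, with synthetic samples
        # of the form (features, true label, group id).
        toy_model = lambda x: int(x["score"] > 0.5)
        data = [
            ({"score": 0.9}, 1, "A"), ({"score": 0.4}, 1, "A"),
            ({"score": 0.6}, 0, "B"), ({"score": 0.2}, 0, "B"),
        ]
        for group, metrics in audit_model(toy_model, data).items():
            print(group, metrics)

Note that nothing in this sketch depends on the model being interpretable: the audit treats it as opaque by design, which is the practical advantage of auditing over explanation that the paper argues for.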


Related research:

09/01/2021
The Role of Explainability in Assuring Safety of Machine Learning in Healthcare
Established approaches to assuring safety-critical systems and software ...

07/10/2019
On Designing Machine Learning Models for Malicious Network Traffic Classification
Machine learning (ML) started to become widely deployed in cyber securit...

05/10/2022
Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory
Understanding how ML models work is a prerequisite for responsibly desig...

03/17/2021
Characterizing Technical Debt and Antipatterns in AI-Based Systems: A Systematic Mapping Study
Background: With the rising popularity of Artificial Intelligence (AI), ...

12/05/2018
On Testing Machine Learning Programs
Nowadays, we are witnessing a wide adoption of Machine learning (ML) mod...

10/08/2019
TraffickCam: Explainable Image Matching For Sex Trafficking Investigations
Investigations of sex trafficking sometimes have access to photographs o...

03/19/2022
Data Smells: Categories, Causes and Consequences, and Detection of Suspicious Data in AI-based Systems
High data quality is fundamental for today's AI-based systems. However, ...