Data augmentation and explainability for bias discovery and mitigation in deep learning

This dissertation explores the impact of bias on deep neural networks and presents methods for reducing its influence on model performance. The first part begins by categorizing and describing potential sources of bias and errors in data and models, with a particular focus on bias in machine learning pipelines. The next chapter outlines a taxonomy of Explainable AI methods as a way to justify predictions and to monitor and improve models. Then, as an example of a laborious manual data inspection and bias discovery process, a skin lesion dataset is examined by hand. A Global Explanations for Bias Identification method is proposed as a semi-automatic alternative to manual data exploration for discovering potential biases in data, and relevant numerical methods and metrics are discussed for assessing the effects of the identified biases on the model.

While identifying errors and biases is critical, improving the model and reducing the number of flaws going forward is the ultimate priority. The second part of the thesis therefore focuses on mitigating the influence of bias on machine learning models. Three approaches are proposed and discussed: Style Transfer Data Augmentation, Targeted Data Augmentation, and Attribution Feedback. Style Transfer Data Augmentation addresses shape and texture bias by merging the style of a malignant lesion with the conflicting shape of a benign one. Targeted Data Augmentation randomly inserts known bias artifacts into images during training, so that the artifacts no longer correlate with any class and the spurious correlations are broken. Lastly, Attribution Feedback fine-tunes the model via an attribution loss, eliminating obvious mistakes and teaching it to ignore insignificant input regions, which in turn improves accuracy. The goal of these approaches is to reduce the influence of bias on machine learning models, rather than eliminate it entirely.
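To make the Targeted Data Augmentation idea concrete, the sketch below randomly inserts a known bias artifact into training images regardless of their label, so the artifact stops predicting the class. This is a minimal illustration under stated assumptions, not the dissertation's implementation: the black-frame artifact, the probability `p`, and all function names are hypothetical.

```python
import random

import numpy as np


def add_black_frame(image: np.ndarray, thickness: int = 8) -> np.ndarray:
    """Paint a black frame around the image, mimicking a dermoscopy-style
    artifact that a skin-lesion classifier might latch onto as a spurious cue."""
    out = image.copy()
    out[:thickness, :, :] = 0
    out[-thickness:, :, :] = 0
    out[:, :thickness, :] = 0
    out[:, -thickness:, :] = 0
    return out


def targeted_augmentation(image: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Insert the bias artifact into a random subset of images (probability p),
    independently of their labels, so the artifact no longer predicts the class."""
    if random.random() < p:
        return add_black_frame(image)
    return image


if __name__ == "__main__":
    # Dummy RGB image standing in for a dermoscopic photo.
    img = np.random.rand(224, 224, 3).astype(np.float32)
    augmented = targeted_augmentation(img, p=0.5)
    print(augmented.shape)
```

In practice such a transform would be applied on the fly in the training data loader, alongside the usual augmentations.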
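Attribution Feedback can likewise be sketched as an extra loss term that penalizes saliency falling on regions marked as irrelevant. The input-gradient attribution, the mask convention (1 = irrelevant), and the weight `alpha` below are illustrative assumptions rather than the dissertation's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def attribution_loss(model: nn.Module,
                     images: torch.Tensor,
                     labels: torch.Tensor,
                     irrelevant_mask: torch.Tensor,
                     alpha: float = 0.1) -> torch.Tensor:
    """Cross-entropy plus a penalty on input-gradient attributions that fall
    inside regions marked as irrelevant (mask == 1)."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    # Simple input-gradient attribution of the target-class scores.
    target_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(target_scores, images, create_graph=True)[0]
    attribution = grads.abs().sum(dim=1)          # (B, H, W) saliency map
    penalty = (attribution * irrelevant_mask).mean()

    return ce + alpha * penalty


if __name__ == "__main__":
    # Tiny stand-in classifier and a border strip marked as irrelevant.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
    x = torch.rand(4, 3, 64, 64)
    y = torch.randint(0, 2, (4,))
    mask = torch.zeros(4, 64, 64)
    mask[:, :8, :] = 1.0
    loss = attribution_loss(model, x, y, mask)
    loss.backward()
    print(float(loss))
```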


Related research

Targeted Data Augmentation for bias mitigation (08/22/2023)
The development of fair and ethical AI systems requires careful consider...

A survey on bias in machine learning research (08/22/2023)
Current research on bias in machine learning often focuses on fairness, ...

Style transfer-based image synthesis as an efficient regularization technique in deep learning (05/27/2019)
These days deep learning is the fastest-growing area in the field of Mac...

Improving Lesion Detection by exploring bias on Skin Lesion dataset (10/04/2020)
All datasets contain some biases, often unintentional, due to how they w...

Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures (10/23/2020)
Existing NLP datasets contain various biases, and models tend to quickly...

Investigating Societal Biases in a Poetry Composition System (11/05/2020)
There is a growing collection of work analyzing and mitigating societal ...

Global explanations for discovering bias in data (05/05/2020)
In the paper, we propose attention-based summarized post-hoc explanation...
