Curating Naturally Adversarial Datasets for Trustworthy AI in Healthcare

09/01/2023
by Sydney Pugh, et al.

Deep learning models have shown promising predictive accuracy for time-series healthcare applications. However, ensuring the robustness of these models is vital for building trustworthy AI systems. Existing research predominantly focuses on robustness to synthetic adversarial examples, crafted by adding imperceptible perturbations to clean input data. Yet these synthetic adversarial examples do not accurately reflect the most challenging real-world scenarios, especially in the context of healthcare data. Consequently, robustness to synthetic adversarial examples may not necessarily translate to robustness against naturally occurring adversarial examples, which is highly desirable for trustworthy AI. We propose a method to curate datasets composed of natural adversarial examples for evaluating model robustness. The method relies on probabilistic labels obtained from automated weakly-supervised labeling that combines noisy and cheap-to-obtain labeling heuristics. Based on these labels, our method adversarially orders the input data and uses this ordering to construct a sequence of increasingly adversarial datasets. Our evaluation on six medical case studies and three non-medical case studies demonstrates the efficacy and statistical validity of our approach to generating naturally adversarial datasets.
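To make the curation pipeline concrete, the sketch below illustrates the three steps the abstract describes: combining noisy labeling heuristics into probabilistic labels, adversarially ordering examples by those labels, and slicing the ordering into a sequence of increasingly adversarial datasets. The majority-vote label model, the distance-to-the-decision-boundary ordering, and the equal-size slicing are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
# Minimal sketch, assuming a majority-vote label model and a confidence-based
# notion of "adversarial"; the paper's actual label model and ordering may differ.
import numpy as np

def probabilistic_labels(heuristic_votes):
    """Combine noisy heuristic votes (n_examples x n_heuristics, entries in
    {0, 1} with -1 meaning abstain) into an estimated P(y = 1) per example."""
    votes = np.ma.masked_equal(heuristic_votes, -1)
    # Fraction of non-abstaining heuristics voting 1; all-abstain rows default to 0.5.
    return votes.mean(axis=1).filled(0.5)

def adversarial_order(probs):
    """Rank examples from most to least adversarial, approximated here by how
    close the probabilistic label sits to the 0.5 decision boundary."""
    return np.argsort(np.abs(probs - 0.5))

def increasingly_adversarial_datasets(X, probs, n_levels=5):
    """Split the ordered data into n_levels consecutive slices, from the
    easiest slice to the hardest, yielding increasingly adversarial datasets."""
    order = adversarial_order(probs)[::-1]          # easiest first, hardest last
    return [(X[idx], probs[idx]) for idx in np.array_split(order, n_levels)]

# Example with hypothetical data: 6 examples labeled by 3 heuristics.
votes = np.array([[1, 1, 1],
                  [0, 0, -1],
                  [1, 0, 1],
                  [0, 1, -1],
                  [1, -1, -1],
                  [0, 0, 0]])
X = np.arange(6)                                    # stand-in for time-series inputs
levels = increasingly_adversarial_datasets(X, probabilistic_labels(votes), n_levels=3)
```

In the paper's setting, the probabilistic labels and the adversarial score would come from the weakly-supervised labeling framework it describes; the sketch only conveys how such labels can induce an ordering and a nested evaluation benchmark.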

