Auditing and Generating Synthetic Data with Controllable Trust Trade-offs

04/21/2023
by Brian Belgodere, et al.

Data collected from the real world tends to be biased, unbalanced, and at risk of exposing sensitive and private information. This reality has given rise to the idea of creating synthetic datasets to alleviate the risk, bias, harm, and privacy concerns inherent in the real data. The concept relies on generative AI models to produce unbiased, privacy-preserving synthetic data that remains faithful to the real data. In this new paradigm, how can we tell whether this approach delivers on its promises? We present an auditing framework that offers a holistic assessment of synthetic datasets and of AI models trained on them, centered on bias and discrimination prevention, fidelity to the real data, utility, robustness, and privacy preservation. We showcase our framework by auditing multiple generative models on diverse use cases, including education, healthcare, banking, and human resources, and across different modalities, from tabular to time-series to natural language. Our use cases demonstrate the importance of a holistic assessment for ensuring compliance with the socio-technical safeguards that regulators and policymakers are increasingly enforcing. To this end, we introduce a trust index that ranks multiple synthetic datasets according to their prescribed safeguards and desired trade-offs. Moreover, we devise a trust-index-driven model selection and cross-validation procedure, with auditing in the training loop, which we showcase on a class of transformer models we dub TrustFormers, across different modalities. This trust-driven model selection allows for controllable trust trade-offs in the resulting synthetic data. Finally, we instrument our auditing framework with workflows that connect the stakeholders involved, from model development to audit and certification, via a synthetic data auditing report.


