TrustGAN: Training safe and trustworthy deep learning models through generative adversarial networks

Deep learning models have been developed for a variety of tasks and are deployed every day to work in real conditions. Some of these tasks are critical, e.g. military communications or cancer diagnosis, and the models used for them must be safe and trustworthy. These models are given real data, simulated data, or a combination of both, and are trained to be highly predictive on them. However, gathering enough real data, or simulating data representative of all real conditions, is costly, sometimes impossible due to confidentiality, and most of the time simply intractable, since real conditions change constantly. A solution is to deploy machine learning models that give a prediction only when they are confident enough, and otherwise raise a flag or abstain. One issue is that standard models easily fail to detect out-of-distribution samples, on which their predictions are unreliable. We present here TrustGAN, a generative adversarial network pipeline targeting trustworthiness. It is a deep learning pipeline that improves a target model's estimation of its own confidence without impacting its predictive power. The pipeline accepts any deep learning model that outputs a prediction together with a confidence on this prediction, and it does not need to modify the target model; it can thus be easily deployed in an MLOps (Machine Learning Operations) setting. We apply the pipeline here to a target classification model trained on MNIST data to recognise numbers from images. We compare this model when trained in the standard way and with TrustGAN, and show that on out-of-distribution samples, here FashionMNIST and CIFAR10, the estimated confidence is largely reduced. We observe similar results for a classification model trained on 1D radio signals from AugMod and tested on RML2016.04C. We also publicly release the code.
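The abstract describes the pipeline only at a high level, and the publicly released code is the authoritative reference. The following is merely a minimal PyTorch sketch of one way such an adversarial confidence game could be set up: a generator is trained to produce samples on which the target model is confident, while the target model is simultaneously trained to stay accurate on real data and to report low confidence on generated samples. All names here (Generator, trustgan_step), the uniform-softmax confidence penalty, and the max-softmax confidence read-out are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a TrustGAN-style training step (illustrative, NOT the
# authors' released code). Assumptions: the target model outputs class
# logits, its confidence is read as the maximum softmax probability, and
# "low confidence" is encoded as a softmax close to uniform.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps latent noise to 1x28x28 images (MNIST-shaped), as an assumption."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def trustgan_step(target, generator, opt_t, opt_g, x_real, y_real, latent_dim=64):
    """One combined update: task loss plus confidence penalty for the target,
    adversarial confidence reward for the generator."""
    device = x_real.device

    # --- Target model update ------------------------------------------------
    z = torch.randn(x_real.size(0), latent_dim, device=device)
    x_fake = generator(z).detach()           # generator frozen for this update
    logits_real = target(x_real)
    logits_fake = target(x_fake)
    n_class = logits_real.shape[1]
    uniform = torch.full((x_real.size(0), n_class), 1.0 / n_class, device=device)

    task_loss = F.cross_entropy(logits_real, y_real)
    # Push the softmax on generated samples toward the uniform distribution,
    # i.e. minimal confidence on out-of-distribution inputs.
    conf_loss = F.kl_div(F.log_softmax(logits_fake, dim=1), uniform,
                         reduction="batchmean")
    opt_t.zero_grad()
    (task_loss + conf_loss).backward()
    opt_t.step()

    # --- Generator update: seek samples the target is (wrongly) sure about --
    z = torch.randn(x_real.size(0), latent_dim, device=device)
    logits_fake = target(generator(z))
    confidence = F.softmax(logits_fake, dim=1).max(dim=1).values
    gen_loss = -confidence.mean()            # maximise the target's confidence
    opt_g.zero_grad()
    gen_loss.backward()                      # target grads also fill here, but
    opt_g.step()                             # are cleared by opt_t.zero_grad()
    return task_loss.item(), conf_loss.item(), gen_loss.item()
```

Reading "low confidence" as a near-uniform softmax is one common choice for classifiers whose confidence is the maximum softmax probability; it lets the target keep its architecture and task loss untouched, which is consistent with the claim that the pipeline does not modify the target model.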
