How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks

09/12/2020
by Alexander Wong, et al.

A critical step to building trustworthy deep neural networks is trust quantification, where we ask the question: How much can we trust a deep neural network? In this study, we take a step towards simple, interpretable metrics for trust quantification by introducing a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions. We conduct a thought experiment exploring two key questions about trust in relation to confidence: 1) How much trust do we have in actors who give wrong answers with great confidence? and 2) How much trust do we have in actors who give right answers hesitantly? Based on the insights gained, we introduce the concept of question-answer trust to quantify the trustworthiness of an individual answer based on confident behaviour under correct and incorrect answer scenarios, and the concept of trust density to characterize the distribution of overall trust for an individual answer scenario. We further introduce the concept of trust spectrum to represent overall trust with respect to the spectrum of possible answer scenarios across correctly and incorrectly answered questions. Finally, we introduce NetTrustScore, a scalar metric summarizing overall trustworthiness. This suite of metrics aligns with past social psychology studies examining the relationship between trust and confidence. Leveraging these metrics, we quantify the trustworthiness of several well-known deep neural network architectures for image recognition to gain a deeper understanding of where trust breaks down. The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics that help guide practitioners and regulators in producing, deploying, and certifying deep learning solutions that can be trusted to operate in real-world, mission-critical scenarios.
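
Since the abstract describes these metrics only in words, the following is a minimal, hypothetical Python sketch of how such metrics might be computed for a classifier. The function names, the reward/penalty exponents alpha and beta, and the use of simple means for aggregation are illustrative assumptions rather than the paper's exact formulation; see the full text for the precise definitions.

import numpy as np

def question_answer_trust(conf, correct, alpha=1.0, beta=1.0):
    """Trust in an individual answer (a sketch, not the paper's exact
    formula): confidence is rewarded when the answer is correct and
    penalized when it is wrong. `conf` holds the network's confidence in
    its own answer for each question; `correct` flags whether that answer
    matched the oracle."""
    return np.where(correct, conf ** alpha, (1.0 - conf) ** beta)

def trust_spectrum(qa_trust, labels):
    """Mean question-answer trust per answer scenario (here, per class).
    The trust density would be the distribution of these per-question
    values within each scenario, e.g. via kernel density estimation."""
    return {int(z): float(qa_trust[labels == z].mean()) for z in np.unique(labels)}

def net_trust_score(qa_trust):
    """Scalar summary of overall trustworthiness: the expected
    question-answer trust across all questions."""
    return float(qa_trust.mean())

# Toy usage with made-up confidences and outcomes.
conf = np.array([0.95, 0.60, 0.99, 0.40])
correct = np.array([True, True, False, False])
labels = np.array([0, 1, 0, 1])

qa = question_answer_trust(conf, correct)
print(trust_spectrum(qa, labels))  # per-scenario trust: {0: 0.48, 1: 0.6}
print(net_trust_score(qa))         # overall NetTrustScore: ~0.54

The sketch encodes the thought experiment directly: a confidently wrong answer (0.99 confidence on the third question) contributes almost no trust, while a hesitant right answer (0.60 confidence) earns only partial trust.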

Related research

09/30/2020
Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities
The advances and successes in deep learning in recent years have led to ...

11/03/2020
Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning
The success of deep learning in recent years has led to a significant i...

06/14/2018
NetScore: Towards Universal Metrics for Large-scale Performance Analysis of Deep Neural Networks for Practical Usage
Much of the focus in the design of deep neural networks has been on impr...

06/18/2021
Residual Error: a New Performance Measure for Adversarial Robustness
Despite the significant advances in deep learning over the past decade, ...

04/16/2019
Investigating the Effect of Attributes on User Trust in Social Media
One main challenge in social media is to identify trustworthy informatio...

02/26/2019
Using Ternary Rewards to Reason over Knowledge Graphs with Deep Reinforcement Learning
In this paper, we investigate the practical challenges of using reinforc...
