Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities

09/30/2020
by Andrew Hryniowski, et al.

The advances and successes in deep learning in recent years have led to considerable efforts and investments into its widespread adoption for a wide variety of applications, ranging from personal assistants and intelligent navigation to search and product recommendation in e-commerce. With this tremendous rise in deep learning adoption come questions about the trustworthiness of the deep neural networks that power these applications. Motivated to answer such questions, there has been very recent interest in trust quantification. In this work, we introduce the concept of the trust matrix, a novel trust quantification strategy that leverages the recently introduced question-answer trust metric by Wong et al. to provide deeper, more detailed insights into where trust breaks down for a given deep neural network on a given set of questions. More specifically, a trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario, allowing one to quickly spot areas of low trust that need to be addressed to improve the trustworthiness of a deep neural network. The proposed trust matrix is simple to calculate, humanly interpretable, and, to the best of the authors' knowledge, is the first to study trust at the actor-oracle answer level. We further extend the concept of trust densities with the notion of conditional trust densities. We experimentally leverage trust matrices to study several well-known deep neural network architectures for image recognition, and further study the trust density and conditional trust densities for an interesting actor-oracle answer scenario. The results illustrate that trust matrices, along with conditional trust densities, can be useful tools in addition to the existing suite of trust quantification metrics for guiding practitioners and regulators in creating and certifying deep learning solutions for trusted operation.
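The trust matrix described above can be sketched as follows: for each (oracle answer, actor answer) pair, average a per-sample question-answer trust score over the samples falling in that cell. This is a minimal illustrative sketch, not the paper's implementation; the `question_answer_trust` function below is a simplified stand-in inspired by Wong et al.'s metric (reward the network's confidence when its answer matches the oracle, penalize confidence when it does not), and the names, parameters `alpha`/`beta`, and toy data are assumptions for illustration.

```python
import numpy as np

def question_answer_trust(confidence, correct, alpha=1.0, beta=1.0):
    """Simplified per-sample trust proxy: confidence^alpha when the actor's
    answer matches the oracle, (1 - confidence)^beta otherwise."""
    return confidence ** alpha if correct else (1.0 - confidence) ** beta

def trust_matrix(oracle, actor, confidence, n_classes):
    """M[i, j] = expected question-answer trust over the samples where the
    oracle answer is class i and the actor (network) answered class j.
    Cells with no samples are left as NaN."""
    sums = np.zeros((n_classes, n_classes))
    counts = np.zeros((n_classes, n_classes))
    for o, a, c in zip(oracle, actor, confidence):
        counts[o, a] += 1
        sums[o, a] += question_answer_trust(c, correct=(o == a))
    # average each cell; mark empty actor-oracle scenarios as NaN
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# toy example with 3 classes: oracle labels, network answers, confidences
oracle = [0, 0, 1, 1, 2, 2]
actor  = [0, 1, 1, 1, 2, 0]
conf   = [0.9, 0.8, 0.95, 0.7, 0.85, 0.6]
M = trust_matrix(oracle, actor, conf, n_classes=3)
```

Low-valued off-diagonal cells of `M` would then point to specific actor-oracle answer scenarios where trust breaks down, which is the kind of insight the abstract attributes to the trust matrix.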


