Birds of a Feather Trust Together: Knowing When to Trust a Classifier via Adaptive Neighborhood Aggregation

11/29/2022
by Miao Xiong, et al.

How do we know when the predictions made by a classifier can be trusted? This is a fundamental problem with immense practical importance, especially in safety-critical areas such as medicine and autonomous driving. The de facto approach of using the classifier's softmax outputs as a proxy for trustworthiness suffers from over-confidence, while more recent methods incur problems such as additional retraining cost and an accuracy-versus-trustworthiness trade-off. In this work, we argue that the trustworthiness of a classifier's prediction for a sample is highly associated with two factors: the sample's neighborhood information and the classifier's output. To combine the best of both worlds, we design NeighborAgg, a model-agnostic post-hoc approach that leverages these two sources of information via adaptive neighborhood aggregation. Theoretically, we show that NeighborAgg is a generalized version of a one-hop graph convolutional network, inheriting the powerful modeling ability to capture the varying similarity between samples within each class. We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate. Empirically, extensive experiments on image and tabular benchmarks verify our theory and suggest that NeighborAgg outperforms other methods, achieving state-of-the-art trustworthiness performance.
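As a rough illustration of the core idea, the sketch below blends a classifier's softmax confidence with the label agreement of a sample's k nearest training neighbors to produce a trust score. This is a minimal stand-in under simplifying assumptions, not the authors' implementation: NeighborAgg learns how to aggregate the neighborhood and classifier signals (akin to a one-hop GCN layer), whereas this example uses a fixed convex combination; the function name, the choice of k, and the blending weight alpha are all hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_trust_scores(train_feats, train_labels, test_feats, test_probs,
                              k=10, alpha=0.5):
    """Toy trust score: blend softmax confidence with the fraction of a
    sample's k nearest training neighbors that share its predicted class."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    _, idx = nn.kneighbors(test_feats)                # (n_test, k) neighbor indices
    neighbor_labels = np.asarray(train_labels)[idx]   # labels of those neighbors
    preds = test_probs.argmax(axis=1)                 # classifier's predicted classes
    # Fraction of neighbors that agree with each prediction (neighborhood vote).
    agreement = (neighbor_labels == preds[:, None]).mean(axis=1)
    softmax_conf = test_probs.max(axis=1)             # classifier's own confidence
    # Fixed blend for illustration; NeighborAgg instead learns this weighting.
    return alpha * agreement + (1 - alpha) * softmax_conf
```

In use, samples with low scores would be flagged as untrustworthy predictions, or, in the mislabel-detection setting, as candidate label errors.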



Related research

05/30/2018 · To Trust Or Not To Trust A Classifier
Knowing when a classifier's prediction can be trusted is useful in many ...

05/31/2018 · Fusion Graph Convolutional Networks
Semi-supervised node classification involves learning to classify unlabe...

06/08/2021 · Provably Robust Detection of Out-of-distribution Data (almost) for free
When applying machine learning in safety-critical systems, a reliable as...

09/26/2019 · Towards neural networks that provably know when they don't know
It has recently been shown that ReLU networks produce arbitrarily over-c...

04/24/2022 · Less is More: Reweighting Important Spectral Graph Features for Recommendation
As much as Graph Convolutional Networks (GCNs) have shown tremendous suc...

08/31/2022 · Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning
Deep Neural Networks (DNNs) have achieved excellent performance in vario...
