Learning to Predict Trustworthiness with Steep Slope Loss

09/30/2021
by Yan Luo, et al.

Understanding the trustworthiness of a prediction yielded by a classifier is critical for the safe and effective use of AI models. Prior efforts have proven reliable on small-scale datasets. In this work, we study the problem of predicting trustworthiness on real-world large-scale datasets, where the task is more challenging due to high-dimensional features, diverse visual concepts, and a large number of samples. In such a setting, we observe that trustworthiness predictors trained with prior-art loss functions, i.e., the cross entropy loss, focal loss, and true class probability confidence loss, are prone to view both correct and incorrect predictions as trustworthy. The reasons are two-fold. First, correct predictions generally dominate incorrect predictions. Second, owing to the data complexity, it is challenging to differentiate incorrect predictions from correct ones on real-world large-scale datasets. To improve the generalizability of trustworthiness predictors, we propose a novel steep slope loss that separates the features of correct predictions from those of incorrect predictions by two slide-like curves that oppose each other. The proposed loss is evaluated with two representative deep learning models, i.e., Vision Transformer and ResNet, as trustworthiness predictors. Comprehensive experiments and analyses on ImageNet show that the proposed loss effectively improves the generalizability of trustworthiness predictors. The code and pre-trained trustworthiness predictors are available for reproducibility at https://github.com/luoyan407/predict_trustworthiness.
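The abstract does not spell out the loss itself, so the sketch below is only one plausible reading of the "two opposing slide-like curves" idea: the trustworthiness predictor emits a confidence score per sample, and two steep sigmoid-shaped curves push scores of correct predictions up and scores of incorrect predictions down. The class name `SteepSlopeStyleLoss`, the `alpha` steepness hyperparameter, and the oracle-label convention are assumptions for illustration, not the authors' formulation; refer to the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn


class SteepSlopeStyleLoss(nn.Module):
    """Illustrative steep-slope-style trustworthiness loss (not the paper's exact formula).

    Scores of samples the base classifier got right (correct = 1) are pushed
    toward high values by one steep sigmoid curve; scores of samples it got
    wrong (correct = 0) are pushed toward low values by an opposing curve.
    """

    def __init__(self, alpha: float = 4.0):
        super().__init__()
        self.alpha = alpha  # assumed hyperparameter: steepness of the two opposing curves

    def forward(self, scores: torch.Tensor, correct: torch.Tensor) -> torch.Tensor:
        # scores: raw confidence outputs of the trustworthiness predictor, shape (N,)
        # correct: 1.0 where the base classifier was right, 0.0 where it was wrong
        pos_loss = torch.sigmoid(-self.alpha * scores)  # penalize low scores for correct predictions
        neg_loss = torch.sigmoid(self.alpha * scores)   # penalize high scores for incorrect predictions
        per_sample = correct * pos_loss + (1.0 - correct) * neg_loss
        return per_sample.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    scores = torch.randn(8, requires_grad=True)      # predictor outputs for a mini-batch
    correct = (torch.rand(8) > 0.3).float()          # oracle labels: was the classifier correct?
    loss = SteepSlopeStyleLoss(alpha=4.0)(scores, correct)
    loss.backward()
    print(float(loss))
```

A large `alpha` makes the two curves steep, so gradients concentrate near the decision region between trustworthy and untrustworthy scores; this is consistent with, but not identical to, the separation behavior the abstract describes.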

