No Cost Likelihood Manipulation at Test Time for Making Better Mistakes in Deep Networks

04/01/2021
by   Shyamgopal Karthik, et al.

There has been increasing interest in building deep hierarchy-aware classifiers that aim to quantify and reduce the severity of mistakes, not just the number of errors. The idea is to exploit a label hierarchy (e.g., the WordNet ontology) and treat graph distance as a proxy for mistake severity. Surprisingly, on examining the mistake-severity distributions of top-1 predictions, we find that current state-of-the-art hierarchy-aware deep classifiers do not always show practical improvement over the standard cross-entropy baseline in making better mistakes. Their reduction in average mistake severity can instead be attributed to an increase in low-severity mistakes, which may also explain the noticeable drop in their accuracy. To address this, we use the classical Conditional Risk Minimization (CRM) framework for hierarchy-aware classification. Given a cost matrix and reliable likelihood estimates (obtained from a trained network), CRM simply amends mistakes at inference time; it needs no extra hyperparameters and requires adding just a few lines of code to the standard cross-entropy baseline. It significantly outperforms the state of the art and consistently obtains large reductions in the average hierarchical distance of top-k predictions across datasets, with very little loss in accuracy. Because of its simplicity, CRM can be used with any off-the-shelf trained model that provides reliable likelihood estimates.
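The CRM decision rule described above is simple enough to sketch in a few lines: given class likelihoods p(j|x) from any trained classifier and a cost matrix C whose entry C[i, j] is the hierarchical distance between classes i and j, predict the class that minimizes the conditional risk R(i|x) = Σ_j C[i, j] · p(j|x) instead of the argmax of the likelihoods. A minimal NumPy illustration follows; the function and variable names are ours, not from the paper's code, and the toy cost matrix stands in for real tree distances derived from an ontology such as WordNet:

```python
import numpy as np

def crm_predict(probs, cost_matrix, k=1):
    """Rank classes by expected cost R(i|x) = sum_j C[i, j] * p(j|x).

    probs:       (batch, n_classes) softmax likelihoods from a trained model
    cost_matrix: (n_classes, n_classes) hierarchical distances between classes
    Returns the top-k lowest-risk class indices per example.
    """
    risks = probs @ cost_matrix.T            # expected cost of predicting each class
    return np.argsort(risks, axis=-1)[..., :k]

# Toy hierarchy: classes 0 and 1 share a parent (distance 1);
# class 2 sits in a different subtree (distance 2 from both).
C = np.array([[0., 1., 2.],
              [1., 0., 2.],
              [2., 2., 0.]])

# Plain argmax would pick class 2 here, a high-severity choice given
# that most of the probability mass lies in the {0, 1} subtree.
probs = np.array([[0.31, 0.29, 0.40]])

print(crm_predict(probs, C))  # CRM picks class 0, the lowest expected cost
```

Because the amendment happens purely at inference time, the trained network and its likelihoods are untouched; only the final ranking changes, which is why the method adds no hyperparameters and costs nothing extra in training.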

Related research

- Learning Hierarchy Aware Features for Reducing Mistake Severity (07/26/2022): Label hierarchies are often available a priori as part of biological taxo...
- All Mistakes Are Not Equal: Comprehensive Hierarchy Aware Multi-label Predictions (CHAMP) (06/17/2022): This paper considers the problem of Hierarchical Multi-Label Classificat...
- Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification (02/01/2023): We investigate the problem of reducing mistake severity for fine-grained...
- A Surprising Linear Relationship Predicts Test Performance in Deep Networks (07/25/2018): Given two networks with the same training loss on a dataset, when would ...
- Inducing Neural Collapse to a Fixed Hierarchy-Aware Frame for Reducing Mistake Severity (03/10/2023): There is a recently discovered and intriguing phenomenon called Neural C...
- Embedding Semantic Hierarchy in Discrete Optimal Transport for Risk Minimization (04/30/2021): The widely-used cross-entropy (CE) loss-based deep networks achieved sig...
- Making Better Mistakes: Leveraging Class Hierarchies with Deep Networks (12/19/2019): Deep neural networks have improved image classification dramatically ove...
