Competing Ratio Loss for Discriminative Multi-class Image Classification

07/31/2019
by   Ke Zhang, et al.
The development of deep convolutional neural network architectures has been critical to improving image classification performance. Most studies of image classification based on deep convolutional neural networks focus on the network structure to improve performance. In contrast, we focus on the loss function. Cross-entropy Loss (CEL) is widely used for training multi-class classification deep convolutional neural networks. While CEL has been successfully applied to image classification tasks, when the labels of training images are one-hot it attends only to the posterior probability of the correct class: it cannot directly discriminate against the classes that do not belong to the correct class (the wrong classes). To address this limitation of CEL, we propose Competing Ratio Loss (CRL), which computes the posterior probability ratio between the correct class and the competing wrong classes. CRL better discriminates the correct class from its competitors by increasing the gap between the negative log likelihood of the correct class and that of the competing wrong classes, thereby widening the difference between the probability of the correct class and the probabilities of the wrong classes. To demonstrate the effectiveness of our loss function, we perform several sets of experiments on different types of image classification datasets, including the CIFAR, SVHN, CUB200-2011, Adience and ImageNet datasets. The experimental results show the effectiveness and robustness of our loss function across different deep convolutional neural network architectures and different image classification tasks, such as fine-grained image classification, challenging face age estimation and large-scale image classification.
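The contrast the abstract draws can be made concrete with a small sketch. The version of CRL below is an assumption based on the abstract's description (the exact formulation in the paper may include additional weighting terms): CEL penalizes only the negative log posterior of the correct class, while the ratio-based loss penalizes the log-ratio between the correct-class posterior and the summed posteriors of the competing wrong classes.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    p = np.exp(logits - logits.max())
    return p / p.sum()

def cross_entropy_loss(logits, label):
    # Standard CEL with a one-hot label: only the correct-class
    # posterior p[label] enters the loss.
    p = softmax(logits)
    return -np.log(p[label])

def competing_ratio_loss(logits, label):
    # Hypothetical sketch of CRL: penalize the log-ratio between the
    # correct-class posterior and the total posterior mass of the
    # competing (wrong) classes, so the wrong classes are suppressed
    # directly rather than only implicitly through normalization.
    p = softmax(logits)
    wrong = p.sum() - p[label]
    return -np.log(p[label] / wrong)
```

Under this sketch, raising the correct-class logit both lowers the numerator's negative log likelihood and shrinks the competing mass in the denominator, which is the "widening the difference" effect the abstract describes.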


