
Competing Ratio Loss for Discriminative Multi-class Image Classification

by   Ke Zhang, et al.

The development of deep convolutional neural network architectures is critical to improving performance on image classification tasks. Many studies of image classification based on deep convolutional neural networks focus on the network structure to improve classification performance. In contrast to these studies, we focus on the loss function. Cross-entropy Loss (CEL) is widely used for training multi-class classification deep convolutional neural networks. While CEL has been successfully applied to image classification tasks, when the labels of training images are one-hot it considers only the posterior probability of the correct class, and it cannot directly discriminate against the classes other than the correct class (the wrong classes). To address this limitation of CEL, we propose Competing Ratio Loss (CRL), which calculates the posterior probability ratio between the correct class and the competing wrong classes to better discriminate the correct class from them. CRL increases the difference between the negative log likelihood of the correct class and that of the competing wrong classes, thereby widening the gap between the probability of the correct class and the probabilities of the wrong classes. To demonstrate the effectiveness of our loss function, we perform several sets of experiments on different types of image classification datasets, including the CIFAR, SVHN, CUB200-2011, Adience and ImageNet datasets. The experimental results show the effectiveness and robustness of our loss function across different deep convolutional neural network architectures and different image classification tasks, such as fine-grained image classification, hard face age estimation and large-scale image classification.
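The ratio idea in the abstract can be illustrated with a minimal, self-contained sketch. This is an assumption-laden reading of the abstract, not the authors' reference implementation: the exact form of CRL and the weighting factor `beta` are hypothetical here, chosen so that the loss equals the negative log of the ratio between the correct-class probability and the (beta-weighted) summed probability of the competing wrong classes.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy_loss(logits, target):
    """Standard CEL with a one-hot label: only the posterior
    probability of the correct class enters the loss."""
    return -math.log(softmax(logits)[target])

def competing_ratio_loss(logits, target, beta=0.1):
    """Sketch of a CRL-style loss (form and beta are assumptions).
    -log(p_correct / p_wrong^beta) = -log p_correct + beta * log p_wrong,
    so unlike CEL it also pushes down the total probability mass
    assigned to the competing wrong classes."""
    p = softmax(logits)
    p_correct = p[target]
    p_wrong = sum(p) - p_correct  # mass on all competing wrong classes
    return -math.log(p_correct) + beta * math.log(p_wrong)
```

As expected, a confidently correct prediction (e.g. logits `[5.0, 1.0, 0.0]` with target `0`) yields a much smaller loss than a uniform one, and the extra `beta * log p_wrong` term makes the loss sensitive to how much probability the wrong classes retain, which is the discriminative effect the abstract describes.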




Negative Log Likelihood Ratio Loss for Deep Neural Network Classification

In deep neural networks, the cross-entropy loss function is commonly used...

Every Untrue Label is Untrue in its Own Way: Controlling Error Type with the Log Bilinear Loss

Deep learning has become the method of choice in many application domain...

Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI

This paper quantifies the quality of heatmap-based eXplainable AI method...

SimLoss: Class Similarities in Cross Entropy

One common loss function in neural network classification tasks is Categ...

Forced Spatial Attention for Driver Foot Activity Classification

This paper provides a simple solution for reliably solving image classif...

This looks like that: deep learning for interpretable image recognition

When we are faced with challenging image classification tasks, we often ...

Implementation of Deep Convolutional Neural Network in Multi-class Categorical Image Classification

Convolutional Neural Networks have been implemented in many complex machi...