Mixture separability loss in a deep convolutional network for image classification

06/16/2019
by Trung Dung Do, et al.

In machine learning, the cost function is crucial because it measures how well or poorly a system performs. In image classification, well-known networks typically modify only the network architecture and apply a cross-entropy loss at the end of the network. However, using cross-entropy loss alone causes a network to stop updating its weights once all training images are correctly classified; this is the problem of early saturation. This paper proposes a novel cost function, called mixture separability loss (MSL), which continues updating the weights of the network even when most of the training images are already predicted correctly. MSL consists of a between-class loss and a within-class loss: the between-class loss maximizes the differences between images of different classes, whereas the within-class loss minimizes the differences between images of the same class. The proposed loss is designed to attach to different convolutional layers in the network so that intermediate feature maps are also exploited. Experiments show that a network trained with MSL deepens the learning process and obtains promising results on public datasets such as Street View House Numbers (SVHN) and the Canadian Institute for Advanced Research (CIFAR) datasets, as well as our self-collected Inha Computer Vision Lab (ICVL) gender dataset.
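The abstract does not give the exact formulation of MSL, so the following is only a minimal PyTorch-style sketch, assuming a within-class compactness term and a between-class separation term computed from pooled intermediate feature maps. The function name, the ratio form, and the 0.1 weighting in the usage comment are illustrative assumptions, not the authors' definition.

```python
import torch
import torch.nn.functional as F

def mixture_separability_loss(features, labels, eps=1e-6):
    """Auxiliary loss attached to an intermediate convolutional layer.

    features: (N, C, H, W) feature maps; labels: (N,) integer class labels.
    The ratio of a within-class term to a between-class term used here is
    an illustrative assumption, not the paper's exact formulation.
    """
    # Pool each feature map to a single C-dimensional vector per image.
    vecs = F.adaptive_avg_pool2d(features, 1).flatten(1)          # (N, C)

    classes = labels.unique()
    centroids = torch.stack([vecs[labels == c].mean(dim=0) for c in classes])

    # Within-class loss: keep images of the same class close to their centroid.
    within = torch.stack([
        ((vecs[labels == c] - centroids[i]) ** 2).sum(dim=1).mean()
        for i, c in enumerate(classes)
    ]).mean()

    # Between-class loss: push class centroids away from one another.
    if len(classes) > 1:
        between = torch.pdist(centroids).pow(2).mean()
    else:
        between = vecs.new_tensor(0.0)

    # Minimizing the ratio compacts each class while separating the classes,
    # so gradients remain informative even after cross-entropy has saturated.
    return within / (between + eps)


# Hypothetical usage: combine with cross-entropy at the output layer.
# logits, feat_maps = model(images)   # feat_maps taken from a chosen conv layer
# loss = F.cross_entropy(logits, labels) \
#        + 0.1 * mixture_separability_loss(feat_maps, labels)
```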

