Learning to Find Correlated Features by Maximizing Information Flow in Convolutional Neural Networks

06/30/2019
by   Wei Shen, et al.

Training convolutional neural networks for image classification usually incurs information loss. Although the lost information is most often redundant with respect to the target task, there are cases where discriminative information is also discarded. For example, if samples belonging to the same category share multiple correlated features, the model may learn only a subset of those features and ignore the rest. This is not a problem unless classification on the test set depends heavily on the ignored features. We argue that the discarding of correlated discriminative information is partly caused by the fact that minimizing the classification loss does not guarantee learning all of the discriminative information, only the most discriminative information. To address this problem, we propose an information flow maximization (IFM) loss as a regularization term to find discriminative correlated features. With less information loss, the classifier can make predictions based on more informative features. We validate our method on the shiftedMNIST dataset and show the effectiveness of the IFM loss in learning representative and discriminative features.
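The abstract does not give the exact form of the IFM loss, but the general recipe it describes (a classification loss plus a regularizer that discourages the network from collapsing onto a single most-discriminative feature) can be sketched as follows. This is a minimal NumPy illustration, not the paper's method: the `feature_entropy` proxy for "information flow" and the weighting `lam` are assumptions made for the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Standard classification loss: mean negative log-likelihood.
    p = softmax(logits)
    n = len(labels)
    return -np.log(p[np.arange(n), labels] + 1e-12).mean()

def feature_entropy(features):
    # Hypothetical proxy for "information flow": entropy of each
    # sample's normalized activation profile. Higher entropy means
    # activations are spread across more feature units instead of
    # collapsing onto a single most-discriminative one.
    p = np.abs(features)
    p = p / (p.sum(axis=1, keepdims=True) + 1e-12)
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

def total_loss(logits, labels, features, lam=0.1):
    # Classification loss minus an entropy bonus: minimizing this
    # maximizes the entropy term, encouraging the network to retain
    # correlated features rather than discard them.
    return cross_entropy(logits, labels) - lam * feature_entropy(features)
```

In a real training loop this combined objective would be differentiated with respect to the network parameters; the sketch only shows how the regularizer enters the objective.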

Related research

10/21/2020
TargetDrop: A Targeted Regularization Method for Convolutional Neural Networks
Dropout regularization has been widely used in deep learning but perform...

05/23/2022
Discriminative Feature Learning through Feature Distance Loss
Convolutional neural networks have shown remarkable ability to learn dis...

12/16/2014
Locally Scale-Invariant Convolutional Neural Networks
Convolutional Neural Networks (ConvNets) have shown excellent results on...

04/25/2019
Learning Discriminative Features Via Weights-biased Softmax Loss
Loss functions play a key role in training superior deep neural networks...

02/14/2022
Discriminability-enforcing loss to improve representation learning
During the training process, deep neural networks implicitly learn to re...

02/27/2019
Fix Your Features: Stationary and Maximally Discriminative Embeddings using Regular Polytope (Fixed Classifier) Networks
Neural networks are widely used as a model for classification in a large...

02/07/2019
CHIP: Channel-wise Disentangled Interpretation of Deep Convolutional Neural Networks
With the widespread applications of deep convolutional neural networks (...
