Interpreting Bias in Neural Networks: A Peek Into Representational Similarity

11/14/2022
by Gnyanesh Bangaru, et al.

Neural networks trained on standard image classification datasets have been shown to be less resistant to dataset bias. To obtain strong performance on biased data, it is necessary to understand how the choice of objective function shapes a network's behavior. However, there is little research on the selection of the objective function and the representational structure it induces when networks are trained on biased datasets. In this paper, we investigate the performance and internal representational structure of convolution-based neural networks (e.g., ResNets) trained on biased data using various objective functions. Specifically, we study similarities in representations, using Centered Kernel Alignment (CKA), across different objective functions (probabilistic and margin-based) and offer a comprehensive analysis of the chosen ones. According to our findings, ResNet representations obtained with Negative Log Likelihood (ℒ_NLL) and Softmax Cross-Entropy (ℒ_SCE) as loss functions are equally capable of producing superior performance and fine representations on biased data. We note that without progressive representational similarity among the layers of a neural network, performance is less likely to be robust.
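The similarity measure named in the abstract, Centered Kernel Alignment, has a simple linear form. As an illustrative sketch (not the authors' code; the function and variable names here are my own), linear CKA between two activation matrices of shape (examples, features) can be computed with NumPy:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of shape
    (n_examples, n_features). Returns a value in [0, 1]."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator, normalized by the self-similarities
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))
print(round(linear_cka(X, X), 4))  # identical representations -> 1.0
print(linear_cka(X, rng.standard_normal((100, 64))))  # unrelated reps score low
```

Comparing `linear_cka` across pairs of layers is the kind of analysis the paper applies to ResNets trained under different objectives.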
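The finding that ℒ_NLL and ℒ_SCE behave similarly is unsurprising in one respect: softmax cross-entropy is exactly the negative log likelihood applied to softmax outputs. A minimal sketch of that identity (my own code, not the paper's):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable log-softmax, then pick out the true class
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def nll(probs, label):
    # Negative log likelihood of already-normalized probabilities
    return -np.log(probs[label])

logits = np.array([2.0, 0.5, -1.0])
probs = np.exp(logits) / np.exp(logits).sum()
# The two objectives coincide when NLL is fed softmax probabilities
print(np.isclose(softmax_cross_entropy(logits, 0), nll(probs, 0)))  # True
```

In practice the two can still differ in training dynamics (e.g., numerical stability of the composed vs. separate computation), which is why comparing their learned representations is a meaningful experiment.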

Related research

10/30/2020 - What's in a Loss Function for Image Classification?
It is common to use the softmax cross-entropy loss to train neural netwo...

02/05/2021 - On the estimating equations and objective functions for parameters of exponential power distribution: Application for disorder
The efficient modeling for disorder in a phenomena depends on the chosen...

05/24/2019 - Neuro-Optimization: Learning Objective Functions Using Neural Networks
Mathematical optimization is widely used in various research fields. Wit...

09/28/2020 - Why resampling outperforms reweighting for correcting sampling bias
A data set sampled from a certain population is biased if the subgroups ...

04/05/2022 - OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses
Dataset bias and spurious correlations can significantly impair generali...

05/01/2019 - On Expected Accuracy
We empirically investigate the (negative) expected accuracy as an altern...

09/29/2021 - Improvising the Learning of Neural Networks on Hyperspherical Manifold
The impact of convolution neural networks (CNNs) in the supervised setti...
