Adma: A Flexible Loss Function for Neural Networks

07/23/2020
by   Aditya Shrivastava, et al.

The surge of interest in Artificial Neural Networks (ANNs) has produced impressively wide-ranging improvements to their structure. In this work, we propose that loss functions, rather than being the static plug-ins they currently are, should be flexible by default. A flexible loss function can be a more insightful navigator for a neural network, leading to faster convergence and therefore reaching optimum accuracy more quickly. The appropriate degree of flexibility can be derived from the complexity of the ANN, the data distribution, the selection of hyper-parameters, and so on. To this end, we introduce a novel flexible loss function for neural networks. The function is shown to possess a range of fundamental properties of which the properties of other loss functions are largely a subset, and varying its flexibility parameter allows it to emulate the loss curves and learning behavior of prevalent static loss functions. Extensive experimentation demonstrates that the function achieves state-of-the-art performance on the selected data sets. In all, the idea of flexibility itself, and the proposed function built upon it, have the potential to open an interesting new chapter in deep learning research.
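The abstract does not reproduce the Adma formula itself, so purely as an illustrative sketch of the general idea of a loss family governed by a single flexibility parameter, the snippet below implements Barron's general robust loss, a well-known function whose parameter alpha recovers several static losses (L2, Charbonnier, Cauchy) as special cases. The function name and parameters here are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def flexible_loss(residual, alpha=1.0, c=1.0):
    """General robust loss (Barron, 2019), used here only as a stand-in
    to illustrate a loss family with one flexibility parameter.
    NOTE: this is NOT the Adma loss; the paper's own formula is not
    given in this abstract.

    alpha = 2   -> half of squared (L2) error
    alpha = 1   -> Charbonnier / pseudo-Huber loss
    alpha = 0   -> Cauchy (log) loss
    c           -> scale: where the loss leaves its quadratic regime
    """
    x = residual / c
    if alpha == 2.0:                  # quadratic limit
        return 0.5 * x ** 2
    if alpha == 0.0:                  # Cauchy limit
        return np.log1p(0.5 * x ** 2)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((x ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# Sweeping alpha shows one function emulating several static losses:
r = np.linspace(-3.0, 3.0, 7)
for a in (2.0, 1.0, 0.0, -2.0):
    print(a, np.round(flexible_loss(r, alpha=a), 3))
```

In practice such a parameter could be fixed per task or learned jointly with the network weights, which is the sense in which a flexible loss can adaptively "navigate" training.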


Related research:

10/02/2020
Effective Regularization Through Loss-Function Metalearning
Loss-function metalearning can be used to discover novel, customized los...

07/28/2023
How regularization affects the geometry of loss functions
What neural networks learn depends fundamentally on the geometry of the ...

01/31/2020
Evolving Loss Functions With Multivariate Taylor Polynomial Parameterizations
Loss function optimization for neural networks has recently emerged as a...

07/05/2023
Loss Functions and Metrics in Deep Learning. A Review
One of the essential components of deep learning is the choice of the lo...

02/03/2022
Certifying Out-of-Domain Generalization for Blackbox Functions
Certifying the robustness of model performance under bounded data distri...

08/02/2022
What can we Learn by Predicting Accuracy?
This paper seeks to answer the following question: "What can we learn by...

09/08/2020
Empirical Strategy for Stretching Probability Distribution in Neural-network-based Regression
In regression analysis under artificial neural networks, the prediction ...
