Single-Solution Hypervolume Maximization and its use for Improving Generalization of Neural Networks

02/03/2016
by Conrado S. Miranda, et al.

This paper introduces hypervolume maximization with a single solution as an alternative to mean loss minimization. The relationship between the two problems is proved through bounds on the cost function when an optimal solution to one of the problems is evaluated on the other, with a hyperparameter controlling the similarity between the two problems. The same hyperparameter allows higher weight to be placed on samples with higher loss when computing the hypervolume's gradient, whose normalized version can range from the gradient of the mean loss to that of the max loss. An experiment on MNIST with a neural network is used to validate the theory developed, showing that hypervolume maximization can behave similarly to mean loss minimization and can also provide better performance, resulting in a 20% reduction of the classification error on the test set.
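To make the idea concrete, here is a minimal sketch (not the authors' code; the function name, the choice of PyTorch, and the value of the reference point are illustrative assumptions). Treating each sample's loss l_i as a separate objective, the hypervolume of a single solution with reference point mu > max_i l_i is prod_i (mu - l_i), so maximizing it amounts to minimizing -sum_i log(mu - l_i), whose gradient weights sample i by 1/(mu - l_i), i.e. more heavily the higher its loss.

```python
import torch

def log_hypervolume_loss(losses: torch.Tensor, mu: float) -> torch.Tensor:
    """Negative log of the single-solution hypervolume prod_i (mu - l_i).

    Minimizing this is equivalent to maximizing the hypervolume with
    reference point `mu` in every per-sample loss dimension.
    """
    assert bool((losses < mu).all()), "reference point must exceed every loss"
    return -torch.log(mu - losses).sum()

# Toy usage: the gradient w.r.t. each per-sample loss is 1/(mu - l_i),
# so higher-loss samples receive larger weight; increasing mu flattens
# the weighting toward the uniform weights of the mean loss.
losses = torch.tensor([0.2, 0.5, 1.3], requires_grad=True)
log_hypervolume_loss(losses, mu=2.0).backward()
print(losses.grad)  # ~[0.556, 0.667, 1.429] = 1 / (2.0 - losses)
```

After normalization, the weights 1/(mu - l_i) / sum_j 1/(mu - l_j) tend to uniform as mu grows, recovering mean loss minimization, and concentrate on the worst sample as mu approaches max_i l_i, approaching max loss minimization. This is the interpolation the abstract attributes to the hyperparameter.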


