Self-Evolutionary Optimization for Pareto Front Learning

10/07/2021
by Simyung Chang, et al.

Multi-task learning (MTL), which aims to improve performance by learning multiple tasks simultaneously, inherently poses an optimization challenge because it has multiple objectives. Hence, multi-objective optimization (MOO) approaches have been proposed for multi-task problems. Recent MOO methods approximate the set of optimal solutions (the Pareto front) with a single unified model, an approach collectively referred to as Pareto front learning (PFL). In this paper, we show that PFL can be reformulated as another MOO problem with multiple objectives, each corresponding to a different preference weighting of the tasks. We leverage an evolutionary algorithm (EA) to propose a PFL method called self-evolutionary optimization (SEO), which directly maximizes the hypervolume. With SEO, a neural network learns to approximate the Pareto front conditioned on several hyper-parameters that strongly affect the hypervolume; a population of front approximations can then be generated simply by running inference with the network, so the network's hyper-parameters can be optimized by the EA. Building on SEO, we also introduce self-evolutionary Pareto networks (SEPNet), which enable a single unified model to approximate the entire Pareto front while maximizing the hypervolume. Extensive experimental results confirm that SEPNet finds a better Pareto front than current state-of-the-art methods while minimizing the increase in model size and training cost.
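The abstract combines two ingredients: a hypervolume indicator that scores a set of objective points against a reference point, and an evolutionary loop that mutates hyper-parameter vectors and keeps those whose induced front approximations score highest. The following minimal sketch illustrates both ideas in a toy 2-objective (minimization) setting; `hypervolume_2d`, `evolve`, the Gaussian mutation, and the toy evaluation function are illustrative assumptions, not the paper's actual implementation.

```python
import random

def hypervolume_2d(points, ref):
    """Area dominated by `points` (objectives to minimize) up to `ref`.

    Assumes every point dominates the reference point. Sorting by the
    first objective lets us sweep the front and sum rectangular slices;
    dominated points contribute nothing and are skipped.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < prev_f2:  # non-dominated point: add its rectangular slice
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def evolve(evaluate, init_pop, ref, generations=30, sigma=0.1, seed=0):
    """Toy (mu + lambda) EA over hyper-parameter vectors.

    `evaluate` maps a hyper-parameter vector to a list of objective
    points (here standing in for a front approximation produced by a
    single inference pass). Parents survive alongside mutated children,
    and selection keeps the vectors with the largest hypervolume.
    """
    rng = random.Random(seed)
    pop = [list(p) for p in init_pop]
    for _ in range(generations):
        children = [[g + rng.gauss(0.0, sigma) for g in p] for p in pop]
        pop = sorted(pop + children,
                     key=lambda p: hypervolume_2d(evaluate(p), ref),
                     reverse=True)[:len(init_pop)]
    return pop[0]

# Example: three mutually non-dominated points against reference (4, 4).
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # 6.0
```

Because parents are retained in the selection pool, the best hypervolume in the population never decreases from one generation to the next, which mirrors the elitist behavior typically wanted when the EA only observes cheap inference-time evaluations.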

Related research

- 02/06/2023: Bi-level Multi-objective Evolutionary Learning: A Case Study on Multi-task Graph Neural Topology Search
- 02/08/2011: Evolutionary multiobjective optimization of the multi-location transshipment problem
- 10/08/2020: Learning the Pareto Front with Hypernetworks
- 03/24/2021: Efficient Multi-Objective Optimization for Deep Learning
- 04/11/2022: Pareto Conditioned Networks
- 12/15/2013: An introduction to synchronous self-learning Pareto strategy
- 10/14/2022: Efficiently Controlling Multiple Risks with Pareto Testing
