Multi-Task Learning on Networks

12/07/2021
by Andrea Ponti et al.

The multi-task learning (MTL) paradigm can be traced back to an early paper by Caruana (1997), which argued that data from multiple tasks can be used to obtain better performance than learning each task independently. Solving an MTL problem with conflicting objectives requires modelling the trade-off among them, which is generally beyond what a simple linear combination can achieve. A theoretically principled and computationally effective strategy is to find solutions that are not dominated by any others, as addressed in Pareto analysis.

Multi-objective optimization problems arising in the multi-task learning context have specific features and require ad hoc methods. The analysis of these features and the proposal of a new computational approach are the focus of this work. Multi-objective evolutionary algorithms (MOEAs) can easily incorporate the concept of dominance, and therefore Pareto analysis, but their major drawback is low sample efficiency with respect to function evaluations. The key reason for this drawback is that most evolutionary approaches do not use a model to approximate the objective functions. Bayesian Optimization takes a radically different approach, based on a surrogate model such as a Gaussian Process.

In this work, the solutions in the Input Space are represented as probability distributions encapsulating the knowledge contained in the function evaluations. In this space of probability distributions, endowed with the metric given by the Wasserstein distance, a new algorithm, MOEA/WST, can be designed in which the model is built not directly on the objective function but in an intermediate Information Space where the objects from the Input Space are mapped into histograms. Computational results show that the sample efficiency and the quality of the Pareto set provided by MOEA/WST are significantly better than those of the standard MOEA.
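
To make the dominance concept concrete, here is a minimal Python sketch (not from the paper) of a Pareto dominance check and a naive non-dominated filter, assuming all objectives are to be minimized; the function names and the candidate points are illustrative only.

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if a Pareto-dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Filter a set of objective vectors down to the non-dominated ones."""
    keep = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            keep.append(i)
    return points[keep]

# Five candidate solutions evaluated on two conflicting objectives.
pts = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5], [4.0, 4.0]])
print(pareto_front(pts))  # keeps [1,4], [2,2], [3,1]
```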
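
As a rough illustration of the surrogate-model idea behind Bayesian Optimization, the following sketch fits a Gaussian Process to a handful of function evaluations with scikit-learn; the toy objective and the Matern kernel are assumptions for the example, not the paper's actual setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# A few expensive function evaluations (toy stand-in for a task objective).
X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.sin(3 * X).ravel()

# Fit a GP surrogate; its posterior mean and variance are what an
# acquisition function would use to decide where to evaluate next.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

X_new = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:.2f}  mean={m:+.3f}  std={s:.3f}")
```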
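
The Wasserstein metric between histograms can be illustrated in one dimension with SciPy's wasserstein_distance; the bin grid and histogram values below are invented for the example, and the paper's Information Space construction may rely on a different, possibly multi-dimensional, formulation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two normalized histograms over the same bin centers, standing in for
# the probability distributions MOEA/WST associates with solutions.
bins = np.arange(5)                       # bin centers 0..4
h1 = np.array([0.1, 0.4, 0.3, 0.1, 0.1])  # histogram of solution A
h2 = np.array([0.0, 0.1, 0.2, 0.4, 0.3])  # histogram of solution B

# 1-D Wasserstein (earth mover's) distance between the two histograms.
d = wasserstein_distance(bins, bins, u_weights=h1, v_weights=h2)
print(f"W1 distance: {d:.3f}")  # prints 1.200
```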

Related research:

Efficient Continuous Pareto Exploration in Multi-Task Learning (06/29/2020)
Tasks in multi-task learning often correlate, conflict, or even compete ...

Feature learning in feature-sample networks using multi-objective optimization (10/25/2017)
Data and knowledge representation are fundamental concepts in machine le...

Gamifying optimization: a Wasserstein distance-based analysis of human search (12/12/2021)
The main objective of this paper is to outline a theoretical framework t...

Multi-objective multi-generation Gaussian process optimizer for design optimization (06/29/2019)
We present a multi-objective optimization algorithm that uses Gaussian p...

Necessary and Sufficient Conditions for Surrogate Functions of Pareto Frontiers and Their Synthesis Using Gaussian Processes (05/19/2015)
This paper introduces the necessary and sufficient conditions that surro...

Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set (10/17/2021)
Many modern machine learning applications, such as multi-task learning, ...

Enhanced Innovized Repair Operator for Evolutionary Multi- and Many-objective Optimization (11/21/2020)
"Innovization" is a task of learning common relationships among some or ...
