Optimizing Evaluation Metrics for Multi-Task Learning via the Alternating Direction Method of Multipliers

10/12/2022
by Ge-Yang Ke, et al.

Multi-task learning (MTL) aims to improve the generalization performance of multiple tasks by exploiting the shared factors among them. Various metrics (e.g., F-score, Area Under the ROC Curve) are used to evaluate the performance of MTL methods. Most existing MTL methods try to minimize either the misclassification error for classification or the mean squared error for regression. In this paper, we propose a method to directly optimize the evaluation metrics for a large family of MTL problems. The formulation of MTL that directly optimizes evaluation metrics combines two parts: (1) a regularizer defined on the weight matrix over all tasks, which captures the relatedness of these tasks; and (2) a sum of multiple structured hinge losses, each a surrogate for some evaluation metric on one task. This formulation is challenging to optimize because both parts are non-smooth. To tackle this issue, we propose a novel optimization procedure based on the alternating direction method of multipliers (ADMM), in which we decompose the whole optimization problem into a sub-problem corresponding to the regularizer and another sub-problem corresponding to the structured hinge losses. For a large family of MTL problems, the first sub-problem has a closed-form solution. To solve the second sub-problem, we propose an efficient primal-dual algorithm based on coordinate ascent. Extensive evaluation results demonstrate that, across a large family of MTL problems, the proposed method of directly optimizing evaluation metrics achieves superior performance over the corresponding baseline methods.
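To make the splitting concrete, the following is a minimal Python sketch of the ADMM decomposition described above, under stated assumptions: the regularizer is taken to be the trace (nuclear) norm, whose proximal step is closed-form singular-value thresholding, and the per-task loss sub-problem is approximated with a few subgradient steps on a plain hinge loss. It does not reproduce the paper's structured hinge losses or its primal-dual coordinate-ascent solver; all function and parameter names are illustrative.

```python
# Illustrative ADMM sketch for a regularized multi-task objective:
#     min_W  Omega(W) + sum_t L_t(w_t)
# split as  min_{W,Z}  Omega(Z) + sum_t L_t(w_t)  subject to  W = Z.
# Assumptions: Omega = trace norm (prox = singular-value thresholding);
# the loss sub-problem uses a plain hinge loss with subgradient steps,
# standing in for the paper's structured hinge losses.

import numpy as np

def prox_trace_norm(A, tau):
    """Singular-value thresholding: closed-form prox of tau * ||A||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def hinge_subgradient(w, X, y, C):
    """Subgradient of C * sum_i max(0, 1 - y_i x_i^T w)."""
    margins = y * (X @ w)
    active = margins < 1.0
    return -C * (X[active] * y[active, None]).sum(axis=0)

def admm_mtl(tasks, d, lam=1.0, C=1.0, rho=1.0, n_iter=50, inner=20, lr=1e-2):
    """tasks: list of (X_t, y_t) with y_t in {-1, +1}; returns a d x T weight matrix."""
    T = len(tasks)
    W = np.zeros((d, T)); Z = np.zeros((d, T)); U = np.zeros((d, T))
    for _ in range(n_iter):
        # W-update: per-task loss sub-problem plus the quadratic ADMM coupling term
        for t, (X, y) in enumerate(tasks):
            w = W[:, t].copy()
            for _ in range(inner):
                g = hinge_subgradient(w, X, y, C) + rho * (w - Z[:, t] + U[:, t])
                w -= lr * g
            W[:, t] = w
        # Z-update: closed-form proximal step of the regularizer
        Z = prox_trace_norm(W + U, lam / rho)
        # dual update
        U += W - Z
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T, n = 20, 3, 100
    shared = rng.normal(size=d)
    tasks = []
    for _ in range(T):
        X = rng.normal(size=(n, d))
        y = np.sign(X @ (shared + 0.1 * rng.normal(size=d)))
        tasks.append((X, y))
    W = admm_mtl(tasks, d)
    print("learned weight matrix shape:", W.shape)
```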

research · 02/26/2022 · Relational Surrogate Loss Learning
Evaluation metrics in machine learning are often hardly taken as loss fu...

research · 02/14/2017 · Efficient Multi-task Feature and Relationship Learning
In this paper we propose a multi-convex framework for multi-task learnin...

research · 07/14/2020 · Follow the bisector: a simple method for multi-objective optimization
This study presents a novel Equiangular Direction Method (EDM) to solve ...

research · 06/13/2016 · Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity
The use of convex regularizers allows for easy optimization, though they...

research · 06/21/2023 · STAN: Stage-Adaptive Network for Multi-Task Recommendation by Learning User Lifecycle-Based Representation
Recommendation systems play a vital role in many online platforms, with ...

research · 10/09/2020 · Measuring What Counts: The case of Rumour Stance Classification
Stance classification can be a powerful tool for understanding whether a...

research · 12/28/2012 · Alternating Directions Dual Decomposition
We propose AD3, a new algorithm for approximate maximum a posteriori (MA...