Multi-task problems are not multi-objective
Multi-objective optimization (MOO) aims to find a set of optimal configurations for a given set of objectives. A recent line of work applies MOO methods to the typical Machine Learning (ML) setting, which becomes multi-objective when a model has to optimize more than one objective, for instance in fair machine learning. These works also use Multi-Task Learning (MTL) problems to benchmark MOO algorithms, treating each task as an independent objective. In this work we show that MTL problems do not exhibit the characteristics of MOO problems. In particular, MTL losses are not competing given a sufficiently expressive single model. As a consequence, a single model can perform just as well as a set of independent models each optimizing one objective, rendering MOO inapplicable. We provide evidence with extensive experiments on the widely used Multi-Fashion-MNIST datasets. Our results call for new benchmarks to evaluate MOO algorithms for ML. Our code is available at: https://github.com/ruchtem/moo-mtl.
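To make the central claim concrete, the following is a minimal sketch (assuming PyTorch; the model, names, and input shape are illustrative, not the paper's implementation) of a single shared-backbone network with one classification head per task, trained on the plain sum of the per-task losses. The abstract's argument is that with a sufficiently expressive backbone this joint training suffices, so no trade-off between the task losses needs to be navigated:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a shared backbone with one head per task. The input
# shape (1x36x36) and two 10-class heads mimic Multi-Fashion-MNIST (two
# overlaid Fashion-MNIST items per image, one label each); all names are ours.

class MultiHeadNet(nn.Module):
    def __init__(self, num_tasks=2, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 9 * 9, 128), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            nn.Linear(128, num_classes) for _ in range(num_tasks)
        )

    def forward(self, x):
        z = self.backbone(x)  # shared representation for all tasks
        return [head(z) for head in self.heads]

model = MultiHeadNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Placeholder batch standing in for a Multi-Fashion-MNIST loader:
# 8 images of size 36x36, one class label per task.
x = torch.randn(8, 1, 36, 36)
ys = [torch.randint(0, 10, (8,)) for _ in range(2)]

logits = model(x)
# Plain sum of task losses: no Pareto front, no trade-off weighting.
loss = sum(criterion(out, y) for out, y in zip(logits, ys))
opt.zero_grad()
loss.backward()
opt.step()
```

If the tasks were genuinely competing, decreasing one term of this summed loss would force another to increase; the paper's experiments indicate this does not happen on these benchmarks once the backbone is expressive enough.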