Composite Learning for Robust and Effective Dense Predictions

10/13/2022
by Menelaos Kanakis, et al.

Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task. However, current practice requires additional labeling effort for the auxiliary task, while not guaranteeing better model performance. In this paper, we find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need to label auxiliary tasks. We refer to this joint training as Composite Learning (CompL). Experiments with CompL on monocular depth estimation, semantic segmentation, and boundary detection show consistent performance improvements on both fully and partially labeled datasets. Further analysis on depth estimation reveals that joint training with self-supervision outperforms most labeled auxiliary tasks. We also find that CompL can improve model robustness when models are evaluated in new domains. These results demonstrate the benefits of self-supervision as an auxiliary task, and establish the design of novel task-specific self-supervised methods as a new axis of investigation for future multi-task learning research.
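The core recipe the abstract describes is joint optimization: the target dense prediction loss and a self-supervised auxiliary loss are minimized together over a shared backbone, so the auxiliary signal requires no extra labels. The sketch below illustrates one plausible form of such a training step in PyTorch. All names here (CompLModel, compl_step, lambda_ssl) and the choice of self-supervised head are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of CompL-style joint training: a shared backbone
# feeds a target head (e.g., depth regression) and a self-supervised
# head (e.g., rotation prediction). Not the authors' code.

class CompLModel(nn.Module):
    def __init__(self, backbone, target_head, ssl_head):
        super().__init__()
        self.backbone = backbone        # shared feature extractor
        self.target_head = target_head  # supervised dense prediction head
        self.ssl_head = ssl_head        # label-free auxiliary head

    def forward(self, x):
        feats = self.backbone(x)
        return self.target_head(feats), self.ssl_head(feats)

def compl_step(model, optimizer, images, target_labels, ssl_labels,
               target_loss_fn, ssl_loss_fn, lambda_ssl=1.0):
    """One joint step: target loss plus weighted self-supervised loss.

    ssl_labels are generated from the images themselves (e.g., the
    applied rotation), so no human annotation is needed for them.
    """
    optimizer.zero_grad()
    target_pred, ssl_pred = model(images)
    loss = (target_loss_fn(target_pred, target_labels)
            + lambda_ssl * ssl_loss_fn(ssl_pred, ssl_labels))
    loss.backward()   # gradients from both tasks flow into the backbone
    optimizer.step()
    return loss.item()
```

The weighting term (lambda_ssl above) balancing the two losses is a common knob in multi-task setups; its value and the best-performing self-supervised task would follow from the paper's experiments rather than this sketch.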


