Multi-Task Learning by Deep Collaboration and Application in Facial Landmark Detection

10/28/2017
by Ludovic Trottier, et al.

Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains. While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant. Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning (MTL) problem. The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task. While recent results in the deep learning (DL) community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem. In this paper, we propose the Deep Collaboration Network (DCNet), a novel approach for connecting task-specific CNNs in an MTL framework. We define connectivity in terms of two distinct non-linear transformations. One aggregates task-specific features into global features, while the other merges the global features back into each task-specific network. Based on the observation that task relevance depends on depth, our transformations use skip connections, as suggested by residual networks, to more easily deactivate unrelated task-dependent features. To validate our approach, we employ facial landmark detection (FLD) datasets, as they are readily amenable to MTL given the number of tasks they include. Experimental results show that we can achieve up to 24.31% relative improvement over state-of-the-art MTL approaches. We finally perform an ablation study showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related.
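The connectivity pattern described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature dimensions, the use of single linear maps with ReLU as the two "non-linear transformations", and all variable names are assumptions made for illustration. The sketch shows the two-step scheme at one depth: task-specific features are aggregated into a global feature, and that global feature is merged back into each task network through a residual (skip) connection, so the identity path lets a task ignore unrelated shared features.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical sizes: T task-specific networks, each producing a
# d-dimensional feature vector at this depth.
T, d = 3, 8

# Aggregation transformation: concatenated task features -> global feature.
W_agg = rng.normal(scale=0.1, size=(T * d, d))

# Per-task merge transformation: concat(task feature, global feature)
# -> residual update for that task.
W_task = [rng.normal(scale=0.1, size=(2 * d, d)) for _ in range(T)]

def collaborative_block(features):
    """features: list of T arrays of shape (d,); returns updated features."""
    # Step 1: aggregate all task-specific features into one global feature.
    z = relu(np.concatenate(features) @ W_agg)
    out = []
    for t, h in enumerate(features):
        # Step 2: merge the global feature back into task t's stream.
        update = relu(np.concatenate([h, z]) @ W_task[t])
        # Skip connection: if the update is driven to zero, the task
        # keeps its original features and ignores the shared signal.
        out.append(h + update)
    return out

feats = [rng.normal(size=d) for _ in range(T)]
new_feats = collaborative_block(feats)
```

In a real network the two transformations would act on convolutional feature maps and be trained jointly with the task networks; the sketch only captures the aggregate-then-merge wiring with its identity shortcut.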


Related research:

Sluice networks: Learning what to share between loosely related tasks (05/23/2017)
Stochastic Filter Groups for Multi-Task CNNs: Learning Specialist and Generalist Convolution Kernels (08/26/2019)
DynaShare: Task and Instance Conditioned Parameter Sharing for Multi-Task Learning (05/26/2023)
Multi-task Bias-Variance Trade-off Through Functional Constraints (10/27/2022)
Emerging Relation Network and Task Embedding for Multi-Task Regression Problems (04/29/2020)
Guiding CNNs towards Relevant Concepts by Multi-task and Adversarial Learning (08/04/2020)
Meta-Learning Symmetries by Reparameterization (07/06/2020)
