
Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning

by Clemens Rosenbaum, et al.

Multi-task learning (MTL) with neural networks leverages commonalities across tasks to improve performance, but often suffers from task interference, which reduces transfer. To address this issue, we introduce the routing network paradigm, a novel neural network unit and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network - for example, a fully connected or a convolutional layer. Given an input, the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when the router decides to stop or a fixed recursion depth is reached. In this way, the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model on multi-task settings of the MNIST, mini-ImageNet, and CIFAR-100 datasets. Our experiments demonstrate significant improvements in accuracy, with sharper convergence, over challenging joint-training baselines on these tasks.
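The routing loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class and parameter names are hypothetical, the router here picks blocks greedily from a linear scoring function rather than via the trained MARL policy, and each function block is a simple tanh layer standing in for an arbitrary neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

class RoutingNetwork:
    """Sketch of a routing network: a router repeatedly selects one
    function block to apply to the current representation, stopping
    when it chooses the STOP action or a fixed recursion depth is hit."""

    def __init__(self, dim, n_blocks, max_depth):
        # Each function block here is a random linear map + tanh;
        # in the paper, a block may be any neural network layer.
        self.blocks = [rng.standard_normal((dim, dim)) / np.sqrt(dim)
                       for _ in range(n_blocks)]
        # Router parameters: one score vector per action; the extra
        # action (index n_blocks) is the STOP decision.
        self.router_w = rng.standard_normal((n_blocks + 1, dim)) / np.sqrt(dim)
        self.max_depth = max_depth

    def route(self, x):
        trace = []  # sequence of routing decisions (the router-agent's actions)
        for _ in range(self.max_depth):
            scores = self.router_w @ x       # one score per action
            action = int(np.argmax(scores))  # greedy stand-in for the RL policy
            if action == len(self.blocks):   # STOP action chosen
                break
            trace.append(action)
            x = np.tanh(self.blocks[action] @ x)  # apply the chosen block
        return x, trace

net = RoutingNetwork(dim=8, n_blocks=3, max_depth=4)
out, trace = net.route(rng.standard_normal(8))
```

Because the router re-decides at every step, different inputs traverse different block compositions, which is what lets the network allocate capacity per task; training the routing decisions with reinforcement learning (rather than argmax over fixed scores, as here) is the paper's contribution.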




Related papers:

- Gumbel-Matrix Routing for Flexible Multi-task Learning
- Measuring and Harnessing Transference in Multi-Task Learning
- Many Task Learning with Task Routing
- SkipNet: Learning Dynamic Routing in Convolutional Networks
- Multi-Task Reinforcement Learning with Soft Modularization
- Multi-Head Adapter Routing for Data-Efficient Fine-Tuning
- DRAGNN: A Transition-based Framework for Dynamically Connected Neural Networks