AdaMTL: Adaptive Input-dependent Inference for Efficient Multi-Task Learning

04/17/2023
by Marina Neseem, et al.

Modern augmented reality applications require performing multiple tasks on each input frame simultaneously. Multi-task learning (MTL) represents an effective approach where multiple tasks share an encoder to extract representative features from the input frame, followed by task-specific decoders to generate predictions for each task. Generally, the shared encoder in MTL models needs a large representational capacity in order to generalize well to various tasks and input data, which has a negative effect on the inference latency. In this paper, we argue that due to the large variations in the complexity of the input frames, some computations might be unnecessary for the output. Therefore, we introduce AdaMTL, an adaptive framework that learns task-aware inference policies for MTL models in an input-dependent manner. Specifically, we attach a task-aware lightweight policy network to the shared encoder and co-train it alongside the MTL model to recognize unnecessary computations. During runtime, our task-aware policy network decides which parts of the model to activate depending on the input frame and the target computational complexity. Extensive experiments on the PASCAL dataset demonstrate that AdaMTL reduces the computational complexity by 43% while improving the accuracy by 1.32%. Combined with SOTA MTL methodologies, AdaMTL boosts the accuracy by 7.8% while improving the efficiency by 3.1X. When deployed on Vuzix M4000 smart glasses, AdaMTL reduces the inference latency and the energy consumption by up to 21.8% and 37.5%, respectively, compared to the static MTL model. Our code is publicly available at https://github.com/scale-lab/AdaMTL.git.
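To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a lightweight policy network reads a pooled embedding of the input frame and emits per-block keep/skip decisions for the shared encoder, trained end-to-end via a Gumbel-softmax relaxation together with a budget term that steers the average executed compute toward a target. The names here (PolicyNetwork, AdaptiveEncoder, budget_loss) and the block-level gating granularity are illustrative assumptions, not the authors' implementation; the actual task-aware policy and training recipe are in the linked repository.

```python
# Illustrative sketch only (PyTorch): input-dependent, block-level gating of a
# shared encoder. Class and function names are hypothetical, not AdaMTL's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNetwork(nn.Module):
    """Lightweight head mapping a pooled input embedding to (skip, keep)
    logits, one pair per encoder block."""
    def __init__(self, embed_dim: int, num_blocks: int):
        super().__init__()
        self.num_blocks = num_blocks
        self.head = nn.Sequential(
            nn.Linear(embed_dim, embed_dim // 4),
            nn.GELU(),
            nn.Linear(embed_dim // 4, num_blocks * 2),
        )

    def forward(self, pooled: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.head(pooled).view(-1, self.num_blocks, 2)
        # Gumbel-softmax: differentiable, (near-)binary decisions in training.
        decisions = F.gumbel_softmax(logits, tau=tau, hard=True)
        return decisions[..., 1]  # (batch, num_blocks); 1.0 = execute block

class AdaptiveEncoder(nn.Module):
    """Shared encoder whose blocks are executed or skipped per input frame."""
    def __init__(self, blocks: nn.ModuleList, embed_dim: int):
        super().__init__()
        self.blocks = blocks
        self.policy = PolicyNetwork(embed_dim, len(blocks))

    def forward(self, tokens: torch.Tensor):
        gates = self.policy(tokens.mean(dim=1))  # pool tokens -> per-block gates
        for i, block in enumerate(self.blocks):
            g = gates[:, i].view(-1, 1, 1)
            # A skipped block acts as the identity; at deployment, a hard zero
            # gate lets the block be bypassed entirely to actually save compute.
            tokens = g * block(tokens) + (1.0 - g) * tokens
        return tokens, gates

def budget_loss(gates: torch.Tensor, target_ratio: float) -> torch.Tensor:
    """Penalize deviation of the executed-block ratio from the compute budget."""
    return (gates.mean() - target_ratio) ** 2
```

During co-training, the total objective would combine the task losses with a weighted budget term so the policy learns to hit the target computational complexity, e.g. `loss = sum(task_losses) + lam * budget_loss(gates, 0.6)`. Note that AdaMTL's policy is additionally task-aware, so in the full method the activation decisions can differ per task rather than being shared across tasks as in this simplified sketch.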


research · 07/28/2023 · TaskExpert: Dynamically Assembling Multi-Task Representations with Memorial Mixture-of-Experts
Learning discriminative task-specific features simultaneously for multip...

research · 02/18/2020 · Multi-Task Learning from Videos via Efficient Inter-Frame Attention
Prior work in multi-task learning has mainly focused on predictions on a...

research · 08/30/2023 · Latency-aware Unified Dynamic Networks for Efficient Image Recognition
Dynamic computation has emerged as a promising avenue to enhance the inf...

research · 04/11/2022 · MIME: Adapting a Single Neural Network for Multi-task Inference with Memory-efficient Dynamic Pruning
Recent years have seen a paradigm shift towards multi-task learning. Thi...

research · 10/26/2022 · M^3ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design
Multi-task learning (MTL) encapsulates multiple learned tasks in a singl...

research · 04/03/2020 · Context-Aware Multi-Task Learning for Traffic Scene Recognition in Autonomous Vehicles
Traffic scene recognition, which requires various visual classification ...

research · 01/18/2022 · Motion Inbetweening via Deep Δ-Interpolator
We show that the task of synthesizing missing middle frames, commonly kn...
