Multi-Domain Learning by Meta-Learning: Taking Optimal Steps in Multi-Domain Loss Landscapes by Inner-Loop Learning

02/25/2021
by   Anthony Sicilia, et al.

We consider a model-agnostic solution to the problem of Multi-Domain Learning (MDL) for multi-modal applications. Many existing MDL techniques are model-dependent solutions that explicitly require nontrivial architectural changes to construct domain-specific modules. Thus, properly applying these MDL techniques to new problems with well-established models, e.g., U-Net for semantic segmentation, may demand substantial low-level implementation effort. In this paper, given emerging multi-modal data (e.g., various structural neuroimaging modalities), we aim to enable MDL purely algorithmically, so that widely used neural networks can trivially achieve MDL in a model-independent manner. To this end, we consider a weighted loss function and extend it to an effective procedure by employing techniques from the recently active area of learning-to-learn (meta-learning). Specifically, we take inner-loop gradient steps to dynamically estimate posterior distributions over the hyperparameters of our loss function. Thus, our method is model-agnostic, requiring no additional model parameters and no network architecture changes; only a few efficient algorithmic modifications are needed to improve performance in MDL. We demonstrate our solution on a well-suited problem in medical imaging: the automatic segmentation of white matter hyperintensity (WMH). We consider two neuroimaging modalities (T1-MR and FLAIR) that carry complementary information for this task.
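The core idea can be sketched in a few lines. The following toy example (an assumption of this summary, not the authors' released code) uses a scalar linear model and two synthetic "domains": the model is trained on a softmax-weighted sum of per-domain losses, and an inner-loop gradient step updates the weight logits by differentiating the total loss at a one-step lookahead of the model parameters, in the spirit of learning-to-learn. All function names (`make_domain`, `train`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(slope, n, noise):
    """A toy 'domain': noisy samples of y = slope * x."""
    x = rng.normal(size=n)
    return x, slope * x + noise * rng.normal(size=n)

def loss_and_grad(theta, domain):
    """Per-domain MSE and its gradient w.r.t. the scalar parameter theta."""
    x, y = domain
    r = theta * x - y
    return np.mean(r ** 2), np.mean(2 * r * x)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def train(domains, steps=300, lr=0.05, meta_lr=0.5):
    theta = 0.0                       # model parameter
    alpha = np.zeros(len(domains))    # logits of the domain weights
    for _ in range(steps):
        w = softmax(alpha)
        grads = np.array([loss_and_grad(theta, d)[1] for d in domains])

        # Inner loop: one-step lookahead of the model under current weights,
        # then a gradient step on the weight logits to reduce the total
        # multi-domain loss at that lookahead point.
        theta_look = theta - lr * (w @ grads)
        g_look = sum(loss_and_grad(theta_look, d)[1] for d in domains)
        dM_dw = -lr * g_look * grads                       # chain rule through the lookahead
        dM_dalpha = (np.diag(w) - np.outer(w, w)) @ dM_dw  # softmax Jacobian
        alpha -= meta_lr * dM_dalpha

        # Outer step: update the model on the re-weighted loss.
        w = softmax(alpha)
        grads = np.array([loss_and_grad(theta, d)[1] for d in domains])
        theta -= lr * (w @ grads)
    return theta, softmax(alpha)

domains = [make_domain(2.0, 200, 0.1), make_domain(2.0, 200, 1.0)]
theta, weights = train(domains)
```

No extra model parameters are introduced: only the loss weights are adapted, which is what makes the scheme model-agnostic. In the paper's full method the weights are treated as hyperparameters with an estimated posterior; this sketch uses a plain point estimate for brevity.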

