On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning

02/20/2021
by   Ren Wang, et al.

Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques for few-shot learning. It learns a meta-initialization of model parameters (the meta-model) that can rapidly adapt to new tasks using a small amount of labeled training data. Despite the generalization power of the meta-model, it remains elusive how adversarial robustness can be maintained by MAML in few-shot learning. In addition to generalization, robustness is also desired for a meta-model to defend against adversarial examples (attacks). Toward promoting adversarial robustness in MAML, we first study WHEN a robustness-promoting regularization should be incorporated, given that MAML adopts a bi-level (fine-tuning vs. meta-update) learning procedure. We show that robustifying the meta-update stage is sufficient for robustness to carry over to the task-specific fine-tuning stage, even if the latter uses a standard training protocol. We further justify the acquired robustness adaptation by examining the interpretability of neurons' activation maps. We then investigate HOW robustness regularization can be efficiently designed in MAML. We propose a general yet easily optimized robustness-regularized meta-learning framework that allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally light fine-tuning. In particular, we show for the first time that an auxiliary contrastive learning task can enhance the adversarial robustness of MAML. Finally, extensive experiments demonstrate the effectiveness of the proposed methods in robust few-shot learning.
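To make the bi-level idea concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a robustness-regularized MAML loop: the inner loop performs standard clean fine-tuning on the support set, while an adversarial regularizer, built from a fast one-step FGSM-style attack on the query set, is applied only at the meta-update stage. The tiny MLP, the sample_task callable, and all hyperparameters (eps, lam, learning rates) are illustrative assumptions rather than the paper's exact configuration.

# Minimal sketch of robustness-regularized MAML: clean inner-loop fine-tuning,
# adversarial regularization only in the outer meta-update. The MLP, task
# sampler, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def init_meta_params(d_in=784, d_hid=64, n_cls=5):
    # Meta-initialization theta (the "meta-model"): a two-layer MLP.
    shapes = [(d_hid, d_in), (d_hid,), (n_cls, d_hid), (n_cls,)]
    return [(torch.randn(*s) * 0.01).requires_grad_() for s in shapes]

def forward(params, x):
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

def fgsm(params, x, y, eps=0.1):
    # Fast one-step attack used only to build the meta-update regularizer.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(forward(params, x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def inner_adapt(theta, x_s, y_s, lr=0.01, steps=1):
    # Standard (non-robust) task-specific fine-tuning on the support set.
    params = [p.clone() for p in theta]
    for _ in range(steps):
        loss = F.cross_entropy(forward(params, x_s), y_s)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

def meta_train(theta, sample_task, meta_lr=1e-3, lam=1.0, n_iters=1000):
    # sample_task() is assumed to return a support/query split for one task.
    opt = torch.optim.Adam(theta, lr=meta_lr)
    for _ in range(n_iters):
        x_s, y_s, x_q, y_q = sample_task()
        adapted = inner_adapt(theta, x_s, y_s)
        clean_loss = F.cross_entropy(forward(adapted, x_q), y_q)
        # Craft adversarial query examples at the adapted weights; no gradient
        # is propagated through the attack generation itself.
        x_adv = fgsm([p.detach() for p in adapted], x_q, y_q)
        robust_loss = F.cross_entropy(forward(adapted, x_adv), y_q)
        opt.zero_grad()
        (clean_loss + lam * robust_loss).backward()  # robustified meta-update
        opt.step()
    return theta

In the paper's full framework, further components described in the abstract, such as unlabeled data augmentation and the auxiliary contrastive learning task, would enter the meta-objective alongside the lam * robust_loss term; the sketch above only illustrates where the robustness regularizer sits in the bi-level procedure.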

Related research

Learned Fine-Tuner for Incongruous Few-Shot Learning (09/29/2020)
Model-agnostic meta-learning (MAML) effectively meta-learns an initializ...

Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning (11/28/2022)
Robust Model-Agnostic Meta-Learning (MAML) is usually adopted to train a...

Robust Few-Shot Learning with Adversarially Queried Meta-Learners (10/02/2019)
Previous work on adversarially robust neural networks requires large tra...

Few-shot Transferable Robust Representation Learning via Bilevel Attacks (10/19/2022)
Existing adversarial learning methods for enhancing the robustness of de...

Model-Agnostic Graph Regularization for Few-Shot Learning (02/14/2021)
In many domains, relationships between categories are encoded in the kno...

Dual Meta-Learning with Longitudinally Generalized Regularization for One-Shot Brain Tissue Segmentation Across the Human Lifespan (08/13/2023)
Brain tissue segmentation is essential for neuroscience and clinical stu...

Support-Target Protocol for Meta-Learning (04/08/2021)
The support/query (S/Q) training protocol is widely used in meta-learnin...
