Stochastic Adversarial Gradient Embedding for Active Domain Adaptation

12/03/2020
by   Victor Bouvier, et al.

Unsupervised Domain Adaptation (UDA) aims to bridge the gap between a source domain, where labelled data are available, and a target domain represented only by unlabelled data. While domain-invariant representations have dramatically improved the adaptability of models, guaranteeing their transferability remains a challenging problem. This paper addresses the problem by using active learning to annotate a small budget of target data. Although this setup, called Active Domain Adaptation (ADA), deviates from the standard UDA setup, a wide range of practical applications face this situation. To this end, we introduce Stochastic Adversarial Gradient Embedding (SAGE), a framework that makes a threefold contribution to ADA. First, we select for annotation the target samples that are most likely to improve the transferability of the representations, by measuring the variation, before and after annotation, of the gradient of the transferability loss. Second, we increase sampling diversity by promoting distinct gradient directions. Third, we introduce a novel training procedure for actively incorporating target samples when learning invariant representations. SAGE rests on solid theoretical ground and is validated on various UDA benchmarks against several baselines. Our empirical investigation demonstrates that SAGE combines the best of uncertainty-based and diversity-based sampling and substantially improves the transferability of representations.
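
The abstract gives no implementation details, but the selection step it describes (gradient-based embeddings of target samples, with diversity encouraged among gradient directions) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' method: a simple domain-adversarial term stands in for the transferability loss, pseudo-labels stand in for the before/after-annotation comparison, and k-means++-style seeding promotes diverse gradient directions. Names such as `gradient_embeddings`, `kmeanspp_select`, and `domain_critic` are hypothetical.

```python
# Hypothetical sketch of gradient-embedding-based sample selection for
# active domain adaptation. SAGE's exact transferability loss and
# stochastic embedding are not specified in the abstract above.
import torch
import torch.nn.functional as F


def gradient_embeddings(features, classifier, domain_critic):
    """Embed each unlabelled target sample by the gradient of a surrogate
    transferability loss, taken here w.r.t. the sample's feature vector
    (an assumption; the paper's choice of variable may differ)."""
    embeddings = []
    for f in features:
        x = f.detach().clone().unsqueeze(0).requires_grad_(True)
        logits = classifier(x)
        pseudo_label = logits.argmax(dim=1)  # stand-in for the annotation
        # Surrogate loss: pseudo-label classification + domain confusion.
        loss = F.cross_entropy(logits, pseudo_label) \
             + F.binary_cross_entropy_with_logits(
                   domain_critic(x), torch.zeros(1, 1))
        grad = torch.autograd.grad(loss, x)[0]
        embeddings.append(grad.squeeze(0).detach())
    return torch.stack(embeddings)


def kmeanspp_select(embeddings, budget):
    """Pick a diverse subset of gradient embeddings via k-means++ seeding,
    so that selected samples point in distinct gradient directions."""
    chosen = [torch.randint(len(embeddings), (1,)).item()]
    dists = torch.cdist(embeddings, embeddings[chosen]).min(dim=1).values
    while len(chosen) < budget:
        probs = dists ** 2 / (dists ** 2).sum()
        idx = torch.multinomial(probs, 1).item()
        chosen.append(idx)
        dists = torch.minimum(
            dists, torch.cdist(embeddings, embeddings[[idx]]).squeeze(1))
    return chosen


if __name__ == "__main__":
    torch.manual_seed(0)
    classifier = torch.nn.Linear(64, 10)    # task head on 64-d features
    domain_critic = torch.nn.Linear(64, 1)  # adversarial domain discriminator
    target_feats = torch.randn(200, 64)     # unlabelled target features
    emb = gradient_embeddings(target_feats, classifier, domain_critic)
    print("indices selected for annotation:", kmeanspp_select(emb, budget=10))
```

In this sketch, uncertainty enters through the magnitude of the surrogate-loss gradient and diversity through the k-means++ seeding over gradient directions, which is one common way (e.g. in BADGE-style methods) to combine the two criteria the abstract refers to.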

