Multi-step domain adaptation by adversarial attack to ℋΔℋ-divergence

07/18/2022
by   Arip Asadulaev, et al.

Adversarial examples are transferable between different models. In our paper, we propose to exploit this property for multi-step domain adaptation. In the unsupervised domain adaptation setting, we demonstrate that replacing the source domain with adversarial examples crafted with respect to the ℋΔℋ-divergence can improve the accuracy of a source classifier on the target domain. Our method can be combined with most domain adaptation techniques. We conducted a range of experiments and achieved improvements in accuracy on the Digits and Office-Home datasets.
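To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a linear domain discriminator stands in as the usual empirical proxy for the ℋΔℋ-divergence term, and an FGSM-style adversarial step perturbs source samples so the discriminator rates them as more target-like. Those perturbed samples would then replace the source domain when training the classifier. All names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy source and target domains: same feature space, shifted distributions.
Xs = rng.normal(0.0, 1.0, size=(200, 2))   # source samples
Xt = rng.normal(2.0, 1.0, size=(200, 2))   # target samples (shifted)

# 1) Train a linear domain discriminator (source = 0, target = 1).
#    Its accuracy is a standard empirical proxy for the divergence
#    between the two domains.
X = np.vstack([Xs, Xt])
d = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    g = p - d                               # gradient of the logistic loss
    w -= 0.1 * (X.T @ g) / len(X)
    b -= 0.1 * g.mean()

# 2) Adversarial step: perturb each source point in the direction that
#    raises the discriminator's "target" probability (FGSM-style move
#    toward the target domain). For a linear model the input gradient
#    of the logit is simply w.
eps = 0.5
Xs_adv = Xs + eps * np.sign(w)[None, :]

# The perturbed source should now look more "target-like" to the
# discriminator, i.e. the domains are adversarially brought closer.
before = sigmoid(Xs @ w + b).mean()
after = sigmoid(Xs_adv @ w + b).mean()
print(f"mean P(target) before: {before:.3f}, after: {after:.3f}")
```

In a multi-step variant, this perturb-and-retrain cycle could be repeated, each round moving the surrogate source further toward the target domain.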


