Domain Adversarial Training: A Game Perspective

02/10/2022
by   David Acuna, et al.

The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game-theoretical perspective. Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (e.g., Runge-Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement. The proposed optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework.
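The core idea, replacing the optimizer's Euler-style gradient step with a higher-order ODE solver, can be illustrated on a toy two-player game. The sketch below (my own minimal example, not the paper's implementation; all function names are hypothetical) compares simultaneous gradient descent-ascent with a classic Runge-Kutta 4 step on the bilinear game f(x, y) = x·y, whose unique Nash equilibrium is (0, 0). Plain gradient descent-ascent is known to spiral away from the equilibrium on this game, while the RK4 step tracks the continuous-time game flow and stays near its orbit, which is the stability phenomenon the abstract describes.

```python
import math

def game_field(x, y):
    # Continuous-time flow of min_x max_y f(x, y) = x * y:
    # x follows -df/dx = -y, y follows +df/dy = x.
    return -y, x

def euler_step(x, y, lr):
    # One step of simultaneous gradient descent-ascent (explicit Euler).
    dx, dy = game_field(x, y)
    return x + lr * dx, y + lr * dy

def rk4_step(x, y, lr):
    # One classic fourth-order Runge-Kutta step on the same game flow.
    k1x, k1y = game_field(x, y)
    k2x, k2y = game_field(x + 0.5 * lr * k1x, y + 0.5 * lr * k1y)
    k3x, k3y = game_field(x + 0.5 * lr * k2x, y + 0.5 * lr * k2y)
    k4x, k4y = game_field(x + lr * k3x, y + lr * k3y)
    return (x + lr / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + lr / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

xe, ye = xr, yr = 1.0, 1.0
for _ in range(100):
    xe, ye = euler_step(xe, ye, 0.1)
    xr, yr = rk4_step(xr, yr, 0.1)

dist_euler = math.hypot(xe, ye)  # distance to the Nash equilibrium (0, 0)
dist_rk4 = math.hypot(xr, yr)
```

After 100 steps the Euler iterate has drifted well away from the equilibrium, while the RK4 iterate remains at roughly its initial distance; in the full method this same substitution is applied to the domain-adversarial minimax objective.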


research
05/17/2021

An SDE Framework for Adversarial Training, with Convergence and Robustness Analysis

Adversarial training has gained great popularity as one of the most effe...
research
06/16/2022

A Closer Look at Smoothness in Domain Adversarial Training

Domain adversarial training has been ubiquitous for achieving invariant ...
research
10/13/2020

Toward Few-step Adversarial Training from a Frequency Perspective

We investigate adversarial-sample generation methods from a frequency do...
research
05/16/2018

What's in a Domain? Learning Domain-Robust Text Representations using Adversarial Training

Most real-world language problems require learning from heterogeneous cor...
research
06/27/2023

MAT: Mixed-Strategy Game of Adversarial Training in Fine-tuning

Fine-tuning large-scale pre-trained language models has been demonstrate...
research
02/02/2019

Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal

Minmax optimization, especially in its general nonconvex-nonconcave form...
research
03/30/2020

Towards Stable and Comprehensive Domain Alignment: Max-Margin Domain-Adversarial Training

Domain adaptation tackles the problem of transferring knowledge from a l...
