Improving Adversarial Discriminative Domain Adaptation

09/10/2018
by Aaron Chadha, et al.

Adversarial discriminative domain adaptation (ADDA) is an efficient framework for unsupervised domain adaptation, where the source and target domains are assumed to share the same classes but no labels are available for the target domain. While ADDA has already achieved significant training efficiency and competitive accuracy in comparison to generative adversarial networks, we investigate whether its convergence properties can be further improved by incorporating source label knowledge during target domain training. To achieve this, our approach first modifies the discriminator output to jointly predict the source labels and distinguish inputs from the target domain. We then leverage the various source/target and encoder/discriminator distribution combinations to propose two loss functions for adversarial training of the target encoder. Our final design minimizes the maximum mean discrepancy between the source encoder and target discriminator distributions, thereby tying together adversarial and discrepancy-based loss functions that are frequently considered independently in recent deep learning domain adaptation methods. Beyond validating our framework on standard datasets like MNIST, MNIST-M, USPS and SVHN, we introduce and evaluate on a neuromorphic vision sensing (NVS) sign language recognition dataset, where the source domain consists of emulated neuromorphic spike events converted from APS video and the target domain consists of experimental spike events from an NVS camera. Our results on all datasets show that our proposal is both simple and efficient, as it matches or outperforms the state-of-the-art in unsupervised domain adaptation.
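The maximum mean discrepancy (MMD) mentioned above is a standard kernel-based distance between two distributions, estimated from samples of each. As a minimal sketch (not the paper's actual training code), the biased squared-MMD estimate with a Gaussian kernel can be computed as follows; the function names and the fixed bandwidth `sigma` are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and the rows of y
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    kxx = rbf_kernel(x, x, sigma).mean()
    kyy = rbf_kernel(y, y, sigma).mean()
    kxy = rbf_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy
```

In a domain adaptation setting, `x` and `y` would be batches of feature (or discriminator output) vectors from the two distributions being aligned; a small MMD indicates the batches are statistically indistinguishable under the chosen kernel.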
