Unsupervised Domain Adaptation with Feature Embeddings

12/14/2014
by Yi Yang, et al.

Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches often require the specification of "pivot features" that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature template structure common in NLP problems.
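Below is a minimal, illustrative sketch of how dense embeddings might be learned for the features of each NLP feature template, rather than only for words. It is not the paper's exact implementation: it assumes a skip-gram-style objective with negative sampling over the features that co-occur at the same token, and the template names, toy corpus, and hyperparameters are all placeholders chosen for illustration.

```python
# Hypothetical sketch: learn an embedding for every (template, value) feature by
# predicting the other features active at the same token, then represent a token
# by concatenating one embedding per template. All names and data are illustrative.
import numpy as np

def extract_features(tokens, i):
    """One feature string per template for the token at position i (templates are assumptions)."""
    w = tokens[i]
    return [
        f"word={w}",
        f"suffix2={w[-2:]}",
        f"prev={tokens[i-1] if i > 0 else '<s>'}",
        f"next={tokens[i+1] if i + 1 < len(tokens) else '</s>'}",
    ]

# Unlabeled sentences drawn from both source and target domains (toy data).
sentences = [
    "the patient was given medication".split(),
    "the defendant was given probation".split(),
]

# One feature set per token instance.
instances = [extract_features(s, i) for s in sentences for i in range(len(s))]

# Index every feature string.
vocab = {f: idx for idx, f in enumerate(sorted({f for inst in instances for f in inst}))}
V, dim = len(vocab), 25
rng = np.random.default_rng(0)
in_vecs = rng.normal(scale=0.1, size=(V, dim))   # embeddings we keep
out_vecs = np.zeros((V, dim))                    # output (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, neg_k, epochs = 0.05, 3, 50
for _ in range(epochs):
    for inst in instances:
        ids = [vocab[f] for f in inst]
        for center in ids:
            for ctx in ids:
                if ctx == center:
                    continue
                # One positive pair plus a few random negatives.
                samples = [(ctx, 1.0)] + [(int(rng.integers(V)), 0.0) for _ in range(neg_k)]
                for o, label in samples:
                    v_out = out_vecs[o].copy()
                    grad = sigmoid(in_vecs[center] @ v_out) - label
                    out_vecs[o] -= lr * grad * in_vecs[center]
                    in_vecs[center] -= lr * grad * v_out

def embed_token(tokens, i):
    """Dense token representation: concatenate the embedding of each template's active feature."""
    zero = np.zeros(dim)
    return np.concatenate(
        [in_vecs[vocab[f]] if f in vocab else zero for f in extract_features(tokens, i)]
    )

print(embed_token(sentences[0], 1).shape)  # (4 * 25,) = (100,)
```

In a downstream tagger, such dense per-template vectors would typically be appended to (or substituted for) the original sparse features, so that features unseen in the source domain still map near related features observed during unsupervised training on both domains.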


