Related papers:
- Adversarial Variational Domain Adaptation
- Multi-Domain Adversarial Learning
- A Structured Variational Autoencoder for Contextual Morphological Inflection
- Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction
- Learning Disentangled Semantic Representation for Domain Adaptation
- Semi-supervised Gated Recurrent Neural Networks for Robotic Terrain Classification
- A Cross-Sentence Latent Variable Model for Semi-Supervised Text Sequence Matching
Semi-supervised Stochastic Multi-Domain Learning using Variational Inference
Supervised models in NLP rely on large collections of text that closely resemble the intended test setting. Unfortunately, matching text is often not available in sufficient quantity, and moreover, within any domain of text the data is often highly heterogeneous. In this paper we propose a method to distill the important domain signal as part of a multi-domain learning system, using a latent variable model in which parts of a neural model are stochastically gated based on the inferred domain. We compare the use of discrete versus continuous latent variables, operating in a domain-supervised or a domain semi-supervised setting, where the domain is known only for a subset of training inputs. We show that our model leads to substantial performance improvements over competitive benchmark domain adaptation methods, including methods using adversarial learning.
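To make the gating idea concrete, here is a minimal sketch (not the authors' code) of stochastic domain gating: a small inference network predicts a latent domain from the input encoding, and a sample from that distribution gates part of the hidden representation of the task model. All module names, dimensions, and the Gumbel-Softmax relaxation for the discrete case are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainGatedClassifier(nn.Module):
    def __init__(self, input_dim, hidden_dim, n_domains, n_classes):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        # q(d | x): inference network over the latent domain
        self.domain_logits = nn.Linear(hidden_dim, n_domains)
        # one gate vector per domain; a sampled domain mixes these
        self.domain_gates = nn.Parameter(torch.randn(n_domains, hidden_dim))
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, tau=1.0):
        h = torch.relu(self.encoder(x))
        logits = self.domain_logits(h)
        # discrete latent domain, relaxed with Gumbel-Softmax so the
        # stochastic gate stays differentiable during training
        d = F.gumbel_softmax(logits, tau=tau, hard=False)
        # gate part of the hidden representation based on the sampled domain
        gate = torch.sigmoid(d @ self.domain_gates)
        return self.classifier(h * gate), logits

model = DomainGatedClassifier(300, 128, n_domains=4, n_classes=2)
x = torch.randn(8, 300)
y_logits, d_logits = model(x)
# For domain-labelled examples, d_logits can take an additional supervised
# cross-entropy term; unlabelled examples train through the task loss
# alone, giving the semi-supervised setting described in the abstract.
```

A continuous-latent variant would replace the Gumbel-Softmax sample with a Gaussian reparameterised sample feeding the gate, which is the discrete-versus-continuous comparison the abstract describes.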