Learning Disentangled Representations of Negation and Uncertainty

04/01/2022
by Jake Vasilakes et al.

Negation and uncertainty modeling are long-standing tasks in natural language processing. Linguistic theory postulates that expressions of negation and uncertainty are semantically independent of each other and of the content they modify. However, previous work on representation learning does not explicitly model this independence. We therefore attempt to disentangle the representations of negation, uncertainty, and content using a Variational Autoencoder. We find that simply supervising the latent representations results in good disentanglement, but that auxiliary objectives based on adversarial learning and mutual information minimization can provide additional disentanglement gains.
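As a rough illustration of the setup the abstract describes, here is a minimal PyTorch sketch of a VAE whose latent space is partitioned into negation, uncertainty, and content subspaces, with direct supervision on the first two and an adversarial entropy term discouraging the content latent from encoding the other factors. The bag-of-words encoder/decoder, layer sizes, loss weights, and all names are illustrative assumptions, not the paper's architecture.

```python
# Sketch only: a VAE with a latent space split into negation, uncertainty,
# and content subspaces, as in the abstract. Architecture details are
# hypothetical simplifications, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledVAE(nn.Module):
    def __init__(self, vocab_size=1000, hidden=128, z_neg=2, z_unc=2, z_con=60):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)  # bag-of-words encoder, for brevity
        self.to_params = nn.Linear(hidden, 2 * (z_neg + z_unc + z_con))
        self.splits = (z_neg, z_unc, z_con)
        self.decoder = nn.Linear(z_neg + z_unc + z_con, vocab_size)
        self.neg_head = nn.Linear(z_neg, 1)   # supervision on the negation latent
        self.unc_head = nn.Linear(z_unc, 1)   # supervision on the uncertainty latent
        self.adv_head = nn.Linear(z_con, 2)   # adversary: predict negation from content

    def forward(self, tokens):
        h = self.embed(tokens)
        mu, logvar = self.to_params(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        z_neg, z_unc, z_con = torch.split(z, self.splits, dim=-1)
        recon_logits = self.decoder(z)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return recon_logits, kl, z_neg, z_unc, z_con

def loss(model, tokens, bow_targets, neg_labels, unc_labels):
    recon_logits, kl, z_neg, z_unc, z_con = model(tokens)
    recon = F.binary_cross_entropy_with_logits(recon_logits, bow_targets)
    # Latent supervision: each factor's latent must predict its own label.
    sup = (F.binary_cross_entropy_with_logits(model.neg_head(z_neg).squeeze(-1), neg_labels)
         + F.binary_cross_entropy_with_logits(model.unc_head(z_unc).squeeze(-1), unc_labels))
    # Adversarial term (sketch): reward high adversary entropy so the content
    # latent carries little negation information. A real implementation would
    # alternate adversary updates (minimizing its cross-entropy) with encoder
    # updates; the paper also explores mutual information minimization.
    adv_probs = F.softmax(model.adv_head(z_con), dim=-1)
    entropy = -(adv_probs * adv_probs.clamp_min(1e-8).log()).sum(-1).mean()
    return recon + kl + sup - 0.1 * entropy

# Toy usage with random data (shapes only; illustrative).
model = DisentangledVAE()
tokens = torch.randint(0, 1000, (8, 20))
bow = torch.zeros(8, 1000).scatter_(1, tokens, 1.0)
neg = torch.randint(0, 2, (8,)).float()
unc = torch.randint(0, 2, (8,)).float()
loss(model, tokens, bow, neg, unc).backward()
```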


Related research:

- Contrastively Disentangled Sequential Variational Autoencoder (10/22/2021): Self-supervised disentangled representation learning is a critical task ...
- HSIC-InfoGAN: Learning Unsupervised Disentangled Representations by Maximising Approximated Mutual Information (08/06/2022): Learning disentangled representations requires either supervision or the ...
- Bridging Disentanglement with Independence and Conditional Independence via Mutual Information for Representation Learning (11/25/2019): Existing works on disentangled representation learning usually lie on a ...
- Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation (02/27/2022): In this paper, we propose a variational autoencoder with disentanglement ...
- Disentangled Variational Information Bottleneck for Multiview Representation Learning (05/17/2021): Multiview data contain information from multiple modalities and have pot ...
- Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies (08/20/2018): Intelligent behaviour in the real-world requires the ability to acquire ...
- Representation Learning for Conversational Data using Discourse Mutual Information Maximization (12/04/2021): Although many pretrained models exist for text or images, there have bee ...
