Multi-view Sentence Representation Learning

05/18/2018
by Shuai Tang, et al.

Multi-view learning can provide self-supervision when different views of the same data are available. The distributional hypothesis provides another form of useful self-supervision from adjacent sentences, which are plentiful in large unlabelled corpora. Motivated by the asymmetry between the two hemispheres of the human brain, as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we create a unified multi-view sentence representation learning framework in which one view encodes the input sentence with a Recurrent Neural Network (RNN), the other view encodes it with a simple linear model, and the training objective is to maximise the agreement between the two views as specified by the adjacent context information. We show that, after training, the vectors produced by multi-view training are better representations than those from single-view training, and that combining the different views yields a further improvement and transfers well to standard downstream tasks.
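To make the two-view setup concrete, below is a minimal sketch of one way such a framework can be realised: a GRU encoder as one view, an averaged-word-embedding linear model as the other, and an agreement objective that asks a sentence encoded by one view to match its adjacent sentence encoded by the other view, using in-batch negatives. All class names, dimensions, and the specific contrastive loss are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a two-view sentence encoder trained to agree on adjacent sentences.
# Hypothetical names and hyperparameters; not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNView(nn.Module):
    """View 1: encode a sentence with a GRU; the last hidden state is the vector."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)
        _, h_n = self.rnn(emb)                       # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)

class LinearView(nn.Module):
    """View 2: a simple linear model -- averaged word embeddings, then a projection."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.proj = nn.Linear(embed_dim, hidden_dim)

    def forward(self, token_ids):
        emb = self.embed(token_ids)
        mask = (token_ids != 0).float().unsqueeze(-1)        # ignore padding
        mean = (emb * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return self.proj(mean)

def agreement_loss(view_a, view_b, sentences, next_sentences, temperature=0.1):
    """Maximise agreement between a sentence under one view and its adjacent
    sentence under the other view, contrasting against in-batch negatives."""
    a = F.normalize(view_a(sentences), dim=-1)               # (batch, hidden_dim)
    b = F.normalize(view_b(next_sentences), dim=-1)
    logits = a @ b.t() / temperature                         # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)       # diagonal = true pairs
    return F.cross_entropy(logits, targets)
```

In this sketch the "agreement specified by the adjacent context" is implemented as a contrastive objective over sentence pairs drawn from consecutive positions in the corpus; after training, the two views' vectors can be used separately or concatenated for downstream tasks.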


