Text-to-Text Multi-view Learning for Passage Re-ranking

04/29/2021
by   Jia-Huei Ju, et al.

Recently, much progress in natural language processing has been driven by deep contextualized representations pretrained on large corpora. Typically, fine-tuning these pretrained models for a specific downstream task relies on single-view learning, which is inadequate because a sentence can be interpreted differently from different perspectives. Therefore, in this work, we propose a text-to-text multi-view learning framework that incorporates an additional view, the text generation view, into a typical single-view passage ranking model. Empirically, the proposed approach improves ranking performance over its single-view counterpart. Ablation studies are also reported in the paper.
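
The abstract describes the framework only at a high level. As a minimal sketch of the idea, assuming a HuggingFace T5 checkpoint fine-tuned with two views sharing one set of parameters, the snippet below combines a monoT5-style relevance view (generate "true"/"false") with a query generation view, mixed by a hypothetical weight alpha. The prompt templates, function names, and weighting are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of text-to-text multi-view fine-tuning (not the authors' code).
# Assumptions: a HuggingFace T5 checkpoint, monoT5-style prompts, and a mixing
# weight `alpha` -- all hypothetical choices made for exposition.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5TokenizerFast.from_pretrained("t5-base")

def multiview_loss(query: str, passage: str, relevant: bool, alpha: float = 0.5):
    """Weighted sum of the ranking-view and generation-view losses."""
    # View 1: relevance classification phrased as text-to-text generation,
    # in the spirit of monoT5 ("Query: ... Document: ... Relevant:" -> true/false).
    rank_in = tokenizer(f"Query: {query} Document: {passage} Relevant:",
                        return_tensors="pt", truncation=True, max_length=512)
    rank_out = tokenizer("true" if relevant else "false", return_tensors="pt")
    rank_loss = model(input_ids=rank_in.input_ids,
                      attention_mask=rank_in.attention_mask,
                      labels=rank_out.input_ids).loss

    # View 2: text generation -- recover the query from the passage, so the
    # shared encoder-decoder also learns what a passage could answer.
    gen_in = tokenizer(f"Document: {passage} Query:",
                       return_tensors="pt", truncation=True, max_length=512)
    gen_out = tokenizer(query, return_tensors="pt")
    gen_loss = model(input_ids=gen_in.input_ids,
                     attention_mask=gen_in.attention_mask,
                     labels=gen_out.input_ids).loss

    # Both views update the same shared parameters; alpha balances them.
    return alpha * rank_loss + (1.0 - alpha) * gen_loss
```

At training time one would backpropagate through this combined loss; at inference only the ranking view is needed, scoring a passage by the probability the model assigns to the "true" token.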

Related research

01/31/2018 · Deep Multi-view Learning to Rank
We study the problem of learning to rank from multiple sources. Though m...

07/26/2018 · Discriminative multi-view Privileged Information learning for image re-ranking
Conventional multi-view re-ranking methods usually perform asymmetrical ...

05/18/2018 · Multi-view Sentence Representation Learning
Multi-view learning can provide self-supervision when different views ar...

10/02/2018 · Improving Sentence Representations with Multi-view Frameworks
Multi-view learning can provide self-supervision when different views ar...

10/12/2022 · RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses
Recently, substantial progress has been made in text ranking based on pr...

03/11/2021 · FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
Pretrained text encoders, such as BERT, have been applied increasingly i...

02/14/2020 · HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing
Computation-intensive pretrained models have been taking the lead of man...
