The Cascade Transformer: an Application for Efficient Answer Sentence Selection

05/05/2020
by Luca Soldaini, et al.

Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous work has investigated approaches to reduce model size, relatively little attention has been paid to techniques for improving batch throughput during inference. In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers. Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time. Partial encodings from the transformer model are shared among rerankers, providing further speed-up. When compared to a state-of-the-art transformer model, our approach reduces computation by 37%, with almost no impact on accuracy, as measured on two English Question Answering datasets.
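The core mechanism the abstract describes, a stack of transformer layers whose partial encodings feed lightweight scoring heads that progressively prune the candidate batch, can be sketched in a few lines. The following is a minimal illustration assuming PyTorch; the class name CascadeRanker, the head placement at layers 4, 8, and 12, and the keep_fraction parameter are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CascadeRanker(nn.Module):
    """Sketch of a cascade of rankers built on one shared transformer stack."""

    def __init__(self, d_model=768, n_layers=12, cascade_layers=(4, 8, 12),
                 keep_fraction=0.7):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
             for _ in range(n_layers)]
        )
        # One lightweight classification head per cascade stage; every stage
        # reuses the partial encodings computed by the layers below it.
        self.heads = nn.ModuleDict(
            {str(l): nn.Linear(d_model, 1) for l in cascade_layers}
        )
        self.cascade_layers = sorted(cascade_layers)
        self.keep_fraction = keep_fraction

    @torch.no_grad()
    def rank(self, x):
        """x: (batch, seq, d_model) embedded question/candidate pairs.
        Returns indices of surviving candidates and their final scores."""
        alive = torch.arange(x.size(0))
        scores = None
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)
            if i in self.cascade_layers:
                # Score each candidate from the encoding at the first
                # position (the [CLS] slot in a BERT-style input).
                scores = self.heads[str(i)](x[:, 0]).squeeze(-1)
                if i != self.cascade_layers[-1]:
                    # Drop the lowest-scoring candidates so the more
                    # expensive upper layers only process the survivors.
                    k = max(1, int(alive.numel() * self.keep_fraction))
                    top = scores.topk(k).indices
                    x, alive = x[top], alive[top]
        return alive, scores

# Hypothetical usage: rank 64 candidate sentences of 128 tokens each.
model = CascadeRanker().eval()
batch = torch.randn(64, 128, 768)
survivors, final_scores = model.rank(batch)
```

The throughput gain comes from the shrinking batch: each pruning stage reduces the number of sequences the remaining, most expensive layers must encode, while the intermediate heads add only a negligible linear projection on top of encodings that are computed anyway.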

Related research

- Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems (01/15/2022)
- Transformer-based Language Models for Factoid Question Answering at BioASQ9b (09/15/2021)
- Optimizing Inference Performance of Transformers on CPUs (02/12/2021)
- FastFormers: Highly Efficient Transformer Models for Natural Language Understanding (10/26/2020)
- Tempo: Accelerating Transformer-Based Model Training through Memory Footprint Reduction (10/19/2022)
- Sparse Distillation: Speeding Up Text Classification by Using Bigger Models (10/16/2021)
- BERTERS: Multimodal Representation Learning for Expert Recommendation System with Transformer (06/30/2020)
