Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task

10/16/2019
by Kenton Murray, et al.

This paper describes the Notre Dame Natural Language Processing Group's (NDNLP) submission to the WNGT 2019 shared task (Hayashi et al., 2019). We investigated the impact of applying auto-sizing (Murray and Chiang, 2015; Murray et al., 2019) to the Transformer network (Vaswani et al., 2017), with the goal of substantially reducing the number of parameters in the model. Our method was able to eliminate more than 25% of the model's parameters with a decrease of only 1.1 BLEU.
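Auto-sizing prunes entire neurons by placing a group regularizer on the rows of a weight matrix and applying a proximal gradient step during training; rows driven exactly to zero can be deleted from the model. The snippet below is a minimal illustrative sketch of one such proximal step for a row-wise ℓ2 (group lasso) regularizer on a toy NumPy matrix; the function name and the toy setup are our own, not from the paper's code.

```python
import numpy as np

def prox_group_l2(W, strength):
    """Proximal step for a row-wise l2 (group lasso) regularizer.

    Each row of W is shrunk toward zero; any row whose l2 norm is
    below `strength` is zeroed entirely, i.e. that neuron is pruned.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - strength / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Toy example: apply one proximal step to a small random weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))
W_new = prox_group_l2(W, strength=0.3)
removed = int((np.linalg.norm(W_new, axis=1) == 0).sum())
print(f"rows zeroed: {removed} of {W.shape[0]}")
```

Rows that survive are only shrunk, while weak rows are removed outright, which is what lets the trained model be physically smaller rather than merely sparse.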


