Reservoir Transformer

12/30/2020
by Sheng Shen, et al.

We demonstrate that transformers obtain impressive performance even when some of the layers are randomly initialized and never updated. Inspired by old and well-established ideas in machine learning, we explore a variety of non-linear "reservoir" layers interspersed with regular transformer layers, and show improvements in wall-clock compute time until convergence, as well as overall performance, on various machine translation and (masked) language modelling tasks.
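Below is a minimal PyTorch sketch of the idea described in the abstract: a transformer encoder in which some layers are frozen "reservoir" layers that keep their random initialization and are never updated, while the remaining layers train normally. This is not the authors' implementation; the layer pattern, dimensions, and the choice of `nn.TransformerEncoderLayer` for both the regular and reservoir blocks are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ReservoirTransformerEncoder(nn.Module):
    """Transformer encoder where some layers are "reservoir" layers:
    randomly initialized, frozen, and never updated during training.
    (Illustrative sketch; the paper explores several reservoir variants.)"""

    def __init__(self, d_model=512, nhead=8,
                 pattern=("regular", "reservoir", "regular", "reservoir", "regular")):
        super().__init__()
        self.layers = nn.ModuleList()
        for kind in pattern:
            layer = nn.TransformerEncoderLayer(
                d_model, nhead, dim_feedforward=4 * d_model, batch_first=True
            )
            if kind == "reservoir":
                # Freeze the layer: keep its random initialization and
                # exclude all of its parameters from gradient updates.
                for p in layer.parameters():
                    p.requires_grad_(False)
            self.layers.append(layer)

    def forward(self, x, mask=None):
        for layer in self.layers:
            x = layer(x, src_mask=mask)
        return x


if __name__ == "__main__":
    model = ReservoirTransformerEncoder()
    x = torch.randn(2, 16, 512)  # (batch, sequence, d_model)
    out = model(x)
    # Only the regular layers contribute trainable parameters,
    # which is where the potential compute savings come from.
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(out.shape, trainable)
```

Because the frozen layers need no parameter updates (and, with some care, no stored gradients), each training step touches fewer trainable parameters, which is the intuition behind the reported improvement in wall-clock time to convergence.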
