Pay Attention to MLPs

05/17/2021 ∙ by Hanxiao Liu et al.

Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple attention-free network architecture, gMLP, based solely on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.
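The gating the abstract refers to is the paper's Spatial Gating Unit, which replaces self-attention with a learned linear projection across the token (spatial) dimension. Below is a minimal PyTorch sketch of one gMLP block based on the paper's description; the class and argument names (SpatialGatingUnit, gMLPBlock, d_ffn, seq_len) are illustrative, not taken from the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Splits channels in half and gates one half via a token-wise projection."""
    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Conv1d with kernel size 1 acts as a linear map across the sequence dimension.
        self.spatial_proj = nn.Conv1d(seq_len, seq_len, kernel_size=1)
        # Near-zero weights and unit bias so the gate starts close to identity,
        # as the paper recommends for stable training.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):
        # x: (batch, seq_len, d_ffn)
        u, v = x.chunk(2, dim=-1)             # two halves of size d_ffn // 2
        v = self.spatial_proj(self.norm(v))   # mix information across tokens
        return u * v                          # elementwise gating

class gMLPBlock(nn.Module):
    def __init__(self, d_model: int, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):
        shortcut = x
        x = F.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return shortcut + self.proj_out(x)

# Example: a batch of 8 sequences, 128 tokens each, model width 256.
x = torch.randn(8, 128, 256)
block = gMLPBlock(d_model=256, d_ffn=512, seq_len=128)
print(block(x).shape)  # torch.Size([8, 128, 256])

One consequence of this design: because the spatial projection is a fixed-size matrix over positions, a gMLP layer assumes a fixed seq_len, unlike self-attention, which handles variable-length inputs natively.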

Code Repositories

g-mlp-pytorch

Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch

g-mlp-gpt

GPT, but made only out of MLPs

mlp-gpt-jax

A GPT made only of MLPs, in JAX

g-mlp

PyTorch implementation of "Pay Attention to MLPs"

g-mlp-tensorflow

A gMLP (gated MLP) implementation in TensorFlow 1.x, as described in the paper "Pay Attention to MLPs" (arXiv:2105.08050)