Pay Attention to MLPs

05/17/2021
by Hanxiao Liu, et al.

Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple attention-free network architecture, gMLP, based solely on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.
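
Since the abstract turns on the gating mechanism that stands in for self-attention, here is a minimal PyTorch sketch of a gMLP block with a Spatial Gating Unit in the spirit of the paper. The class names, dimensions (d_model, d_ffn, seq_len), and the tiny usage example are illustrative assumptions for this sketch, not the authors' reference code or any of the repositories listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Splits channels in half and gates one half with a learned
    projection of the other along the sequence (token) dimension."""
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # spatial projection mixes information across tokens (n x n)
        self.proj = nn.Linear(seq_len, seq_len)
        # assumed near-identity gating at init: weights ~ 0, bias = 1
        nn.init.normal_(self.proj.weight, std=1e-6)
        nn.init.ones_(self.proj.bias)

    def forward(self, x):              # x: (batch, seq_len, d_ffn)
        u, v = x.chunk(2, dim=-1)      # each: (batch, seq_len, d_ffn // 2)
        v = self.norm(v)
        v = self.proj(v.transpose(1, 2)).transpose(1, 2)  # mix along tokens
        return u * v                   # element-wise gating

class GMLPBlock(nn.Module):
    """One gMLP block: channel MLP wrapped around a Spatial Gating Unit,
    with a residual connection and no self-attention."""
    def __init__(self, d_model, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):              # x: (batch, seq_len, d_model)
        shortcut = x
        x = F.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return self.proj_out(x) + shortcut

# usage: a small stack over dummy token embeddings (sizes are arbitrary)
if __name__ == "__main__":
    blocks = nn.Sequential(*[GMLPBlock(128, 512, 64) for _ in range(4)])
    out = blocks(torch.randn(2, 64, 128))
    print(out.shape)  # torch.Size([2, 64, 128])
```

Stacking such blocks over token embeddings yields the attention-free models compared against Transformers in the abstract; actual layer counts and widths depend on the model size studied.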

Related Research

04/26/2022
Understanding The Robustness in Vision Transformers
Recent studies show that Vision Transformers (ViTs) exhibit strong robust...

06/01/2022
Fair Comparison between Efficient Attentions
Transformers have been successfully used in various fields and are becom...

06/30/2020
Data Movement Is All You Need: A Case Study on Optimizing Transformers
Transformers have become widely used for language modeling and sequence ...

06/09/2022
Unveiling Transformers with LEGO: a synthetic reasoning task
We propose a synthetic task, LEGO (Learning Equality and Group Operation...

06/09/2021
CoAtNet: Marrying Convolution and Attention for All Data Sizes
Transformers have attracted increasing interest in computer vision, but...

07/19/2022
Formal Algorithms for Transformers
This document aims to be a self-contained, mathematically precise overvi...

04/28/2022
A Probabilistic Interpretation of Transformers
We propose a probabilistic interpretation of exponential dot product att...

Code Repositories

g-mlp-pytorch
Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch.

g-mlp-gpt
GPT, but made only out of MLPs.

mlp-gpt-jax
A GPT made only of MLPs, in JAX.

g-mlp
PyTorch implementation of "Pay Attention to MLPs".

g-mlp-tensorflow
A gMLP (gated MLP) implementation in TensorFlow 1.x, as described in the paper "Pay Attention to MLPs" (arXiv 2105.08050).