Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning

06/04/2021 · by Jannik Kossen, et al.

We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input. To this end, we introduce a general-purpose deep learning architecture that takes as input the entire dataset instead of processing one datapoint at a time. Our approach uses self-attention to reason about relationships between datapoints explicitly, which can be seen as realizing non-parametric models using parametric attention mechanisms. However, unlike conventional non-parametric models, we let the model learn end-to-end from the data how to make use of other datapoints for prediction. Empirically, our models solve cross-datapoint lookup and complex reasoning tasks unsolvable by traditional deep learning models. We show highly competitive results on tabular data, early results on CIFAR-10, and give insight into how the model makes use of the interactions between points.
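The central idea, attention applied across the datapoint axis rather than only within a single input, can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch, not the authors' Non-Parametric Transformer implementation: the module name AttentionBetweenDatapoints, the fixed embedding width, and the random toy dataset are assumptions made for the example.

    # Minimal sketch of self-attention *between* datapoints, assuming each
    # datapoint has already been embedded into a single fixed-width vector.
    # Illustrative only; not the paper's NPT architecture.
    import torch
    import torch.nn as nn

    class AttentionBetweenDatapoints(nn.Module):
        """One residual self-attention layer applied across the datapoint axis."""

        def __init__(self, embed_dim: int = 64, num_heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(embed_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (n_datapoints, embed_dim) -- the *entire* dataset is one input.
            # Treat the dataset as a single sequence of length n_datapoints so
            # that every datapoint can attend to every other datapoint.
            seq = x.unsqueeze(0)                    # (1, n_datapoints, embed_dim)
            attended, _ = self.attn(seq, seq, seq)  # queries, keys, values = datapoints
            return self.norm(seq + attended).squeeze(0)

    # Usage: embed a toy "dataset" and let its datapoints exchange information.
    dataset = torch.randn(128, 64)                  # 128 datapoints, 64-dim embeddings
    layer = AttentionBetweenDatapoints(embed_dim=64)
    out = layer(dataset)                            # (128, 64); each row now depends on all rows

Because the attention weights are learned end-to-end, which other datapoints matter for a given prediction is determined by training rather than fixed in advance, in contrast to a classical non-parametric method such as k-nearest neighbours.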



Code Repositories

non-parametric-transformers

Code for "Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning"

