
Adaptive Attention Span in Computer Vision

04/18/2020
by Jerrod Parker, et al., University of Toronto

Recent developments in Transformers for language modeling have opened new areas of research in computer vision. Results from late 2019 showed large performance gains in both object detection and recognition when convolutions are replaced by local self-attention kernels. Models using local self-attention kernels were also shown to have fewer parameters and FLOPs than equivalent architectures that use only convolutions. In this work, we propose a novel method for learning the local self-attention kernel size. We then compare its performance to fixed-size local attention and convolution kernels. The code for all our experiments and models is available at https://github.com/JoeRoussy/adaptive-attention-in-cv
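The abstract does not spell out the mechanism, but one natural way to learn a local kernel size is to apply a soft adaptive-span mask (in the spirit of Sukhbaatar et al., 2019) over a 2D local attention window. The sketch below is illustrative only: the class name AdaptiveSpanLocalAttention2d, the max_radius and ramp hyperparameters, and the single-head formulation are assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch: 2D local self-attention whose effective kernel size is learned
# via a soft span mask. Hypothetical names/hyperparameters, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSpanLocalAttention2d(nn.Module):
    def __init__(self, channels, max_radius=7, ramp=1.0):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.max_radius = max_radius          # largest allowed neighbourhood radius
        self.ramp = ramp                      # softness of the mask edge
        # Learnable span in [0, 1]; effective radius = span * max_radius.
        self.span = nn.Parameter(torch.tensor(0.5))
        # Pre-compute Chebyshev distances of every offset in the maximal window.
        coords = torch.arange(-max_radius, max_radius + 1)
        dy, dx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("dist", torch.maximum(dy.abs(), dx.abs()).float().flatten())

    def forward(self, x):
        b, c, h, w = x.shape
        k_size = 2 * self.max_radius + 1
        q = self.q(x)                                             # (B, C, H, W)
        k = F.unfold(self.k(x), k_size, padding=self.max_radius)  # (B, C*k*k, H*W)
        v = F.unfold(self.v(x), k_size, padding=self.max_radius)
        k = k.view(b, c, k_size * k_size, h * w)
        v = v.view(b, c, k_size * k_size, h * w)
        # Attention logits between each query pixel and its local window.
        logits = (q.view(b, c, 1, h * w) * k).sum(1) / c ** 0.5   # (B, k*k, H*W)
        # Soft mask: 1 inside the learned radius, ramping down to 0 outside it.
        radius = self.span.clamp(0, 1) * self.max_radius
        mask = ((radius - self.dist) / self.ramp + 1).clamp(0, 1) # (k*k,)
        attn = F.softmax(logits, dim=1) * mask.view(1, -1, 1)
        attn = attn / attn.sum(dim=1, keepdim=True).clamp_min(1e-6)
        out = (v * attn.unsqueeze(1)).sum(2)                      # (B, C, H*W)
        return out.view(b, c, h, w)
```

In such a setup the span parameter would typically be regularized (e.g., with an L1 penalty) during training so that each layer shrinks its effective kernel to the smallest radius it needs; whether the paper uses this particular mask or regularizer is not stated in the abstract.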

