Machine Learning for Brain Disorders: Transformers and Visual Transformers

03/21/2023
by Robin Courant, et al.

Transformers were initially introduced for natural language processing (NLP) tasks, but they were quickly adopted by most deep learning fields, including computer vision. They measure the relationships between pairs of input tokens (words in the case of text strings, parts of images for visual Transformers), a mechanism termed attention. The cost of this is quadratic in the number of tokens. For image classification, the most common Transformer architecture uses only the Transformer encoder to transform the various input tokens. However, there are also numerous other applications in which the decoder part of the traditional Transformer architecture is used as well. Here, we first introduce the attention mechanism (Section 1) and then the basic Transformer block, including the Vision Transformer (Section 2). Next, we discuss improvements to visual Transformers that handle small datasets or reduce computation (Section 3). Finally, we present visual Transformers applied to tasks other than image classification, such as detection, segmentation, generation, and training without labels (Section 4), as well as to other domains, such as video and multimodal settings using text or audio data (Section 5).
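To make the pairwise-attention idea and its quadratic cost concrete, here is a minimal NumPy sketch of scaled dot-product self-attention. The function name, shapes, and toy values are our own illustration under standard assumptions, not code from the paper; Section 1 of the paper gives the formal definition.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend over every pair of tokens.

    Q, K, V: arrays of shape (n_tokens, d_model). The (n_tokens, n_tokens)
    score matrix built below is what makes the cost quadratic in the
    number of tokens.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings (illustrative values only).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)          # self-attention: Q = K = V = x
print(out.shape)                                     # (4, 8)
```

In a full Transformer block, Q, K, and V would be learned linear projections of the token embeddings rather than the embeddings themselves, and several such attention heads would run in parallel.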

