Attention mechanisms and deep learning for machine vision: A survey of the state of the art

12/30/2021
by Abdul Mueed Hafiz, et al.

With the advent of state-of-the-art, nature-inspired, pure attention-based models, i.e. transformers, and their success in natural language processing (NLP), their extension to machine vision (MV) tasks was inevitable and much awaited. Subsequently, vision transformers (ViTs) were introduced, and they now pose a serious challenge to established deep-learning-based machine vision techniques. However, pure attention-based models/architectures like transformers require huge amounts of data, long training times, and large computational resources. Some recent works suggest that combining these two varied fields can produce systems with the advantages of both. Accordingly, this state-of-the-art survey paper is presented, which will hopefully help readers obtain useful information about this interesting and promising research area. A gentle introduction to attention mechanisms is given first, followed by a discussion of popular attention-based deep architectures. Subsequently, the major categories at the intersection of attention mechanisms and deep learning for machine vision (MV) are discussed. Finally, the major algorithms, issues, and trends within the scope of the paper are discussed.
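The attention mechanisms the survey introduces are built around scaled dot-product attention, the core operation of transformers (softmax(QK^T / sqrt(d_k)) V). As a rough illustration only, not code from the paper, a minimal NumPy sketch might look like this (all names are ours):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted sum of values

# Toy self-attention: 3 tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

In a ViT, the "tokens" would be flattened image patches plus a class token, and this operation would be applied per head inside multi-head attention; the sketch omits projections, masking, and batching for brevity.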

