Attention-based Neural Bag-of-Features Learning for Sequence Data

05/25/2020
by Dat Thanh Tran, et al.

In this paper, we propose 2D-Attention (2DA), a generic attention formulation for sequence data that acts as a complementary computation block able to detect and focus on the sources of information relevant to the given learning objective. The proposed attention module is incorporated into the recently proposed Neural Bag-of-Features (NBoF) model to enhance its learning capacity. Since 2DA acts as a plug-in layer, injecting it into different computation stages of the NBoF model yields different 2DA-NBoF architectures, each with its own interpretation. We conducted extensive experiments on financial forecasting, audio analysis, and medical diagnosis problems to benchmark the proposed formulations against existing methods, including the widely used Gated Recurrent Units. Our empirical analysis shows that the proposed attention formulations not only improve the performance of NBoF models but also make them resilient to noisy data.
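To illustrate how such a plug-in attention block can be wired into a sequence model, the sketch below implements a 2D-attention-style layer in PyTorch. This is not the paper's exact 2DA formulation: the learnable scoring matrix W, the sigmoid gate, the residual mixing, and the class name Attention2DSketch are assumptions made for illustration; the abstract only states that the module detects and reweights relevant parts of a 2D (feature-by-time) sequence representation.

# Minimal sketch of a 2D-attention-style plug-in layer (hypothetical reading of 2DA).
# Assumptions not taken from the abstract: the input is a tensor of shape
# (batch, D, T) with D features and T time steps, attention scores come from a
# learnable D x D matrix followed by a softmax over a chosen axis, and the output
# is a gated residual mix of the input and its masked version.
import torch
import torch.nn as nn


class Attention2DSketch(nn.Module):
    def __init__(self, in_dim: int, attend_axis: int = -1):
        super().__init__()
        # Learnable mixing matrix used to score every (feature, time) element.
        self.W = nn.Parameter(torch.randn(in_dim, in_dim) * 0.01)
        self.attend_axis = attend_axis  # -1: attend over time steps, -2: over features
        # Learnable gate controlling how strongly the mask modulates the input.
        self.gate = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, D, T)
        scores = torch.matmul(self.W, x)                   # (batch, D, T)
        mask = torch.softmax(scores, dim=self.attend_axis)  # normalize along one axis
        lam = torch.sigmoid(self.gate)
        # Residual combination keeps the layer a safe "plug-in" block.
        return (1.0 - lam) * x + lam * (mask * x)


if __name__ == "__main__":
    layer = Attention2DSketch(in_dim=40)   # e.g. 40-dimensional feature vectors
    series = torch.randn(8, 40, 15)        # batch of 8 sequences, 15 time steps
    out = layer(series)
    print(out.shape)                       # torch.Size([8, 40, 15])

Because the layer preserves the input shape, it could in principle be inserted before or after an NBoF quantization stage, which is consistent with the paper's point that different injection sites yield different 2DA-NBoF architectures.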



Related research

01/26/2022
Self-Attention Neural Bag-of-Features
In this work, we propose several attention formulations for multivariate...

08/06/2023
Introducing Feature Attention Module on Convolutional Neural Network for Diabetic Retinopathy Detection
Diabetic retinopathy (DR) is a leading cause of blindness among diabetic...

01/24/2019
Temporal Logistic Neural Bag-of-Features for Financial Time series Forecasting leveraging Limit Order Book Data
Time series forecasting is a crucial component of many important applica...

10/27/2022
Deepening Neural Networks Implicitly and Locally via Recurrent Attention Strategy
More and more empirical and theoretical evidence shows that deepening ne...

11/02/2017
Audio Set classification with attention model: A probabilistic perspective
This paper investigates the classification of the Audio Set dataset. Audi...

11/23/2021
SimpleTron: Eliminating Softmax from Attention Computation
In this paper, we propose that the dot product pairwise matching attenti...
