HDformer: A Higher Dimensional Transformer for Diabetes Detection Utilizing Long Range Vascular Signals

03/17/2023
by   Ella Lan, et al.

Diabetes mellitus is a worldwide concern, and early detection can help to prevent serious complications. Low-cost, non-invasive detection methods, which feed cardiovascular signals into deep learning models, have emerged; however, their limited accuracy constrains clinical usage. In this paper, we present a new Transformer-based architecture, the Higher Dimensional Transformer (HDformer), which takes long-range photoplethysmography (PPG) signals to detect diabetes. Long-range PPG carries broader and deeper contextual information than the less-than-one-minute PPG windows commonly used in existing research. To increase the capability and efficiency of processing the long-range data, we propose a new attention module, Time Square Attention (TSA), which reduces the token volume by more than 10x while retaining local and global dependencies. It converts the 1-dimensional input into a 2-dimensional representation, groups adjacent points into a single 2D token, and uses 2D Transformer models as the backbone of the encoder. The resulting dynamic patch sizes are fed into a gated mixture-of-experts (MoE) network as the decoder, which optimizes learning across different attention areas. Extensive experiments show that HDformer achieves state-of-the-art performance (sensitivity 98.4%, accuracy 97.3%, specificity 92.8%, and AUC 0.929) on the standard MIMIC-III dataset, surpassing existing studies. This work is the first to feed long-range, non-invasive PPG signals through a Transformer for diabetes detection, offering a more scalable and convenient solution than traditional invasive approaches. The proposed HDformer can also be scaled to analyze general long-range biomedical waveforms. A wearable finger-ring prototype is designed as a proof of concept.
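The core TSA tokenization idea, folding a long 1D signal into a 2D "time square" and grouping adjacent points into tokens, can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the class name TimeSquareTokenizer, the ViT-style Conv2d patch embedding, and all shapes and hyperparameters are assumptions, and the paper's dynamic patch sizes and gated MoE decoder are not modeled here.

```python
# Minimal sketch of the 1D-to-2D tokenization behind Time Square Attention
# (TSA), as described in the abstract. All names, shapes, and hyperparameters
# are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class TimeSquareTokenizer(nn.Module):
    """Folds a long 1D PPG signal into a 2D 'time square' and groups
    adjacent points into patch tokens, shrinking the token count."""

    def __init__(self, side: int = 128, patch: int = 8, dim: int = 256):
        super().__init__()
        self.side = side    # the 1D signal is folded into a side x side grid
        self.patch = patch  # each patch x patch block becomes one 2D token
        # a standard ViT-style patch embedding over the folded signal
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length) with length == side * side
        b, n = x.shape
        assert n == self.side ** 2, "signal must fill the time square"
        x = x.view(b, 1, self.side, self.side)    # 1D -> 2D representation
        tokens = self.embed(x)                    # (b, dim, side/patch, side/patch)
        return tokens.flatten(2).transpose(1, 2)  # (b, num_tokens, dim)

# Example: a 16,384-sample PPG window becomes 256 tokens instead of 16,384
# point-level tokens -- a 64x reduction before the 2D Transformer encoder.
ppg = torch.randn(4, 128 * 128)
print(TimeSquareTokenizer()(ppg).shape)  # torch.Size([4, 256, 256])
```

Under these assumed settings the reduction far exceeds the >10x figure quoted in the abstract; the actual ratio depends on the patch size, which the paper varies dynamically.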
