Explainable Techniques for Analyzing Flow Cytometry Cell Transformers

07/27/2023
by Florian Kowarsch, et al.

Explainability for deep learning models is especially important in clinical applications, where the decisions of automated systems have far-reaching consequences. While various post-hoc explainability methods, such as attention visualization and saliency maps, already exist for common data modalities, including natural language and images, little work has been done to adapt them to Flow Cytometry (FCM) data. In this work, we evaluate the use of a transformer architecture called ReluFormer that eases attention visualization, and we propose a gradient-based and an attention-based visualization technique tailored to FCM. We qualitatively evaluate these visualization techniques for cell classification and polygon regression on pediatric Acute Lymphoblastic Leukemia (ALL) FCM samples. The results outline the model's decision process and demonstrate how to utilize the proposed techniques to inspect the trained model. The gradient-based visualization not only identifies cells that are most significant for a particular prediction but also indicates the directions in the FCM feature space in which changes have the most impact on the prediction. The attention visualization provides insights into the transformer's decision process when handling FCM data. We show that different attention heads specialize by attending to different biologically meaningful sub-populations in the data, even though the model received only supervised binary classification signals during training.
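The gradient-based visualization described in the abstract amounts to back-propagating a chosen model output to the raw FCM events: the per-cell gradient magnitude ranks cells by importance, while the gradient vector over the markers points in the direction of FCM feature space with the largest local effect on the prediction. The sketch below illustrates this idea for a generic set-input transformer in PyTorch; the model interface, input shapes, and the target_index argument are assumptions for illustration and do not reproduce the paper's ReluFormer implementation.

    # Minimal sketch of gradient-based saliency for FCM events (assumed interfaces).
    import torch

    def gradient_saliency(model, events, target_index=0):
        """Per-cell, per-marker gradients of one model output w.r.t. the input events.

        events: float tensor of shape (n_cells, n_markers), one row per measured cell.
        target_index: which scalar output (e.g. a class logit) to explain.
        """
        # Detach from any existing graph and track gradients w.r.t. the raw events.
        events = events.detach().clone().requires_grad_(True)

        # Hypothetical sample-level prediction: model takes (1, n_cells, n_markers)
        # and returns (1, n_outputs).
        output = model(events.unsqueeze(0))

        # Back-propagate the chosen output to the inputs.
        output[0, target_index].backward()
        grads = events.grad.detach()              # shape (n_cells, n_markers)

        # L2 norm over markers gives a scalar importance per cell; the raw gradient
        # rows give the most influential direction in FCM feature space per cell.
        cell_importance = grads.norm(dim=1)
        return cell_importance, grads

Attention weights per head could be read out in a similar fashion (e.g. with forward hooks on the attention modules) to inspect which sub-populations each head attends to, but the exact module names depend on the concrete architecture.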

