Warp: a method for neural network interpretability applied to gene expression profiles

08/16/2017
by Assya Trofimov, et al.

We show a proof of principle for warping, a method for interpreting the inner workings of neural networks in the context of gene expression analysis. Warping is an efficient way to gain insight into how a trained network reaches its decisions and to make it more interpretable. We demonstrate warping's ability to recover meaningful information for a given class on a sample-specific, individual basis, and we find that it works well on both linearly and nonlinearly separable datasets. These encouraging results suggest that warping has the potential to address neural network interpretability in computational biology.
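The abstract does not spell out the algorithm, so the following is only a minimal sketch of one common reading of input warping: perturb a trained classifier's input by gradient descent toward a target class, then rank input features (genes) by how much they had to move. The function name `warp_input` and the parameters `steps` and `lr` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

def warp_input(model, x, target_class, steps=100, lr=0.01):
    """Gradient-descend on the input so the model assigns it to target_class.

    Returns the warped input and the per-gene change (delta); ranking
    |delta| suggests which genes drove the class decision for this sample.
    """
    model.eval()
    x_warp = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_warp], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([target_class])

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_warp.unsqueeze(0))  # add a batch dimension
        loss = loss_fn(logits, target)
        loss.backward()   # gradient flows to the input, not the weights
        optimizer.step()  # move the input toward the target class

    delta = x_warp.detach() - x
    return x_warp.detach(), delta

# Hypothetical usage: model is a trained classifier over expression
# vectors, x is one sample's 1-D expression profile.
# x_warp, delta = warp_input(model, x, target_class=1)
# top_genes = delta.abs().argsort(descending=True)[:20]
```

Because the warp is computed per input vector, this style of analysis is inherently sample-specific, which matches the per-individual interpretation the abstract emphasizes.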


