Biophysical models of cis-regulation as interpretable neural networks

12/30/2019
by Ammar Tareen et al.

The adoption of deep learning techniques in genomics has been hindered by the difficulty of mechanistically interpreting the models that these techniques produce. In recent years, a variety of post-hoc attribution methods have been proposed to address this neural network interpretability problem in the context of gene regulation. Here we describe a complementary way of approaching this problem. Our strategy is based on the observation that two large classes of biophysical models of cis-regulatory mechanisms can be expressed as deep neural networks in which nodes and weights have explicit physicochemical interpretations. We also demonstrate how such biophysical networks can be rapidly inferred, using modern deep learning frameworks, from the data produced by certain types of massively parallel reporter assays (MPRAs). These results suggest a scalable strategy for using MPRAs to systematically characterize the biophysical basis of gene regulation in a wide range of biological contexts. They also highlight gene regulation as a promising venue for the development of scientifically interpretable approaches to deep learning.
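To make the correspondence concrete, the simplest case of such a biophysical model is a thermodynamic description of a single transcription-factor binding site: an energy matrix assigns one energy contribution per base per position (mathematically a dense linear layer acting on a one-hot encoded sequence), and Boltzmann occupancy of the bound state is a sigmoid nonlinearity applied to that energy. The sketch below illustrates this mapping only; the variable names, parameter values, and the use of occupancy as a proxy for transcriptional activity are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence into an (L, 4) array."""
    return np.array([[b == base for base in BASES] for b in seq], dtype=float)

rng = np.random.default_rng(0)
L = 10                                    # binding-site length (assumed)
energy_matrix = rng.normal(size=(L, 4))   # learnable weights, one per base per position
mu = -1.0                                 # chemical potential (assumed value)

def predicted_activity(seq):
    """Thermodynamic model as a two-layer network:
    dense layer (energy matrix) -> sigmoid node (Boltzmann occupancy)."""
    E = np.sum(one_hot(seq) * energy_matrix)   # binding energy: linear in sequence
    occupancy = 1.0 / (1.0 + np.exp(E - mu))   # equilibrium occupancy of the site
    return occupancy                           # proxy for transcription rate

print(predicted_activity("ACGTACGTAC"))
```

Because every weight is a binding energy and every nonlinearity follows from equilibrium statistical mechanics, fitting such a network to MPRA data (e.g., by gradient descent in a standard deep learning framework) yields parameters with direct physical meaning rather than requiring post-hoc attribution.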

Related research

- Incorporating Biological Knowledge with Factor Graph Neural Network for Interpretable Deep Learning (06/03/2019)
- Rank Projection Trees for Multilevel Neural Network Interpretation (12/01/2018)
- Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction (08/26/2020)
- Warp: a method for neural network interpretability applied to gene expression profiles (08/16/2017)
- Towards Better Interpretability in Deep Q-Networks (09/15/2018)
- Interpretability of Neural Network With Physiological Mechanisms (03/24/2022)
- Sparse Bottleneck Networks for Exploratory Analysis and Visualization of Neural Patch-seq Data (06/18/2020)
