Learning Explainable Models Using Attribution Priors

06/25/2019 ∙ by Gabriel Erion, et al.

Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by constraining the model's parameters; Model attributions transfer information from a model to humans by explaining the model's behavior. We propose connecting these topics with attribution priors (https://github.com/suinleelab/attributionpriors), which allow humans to use the common language of attributions to enforce prior expectations about a model's behavior during training. We develop a differentiable axiomatic feature attribution method called expected gradients and show how to directly regularize these attributions during training. We demonstrate the broad applicability of attribution priors (Ω) by presenting three distinct examples that regularize models to behave more intuitively in three different domains: 1) on image data, Ω_pixel encourages models to have piecewise smooth attribution maps; 2) on gene expression data, Ω_graph encourages models to treat functionally related genes similarly; 3) on a health care dataset, Ω_sparse encourages models to rely on fewer features. In all three domains, attribution priors produce models with more intuitive behavior and better generalization performance by encoding constraints that would otherwise be very difficult to encode using standard model priors.
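The abstract combines two ingredients: a differentiable, axiomatic attribution method (expected gradients) and a training penalty Ω on the attributions themselves. A minimal sketch of both is below; the function names, the Monte Carlo estimator details, and the L1 penalty standing in for Ω_sparse are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def expected_gradients(x, background, grad_fn, n_samples=200, seed=0):
    """Monte Carlo estimate of expected gradients attributions (sketch).

    EG_i(x) = E_{x'~background, a~U(0,1)}[(x_i - x'_i) * df/dx_i(x' + a(x - x'))]
    grad_fn(z) must return the gradient of the model output at z.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        xp = background[rng.integers(len(background))]  # sample a baseline x'
        a = rng.uniform()                               # sample a point on the path
        total += (x - xp) * grad_fn(xp + a * (x - xp))
    return total / n_samples

def loss_with_attribution_prior(task_loss, attributions, lam=0.1):
    """Total loss = task loss + lambda * Omega(attributions).

    Here Omega is an L1 penalty, a stand-in for the sparsity prior Omega_sparse;
    the smoothness and graph priors would swap in different penalties.
    """
    return task_loss + lam * np.abs(attributions).sum()
```

For a linear model f(x) = w·x with a zero baseline, the estimator recovers w_i * x_i exactly, and the attributions sum to f(x) − f(baseline) (the completeness axiom), which makes the sketch easy to sanity-check.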






Code Repositories


A repository for explaining feature attributions and feature interactions in deep neural networks.



Tools for training explainable models using attribution priors.
