On Interpretability of Artificial Neural Networks

01/08/2020 ∙ by Fenglei Fan et al.

Deep learning has achieved great success in many important areas dealing with text, images, video, graphs, and so on. However, the black-box nature of deep artificial neural networks has become the primary obstacle to their public acceptance and wide adoption in critical applications such as diagnosis and therapy. Given the huge potential of deep learning, interpreting neural networks has become one of the most critical research directions. In this paper, we systematically review recent studies on understanding the mechanisms of neural networks and shed light on some future directions of interpretability research. (This work is still in progress.)
