Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations

01/15/2020 · by Patrick Schramowski, et al.

Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping. Unfortunately, they may exhibit "Clever Hans"-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interaction between the learning system and the human user can correct the model. Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalizing decisions made for the wrong reasons. In this way the machine's decision strategies can be improved to focus on relevant features, without a considerable drop in predictive performance.
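The correction described above — penalizing decisions made for the wrong reasons via annotated masks — can be sketched as an explanation-penalty loss: standard cross-entropy plus a term that punishes input gradients falling inside a user-annotated mask of confounding regions. This is a minimal illustrative sketch, not the paper's exact implementation; the function name, the PyTorch framing, and the weighting parameter `lam` are assumptions.

```python
import torch
import torch.nn.functional as F

def right_for_wrong_reasons_loss(model, x, y, mask, lam=10.0):
    """Hypothetical sketch of an explanation-penalty loss.

    x:    input batch
    y:    integer class labels
    mask: same shape as x; 1 marks confounding regions the model
          should NOT rely on, 0 elsewhere.
    lam:  assumed trade-off weight between accuracy and the penalty.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Input gradients of the summed log-probabilities serve as a
    # simple explanation of which input features drive the decision.
    log_probs = F.log_softmax(logits, dim=1)
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
    # Penalize explanation mass inside the annotated (wrong-reason) region.
    penalty = (mask * grads).pow(2).sum()
    return ce + lam * penalty
```

With an all-zero mask the penalty vanishes and the loss reduces to plain cross-entropy; marking a confounder in the mask makes any gradient through it costly, pushing the optimizer toward decision strategies based on the remaining, relevant features.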





