Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations

01/15/2020
by Patrick Schramowski, et al.

Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping. Unfortunately, they may exhibit "Clever Hans"-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interaction between the learning system and the human user can correct the model. Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalizing decisions made for the wrong reasons. In this way, the machine's decision strategies can be improved to focus on relevant features without a considerable drop in predictive performance.
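The idea of penalizing decisions made for the wrong reasons can be sketched as a "right for the right reasons" style loss: standard cross-entropy plus a term that penalizes the model's explanation (its input gradient) inside a user-annotated mask marking confounding regions. The logistic-regression model, function names, and lambda value below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rrr_loss(w, x, y, mask, lam=10.0):
    """Cross-entropy plus a penalty on explanations inside the annotated mask.

    mask[i] = 1 marks an input feature the model should NOT rely on
    (a confounder). For logistic regression with p = sigmoid(w @ x),
    the input gradient of log p(y|x) is (y - p) * w, so the penalty
    directly shrinks the weights on masked (confounding) features.
    """
    p = sigmoid(w @ x)
    ce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    input_grad = (y - p) * w          # d log p(y|x) / dx for this model
    penalty = lam * np.sum((mask * input_grad) ** 2)
    return ce + penalty
```

With `lam=0` (or an all-zero mask) this reduces to ordinary cross-entropy; since the penalty is non-negative, relying on masked features can only increase the loss, which is what steers training away from the confounders.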
