
Scientific Discovery by Generating Counterfactuals using Image Translation

Model explanation techniques play a critical role in understanding the source of a model's performance and making its decisions transparent. Here we investigate whether explanation techniques can also serve as a mechanism for scientific discovery. We make three contributions. First, we propose a framework that converts the outputs of explanation techniques into a mechanism for discovery. Second, we show how generative models, combined with black-box predictors, can generate hypotheses (without human priors) that can then be critically examined. Third, using these techniques we study classification models for retinal images that predict Diabetic Macular Edema (DME), where recent work showed that a CNN trained on these images is likely learning novel features. We demonstrate that the proposed framework can explain the underlying scientific mechanism, thus bridging the gap between the model's performance and human understanding.
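The core idea of combining a generative model with a black-box predictor can be illustrated with a minimal counterfactual-search sketch. This is an assumption-laden toy, not the paper's method: a linear "generator" `G(z) = A @ z` and a logistic "classifier" stand in for the image-translation model and the DME CNN, and the counterfactual is found by gradient steps in latent space that push the classifier's output toward a target class while staying close to the starting point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative assumptions, not the paper's models):
# a linear "generator" mapping a 3-d latent to an 8-d "image",
# and a logistic "black-box" classifier over that image.
A = rng.normal(size=(8, 3))   # generator weights
w = rng.normal(size=8)        # classifier weights
b = 0.0                       # classifier bias

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def classify(x):
    """Black-box predictor: P(class = 1 | image x)."""
    return sigmoid(w @ x + b)

def counterfactual(z0, target=1.0, lam=0.1, lr=0.1, steps=300):
    """Search latent space for a point whose decoded image flips the
    classifier toward `target`, penalizing distance from z0 so the
    counterfactual stays close to the original sample."""
    z = z0.copy()
    best_z = z0.copy()
    best_loss = (classify(A @ z0) - target) ** 2
    for _ in range(steps):
        x = A @ z
        p = classify(x)
        loss = (p - target) ** 2 + lam * np.sum((z - z0) ** 2)
        if loss < best_loss:
            best_loss, best_z = loss, z.copy()
        # Analytic gradient of the loss w.r.t. z.
        grad = 2 * (p - target) * p * (1 - p) * (A.T @ w) \
               + 2 * lam * (z - z0)
        z -= lr * grad
    return best_z

z0 = rng.normal(size=3)        # latent code of the "original" image
z_cf = counterfactual(z0)      # latent code of its counterfactual
print(classify(A @ z0), classify(A @ z_cf))
```

Inspecting the difference `A @ z_cf - A @ z0` shows which "image" features the search changed to flip the prediction; in the paper's setting, the analogous difference between a retinal image and its counterfactual is what surfaces candidate features for scientific examination.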



