Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models
Deep neural networks have shown significant promise in comprehending complex visual signals, delivering performance on par with, or even superior to, that of human experts. However, these models often lack a mechanism for interpreting their predictions, and in some cases, particularly when the sample size is small, existing deep learning solutions tend to capture spurious correlations that compromise model generalizability on unseen inputs. In this work, we propose Proactive Pseudo-Intervention (PPI), a contrastive causal representation learning strategy that leverages proactive interventions to identify causally relevant image features. This approach is complemented by a causal saliency map visualization module, termed Weight Back Propagation (WBP), which identifies important pixels in the raw input image and thereby greatly facilitates the interpretability of predictions. To validate its utility, our model is benchmarked extensively on both standard natural image and challenging medical image datasets. We show that this new contrastive causal representation learning model consistently improves performance relative to competing solutions, particularly for out-of-domain predictions and when integrating data from heterogeneous sources. Further, our causal saliency maps are more succinct and meaningful than their non-causal counterparts.
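The abstract does not spell out the training objective or the WBP procedure, so the PyTorch sketch below is only an illustration of the pseudo-intervention idea under stated assumptions: the loss form, the threshold tau, the weight lam, and the helper names (ppi_style_loss, input_gradient_saliency) are all hypothetical, and the saliency function shown is a plain input-gradient stand-in rather than the paper's WBP module.

import torch
import torch.nn.functional as F

def ppi_style_loss(model, x, y, saliency, tau=0.5, lam=1.0):
    # Supervised term on the unmodified image.
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Proactive pseudo-intervention: mask out the pixels that the
    # saliency map marks as causally relevant to the prediction.
    mask = (saliency > tau).float()
    x_iv = x * (1.0 - mask)

    # Contrastive term: with the causal evidence removed, the model
    # should no longer be confident in the original label, so we
    # penalize the residual probability it assigns to y.
    probs_iv = F.softmax(model(x_iv), dim=1)
    residual = probs_iv.gather(1, y.unsqueeze(1)).mean()

    return ce + lam * residual

def input_gradient_saliency(model, x, y):
    # Stand-in saliency map (plain input gradients), not the paper's
    # WBP: score each pixel by |d logit_y / d x| and reduce over
    # channels to one map per image.
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.unsqueeze(1)).sum()
    score.backward()
    return x.grad.abs().amax(dim=1, keepdim=True).detach()

Training against a combined objective of this shape pushes the classifier toward features whose removal actually changes the prediction, i.e., putatively causal evidence, rather than spurious background correlations, which is the intuition the abstract describes.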