Self-learn to Explain Siamese Networks Robustly

09/15/2021
by   Chao Chen, et al.
Learning to compare two objects is essential in applications such as digital forensics, face recognition, and brain network analysis, especially when labeled data is scarce and imbalanced. Because these applications make high-stakes decisions and involve societal values such as fairness and transparency, it is critical to explain the learned models. We study post-hoc explanations of Siamese networks (SN), which are widely used in learning to compare. We characterize the instability of gradient-based explanations caused by the additional compared object in SN, in contrast to architectures that take a single input instance. We propose an optimization framework that derives global invariance from unlabeled data via self-learning to promote the stability of local explanations tailored to specific query-reference pairs. The optimization problems can be solved using gradient descent-ascent (GDA) for the constrained formulation, or SGD for the KL-divergence-regularized unconstrained formulation, with convergence proofs even when the objective functions are nonconvex due to the Siamese architecture. Quantitative results and case studies on tabular and graph data from neuroscience and chemical engineering show that the framework respects the self-learned invariance while robustly optimizing the faithfulness and simplicity of the explanations. We further demonstrate the convergence of GDA experimentally.
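The reference-dependence of gradient explanations that the abstract describes can be illustrated with a minimal sketch (not the paper's method): a toy Siamese model whose two branches share one linear embedding, where the saliency of the query provably depends on the reference object. All names and the linear-embedding choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Siamese" model: both inputs pass through the same linear embedding W
# (assumed architecture for illustration only).
W = rng.normal(size=(4, 6))

def similarity(x, y):
    """Dot-product similarity of the shared embeddings f(x) = Wx and f(y) = Wy."""
    return (W @ x) @ (W @ y)

def saliency(x, y):
    """Gradient-based explanation of the query x: d s(x, y) / d x = W^T W y.
    Note the explanation depends on the reference y, not only on x."""
    return W.T @ (W @ y)

x = rng.normal(size=6)   # query instance
y1 = rng.normal(size=6)  # reference A
y2 = rng.normal(size=6)  # reference B

g1, g2 = saliency(x, y1), saliency(x, y2)

# Same query, two references -> two different local explanations.
cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
print(f"cosine between the two explanations of x: {cos:.3f}")
```

Because the gradient with respect to the query carries a factor of the reference embedding, swapping the reference rotates the saliency map, which is exactly the instability (relative to single-input architectures) that motivates stabilizing explanations across query-reference pairs.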

