
Self-learn to Explain Siamese Networks Robustly

09/15/2021
by Chao Chen, et al.
Beijing University of Posts and Telecommunications
Lehigh University
Intel
Worcester Polytechnic Institute

Learning to compare two objects is essential in applications such as digital forensics, face recognition, and brain network analysis, especially when labeled data are scarce and imbalanced. As these applications make high-stakes decisions and involve societal values like fairness and transparency, it is critical to explain the learned models. We aim to study post-hoc explanations of Siamese networks (SN) widely used in learning to compare. We characterize the instability of gradient-based explanations due to the additional compared object in SN, in contrast to architectures with a single input instance. We propose an optimization framework that derives global invariance from unlabeled data using self-learning to promote the stability of local explanations tailored to specific query-reference pairs. The optimization problems can be solved using gradient descent-ascent (GDA) for constrained optimization, or SGD for KL-divergence-regularized unconstrained optimization, with convergence proofs, especially when the objective functions are nonconvex due to the Siamese architecture. Quantitative results and case studies on tabular and graph data from neuroscience and chemical engineering show that the framework respects the self-learned invariance while robustly optimizing the faithfulness and simplicity of the explanation. We further demonstrate the convergence of GDA experimentally.
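The instability claim and the two solvers can be made concrete with small sketches. First, a minimal PyTorch illustration (not the authors' code; the encoder, cosine-similarity head, and random data are assumptions) of why input-gradient explanations of a Siamese network depend on the reference: the same query receives different saliency maps when compared against different references.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared encoder of a toy Siamese network (shapes are illustrative).
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

def similarity(query, reference):
    # Siamese comparison: one shared encoder, cosine-similarity head.
    return nn.functional.cosine_similarity(
        encoder(query), encoder(reference), dim=-1)

def gradient_explanation(query, reference):
    # Input-gradient saliency of the similarity score w.r.t. the query.
    q = query.clone().requires_grad_(True)
    similarity(q, reference).sum().backward()
    return q.grad.detach()

query = torch.randn(1, 16)
ref_a, ref_b = torch.randn(1, 16), torch.randn(1, 16)

sal_a = gradient_explanation(query, ref_a)
sal_b = gradient_explanation(query, ref_b)

# Both saliency maps explain the same query, yet they can disagree sharply,
# because the gradient also flows through the comparison with the reference.
agreement = nn.functional.cosine_similarity(sal_a, sal_b, dim=-1)
print(f"saliency agreement across references: {agreement.item():.3f}")
```

Second, a toy gradient descent-ascent loop for a generic constrained problem min_x f(x) s.t. g(x) ≤ 0, solved through its Lagrangian L(x, λ) = f(x) + λ·g(x); the quadratic objective and linear constraint below are stand-ins, not the paper's faithfulness and simplicity objectives.

```python
import torch

x = torch.randn(2, requires_grad=True)       # primal variable (e.g., an explanation mask)
lam = torch.tensor(0.0, requires_grad=True)  # dual variable (Lagrange multiplier)

f = lambda v: ((v - 1.0) ** 2).sum()         # stand-in objective
g = lambda v: v.sum() - 1.0                  # stand-in constraint g(x) <= 0

eta_x, eta_lam = 0.05, 0.05
for _ in range(1000):
    lagrangian = f(x) + lam * g(x)
    grad_x, grad_lam = torch.autograd.grad(lagrangian, (x, lam))
    with torch.no_grad():
        x -= eta_x * grad_x                  # descent step on the primal
        lam += eta_lam * grad_lam            # ascent step on the dual
        lam.clamp_(min=0.0)                  # keep the multiplier nonnegative

# For this toy problem the saddle point is x ≈ (0.5, 0.5), lam ≈ 1.
print(x.detach().numpy(), float(lam))
```

In practice a smaller ascent step (two-timescale GDA) is the usual stabilizer when the primal objective is nonconvex. The unconstrained alternative mentioned in the abstract folds the constraint into a KL-divergence penalty and runs plain SGD; in this hedged sketch the softmax-parameterized mask, importance signal, and uniform prior are placeholder choices, not the paper's formulation.

```python
import torch

torch.manual_seed(0)
scores = torch.randn(16)                     # stand-in feature-importance signal
theta = torch.zeros(16, requires_grad=True)  # logits of a soft explanation mask
prior = torch.full((16,), 1.0 / 16)          # placeholder uniform prior
opt = torch.optim.SGD([theta], lr=0.1)

for _ in range(300):
    m = torch.softmax(theta, dim=0)          # mask as a distribution over features
    loss_fit = -(m * scores).sum()           # stand-in faithfulness term
    kl = (m * (m / prior).log()).sum()       # KL(m || prior) regularizer
    loss = loss_fit + 0.1 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```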
