Learning explanations that are hard to vary

In this paper, we investigate the principle that "good explanations are hard to vary" in the context of deep learning. We show that averaging gradients across examples – akin to a logical OR of patterns – can favor memorization and "patchwork" solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled. We then propose and experimentally validate a simple alternative algorithm based on a logical AND, that focuses on invariances and prevents memorization in a set of real-world tasks. Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.
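The logical-AND idea described in the abstract can be illustrated with a small sketch: instead of averaging per-example (or per-environment) gradients, keep only those gradient components whose sign agrees across environments, and zero out the rest. This is a minimal NumPy illustration in the spirit of that approach, not the authors' implementation; the function name `and_mask` and the `threshold` parameter are assumptions for this sketch.

```python
import numpy as np

def and_mask(grads, threshold=1.0):
    """Combine per-environment gradients with a sign-agreement mask.

    grads: array of shape (n_envs, n_params), one gradient per environment.
    threshold: fraction of environments that must share the same sign for a
               component to survive (1.0 = strict logical AND).
    Returns the masked average gradient, shape (n_params,).
    """
    signs = np.sign(grads)                                # (n_envs, n_params)
    # |sum of signs| / n_envs is 1.0 when all environments agree, 0.0 when
    # they split evenly; compare against the agreement threshold.
    agreement = np.abs(signs.sum(axis=0)) / len(grads)
    mask = (agreement >= threshold).astype(grads.dtype)
    # Average as usual, but zero out components the environments disagree on.
    return mask * grads.mean(axis=0)

# Two environments: components 0 and 2 agree in sign, component 1 does not.
g = np.array([[1.0, -2.0, 3.0],
              [2.0,  1.0, 4.0]])
print(and_mask(g))  # [1.5 0.  3.5]
```

With `threshold=1.0` the disagreeing middle component is dropped entirely, whereas plain averaging (a logical OR of patterns) would have kept it at -0.5; lowering the threshold interpolates back toward ordinary gradient averaging.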


Related research

Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience – an initial exploration (12/16/2020)
Artificial intelligence has made great strides since the deep learning r...

Evaluating Explanations: How much do explanations from the teacher aid students? (12/01/2020)
While many methods purport to explain predictions by highlighting salien...

Complementary Explanations for Effective In-Context Learning (11/25/2022)
Large language models (LLMs) have exhibited remarkable capabilities in l...

Detection Accuracy for Evaluating Compositional Explanations of Units (09/16/2021)
The recent success of deep learning models in solving complex problems a...

Logical Reasoning with Span Predictions: Span-level Logical Atoms for Interpretable and Robust NLI Models (05/23/2022)
Current Natural Language Inference (NLI) models achieve impressive resul...

SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization (06/04/2021)
A major bottleneck in the real-world applications of machine learning mo...

Stress and Adaptation: Applying Anna Karenina Principle in Deep Learning for Image Classification (02/22/2023)
Image classification with deep neural networks has reached state-of-art ...
