Generalization Beyond Feature Alignment: Concept Activation-Guided Contrastive Learning

11/13/2022
by Yibing Liu, et al.

Learning invariant representations via contrastive learning has achieved state-of-the-art performance in domain generalization (DG). Despite this success, we find in this paper that its core learning strategy, feature alignment, can heavily hinder model generalization. Inspired by recent progress in neuron interpretability, we characterize this problem from a neuron-activation view. Specifically, by treating feature elements as neuron activation states, we show that conventional alignment methods tend to degrade the diversity of learned invariant features, since they indiscriminately minimize all neuron activation differences. This ignores the rich relations among neurons: many of them identify the same visual concepts even though they activate differently. Building on this finding, we present a simple yet effective approach, Concept Contrast (CoCo), which relaxes element-wise feature alignment by contrasting the high-level concepts encoded in neurons. The approach is highly flexible and can be integrated into any contrastive method in DG. Through extensive experiments, we demonstrate that CoCo promotes the diversity of feature representations and consistently improves model generalization on the DomainBed benchmark.
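To make the contrast in the abstract concrete, here is a minimal NumPy sketch of the difference between element-wise feature alignment and a concept-level relaxation. This is an illustration only, not the paper's actual CoCo objective: the function names and the idea of pooling activations over predefined concept groups are assumptions for exposition (the paper derives concepts from neuron interpretability, and the real method is contrastive rather than a simple pooled MSE).

```python
import numpy as np

def elementwise_alignment_loss(f_a, f_b):
    # Conventional alignment: penalize every per-neuron activation difference.
    # Two features that encode the same concepts with different neurons
    # still incur a large loss, which suppresses representational diversity.
    return np.mean((f_a - f_b) ** 2)

def concept_contrast_loss(f_a, f_b, concept_groups):
    # Hypothetical concept-level relaxation: pool activations within each
    # group of neurons assumed to encode the same visual concept, then
    # align only the pooled concept responses, not individual neurons.
    pooled_a = np.array([f_a[idx].mean() for idx in concept_groups])
    pooled_b = np.array([f_b[idx].mean() for idx in concept_groups])
    return np.mean((pooled_a - pooled_b) ** 2)

# Two features that disagree neuron-by-neuron but agree at the concept level:
f_a = np.array([1.0, 0.0, 0.0, 1.0])
f_b = np.array([0.0, 1.0, 1.0, 0.0])
groups = [[0, 1], [2, 3]]  # neurons 0,1 share a concept; so do 2,3

print(elementwise_alignment_loss(f_a, f_b))        # 1.0 (heavily penalized)
print(concept_contrast_loss(f_a, f_b, groups))     # 0.0 (no concept mismatch)
```

The example shows why element-wise alignment can be overly strict: it treats the two features as maximally misaligned, while a concept-level view sees them as equivalent.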

Related research

06/05/2023 · Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
The out-of-distribution (OOD) problem generally arises when neural netwo...

08/22/2022 · Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Graph neural networks (GNNs) are highly effective on a variety of graph-...

07/25/2022 · Domain-invariant Feature Exploration for Domain Generalization
Deep learning has achieved great success in the past few years. However,...

04/19/2023 · Disentangling Neuron Representations with Concept Vectors
Mechanistic interpretability aims to understand how models store represe...

10/28/2018 · Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation
It is widely believed that learning good representations is one of the m...

03/10/2023 · Neuron Structure Modeling for Generalizable Remote Physiological Measurement
Remote photoplethysmography (rPPG) technology has drawn increasing atten...

05/21/2021 · Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design
This document introduces a hypothesized framework on the functional natu...
