On the Versatile Uses of Partial Distance Correlation in Deep Learning

07/20/2022
by Xingjian Zhen, et al.

Comparing the functional behavior of neural network models, whether it is a single network over time or two (or more) networks during or post-training, is an essential step in understanding what they are learning (and what they are not), and in identifying strategies for regularization or efficiency improvements. Despite recent progress, e.g., comparing vision transformers to CNNs, systematic comparison of function, especially across different networks, remains difficult and is often carried out layer by layer. Approaches such as canonical correlation analysis (CCA) are applicable in principle, but have been used sparingly so far. In this paper, we revisit a (less widely known) measure from statistics, called distance correlation (and its partial variant), designed to evaluate correlation between feature spaces of different dimensions. We describe the steps necessary to deploy it for large-scale models. This opens the door to a surprising array of applications, ranging from conditioning one deep model with respect to another, to learning disentangled representations, to optimizing diverse models that are directly more robust to adversarial attacks. Our experiments suggest a versatile regularizer (or constraint) with many advantages, which avoids some of the common difficulties one faces in such analyses. Code is at https://github.com/zhenxingjian/Partial_Distance_Correlation.
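To make the measure concrete, below is a minimal PyTorch sketch of the (biased, V-statistic) sample distance correlation between two feature matrices of different widths, plus the standard composition for its partial variant. The function names, the eps stabilizer, and the example shapes are illustrative choices, not the repository's API; the paper's released code uses U-centered (bias-corrected) matrices, so treat this as a sketch of the idea rather than the authors' exact implementation.

import torch

def distance_matrix(x):
    # Pairwise Euclidean distance matrix for the rows of x (n x d).
    return torch.cdist(x, x, p=2)

def double_center(d):
    # Double-center a distance matrix: subtract row/column means, add grand mean.
    return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()

def distance_correlation(x, y, eps=1e-9):
    # Sample distance correlation between x (n x p) and y (n x q); p need not equal q.
    A = double_center(distance_matrix(x))
    B = double_center(distance_matrix(y))
    dcov2_xy = (A * B).mean()              # squared distance covariance
    dcov2_xx = (A * A).mean()
    dcov2_yy = (B * B).mean()
    dcor2 = dcov2_xy / (torch.sqrt(dcov2_xx * dcov2_yy) + eps)
    return torch.sqrt(dcor2.clamp(min=0.0))

def partial_distance_correlation(x, y, z, eps=1e-9):
    # Distance correlation of x and y after removing the part explained by z,
    # via the usual partial-correlation composition of pairwise correlations.
    r_xy = distance_correlation(x, y, eps)
    r_xz = distance_correlation(x, z, eps)
    r_yz = distance_correlation(y, z, eps)
    denom = torch.sqrt((1.0 - r_xz ** 2) * (1.0 - r_yz ** 2)) + eps
    return (r_xy - r_xz * r_yz) / denom

# Example: compare penultimate features of two models on the same batch.
feats_a = torch.randn(128, 512)   # hypothetical features from model A
feats_b = torch.randn(128, 768)   # hypothetical features from model B (different width)
print(distance_correlation(feats_a, feats_b))

Because everything above is built from standard differentiable tensor ops, either quantity can serve directly as a training-time regularizer: maximized to condition one model's features on another's, or minimized to encourage diverse, decorrelated models.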


Related research

06/19/2017
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
We propose a new technique, Singular Vector Canonical Correlation Analys...

03/23/2022
Dynamically-Scaled Deep Canonical Correlation Analysis
Canonical Correlation Analysis (CCA) is a method for feature extraction ...

06/16/2023
Group Orthogonalization Regularization For Vision Models Adaptation and Robustness
As neural networks become deeper, the redundancy within their parameters...

11/10/2021
Are Transformers More Robust Than CNNs?
Transformer emerges as a powerful tool for visual recognition. In additi...

06/23/2021
Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition
In this paper, we present Vision Permutator, a conceptually simple and d...

04/02/2017
Identifying networks with common organizational principles
Many complex systems can be represented as networks, and the problem of ...

09/23/2021
DeepRare: Generic Unsupervised Visual Attention Models
Human visual system is modeled in engineering field providing feature-en...
