Transferred Discrepancy: Quantifying the Difference Between Representations

07/24/2020
by Yunzhen Feng, et al.

Understanding what information neural networks capture is an essential problem in deep learning, and studying whether different models capture similar features is an initial step toward this goal. Previous works sought to define metrics over feature matrices to measure the difference between two models. However, different metrics sometimes lead to contradictory conclusions, and there is no consensus on which metric is suitable in practice. In this work, we propose a novel metric that goes beyond previous approaches. Since one of the most practical uses of learned representations is to apply them to downstream tasks, we argue that the metric should be designed on the same principle. To that end, we introduce the transferred discrepancy (TD), a new metric that defines the difference between two representations based on their downstream-task performance. Through an asymptotic analysis, we show how TD correlates with downstream tasks and why metrics should be defined in such a task-dependent fashion. We also show that, under specific conditions, TD is closely related to previous metrics. Our experiments show that TD provides fine-grained information for varied downstream tasks, and that models trained from different initializations do not learn the same features in terms of downstream-task predictions. TD can also be used to evaluate the effectiveness of different training strategies: for example, models trained with data augmentations that improve generalization capture more similar features in terms of TD, while those trained with augmentations that hurt generalization do not. This suggests that a training strategy leading to more robust representations also yields models that generalize better.
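
As a rough illustration of the task-dependent idea (not the paper's exact formulation of TD), the sketch below compares two representations by fitting a linear probe on each for a shared downstream task and measuring how often the resulting predictors disagree. The feature matrices, labels, and the disagreement score are toy placeholders chosen for this example.

```python
# Minimal sketch of a transferred-discrepancy-style comparison (assumed form;
# the paper's exact definition may differ). Given two feature matrices for the
# same inputs, fit a downstream probe on each and measure how much their
# predictions disagree on that task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for representations extracted from two models on the same data.
n, d1, d2 = 1000, 64, 64
labels = rng.integers(0, 2, size=n)                       # downstream labels
features_a = rng.normal(size=(n, d1)) + labels[:, None]   # model A features
features_b = rng.normal(size=(n, d2)) + labels[:, None]   # model B features

# Fit a linear probe for the downstream task on each representation.
probe_a = LogisticRegression(max_iter=1000).fit(features_a, labels)
probe_b = LogisticRegression(max_iter=1000).fit(features_b, labels)

# Disagreement between the two downstream predictors on the same inputs
# (lower means the representations behave more similarly for this task).
pred_a = probe_a.predict(features_a)
pred_b = probe_b.predict(features_b)
disagreement = np.mean(pred_a != pred_b)
print(f"downstream-task disagreement: {disagreement:.3f}")
```

Because the score is computed through a downstream predictor, changing the task (or the probe family) can change the verdict, which is exactly the task-dependence the abstract argues for.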
