Representation Reliability and Its Impact on Downstream Tasks

05/31/2023
by Young-Jin Park, et al.

Self-supervised pre-trained models extract general-purpose representations from data, and quantifying how reliable these representations are is crucial because many downstream models use them as input for their own tasks. To this end, we first introduce a formal definition of representation reliability: the representation for a given test input is reliable if downstream models built on top of it can consistently generate accurate predictions for that test point. Ideally, representation reliability would be estimated without knowing the downstream tasks a priori. We provide a negative result showing that existing frameworks for uncertainty quantification in supervised learning are not suitable for this purpose. As an alternative, we propose an ensemble-based method that quantifies representation reliability via neighborhood consistency across the representation spaces of various pre-trained models. The key insight is to use shared neighboring points as anchors to align the different representation spaces. We demonstrate through comprehensive numerical experiments that our method predicts representation reliability with high accuracy.
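The neighborhood-consistency idea in the abstract can be illustrated with a minimal sketch: embed the same reference points with each pre-trained model in an ensemble, find a query's k nearest neighbors in each representation space, and score the query by how much the neighbor sets agree across models. All function names here (`knn_indices`, `neighborhood_consistency`) and the use of Jaccard overlap as the agreement measure are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def knn_indices(reps, query_idx, k):
    # Illustrative helper (not from the paper): indices of the k nearest
    # neighbors of one point within a single model's representation space.
    dists = np.linalg.norm(reps - reps[query_idx], axis=1)
    dists[query_idx] = np.inf  # exclude the query point itself
    return set(np.argsort(dists)[:k])

def neighborhood_consistency(rep_spaces, query_idx, k=10):
    # rep_spaces: list of (n_points, dim_m) arrays, one per pre-trained model.
    # Dimensions may differ across models; only the *identities* of shared
    # neighboring points are compared, which is what lets them act as anchors
    # aligning otherwise incomparable representation spaces.
    neighbor_sets = [knn_indices(r, query_idx, k) for r in rep_spaces]
    scores = []
    for i in range(len(neighbor_sets)):
        for j in range(i + 1, len(neighbor_sets)):
            inter = len(neighbor_sets[i] & neighbor_sets[j])
            union = len(neighbor_sets[i] | neighbor_sets[j])
            scores.append(inter / union)  # Jaccard overlap of neighbor sets
    return float(np.mean(scores))  # higher = more consistent = more reliable
```

Under this sketch, a query whose neighborhoods coincide across all models scores 1.0, while a query embedded inconsistently scores near 0; the paper's actual estimator may weight or align neighborhoods differently.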

