LLM2Loss: Leveraging Language Models for Explainable Model Diagnostics

05/04/2023
by Shervin Ardeshir, et al.

Trained on vast amounts of data, large language models (LLMs) have achieved unprecedented success and generalization in modeling fairly complex textual inputs in the abstract space, making them powerful tools for zero-shot learning. This capability extends to other modalities, such as the visual domain, through cross-modal foundation models such as CLIP, so semantically meaningful representations can be extracted from visual inputs. In this work, we leverage this capability and propose an approach that provides semantic insight into a model's patterns of failure and bias. Given a black-box model, its training data, and a task definition, we first calculate the model's task-related loss for each data point. We then extract a semantically meaningful representation for each training data point (such as CLIP embeddings from its visual encoder) and train a lightweight diagnosis model that maps this representation to the data point's task loss. We show that an ensemble of such lightweight models can be used to generate insights into the performance of the black-box model, identifying its patterns of failure and bias.
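The pipeline the abstract describes can be summarized in a short sketch. This is a minimal illustration, not the authors' implementation: it assumes OpenAI's `clip` package for the visual encoder, uses scikit-learn's Ridge regression as a stand-in for the lightweight diagnosis model, and builds the ensemble by bootstrap resampling (the paper only says "an ensemble of such lightweight models"). The per-sample losses of the black-box model are a hypothetical placeholder computed beforehand.

```python
# Minimal sketch of an LLM2Loss-style diagnosis pipeline (assumptions noted above).
import numpy as np
import torch
import clip                      # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image
from sklearn.linear_model import Ridge

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

def clip_embedding(image_path: str) -> np.ndarray:
    """Extract a semantically meaningful representation of one training image."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    with torch.no_grad():
        emb = clip_model.encode_image(image)
    return emb.squeeze(0).cpu().numpy()

def fit_diagnosis_ensemble(embeddings: np.ndarray,
                           losses: np.ndarray,
                           n_models: int = 5,
                           seed: int = 0) -> list[Ridge]:
    """Fit an ensemble of lightweight models mapping embedding -> task loss.

    `losses[i]` is the black-box model's precomputed task loss on the
    data point whose CLIP embedding is `embeddings[i]`. Bootstrap
    resampling here is an assumption, not the paper's stated recipe.
    """
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(losses), size=len(losses), replace=True)
        models.append(Ridge(alpha=1.0).fit(embeddings[idx], losses[idx]))
    return models

# Usage (paths and losses are placeholders):
# X = np.stack([clip_embedding(p) for p in image_paths])
# ensemble = fit_diagnosis_ensemble(X, per_sample_losses)
# predicted_loss = np.mean([m.predict(X) for m in ensemble], axis=0)
```

Inputs whose predicted loss is high form semantically coherent groups the black-box model struggles with, which is the sense in which the diagnosis models surface failure and bias patterns.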


