Performance Variability in Zero-Shot Classification

03/01/2021
by Matías Molina, et al.

Zero-shot classification (ZSC) is the task of learning predictors for classes not seen during training. Although the different methods in the literature are evaluated using the same class splits, little is known about their stability under different class partitions. In this work we show experimentally that ZSC performance exhibits strong variability under changing training setups. We propose the use of ensemble learning as an attempt to mitigate this phenomenon.
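The variance-reduction intuition behind the proposed ensemble can be illustrated with a minimal simulation. The numbers below are purely hypothetical (not from the paper): we draw per-split accuracies for several independent ZSC models over random class partitions, and compare the spread of a single model across splits with the spread of their average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accuracies of 5 independent ZSC models, each evaluated
# on 20 random seen/unseen class partitions (simulated, for illustration).
n_splits, n_models = 20, 5
accuracies = rng.normal(loc=0.55, scale=0.05, size=(n_splits, n_models))

# Variability of a single model across class partitions.
single_std = accuracies[:, 0].std()

# Simple ensemble: average the members' outputs per split
# (averaging accuracies here stands in for averaging class scores).
ensemble = accuracies.mean(axis=1)
ensemble_std = ensemble.std()

print(f"single-model std across splits: {single_std:.4f}")
print(f"ensemble std across splits:     {ensemble_std:.4f}")
```

Under the independence assumption baked into the simulation, averaging k members shrinks the standard deviation across partitions by roughly a factor of sqrt(k), which is the effect an ensemble is hoped to provide here.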


