Semantic Supervision: Enabling Generalization over Output Spaces

02/26/2022
by Austin W. Hanjie et al.

In this paper, we propose Semantic Supervision (SemSup), a unified paradigm for training classifiers that generalize over output spaces. In contrast to standard classification, which treats classes as discrete symbols, SemSup represents them as dense vector features obtained from descriptions of classes (e.g., "The cat is a small carnivorous mammal"). This allows the output space to be unbounded (in the space of descriptions) and enables models to generalize over both unseen inputs and unseen outputs (e.g., "The aardvark is a nocturnal burrowing mammal with long ears"). Specifically, SemSup enables four types of generalization: to (1) unseen class descriptions, (2) unseen classes, (3) unseen super-classes, and (4) unseen tasks. Through experiments on four classification datasets across two variants (multi-class and multi-label), two input modalities (text and images), and two output description modalities (text and JSON), we show that our SemSup models significantly outperform standard supervised models and existing models that leverage word embeddings over class names. For instance, our model outperforms baselines by 40 points on unseen descriptions and classes on a news categorization dataset (RCV1). SemSup can serve as a pathway for scaling neural models to large unbounded output spaces, enabling better generalization and model reuse for unseen tasks and domains.
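The abstract's description of SemSup, scoring inputs against embedded class descriptions rather than a fixed output layer, maps naturally onto a bi-encoder. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' exact architecture: the encoders, embedding dimension, and loss choices are placeholders for demonstration.

```python
# Minimal bi-encoder sketch of the SemSup idea (an illustrative assumption,
# not the paper's exact architecture). Classes are scored by similarity
# between an input embedding and embeddings of class *descriptions*, so new
# descriptions or classes can be scored at inference time without retraining
# an output layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemSupClassifier(nn.Module):
    def __init__(self, input_encoder: nn.Module, desc_encoder: nn.Module):
        super().__init__()
        self.input_encoder = input_encoder  # maps inputs -> (batch, d)
        self.desc_encoder = desc_encoder    # maps descriptions -> (num_classes, d)

    def forward(self, inputs, descriptions):
        x = self.input_encoder(inputs)       # (batch, d)
        c = self.desc_encoder(descriptions)  # (num_classes, d)
        return x @ c.t()                     # (batch, num_classes) logits

# Toy usage with linear encoders over pre-featurized inputs/descriptions.
d = 128
model = SemSupClassifier(nn.Linear(300, d), nn.Linear(768, d))
inputs = torch.randn(8, 300)   # 8 featurized inputs
descs = torch.randn(5, 768)    # featurized descriptions of 5 classes
logits = model(inputs, descs)  # (8, 5)

# Multi-class variant: cross-entropy over description logits.
mc_loss = F.cross_entropy(logits, torch.randint(0, 5, (8,)))
# Multi-label variant: an independent sigmoid per description.
ml_loss = F.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (8, 5)).float())
```

Because the output layer is replaced by description embeddings, swapping in a new set of descriptions at test time is enough to score unseen classes, which is the mechanism behind the four generalization settings listed above.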

