Nearly Zero-Shot Learning for Semantic Decoding in Spoken Dialogue Systems

06/14/2018
by Lina M. Rojas Barahona, et al.

This paper presents two ways of dealing with scarce data in semantic decoding using N-best speech recognition hypotheses. First, features are learned with a deep learning architecture in which the weights for the unknown and known categories are jointly optimised; sharing weights injects prior knowledge into the unknown categories. Second, an unsupervised method is used to further tune the weights. This unsupervised tuning (i.e. risk minimisation) improves the F-measure when recognising nearly zero-shot data on the DSTC3 corpus. The method can be applied subject to two assumptions: the rank of the class marginal is assumed to be known, and the class-conditional scores of the classifier are assumed to follow a Gaussian distribution.
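The two assumptions behind the unsupervised tuning can be illustrated with a minimal sketch (not the paper's implementation): classifier scores are modelled as a two-component 1-D Gaussian mixture fitted by EM without labels, and the known rank of the class marginal is used to decide which component is the majority (known) class. All data, means, and priors below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic classifier scores. Class-conditional scores are Gaussian
# (assumption 2). The negative class is more frequent, so the rank of
# the class marginal, P(neg) > P(pos), is known (assumption 1).
neg = rng.normal(-1.0, 0.5, size=800)   # hypothetical majority-class scores
pos = rng.normal(1.5, 0.6, size=200)    # hypothetical minority-class scores
scores = np.concatenate([neg, pos])

def fit_gmm_1d(x, iters=200):
    """EM for a 2-component 1-D Gaussian mixture; no labels are used."""
    mu = np.array([x.min(), x.max()])        # spread-out initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: component responsibilities for each score
        dens = (pi / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

pi, mu, sigma = fit_gmm_1d(scores)

# Use the known rank of the class marginal to label the components:
# the component with the larger estimated prior is the majority class.
neg_k = int(np.argmax(pi))
pos_k = 1 - neg_k
```

Given the fitted component parameters, a decision threshold between the two Gaussians can then be chosen to minimise the estimated risk, which is the role the unsupervised tuning plays in the paper.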


