
Ranking Creative Language Characteristics in Small Data Scenarios

10/23/2020
by Julia Siekiera, et al.

The ability to rank creative natural language provides an important general tool for downstream language understanding and generation. However, current deep ranking models require substantial amounts of labeled data that are difficult and expensive to obtain for different domains, languages and creative characteristics. A recent neural approach, the DirectRanker, promises to reduce the amount of training data needed, but its application to text has not been fully explored. We therefore adapt the DirectRanker to provide a new deep model for ranking creative language with small data. We compare DirectRanker with a Bayesian approach, Gaussian process preference learning (GPPL), which has previously been shown to work well with sparse data. Our experiments with sparse training data show that while the performance of standard neural ranking approaches collapses with small training datasets, DirectRanker remains effective. We find that combining DirectRanker with GPPL increases performance across different settings by leveraging the complementary benefits of both models. Our combined approach outperforms the previous state of the art on humor and metaphor novelty tasks, increasing Spearman's ρ by 14% on average.
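The abstract does not spell out the architecture, but a DirectRanker-style model is a pairwise ranker: a shared feature network scores each text, and an antisymmetric output compares the two scores. The sketch below is an illustration only, assuming pre-computed text embeddings as input; the embedding dimension, layer sizes, loss, and toy training loop are assumptions, not the authors' setup.

import torch
import torch.nn as nn

class PairwiseRanker(nn.Module):
    """Minimal DirectRanker-style pairwise ranking sketch (not the authors' code)."""

    def __init__(self, input_dim: int = 300, hidden_dim: int = 64):
        super().__init__()
        # Shared feature network applied to both documents of a pair.
        self.features = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Single output unit without bias, so the comparison below is antisymmetric.
        self.out = nn.Linear(hidden_dim, 1, bias=False)

    def score(self, x: torch.Tensor) -> torch.Tensor:
        # Scalar score used at inference time to sort individual documents.
        return self.out(self.features(x))

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Antisymmetric pairwise output: o(x1, x2) = -o(x2, x1).
        return torch.tanh(self.score(x1) - self.score(x2))


if __name__ == "__main__":
    # Toy training loop on random "document" vectors; real inputs would be text embeddings.
    model = PairwiseRanker()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x1, x2 = torch.randn(32, 300), torch.randn(32, 300)
    target = torch.ones(32, 1)  # label +1: the first document is preferred over the second
    for _ in range(10):
        optimizer.zero_grad()
        loss = ((model(x1, x2) - target) ** 2).mean()  # simple squared loss on the pairwise output
        loss.backward()
        optimizer.step()

At test time, individual documents can be sorted by the scalar score(x), and the induced ranking compared against gold judgments with Spearman's ρ (for example via scipy.stats.spearmanr), which is the evaluation measure reported in the abstract.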



04/01/2016
Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding
The goal of this paper is to use multi-task learning to efficiently scal...

09/14/2016
Transliteration in Any Language with Surrogate Languages
We introduce a method for transliteration generation that can produce tr...

11/04/2021
CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
Most recent progress in natural language understanding (NLU) has been dr...

06/06/2018
Finding Convincing Arguments Using Scalable Bayesian Preference Learning
We introduce a scalable Bayesian preference learning method for identify...

02/06/2021
Jointly Improving Language Understanding and Generation with Quality-Weighted Weak Supervision of Automatic Labeling
Neural natural language generation (NLG) and understanding (NLU) models ...

03/31/2022
Domain Adaptation for Sparse-Data Settings: What Do We Gain by Not Using Bert?
The practical success of much of NLP depends on the availability of trai...

10/05/2019
Content-Based Features to Rank Influential Hidden Services of the Tor Darknet
The unevenness importance of criminal activities in the onion domains of...