Triple-to-Text: Converting RDF Triples into High-Quality Natural Languages via Optimizing an Inverse KL Divergence

05/25/2019
by Yaoming Zhu, et al.

A knowledge base is one of the main forms of representing information in a structured way. A knowledge base typically consists of Resource Description Framework (RDF) triples that describe entities and their relations. Generating natural-language descriptions of a knowledge base is an important task in NLP, which has been formulated as a conditional language generation task and tackled using the sequence-to-sequence framework. Current works mostly train the language models by maximum likelihood estimation, which tends to generate lousy sentences. In this paper, we argue that this problem of maximum likelihood estimation is intrinsic and generally cannot be remedied by changing the network structure. Accordingly, we propose a novel Triple-to-Text (T2T) framework, which approximately optimizes the inverse Kullback-Leibler (KL) divergence between the distributions of the real and generated sentences. Because inverse KL imposes a large penalty on fake-looking samples, the proposed method can significantly reduce the probability of generating low-quality sentences. Our experiments on three real-world datasets demonstrate that T2T can generate higher-quality sentences and outperforms baseline models on several evaluation metrics.
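To see why the direction of the KL divergence matters, consider a toy discrete example (not from the paper; the distributions below are hypothetical): forward KL, D(p‖q), which MLE minimizes, is large when the model q misses modes of the data distribution p, while inverse KL, D(q‖p), is large when q places mass where p has little, i.e. on fake-looking samples.

```python
import math

def kl(a, b):
    # KL(a || b) over a shared discrete support.
    # Assumes b[i] > 0 wherever a[i] > 0; terms with a[i] == 0 contribute 0.
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

# "Real" distribution p: two plausible sentences, one implausible one.
p = [0.49, 0.49, 0.02]

# Model A spreads mass onto the implausible sentence ("fake-looking" samples).
q_spread = [0.40, 0.30, 0.30]
# Model B concentrates on one real mode and avoids the implausible sentence.
q_mode = [0.90, 0.08, 0.02]

# Forward KL (the MLE objective) prefers the mass-spreading model A...
print("forward:", kl(p, q_spread), kl(p, q_mode))
# ...while inverse KL prefers model B, penalizing mass on implausible outputs.
print("inverse:", kl(q_spread, p), kl(q_mode, p))
```

Under forward KL the spreading model scores better, but under inverse KL the mode-concentrating model wins, which is the behavior the abstract attributes to T2T: low probability of generating low-quality sentences, at the cost of covering fewer modes.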
