Extracting Multi-valued Relations from Language Models

07/06/2023
by Sneha Singhania, et al.

The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though multiple objects are often correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5% F1 score. Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task and pave the way for further research on extracting relational knowledge from latent language representations.
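To make the rank-then-select idea concrete, below is a minimal sketch of how one might rank candidate objects with a masked LM and then keep every object above a threshold. It uses the HuggingFace transformers API; the model (bert-base-cased), the prompt template, the candidate set, and the fixed 0.05 cutoff are all illustrative assumptions, not the paper's actual prompts, models, or learned thresholds.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative rank-then-select sketch; not the paper's exact setup.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def rank_candidates(subject: str, template: str, candidates: list[str]) -> dict[str, float]:
    """Rank step: score each single-token candidate object for a subject-relation prompt."""
    prompt = template.format(subject=subject, mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the [MASK] position and read off the LM's distribution there.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos[0]]
    probs = torch.softmax(logits, dim=-1)
    scores = {}
    for cand in candidates:
        ids = tokenizer(cand, add_special_tokens=False).input_ids
        if len(ids) == 1:  # keep the sketch to single-token objects
            scores[cand] = probs[ids[0]].item()
    return scores

def select(scores: dict[str, float], threshold: float) -> list[str]:
    """Select step: keep every object whose likelihood clears the relation's threshold."""
    return sorted((c for c, s in scores.items() if s >= threshold),
                  key=scores.get, reverse=True)

# Hypothetical usage on a multi-valued relation (a person's citizenships).
scores = rank_candidates(
    "Albert Einstein",
    "{subject} is a citizen of {mask}.",
    ["Germany", "Switzerland", "France", "Japan"],
)
print(select(scores, threshold=0.05))  # in the paper this threshold is learned per relation
```

The key contrast with single-object slot filling is in the select step: rather than taking only the top-1 (or a fixed top-k) ranked object, a relation-specific likelihood threshold lets the number of returned objects vary per subject, which is what a multi-valued relation requires.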
