Causality-aware Concept Extraction based on Knowledge-guided Prompting

05/03/2023
by Siyu Yuan, et al.

Concepts benefit natural language understanding but remain far from complete in existing knowledge graphs (KGs). Recently, pre-trained language models (PLMs) have been widely used in text-based concept extraction (CE). However, PLMs tend to mine co-occurrence associations from massive corpora as pre-trained knowledge rather than the real causal effect between tokens. As a result, this pre-trained knowledge confounds PLMs into extracting biased concepts based on spurious co-occurrence correlations, inevitably lowering precision. In this paper, through the lens of a Structural Causal Model (SCM), we propose equipping the PLM-based extractor with a knowledge-guided prompt as an intervention to alleviate concept bias. The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts. Our extensive experiments on representative multilingual KG datasets show that the proposed prompt effectively alleviates concept bias and improves the performance of PLM-based CE models. The code has been released at https://github.com/siyuyuan/KPCE.
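To make the idea concrete, below is a minimal sketch of knowledge-guided prompting for concept extraction, assuming the entity's topic is looked up in an existing KG and prepended to the query so the extractor is conditioned on it rather than on spurious co-occurrences. The topic dictionary, prompt template, and the generic extractive-QA pipeline are illustrative assumptions, not the paper's exact implementation (see the KPCE repository for that).

```python
# Sketch: condition a PLM-based extractor on the entity's KG topic so that
# spurious co-occurrence cues (e.g., "Jaguar" co-occurring with "car") are
# blocked by the knowledge-guided prompt.
from transformers import pipeline

# Hypothetical topic knowledge drawn from an existing KG.
ENTITY_TOPICS = {
    "Python": "programming language",
    "Jaguar": "animal",
}

def build_prompt(entity: str, topic: str) -> str:
    # Knowledge-guided prompt: inject the KG topic as the intervention.
    return f"What is the concept of the {topic} {entity}?"

def extract_concept(entity: str, text: str, qa) -> str:
    topic = ENTITY_TOPICS.get(entity, "entity")
    question = build_prompt(entity, topic)
    # Frame concept extraction as extractive QA over the entity's description.
    return qa(question=question, context=text)["answer"]

if __name__ == "__main__":
    qa = pipeline("question-answering")  # any extractive QA model works here
    text = ("The jaguar is a large cat species and the only living member "
            "of the genus Panthera native to the Americas.")
    print(extract_concept("Jaguar", text, qa))
```

In this sketch, the only change relative to a plain PLM-based extractor is the prompt itself: the KG-provided topic steers the model toward concepts consistent with the entity's true type.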


