Knowledge-Infused Self Attention Transformers

06/23/2023
by Kaushik Roy, et al.

Transformer-based language models have achieved impressive success on a wide range of natural language processing tasks, owing to their ability to capture complex dependencies and contextual information through self-attention. They are not without limitations, however. These include hallucinations, where the model produces incorrect outputs with high confidence, and alignment issues, where it generates outputs that are unhelpful or unsafe for human users. Such failures stem from context that is implicit in, or missing from, the training data alone. To address this, researchers have explored augmenting these models with external knowledge from knowledge graphs to supply the missing context. However, the ad-hoc nature of existing methods makes it difficult to analyze how knowledge infusion affects the many moving parts, or components, of a transformer. This paper introduces a systematic method for infusing knowledge into different components of a transformer-based model. A modular framework is proposed to identify the specific components within the transformer architecture, such as the self-attention mechanism, the encoder layers, or the input embedding layer, at which knowledge infusion can be applied. Extensive experiments are conducted on the General Language Understanding Evaluation (GLUE) benchmark tasks, and the findings are reported. This systematic approach aims to enable more principled methods of incorporating knowledge into language model architectures.
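To make component-level knowledge infusion concrete, the sketch below shows one way knowledge could be injected into the self-attention component. It is a minimal, hypothetical illustration in PyTorch, not the paper's actual mechanism: the per-token knowledge embeddings, the k_know projection, and the learned mixing gate lambda_k are assumptions introduced here purely for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeInfusedSelfAttention(nn.Module):
    # Hypothetical single-head self-attention with an additive knowledge term.
    # Alongside the usual query/key/value projections over token embeddings,
    # a second score matrix is computed against knowledge-graph embeddings
    # aligned to each token, and the two are mixed by a learned gate.
    def __init__(self, d_model, d_knowledge):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.k_know = nn.Linear(d_knowledge, d_model)    # KG embeddings -> key space
        self.lambda_k = nn.Parameter(torch.tensor(0.5))  # text/knowledge mixing gate
        self.scale = d_model ** -0.5

    def forward(self, x, knowledge):
        # x:         (batch, seq_len, d_model)     token embeddings
        # knowledge: (batch, seq_len, d_knowledge) per-token KG embeddings
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        k_kg = self.k_know(knowledge)
        text_scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        know_scores = torch.matmul(q, k_kg.transpose(-2, -1)) * self.scale
        # Knowledge infusion: interpolate the two score matrices before softmax.
        scores = (1 - self.lambda_k) * text_scores + self.lambda_k * know_scores
        attn = F.softmax(scores, dim=-1)
        return torch.matmul(attn, v)

# Example usage with toy dimensions:
# layer = KnowledgeInfusedSelfAttention(d_model=768, d_knowledge=128)
# out = layer(torch.randn(2, 16, 768), torch.randn(2, 16, 128))  # (2, 16, 768)

The same interpolation idea could, in principle, be applied at other components the paper names, for example by adding knowledge-derived vectors to the input embedding layer or to intermediate encoder representations.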

