Knowledge Graph Guided Semantic Evaluation of Language Models For User Trust

05/08/2023
by Kaushik Roy, et al.

A fundamental question in natural language processing is: what kind of language structure and semantics does a language model capture? Graph formats such as knowledge graphs are easy to evaluate because they express language semantics and structure explicitly. This study evaluates the semantics encoded in self-attention transformers by leveraging explicit knowledge graph structures. We propose novel metrics that measure the reconstruction error incurred when graph path sequences from a knowledge graph are given to a self-attention transformer and the model's outputs are used to reproduce those same sequences. The opacity of language models has an immense bearing on societal issues of trust and explainable decision outcomes. Our findings suggest that language models are models of stochastic control processes for generating plausible language patterns; however, they do not ascribe object- and concept-level meaning to the learned stochastic patterns, such as the meaning described in knowledge graphs. Furthermore, to enable robust evaluation of concept understanding by language models, we construct and make public an augmented language understanding benchmark built on the General Language Understanding Evaluation (GLUE) benchmark. This has significant implications for application-level user trust, as stochastic patterns without a strong sense of meaning cannot be trusted in high-stakes applications.
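The reconstruction-error idea can be pictured with a small probe. The sketch below is a minimal illustration, assuming the HuggingFace transformers library, a bert-base-uncased masked language model, and a hypothetical serialized knowledge-graph path; the masking scheme and the token-level error rate are illustrative stand-ins, not the paper's exact metric definitions.

```python
# Minimal sketch: probe how well a masked LM reconstructs a knowledge-graph path.
# The example path, masking scheme, and error rate are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # assumed model; any masked LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# A knowledge-graph path serialized as a token sequence (hypothetical example).
kg_path = "acetaminophen treats headache headache is a symptom of influenza"

def path_reconstruction_error(path: str) -> float:
    """Mask each non-special token in turn and check whether the model recovers it.

    Returns the fraction of masked positions the model fails to reconstruct,
    used here as a simple proxy for a reconstruction-error metric.
    """
    enc = tokenizer(path, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    special_ids = set(tokenizer.all_special_ids)
    errors, total = 0, 0
    for pos, token_id in enumerate(input_ids.tolist()):
        if token_id in special_ids:
            continue
        masked = input_ids.clone().unsqueeze(0)
        masked[0, pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked,
                           attention_mask=enc["attention_mask"]).logits
        predicted = logits[0, pos].argmax().item()
        errors += int(predicted != token_id)
        total += 1
    return errors / max(total, 1)

if __name__ == "__main__":
    print(f"token-level reconstruction error: {path_reconstruction_error(kg_path):.2f}")
```

Under this reading, a high error rate on knowledge-graph paths, relative to generic text, would be consistent with the paper's conclusion that the model captures plausible surface patterns rather than the graph's concept-level semantics.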

Related research

06/18/2022
Can Language Models Capture Graph Semantics? From Graphs to Language Model and Vice-Versa
Knowledge Graphs are a great resource to capture semantic knowledge in t...

06/23/2023
Knowledge-Infused Self Attention Transformers
Transformer-based language models have achieved impressive success in va...

05/27/2021
Inspecting the concept knowledge graph encoded by modern language models
The field of natural language understanding has experienced exponential ...

06/24/2023
IERL: Interpretable Ensemble Representation Learning – Combining CrowdSourced Knowledge and Distributed Semantic Representations
Large Language Models (LLMs) encode meanings of words in the form of dis...

09/19/2022
Joint Language Semantic and Structure Embedding for Knowledge Graph Completion
The task of completing knowledge triplets has broad downstream applicati...

09/08/2022
Towards explainable evaluation of language models on the semantic similarity of visual concepts
Recent breakthroughs in NLP research, such as the advent of Transformer ...

09/15/2021
Matching with Transformers in MELT
One of the strongest signals for automated matching of ontologies and kn...
