IERL: Interpretable Ensemble Representation Learning – Combining Crowdsourced Knowledge and Distributed Semantic Representations

06/24/2023
by Yuxin Zi, et al.

Large Language Models (LLMs) encode the meanings of words as distributed semantics. Distributed semantics capture common statistical patterns among language tokens (words, phrases, and sentences) from large amounts of data. LLMs perform exceedingly well across General Language Understanding Evaluation (GLUE) tasks designed to test a model's understanding of the meanings of input tokens. However, recent studies have shown that LLMs tend to generate unintended, inconsistent, or wrong texts as outputs when processing inputs that were seen rarely during training or that are associated with diverse contexts (e.g., the well-known hallucination phenomenon in language generation tasks). Crowdsourced and expert-curated knowledge graphs such as ConceptNet are designed to capture the meaning of words from a compact set of well-defined contexts, so LLMs may benefit from leveraging such knowledge contexts to reduce inconsistencies in their outputs. We propose a novel ensemble learning method, Interpretable Ensemble Representation Learning (IERL), that systematically combines LLM and crowdsourced knowledge representations of input tokens. IERL has the distinct advantage over state-of-the-art (SOTA) methods of being interpretable by design (when was the LLM context used vs. when was the knowledge context used?), allowing the inputs to be scrutinized in conjunction with the model's parameters and facilitating analysis of inconsistent or irrelevant outputs. Although IERL is agnostic to the choice of LLM and crowdsourced knowledge source, we demonstrate our approach using BERT and ConceptNet. We report improved or competitive results with IERL across GLUE tasks over current SOTA methods, along with significantly enhanced model interpretability.
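To make the core idea concrete, the following is a minimal PyTorch sketch of an interpretable ensemble layer that mixes an LLM (e.g., BERT) token representation with a crowdsourced-knowledge (e.g., ConceptNet-style) representation through a learned per-token gate. This is an illustrative approximation, not the authors' IERL implementation; the class name, projection sizes, and the way knowledge embeddings are obtained are assumptions.

```python
# Sketch: gated ensemble of LLM and knowledge-graph token embeddings.
# Hypothetical design for illustration only; not the published IERL code.
import torch
import torch.nn as nn


class GatedEnsembleLayer(nn.Module):
    """Combine LLM and knowledge embeddings with an inspectable scalar gate."""

    def __init__(self, llm_dim: int = 768, kg_dim: int = 300, hidden_dim: int = 768):
        super().__init__()
        self.llm_proj = nn.Linear(llm_dim, hidden_dim)   # project BERT-style vectors
        self.kg_proj = nn.Linear(kg_dim, hidden_dim)     # project ConceptNet-style vectors
        self.gate = nn.Linear(2 * hidden_dim, 1)         # one gate value per token

    def forward(self, llm_emb: torch.Tensor, kg_emb: torch.Tensor):
        h_llm = self.llm_proj(llm_emb)                   # (batch, seq, hidden)
        h_kg = self.kg_proj(kg_emb)                      # (batch, seq, hidden)
        alpha = torch.sigmoid(self.gate(torch.cat([h_llm, h_kg], dim=-1)))
        mixed = alpha * h_llm + (1.0 - alpha) * h_kg     # convex combination
        # alpha near 1: the LLM context dominated for that token;
        # alpha near 0: the knowledge context dominated.
        return mixed, alpha


if __name__ == "__main__":
    layer = GatedEnsembleLayer()
    bert_vecs = torch.randn(2, 16, 768)   # e.g., BERT last hidden states
    kg_vecs = torch.randn(2, 16, 300)     # e.g., ConceptNet Numberbatch lookups
    fused, gate = layer(bert_vecs, kg_vecs)
    print(fused.shape, gate.shape)        # torch.Size([2, 16, 768]) torch.Size([2, 16, 1])
```

Inspecting the returned gate values shows, token by token, whether the LLM context or the knowledge context dominated the fused representation, which is the kind of built-in interpretability the abstract refers to.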


