Constructing Word-Context-Coupled Space Aligned with Associative Knowledge Relations for Interpretable Language Modeling

05/19/2023
by Fanyu Wang, et al.

As the foundation of current natural language processing methods, pre-trained language models have achieved excellent performance. However, the black-box structure of the deep neural networks in pre-trained language models severely limits the interpretability of the language modeling process. After revisiting the coupled requirements of deep neural representations and semantic logic in language modeling, a Word-Context-Coupled Space (W2CSpace) is proposed by introducing an alignment process between uninterpretable neural representations and interpretable statistical logic. Moreover, a clustering process is designed to connect word- and context-level semantics. Specifically, an associative knowledge network (AKN), regarded as interpretable statistical logic, is introduced into the alignment process for word-level semantics. Furthermore, the context-relative distance is employed as the semantic feature for the downstream classifier, which differs markedly from the uninterpretable semantic representations of current pre-trained models. Experiments for performance evaluation and interpretability analysis are conducted on several types of datasets, including SIGHAN, Weibo, and ChnSenti, and a novel evaluation strategy for the interpretability of machine learning models is also proposed. According to the experimental results, our language model achieves better performance and highly credible interpretability compared with related state-of-the-art methods.
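
To make the notion of a context-relative distance feature concrete, below is a minimal sketch of one plausible reading of the abstract: word-level representations are compared against clustered context centroids, and the resulting distances serve as interpretable features for a downstream classifier. The function name, array shapes, and the choice of Euclidean distance are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def context_relative_distances(token_vecs, context_centroids):
    """Distance from each token vector to each context centroid.

    token_vecs: (num_tokens, dim) word-level representations.
    context_centroids: (num_contexts, dim) cluster centers, e.g. obtained
        by clustering context-level representations.
    Returns an interpretable (num_tokens, num_contexts) feature matrix,
    where each entry can be traced back to a specific context cluster.
    """
    diffs = token_vecs[:, None, :] - context_centroids[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# Toy usage: 4 token vectors and 3 context clusters in an 8-dim space.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
centroids = rng.normal(size=(3, 8))
features = context_relative_distances(tokens, centroids)
print(features.shape)  # (4, 3): one distance per (token, context) pair
```

Under this reading, the classifier consumes distances to named context clusters rather than opaque dense embeddings, which is what would make the resulting features inspectable.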
