Explaining Relation Classification Models with Semantic Extents

by Lars Klöser, et al.

In recent years, large pretrained language models such as BERT and GPT have significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability currently complicates many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions. We introduce semantic extents, a concept for analyzing decision patterns in the relation classification task. A semantic extent is the most influential part of a text with respect to a classification decision, and our definition allows similar procedures to determine semantic extents for both humans and models. We provide an annotation tool and a software framework to determine semantic extents conveniently and reproducibly. Comparing human and model extents reveals that models tend to learn shortcut patterns from the data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development, increasing the reliability and security of natural language processing systems and taking an essential step toward applications in critical areas like healthcare and finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
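To make the idea concrete, here is a minimal sketch of how the most influential tokens for a classification decision might be approximated by greedy input reduction, a procedure related in spirit to the interpretability baselines the abstract mentions. Everything here is a hypothetical stand-in, not the paper's actual method: `toy_predict` imitates a relation classifier with two hard-coded cue words, and a real setup would query a trained model instead.

```python
from typing import Callable, List, Tuple

def semantic_extent(
    tokens: List[str],
    predict: Callable[[List[str]], Tuple[str, float]],
) -> List[str]:
    """Greedy input reduction: repeatedly drop the token whose removal
    best preserves the predicted label; the irreducible remainder
    approximates the model's most influential tokens."""
    label, _ = predict(tokens)
    current = list(tokens)
    reduced = True
    while reduced and len(current) > 1:
        reduced = False
        best = None  # (candidate tokens, confidence)
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            cand_label, conf = predict(candidate)
            # Keep the removal that preserves the label with the
            # highest remaining confidence.
            if cand_label == label and (best is None or conf > best[1]):
                best = (candidate, conf)
        if best is not None:
            current = best[0]
            reduced = True
    return current

# Hypothetical stand-in for a relation classifier: predicts
# "founded_by" only if both cue words appear in the input.
def toy_predict(tokens: List[str]) -> Tuple[str, float]:
    cues = {"founded", "Apple"}
    if cues <= set(tokens):
        return "founded_by", 1.0
    return "no_relation", 1.0

sentence = "Steve Jobs founded Apple in 1976".split()
print(semantic_extent(sentence, toy_predict))  # → ['founded', 'Apple']
```

The toy classifier makes the shortcut problem visible: the reduction keeps only the two cue words, so if a real model's extent likewise collapses to spurious surface cues while human annotators mark the actual relation arguments, the mismatch flags a shortcut pattern.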

