Intermediate Entity-based Sparse Interpretable Representation Learning

12/03/2022
by Diego Garcia-Olano, et al.

Interpretable entity representations (IERs) are sparse embeddings that are "human-readable" in that dimensions correspond to fine-grained entity types and values are predicted probabilities that a given entity is of the corresponding type. These methods perform well in zero-shot and low supervision settings. Compared to standard dense neural embeddings, such interpretable representations may permit analysis and debugging. However, while fine-tuning sparse, interpretable representations improves accuracy on downstream tasks, it destroys the semantics of the dimensions which were enforced in pre-training. Can we maintain the interpretable semantics afforded by IERs while improving predictive performance on downstream tasks? Toward this end, we propose Intermediate enTity-based Sparse Interpretable Representation Learning (ItsIRL). ItsIRL realizes improved performance over prior IERs on biomedical tasks, while maintaining "interpretability" generally and their ability to support model debugging specifically. The latter is enabled in part by the ability to perform "counterfactual" fine-grained entity type manipulation, which we explore in this work. Finally, we propose a method to construct entity type based class prototypes for revealing global semantic properties of classes learned by our model.
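To make the core idea concrete, below is a minimal, hedged sketch (not the authors' implementation) of what an interpretable entity representation looks like: a sparse vector whose dimensions correspond to fine-grained entity types and whose values are predicted type probabilities, plus an entity-type-based class prototype formed by averaging such vectors. The type vocabulary, function names, and probabilities here are illustrative assumptions.

```python
# Illustrative sketch of IER-style vectors and type-based class prototypes.
# TYPE_VOCAB, encode_entity, and build_prototype are hypothetical names;
# real systems predict probabilities over thousands of fine-grained types.
import numpy as np

# Hypothetical fine-grained entity-type vocabulary.
TYPE_VOCAB = ["protein", "gene", "disease", "drug", "cell_line", "organism"]

def encode_entity(type_probs: dict) -> np.ndarray:
    """Map {type: probability} predictions onto the fixed type vocabulary.
    Types not predicted stay at 0, keeping the vector sparse and readable."""
    vec = np.zeros(len(TYPE_VOCAB))
    for t, p in type_probs.items():
        vec[TYPE_VOCAB.index(t)] = p
    return vec

def build_prototype(entity_vectors: list) -> np.ndarray:
    """A simple class prototype: the mean of the IER vectors of entities
    assigned to one downstream class, read off as per-type scores."""
    return np.mean(np.stack(entity_vectors), axis=0)

# Example: two entities predicted to belong to the same downstream class.
e1 = encode_entity({"protein": 0.92, "gene": 0.35})
e2 = encode_entity({"protein": 0.88, "drug": 0.10})

proto = build_prototype([e1, e2])
for t, score in sorted(zip(TYPE_VOCAB, proto), key=lambda x: -x[1]):
    if score > 0:
        print(f"{t}: {score:.2f}")  # human-readable view of what the class "looks like"
```

Because each dimension is a named type, inspecting or editing individual entries (e.g., zeroing out a type to simulate the "counterfactual" manipulation described above) remains meaningful in a way it would not be for dense embeddings.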

Related research

Biomedical Interpretable Entity Representations (06/17/2021)
SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations (05/24/2023)
Lightly-supervised Representation Learning with Global Interpretability (05/29/2018)
Interpretable Entity Representations through Large-Scale Typing (04/30/2020)
Inducing Interpretability in Knowledge Graph Embeddings (12/10/2017)
Generative Entity Typing with Curriculum Learning (10/06/2022)
Finding Interpretable Concept Spaces in Node Embeddings using Knowledge Bases (10/11/2019)