Saliency maps can explain a neural model's prediction by identifying imp...
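As a point of reference only (this is not the method described in the abstract above), a plain gradient-based saliency map for a generic differentiable classifier can be sketched as follows; the PyTorch model, input tensor, and function names are illustrative assumptions.

```python
import torch

def gradient_saliency(model, inputs, target_class):
    """Absolute gradient of the target-class score w.r.t. each input feature."""
    # Hypothetical helper; `model` is any differentiable PyTorch classifier.
    inputs = inputs.clone().detach().requires_grad_(True)
    scores = model(inputs)                    # (batch, num_classes)
    scores[:, target_class].sum().backward()  # d score_target / d inputs
    return inputs.grad.abs()                  # larger value = more influential feature
```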
In the language domain, as in other domains, neural explainability takes...
Definition Extraction systems are a valuable knowledge source for both h...
Amid a discussion about Green AI in which we see explainability neglecte...
Integrated Gradients (IG) and PatternAttribution (PA) are two establishe...
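For orientation, the textbook Integrated Gradients attribution (a Riemann-sum approximation of the path integral from a baseline to the input) might be sketched as below; this is a generic illustration, not the paper's implementation, and `model`, `x`, and `baseline` are assumed placeholders.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i along the path."""
    # Hypothetical sketch; assumes `model` accepts a batch of inputs shaped like `x`.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)      # straight line from baseline to x
    path.requires_grad_(True)
    scores = model(path)[:, target_class]          # target score at every path point
    grads = torch.autograd.grad(scores.sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)      # average gradient along the path
```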
Pre-trained transformer language models (TLMs) have recently refashioned...
We explore to what extent knowledge about the pre-trained language model...
Representations in the hidden layers of Deep Neural Networks (DNN) are o...
Distributed word vector spaces are considered hard to interpret, which hi...
Evaluating translation models is a trade-off between effort and detail. ...
PatternAttribution is a recent method, introduced in the vision domain, ...
In this article, we present a graph-based method using a cubic template ...