
- Contrastive Explanations for Model Interpretability
  Contrastive explanations clarify why an event occurred in contrast to an...
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
  Trust is a central component of the interaction between people and AI, i...
- Exposing Shallow Heuristics of Relation Extraction Models with Challenge Data
  The process of collecting and annotating training data may introduce dis...
- Aligning Faithful Interpretations with their Social Attribution
  We find that the requirement of model interpretations to be faithful is ...
- When Bert Forgets How To POS: Amnesic Probing of Linguistic Properties and MLM Predictions
  A growing body of work makes use of probing in order to investigate the ...
- Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
  With the growing popularity of deep-learning based NLP models, comes a n...
- Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning
  We consider the situation in which a user has collected a small set of d...
- Neural network gradient-based learning of black-box function interfaces
  Deep neural networks work well at approximating complicated functions wh...
- Understanding Convolutional Neural Networks for Text Classification
  We present an analysis into the inner workings of Convolutional Neural N...
- Estimate and Replace: A Novel Approach to Integrating Deep Neural Networks with Existing Applications
  Existing applications include a huge amount of knowledge that is out of ...