Textual Explanations and Critiques in Recommendation Systems

05/15/2022
by Diego Antognini, et al.

Artificial intelligence and machine learning algorithms have become ubiquitous. Although they offer a wide range of benefits, their adoption in decision-critical fields is limited by their lack of interpretability, particularly with textual data. Moreover, with more data available than ever before, explaining automated predictions has become increasingly important. Users generally find it difficult to understand the underlying computational processes and to interact with the models, especially when a model produces incorrect outcomes, incorrect explanations, or both. This problem highlights the growing need for users to better understand a model's inner workings and to gain control over its actions. This dissertation addresses two fundamental challenges arising from this need. The first is explanation generation: inferring high-quality explanations from text documents in a scalable, data-driven manner. The second is making explanations actionable, which we refer to as critiquing. We examine these challenges in two important applications: natural language processing and recommendation tasks. Overall, we demonstrate that interpretability need not come at the cost of reduced performance in these two consequential applications, and our framework is applicable to other fields as well. This dissertation thus presents an effective means of closing the gap between promise and practice in artificial intelligence.

