On the Semantic Interpretability of Artificial Intelligence Models

07/09/2019
by Vivian S. Silva, et al.

Artificial Intelligence models are becoming increasingly powerful and accurate, supporting or even replacing human decision-making. But with increased power and accuracy comes higher complexity, making it hard for users to understand how a model works and what the reasons behind its predictions are. Humans must explain and justify their decisions, and so must the AI models that support them in this process, making semantic interpretability an emerging field of study. In this work, we look at interpretability from a broader point of view, going beyond the machine learning scope and covering different AI fields such as distributional semantics and fuzzy logic, among others. We examine and classify the models both by their nature and by how they introduce interpretability features, analyzing how each approach affects final users and pointing to gaps that still need to be addressed to provide more human-centered interpretability solutions.

