Intelligible Artificial Intelligence

03/09/2018
by Daniel S. Weld, et al.

Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, so its behavior is often complex and difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. In order to trust their behavior, we must make it intelligible --- either by using inherently interpretable models or by developing methods that explain otherwise overwhelmingly complex decisions through local approximation, vocabulary alignment, and interactive dialog.
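The abstract names local approximation as one route to intelligibility: fit a simple, interpretable model to the black box's behavior in the neighborhood of a single decision. A minimal LIME-style sketch of that idea, assuming a toy black-box function (`black_box` and all other names here are illustrative, not from the paper), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: a nonlinear scoring function.
    return X[:, 0] ** 2 + 3 * X[:, 1]

def local_linear_explanation(f, x0, n_samples=500, scale=0.1):
    """Fit a weighted linear surrogate to f around the instance x0."""
    # Sample perturbations around the instance being explained.
    X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = f(X)
    # Weight samples by proximity to x0 (Gaussian kernel).
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))
    # Weighted least squares on centered inputs, with an intercept column.
    A = np.hstack([X - x0, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return coef[:-1].ravel(), coef[-1, 0]  # local feature weights, intercept

x0 = np.array([1.0, 0.0])
weights, intercept = local_linear_explanation(black_box, x0)
```

The returned weights approximate the black box's local sensitivity to each feature at `x0`, which is what makes the surrogate readable as an explanation even when the global model is not.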

