
Intelligible Artificial Intelligence

03/09/2018
by Daniel S. Weld, et al.

Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. In order to trust their behavior, we must make it intelligible --- either by using inherently interpretable models or by developing methods for explaining otherwise overwhelmingly complex decisions by local approximation, vocabulary alignment, and interactive dialog.
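The "local approximation" route mentioned above can be illustrated with a minimal sketch: perturb the input around a single instance, query the black-box model, and fit a proximity-weighted linear surrogate whose slopes serve as local feature attributions. This is a LIME-style illustration, not the authors' own method; the `black_box` function, sampling scale, and kernel are all hypothetical choices made for the example.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear scoring function standing in
# for a complex classifier whose internals we cannot inspect.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_linear_explanation(predict, x0, n_samples=500, scale=0.1, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Sample perturbations of the instance with Gaussian noise.
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = predict(X)
    # Weight samples by closeness to x0 (RBF kernel).
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * scale ** 2))
    # Weighted least squares: intercept plus one slope per feature.
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[1:]  # local feature attributions (slopes)

x0 = np.array([0.0, 1.0])
attributions = local_linear_explanation(black_box, x0)
# Near x0 the true local sensitivities are cos(0) = 1 for the first
# feature and 2 * x1 = 2 for the second, so the fitted slopes should
# land close to those values.
```

The surrogate is only trusted near `x0`; farther away the linear model and the black box diverge, which is exactly the "local" caveat such explanations carry.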

Related research

06/16/2020: Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Nowadays, deep neural networks are widely used in mission critical syste...

03/09/2021: The AI Index 2021 Annual Report
Welcome to the fourth edition of the AI Index Report. This year we signi...

03/09/2021: When is it permissible for artificial intelligence to lie? A trust-based approach
Conversational Artificial Intelligence (AI) used in industry settings ca...

01/09/2018: EBIC: an artificial intelligence-based parallel biclustering algorithm for pattern discovery
In this paper a novel biclustering algorithm based on artificial intelli...

01/25/2022: ADAPT: An Open-Source sUAS Payload for Real-Time Disaster Prediction and Response with AI
Small unmanned aircraft systems (sUAS) are becoming prominent components...

10/11/2022: On Explainability in AI-Solutions: A Cross-Domain Survey
Artificial Intelligence (AI) increasingly shows its potential to outperf...

10/10/2019: The Quest for Interpretable and Responsible Artificial Intelligence
Artificial Intelligence (AI) provides many opportunities to improve priv...