
Intelligible Artificial Intelligence

by Daniel S. Weld et al.

Because Artificial Intelligence (AI) software uses techniques such as deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, its behavior is often complex and difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. To trust their behavior, we must make it intelligible: either by using inherently interpretable models or by developing methods that explain otherwise overwhelmingly complex decisions through local approximation, vocabulary alignment, and interactive dialog.
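One of the explanation strategies mentioned above, local approximation, fits a simple interpretable model to a black box's behavior in the neighborhood of a single decision. The following is a minimal sketch of that idea, not the authors' method: the black-box function, the perturbation scale, and the proximity weighting are all illustrative assumptions.

```python
import numpy as np

np.random.seed(0)

# Hypothetical black-box model: a nonlinear function standing in
# for an opaque classifier or regressor (an assumption for this sketch).
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

# Instance whose prediction we want to explain.
x0 = np.array([0.5, 1.0])

# Sample perturbations around x0 and query the black box.
n, sigma = 500, 0.1
X = x0 + np.random.normal(scale=sigma, size=(n, 2))
y = black_box(X)

# Weight samples by proximity to x0, so the surrogate is faithful locally.
weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))

# Fit an interpretable linear surrogate y ~ a·x + b by weighted least squares.
A = np.hstack([X, np.ones((n, 1))])
w = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(A * w, y * w.ravel(), rcond=None)

# The surrogate's coefficients approximate the local sensitivities:
# here the true gradient is (cos(0.5), 2.0) ~ (0.88, 2.0).
print(coef[:2])
```

The surrogate's two coefficients summarize, in human-readable form, how each input feature influences this particular decision, which is the kind of intelligibility local approximation aims for.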

