
The five Is: Key principles for interpretable and safe conversational AI

by Mattias Wahde, et al.
Chalmers University of Technology

In this position paper, we present five key principles, namely interpretability, inherent capability to explain, independent data, interactive learning, and inquisitiveness, for the development of conversational AI that, unlike the currently popular black-box approaches, is transparent and accountable. At present, there is growing concern about the use of black-box statistical language models: while displaying impressive average performance, such systems are also prone to occasional spectacular failures, for which there is no clear remedy. In an effort to initiate a discussion on possible alternatives, we outline and exemplify how our five principles enable the development of conversational AI systems that are transparent and thus safer to use. We also present some of the challenges inherent in the implementation of those principles.

