
The five Is: Key principles for interpretable and safe conversational AI

08/31/2021
by Mattias Wahde, et al.
Chalmers University of Technology

In this position paper, we present five key principles, namely interpretability, inherent capability to explain, independent data, interactive learning, and inquisitiveness, for the development of conversational AI that, unlike the currently popular black box approaches, is transparent and accountable. At present, there is growing concern about the use of black box statistical language models: while displaying impressive average performance, such systems are also prone to occasional spectacular failures, for which there is no clear remedy. In an effort to initiate a discussion on possible alternatives, we outline and exemplify how our five principles enable the development of conversational AI systems that are transparent and thus safer to use. We also present some of the challenges inherent in implementing those principles.
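To make the contrast with black box models concrete, here is a minimal, hypothetical sketch (not the authors' system) of a rule-based agent in which every reply can be traced to an explicit, human-readable rule, illustrating interpretability and the inherent capability to explain; when no rule matches, the agent asks a clarifying question rather than guessing, loosely echoing the inquisitiveness principle. All names and rules below are illustrative assumptions.

```python
# Hypothetical illustration only: a tiny rule-based dialogue agent whose every
# answer is accompanied by an explanation naming the rule that produced it.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str                       # human-readable identifier of the rule
    matches: Callable[[str], bool]  # predicate deciding whether the rule applies
    respond: Callable[[str], str]   # function producing the reply


class TransparentAgent:
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def reply(self, utterance: str) -> tuple[str, str]:
        """Return (answer, explanation); the explanation names the rule used."""
        for rule in self.rules:
            if rule.matches(utterance):
                return rule.respond(utterance), f"Answered using rule '{rule.name}'."
        # No rule matched: ask a clarifying question instead of guessing.
        return "Could you rephrase that?", "No rule matched; asked for clarification."


agent = TransparentAgent([
    Rule("greeting",
         lambda u: u.lower().startswith(("hi", "hello")),
         lambda u: "Hello! How can I help?"),
])

answer, explanation = agent.reply("Hello there")
print(answer)       # -> Hello! How can I help?
print(explanation)  # -> Answered using rule 'greeting'.
```

Because each response is tied to a named rule, a failure can be traced to, and corrected in, a specific component, which is precisely the kind of accountability the black box approaches lack.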
