A Base Camp for Scaling AI

12/23/2016
by C. J. C. Burges et al.

Modern statistical machine learning (SML) methods share a major limitation with early approaches to AI: there is no scalable way to adapt them to new domains. Human learning solves this in part by leveraging a rich, shared, updateable world model. Such scalability requires modularity: updating one part of the world model should not impact unrelated parts. We have argued that such modularity requires both "correctability" (so that errors can be corrected without introducing new ones) and "interpretability" (so that we can understand which components need correcting). To achieve this, one could adapt state-of-the-art SML systems to be interpretable and correctable, or one could see how far the simplest possible interpretable, correctable learning methods can take us, controlling the limitations of SML methods by applying them only where needed. Here we focus on the latter approach and investigate two main ideas: "Teacher Assisted Learning", which leverages crowdsourcing to learn language, and "Factored Dialog Learning", which factors the process of application development into roles that isolate the language competencies needed, enabling non-experts to quickly create new applications. We test these ideas in an "Automated Personal Assistant" (APA) setting with two scenarios: detecting user intent from a user-APA dialog, and creating a class of event reminder applications in which a non-expert "teacher" can then create specific apps. For the intent detection task, we use a dataset of a thousand labeled utterances from user dialogs with Cortana and show that our approach matches state-of-the-art SML methods while providing full transparency: the whole (editable) model can be summarized on one human-readable page. For the reminder app task, we ran small user studies to verify the efficacy of the approach.
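To make the "interpretable and correctable" idea concrete, here is a minimal illustrative sketch (not the paper's actual model) of an intent detector whose entire model is a short, human-readable rule list. The intents and keywords below are hypothetical; the point is that a teacher can read the whole model at a glance and edit one rule without affecting unrelated ones, which is the modularity property the abstract argues for.

```python
# A minimal, editable intent model: the whole "model" is this rule list.
# Rules and intent names are illustrative assumptions, not from the paper.
RULES = [
    # (intent, keywords that must all appear in the utterance)
    ("set_reminder", {"remind"}),
    ("set_alarm", {"wake", "alarm"}),
    ("get_weather", {"weather"}),
]

def detect_intent(utterance: str) -> str:
    """Return the first intent whose keywords all occur in the utterance.

    Correctability: fixing a misfiring rule means editing one tuple in
    RULES; no other rule's behavior changes.
    """
    tokens = set(utterance.lower().split())
    for intent, keywords in RULES:
        if keywords <= tokens:  # all keywords present
            return intent
    return "unknown"
```

For example, `detect_intent("please remind me to call mom")` returns `"set_reminder"`, and an utterance matching no rule falls through to `"unknown"` rather than being silently misclassified.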
