What can AI do for me? Evaluating Machine Learning Interpretations in Cooperative Play

10/23/2018
by Shi Feng, et al.

Machine learning is an important tool for decision making, but its ethical and responsible application requires rigorous vetting of its interpretability and utility: an understudied problem, particularly for natural language processing models. We design a task-specific evaluation of how well model interpretations improve human performance in a human-machine cooperative setting, grounded in a realistic task: playing a trivia game as a team. We also provide design guidance for human-in-the-loop natural language processing settings.
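The cooperative setup described above can be sketched as a simple loop: the question is revealed incrementally, the model offers a guess along with an interpretation (here, highlighted evidence words and a confidence score), and the human decides when to answer. This is a minimal illustrative sketch; the toy model, the scoring, and the buzz policy are assumptions, not the paper's actual interface or methods.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    guess: str          # the model's current answer
    confidence: float   # the model's confidence in that answer
    evidence: list      # words the model highlights as support (illustrative)

def reveal_incrementally(question, model, decide):
    """Reveal the question word by word; after each word the model offers a
    guess plus an interpretation, and the human policy `decide` chooses
    whether to buzz (answer now) or keep reading. Returns the final guess
    and how many words were seen."""
    words = question.split()
    interp = None
    for i in range(1, len(words) + 1):
        seen = " ".join(words[:i])
        interp = model(seen)
        if decide(interp, fraction_seen=i / len(words)):
            return interp.guess, i
    return interp.guess, len(words)

# Toy model: a hypothetical stand-in for a trained QA system whose
# confidence grows with the amount of question text seen.
def toy_model(text):
    conf = min(1.0, 0.2 + 0.1 * len(text.split()))
    evidence = [w for w in text.split() if w.istitle()]
    return Interpretation("Paris", conf, evidence)

# Human policy: buzz once the model is confident and shows supporting evidence.
def trusting_human(interp, fraction_seen):
    return interp.confidence > 0.7 and len(interp.evidence) > 0

answer, words_used = reveal_incrementally(
    "This European capital on the Seine hosts the Louvre museum",
    toy_model,
    trusting_human,
)
```

In the paper's framing, different interpretation methods change what `evidence` the human sees, so buzzing earlier with a correct answer (fewer `words_used`) is the measurable benefit of a good interpretation.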

Related research

- 04/20/2021: Problems and Countermeasures in Natural Language Processing Evaluation
- 05/18/2017: Learning Convolutional Text Representations for Visual Question Answering
- 03/20/2021: Local Interpretations for Explainable Natural Language Processing: A Survey
- 06/23/2022: A Review of Published Machine Learning Natural Language Processing Applications for Protocolling Radiology Imaging
- 07/18/2019: SentiMATE: Learning to play Chess through Natural Language Processing
- 09/16/2021: Humanly Certifying Superhuman Classifiers
- 05/23/2019: On modelling the emergence of logical thinking
