
SURF: Improving classifiers in production by learning from busy and noisy end users

by Joshua Lockhart et al.

Supervised learning classifiers inevitably make mistakes in production, perhaps mislabeling an email, or flagging an otherwise routine transaction as fraudulent. It is vital that the end users of such a system are provided with a means of relabeling data points that they deem to have been mislabeled. The classifier can then be retrained on the relabeled data points in the hope of improving performance. To reduce noise in this feedback data, well-known algorithms from the crowdsourcing literature can be employed. However, the feedback setting poses a new challenge: what should we do in the case of user non-response? If a user provides no feedback on a label, it is dangerous to assume they implicitly agree: a user can be busy, lazy, or no longer a user of the system! We show that conventional crowdsourcing algorithms struggle in this user feedback setting, and present a new algorithm, SURF, that can cope with this non-response ambiguity.
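The non-response ambiguity can be made concrete with a toy aggregation sketch. The code below is not SURF (the paper's algorithm is not specified in this abstract); it is a hypothetical illustration of why the naive convention "silence means agreement" can drown out the explicit corrections that a few users do provide, compared with dropping non-responses before a majority vote.

```python
from collections import Counter

def aggregate(model_label, feedback, silence_is_agreement):
    """Aggregate per-user feedback for a single data point.

    feedback maps each user to a relabel, or None for non-response.
    If silence_is_agreement is True, each None is counted as a vote
    for the model's current label; otherwise non-responses are dropped.
    """
    votes = []
    for user, label in feedback.items():
        if label is None:
            if silence_is_agreement:
                votes.append(model_label)
        else:
            votes.append(label)
    if not votes:
        return model_label  # no explicit evidence either way
    return Counter(votes).most_common(1)[0][0]

# The model flagged a routine transaction as fraud; two users
# corrected the label, while five busy users said nothing.
feedback = {"u1": "ok", "u2": "ok", "u3": None, "u4": None,
            "u5": None, "u6": None, "u7": None}

naive = aggregate("fraud", feedback, silence_is_agreement=True)
safe = aggregate("fraud", feedback, silence_is_agreement=False)
print(naive)  # -> fraud  (silence outvotes the real corrections)
print(safe)   # -> ok     (only explicit feedback is counted)
```

Under the naive convention the five silent users outvote the two who actually responded, so the wrong label survives retraining; this is the failure mode the abstract attributes to conventional crowdsourcing algorithms in the feedback setting.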
