TagRuler: Interactive Tool for Span-Level Data Programming by Demonstration

by   Dongjin Choi, et al.

Despite rapid developments in the field of machine learning research, collecting high-quality labels for supervised learning remains a bottleneck for many applications. This difficulty is exacerbated by the fact that state-of-the-art models for NLP tasks are becoming deeper and more complex, often increasing the amount of training data required even for fine-tuning. Weak supervision methods, including data programming, address this problem and reduce the cost of label collection by using noisy label sources for supervision. However, until recently, data programming was only accessible to users who knew how to program. To bridge this gap, the Data Programming by Demonstration (DPBD) framework was proposed to facilitate the automatic creation of labeling functions based on a few examples labeled by a domain expert. This framework has proven successful for generating high-accuracy labeling models for document classification. In this work, we extend the DPBD framework to span-level annotation tasks, arguably one of the most time-consuming NLP labeling tasks. We built a novel tool, TagRuler, that makes it easy for annotators to build span-level labeling functions without programming and encourages them to explore trade-offs between different labeling models and active learning strategies. We empirically demonstrated that an annotator could achieve a higher F1 score using the proposed tool compared to manual labeling for different span-level annotation tasks.
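To give a concrete sense of what a span-level labeling function looks like, here is a minimal sketch in plain Python. The function name and span representation are illustrative assumptions, not TagRuler's actual API: the idea is simply that a labeling function maps a sentence to noisy (start, end, label) span votes, which a labeling model can later aggregate across many such functions.

```python
import re

def lf_year_span(text):
    """Illustrative span-level labeling function (not TagRuler's API):
    heuristically tag any 4-digit year-like token as a DATE span.
    Returns noisy (start, end, label) votes over character offsets."""
    return [(m.start(), m.end(), "DATE")
            for m in re.finditer(r"\b(19|20)\d{2}\b", text)]

spans = lf_year_span("TagRuler was presented in 2021.")
print(spans)  # [(26, 30, 'DATE')]
```

In a data programming pipeline, many such weak heuristics are combined by a labeling model that estimates their accuracies and resolves their conflicts; TagRuler's contribution is letting annotators create these functions by demonstration rather than by writing code like the above.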


