Active Learning amidst Logical Knowledge

by Emmanouil Antonios Platanios et al.

Structured prediction is ubiquitous in applications of machine learning such as knowledge extraction and natural language processing. Structure can often be formulated in terms of logical constraints. We consider the question of how to perform efficient active learning in the presence of logical constraints among the variables inferred by different classifiers. We propose several methods and provide theoretical results demonstrating that uncertainty-guided sampling, a commonly used active learning method, is ill-suited to this setting. Furthermore, experiments on ten different datasets demonstrate that the proposed methods significantly outperform alternatives in practice. The results are of practical significance in situations where labeled data is scarce.
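For context, the uncertainty-guided sampling baseline that the paper argues against can be sketched in a few lines: at each round, the learner queries labels for the unlabeled examples whose predicted class distribution has the highest entropy. This is a generic illustrative sketch, not the paper's method; the function names and the NumPy-based interface are assumptions.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of an (n_samples, n_classes) probability array."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=1)

def select_most_uncertain(probs, k=1):
    """Indices of the k unlabeled examples with the highest predictive entropy."""
    return np.argsort(-entropy(probs))[:k]

# Example: class probabilities for four unlabeled examples over two classes.
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.7, 0.3],
                  [0.6, 0.4]])
print(select_most_uncertain(probs, k=2))  # [1 3]: the two closest-to-uniform rows
```

The point of the paper's theoretical results is that when the classifiers' outputs are linked by logical constraints, ranking queries by per-classifier entropy alone can be a poor criterion, since a constraint may already determine (or heavily restrict) the value of an "uncertain" variable.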

