Bayesian Methods for Semi-supervised Text Annotation

by   Kristian Miok, et al.

Human annotations are an important source of information in the development of natural language understanding approaches. Because annotators working under productivity pressure can assign different labels to the same text, the quality of the produced annotations often varies. This is especially the case when decisions are difficult, carry a high cognitive load, require awareness of a broader context, or demand careful consideration of background knowledge. To alleviate the problem, we propose two semi-supervised methods to guide the annotation process: a Bayesian deep learning model and a Bayesian ensemble method. With the Bayesian deep learning method, we can discover annotations that cannot be trusted and might require reannotation. A recently proposed Bayesian ensemble method helps us combine the annotators' labels with the predictions of trained models. The results of three hate speech detection experiments show that the proposed Bayesian methods can improve both the annotations and the prediction performance of BERT models.
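The core idea behind flagging untrustworthy annotations can be illustrated with Monte Carlo dropout: keep dropout active at prediction time, average the class probabilities over many stochastic forward passes, and treat examples with high predictive entropy as candidates for reannotation. The sketch below is a minimal NumPy illustration under assumed toy weights and a hypothetical one-layer "model" — it is not the authors' BERT-based setup, only the uncertainty-estimation idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_logits(x, p_drop=0.5):
    # Hypothetical toy "network": a fixed linear layer with dropout on
    # its input. The dropout mask stands in for the stochastic forward
    # passes of a real dropout-trained model.
    W = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])          # illustrative weights (assumption)
    mask = rng.random(x.shape) > p_drop  # dropout stays ON at test time
    return W @ (x * mask / (1 - p_drop))

def mc_dropout_probs(x, n_samples=100):
    # MC dropout: average softmax outputs over many stochastic passes.
    return np.mean([softmax(noisy_logits(x)) for _ in range(n_samples)],
                   axis=0)

def predictive_entropy(p):
    # Higher entropy = less trustworthy prediction for this input.
    return float(-np.sum(p * np.log(p + 1e-12)))

# A clear-cut input vs. an ambiguous one: the ambiguous example yields
# higher predictive entropy, so its human label would be queued for review.
clear = mc_dropout_probs(np.array([3.0, 0.0]))
ambiguous = mc_dropout_probs(np.array([1.0, 1.0]))
```

In a real annotation pipeline, texts whose predictive entropy exceeds a chosen threshold would be routed back to the annotators; the threshold and the number of MC samples are tuning choices.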
