Generative Models for Learning from Crowds

06/13/2017
by Chi Hong, et al.

In this paper, we propose generative probabilistic models for label aggregation in learning-from-crowds settings. We use Gibbs sampling and a novel variational inference algorithm to perform posterior inference. Empirical results show that our methods consistently outperform state-of-the-art methods.
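The abstract does not specify the models, but label aggregation is commonly posed as inferring each item's latent true label from noisy worker annotations, with Gibbs sampling alternating between true labels and per-worker reliabilities. As a hedged illustration only (not the paper's method), here is a minimal sketch of such a sampler for binary labels, assuming a one-parameter accuracy per worker with a Beta prior and a uniform prior on true labels; all variable names and the synthetic data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic crowd data: 5 workers each label 200 binary items,
# with per-worker reliabilities ranging from good to near-random.
n_items, n_workers = 200, 5
true_z = rng.integers(0, 2, n_items)
accuracy = np.array([0.9, 0.85, 0.8, 0.6, 0.55])
correct = rng.random((n_items, n_workers)) < accuracy
labels = np.where(correct, true_z[:, None], 1 - true_z[:, None])

# Gibbs sampling: alternate between worker accuracies p (Beta(2, 1)
# prior) and true labels z (uniform prior), starting from majority vote.
z = (labels.mean(axis=1) > 0.5).astype(int)
z_samples = []
for it in range(500):
    # Sample each worker's accuracy from its Beta posterior,
    # conditioned on the current guess of the true labels.
    hits = (labels == z[:, None]).sum(axis=0)
    p = rng.beta(2 + hits, 1 + n_items - hits)
    # Sample each item's true label given the worker accuracies.
    log_like1 = np.where(labels == 1, np.log(p), np.log1p(-p)).sum(axis=1)
    log_like0 = np.where(labels == 0, np.log(p), np.log1p(-p)).sum(axis=1)
    prob1 = 1.0 / (1.0 + np.exp(log_like0 - log_like1))
    z = (rng.random(n_items) < prob1).astype(int)
    if it >= 100:  # discard burn-in
        z_samples.append(z.copy())

# Aggregate posterior samples into a final label estimate.
estimate = (np.mean(z_samples, axis=0) > 0.5).astype(int)
print("accuracy vs. ground truth:", (estimate == true_z).mean())
```

Because the sampler weights workers by their inferred reliability, it typically recovers the true labels more accurately than plain majority vote when worker quality is uneven, which is the basic motivation for model-based aggregation.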


