Learning from Indirect Observations

10/10/2019
by Yivan Zhang, et al.

Weakly supervised learning is a paradigm for alleviating the scarcity of labeled data by leveraging lower-quality but larger-scale supervision signals. While existing work mainly focuses on utilizing a particular type of weak supervision, we present a probabilistic framework, learning from indirect observations, for learning from a wide range of weak supervision found in real-world problems, e.g., noisy labels, complementary labels, and coarse-grained labels. We propose a general method based on the maximum likelihood principle, which has desirable theoretical properties and can be straightforwardly implemented for deep neural networks. Concretely, a discriminative model for the true target is used to model the indirect observation, a random variable that depends, stochastically or deterministically, only on the true target. Maximizing the likelihood of the indirect observations then implicitly yields an estimator of the true target. Comprehensive experiments on two novel problem settings, learning from multiclass label proportions and learning from coarse-grained labels, illustrate the practical usefulness of our method and demonstrate how to integrate various sources of weak supervision.
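The core computation described in the abstract is a forward correction: a classifier models p(y | x) for the unobserved true target y, a transition distribution p(z | y) maps it to the distribution of the indirect observation z, and the negative log-likelihood of the observed z is minimized. Below is a minimal sketch of that idea, assuming a known transition matrix and complementary labels as the weak supervision; the class name IndirectObservationLoss and all other details are illustrative and not the authors' released implementation.

```python
# Hedged sketch: maximum-likelihood learning from an indirect observation z
# that depends only on the true label y through a known transition matrix
# T[i, j] = p(z = j | y = i). Fitting p(z | x) this way implicitly fits p(y | x).
import torch
import torch.nn as nn

class IndirectObservationLoss(nn.Module):
    def __init__(self, transition):
        # transition: tensor of shape (num_true_classes, num_observed_classes)
        super().__init__()
        self.register_buffer("transition", transition)

    def forward(self, logits, observed):
        # p(y | x) from the discriminative model for the true target
        p_true = torch.softmax(logits, dim=-1)
        # p(z | x) = sum_y p(z | y) p(y | x)
        p_obs = p_true @ self.transition
        # negative log-likelihood of the indirect observations
        nll = -torch.log(p_obs.gather(1, observed.unsqueeze(1)) + 1e-12)
        return nll.mean()

# Example: complementary labels for 4 classes, where the observed label is
# drawn uniformly from the classes that are NOT the true label.
num_classes = 4
T = (1.0 - torch.eye(num_classes)) / (num_classes - 1)

model = nn.Linear(8, num_classes)          # toy stand-in for a deep network
loss_fn = IndirectObservationLoss(T)

x = torch.randn(16, 8)
complementary = torch.randint(num_classes, (16,))
loss = loss_fn(model(x), complementary)
loss.backward()
```

Other weak-supervision types in the abstract fit the same template by swapping the transition: a noise matrix for noisy labels, or a many-to-one deterministic mapping (rows of one-hot vectors over coarse classes) for coarse-grained labels.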

