
Weakly Supervised Learning Creates a Fusion of Modeling Cultures

by Chengliang Tang et al.

The past two decades have witnessed the great success of the algorithmic modeling framework advocated by Breiman (2001). Nevertheless, the excellent prediction performance of these black-box models relies heavily on the availability of strong supervision, i.e., a large set of accurate and exact ground-truth labels. In practice, strong supervision can be unavailable or expensive, which calls for modeling techniques under weak supervision. In this comment, we summarize the key concepts in weakly supervised learning and discuss some recent developments in the field. Using algorithmic modeling alone under weak supervision can lead to unstable and misleading results. A promising direction is to integrate the data modeling culture into such a framework.
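To make the contrast with strong supervision concrete, here is a minimal sketch of one common form of weak supervision: aggregating several cheap, inaccurate labeling heuristics ("labeling functions") by majority vote instead of collecting exact ground-truth labels. The accuracies and the `noisy_labeler` helper are hypothetical illustration choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# True labels (unknown to the learner in practice) for 1000 examples.
y_true = rng.integers(0, 2, size=1000)

def noisy_labeler(y, accuracy, rng):
    """Return labels that agree with y with probability `accuracy`."""
    flip = rng.random(y.shape) > accuracy
    return np.where(flip, 1 - y, y)

# Three weak labelers, each individually unreliable.
votes = np.stack([noisy_labeler(y_true, acc, rng)
                  for acc in (0.70, 0.65, 0.75)])

# Majority vote aggregates the weak labels into a cleaner training signal,
# which can then feed any downstream (algorithmic-modeling) classifier.
y_weak = (votes.sum(axis=0) >= 2).astype(int)

ind_acc = (votes[0] == y_true).mean()   # one labeler alone
maj_acc = (y_weak == y_true).mean()     # aggregated weak labels
print(f"single labeler accuracy: {ind_acc:.2f}")
print(f"majority-vote accuracy:  {maj_acc:.2f}")
```

The aggregated labels are typically more accurate than any single heuristic, but the quality of the final model still hinges on how the labelers' error processes are modeled, which is exactly where the data modeling culture enters the picture.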


Weakly Supervised Label Learning Flows

Supervised learning usually requires a large amount of labelled data. Ho...

Bandit Label Inference for Weakly Supervised Learning

The scarcity of data annotated at the desired level of granularity is a ...

Weakly-supervised Action Localization with Background Modeling

We describe a latent approach that learns to detect actions in long sequ...

Deep GEM-Based Network for Weakly Supervised UWB Ranging Error Mitigation

Ultra-wideband (UWB)-based techniques, while becoming mainstream approac...

A New Benchmark and Progress Toward Improved Weakly Supervised Learning

Knowledge Matters: Importance of Prior Information for Optimization [7],...

More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning

We consider the weakly supervised binary classification problem where th...

Cross-task weakly supervised learning from instructional videos

In this paper we investigate learning visual models for the steps of ord...