Deep Probabilistic Logic: A Unifying Framework for Indirect Supervision

08/26/2018
by   Hai Wang, et al.

Deep learning has emerged as a versatile tool for a wide range of NLP tasks, due to its superior capacity in representation learning. But its applicability is limited by its reliance on annotated examples, which are difficult to produce at scale. Indirect supervision has emerged as a promising direction to address this bottleneck, either by introducing labeling functions to automatically generate noisy examples from unlabeled text, or by imposing constraints over interdependent label decisions. A plethora of methods have been proposed, each with respective strengths and limitations. Probabilistic logic offers a unifying language to represent indirect supervision, but end-to-end modeling with probabilistic logic is often infeasible due to intractable inference and learning. In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, composing probabilistic logic with deep learning. DPL models label decisions as latent variables, represents prior knowledge about their relations using weighted first-order logical formulas, and uses variational EM to alternate between learning a deep neural network for the end task and refining uncertain formula weights for indirect supervision. This framework subsumes prior indirect supervision methods as special cases, and enables novel combinations by infusing rich domain and linguistic knowledge. Experiments on biomedical machine reading demonstrate the promise of this approach.
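The abstract's alternating scheme can be illustrated with a toy sketch. This is a hypothetical, simplified setup (not the authors' implementation): two labeling functions play the role of weighted logical formulas over unlabeled data, the E-step infers soft labels from the weighted formula votes plus the current classifier, and the M-step trains a small classifier on those soft labels while refining the formula weights by their agreement with the inferred posterior. All names, thresholds, and learning rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 1-D features; the true labels are hidden from
# training and used only to measure accuracy at the end.
X = rng.normal(size=(200, 1))
true_y = (X[:, 0] > 0).astype(int)

# Two hypothetical labeling functions voting +1 / -1 (one accurate, one noisy),
# standing in for weighted first-order formulas in the DPL framework.
def lf_accurate(x):
    return 1 if x[0] > 0.1 else -1

def lf_noisy(x):
    return 1 if x[0] > -0.5 else -1

lfs = [lf_accurate, lf_noisy]
votes = np.array([[lf(x) for lf in lfs] for x in X])  # shape (n, k)

weights = np.ones(len(lfs))  # uncertain formula weights, refined below
w = np.zeros(2)              # logistic-regression params (slope, bias)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20):
    # E-step: posterior over latent labels from weighted formula votes
    # combined with the current classifier (a mean-field-style approximation).
    logits = votes @ weights + (X[:, 0] * w[0] + w[1])
    q = sigmoid(logits)  # soft label: P(y = 1 | votes, classifier)

    # M-step (classifier): a few gradient steps of logistic regression
    # against the soft labels q.
    for _ in range(50):
        p = sigmoid(X[:, 0] * w[0] + w[1])
        grad_slope = np.mean((p - q) * X[:, 0])
        grad_bias = np.mean(p - q)
        w -= 0.5 * np.array([grad_slope, grad_bias])

    # M-step (formula weights): upweight formulas whose votes agree with
    # the inferred posterior, keeping weights in a bounded range.
    agreement = votes * (2 * q - 1)[:, None]  # per-formula, in [-1, 1]
    weights = np.clip(weights + 0.1 * agreement.mean(axis=0), 0.0, 5.0)

pred = (sigmoid(X[:, 0] * w[0] + w[1]) > 0.5).astype(int)
accuracy = (pred == true_y).mean()
```

In this sketch the end-task model and the formula weights improve each other across iterations, which is the core intuition behind combining probabilistic logic with deep learning; a real DPL system would replace the logistic regression with a deep network and the voting heuristic with inference over a Markov logic factor graph.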


