Neuro-Symbolic Entropy Regularization

01/25/2022
by Kareem Ahmed, et al.

In structured prediction, the goal is to jointly predict many output variables that together encode a structured object – a path in a graph, an entity-relation triple, or an ordering of objects. Such a large output space makes learning hard and requires vast amounts of labeled data. Different approaches leverage alternate sources of supervision. One approach – entropy regularization – posits that decision boundaries should lie in low-probability regions. It extracts supervision from unlabeled examples, but remains agnostic to the structure of the output space. Conversely, neuro-symbolic approaches exploit the knowledge that not every prediction corresponds to a valid structure in the output space. Yet, they do not further restrict the learned output distribution. This paper introduces a framework that unifies both approaches. We propose a loss, neuro-symbolic entropy regularization, that encourages the model to confidently predict a valid object. It is obtained by restricting entropy regularization to the distribution over only valid structures. This loss is efficiently computed when the output constraint is expressed as a tractable logic circuit. Moreover, it seamlessly integrates with other neuro-symbolic losses that eliminate invalid predictions. We demonstrate the efficacy of our approach on a series of semi-supervised and fully-supervised structured-prediction experiments, where we find that it leads to models whose predictions are more accurate and more likely to be valid.
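The loss can be read as an ordinary entropy regularizer applied to the model's output distribution after conditioning on constraint satisfaction. The sketch below illustrates that computation by brute-force enumeration over a toy exactly-one constraint; the function name, the predicate-based constraint interface, and the enumeration are illustrative assumptions made here, whereas the paper computes the same quantity efficiently with a tractable logic circuit.

```python
# A brute-force sketch of neuro-symbolic entropy regularization, assuming a
# model that outputs independent Bernoulli probabilities over n binary output
# variables and a constraint given as a Python predicate. Enumerating all
# assignments only illustrates what is being computed; the paper's circuit-based
# computation avoids this exponential enumeration.
import itertools
import torch

def neuro_symbolic_entropy(probs: torch.Tensor, constraint) -> torch.Tensor:
    """Entropy of p(y | y satisfies the constraint).

    probs      -- shape (n,), probability that each output variable is 1
    constraint -- callable mapping a tuple of 0/1 values to True/False
    """
    n = probs.shape[0]
    mass = []  # unnormalized probability of each valid assignment
    for y in itertools.product([0, 1], repeat=n):
        if not constraint(y):
            continue
        bits = torch.tensor(y, dtype=probs.dtype)
        p_y = torch.prod(torch.where(bits == 1, probs, 1 - probs))
        mass.append(p_y)
    mass = torch.stack(mass)
    q = mass / mass.sum()                     # restrict to valid structures
    return -(q * torch.log(q + 1e-12)).sum()  # entropy of the restricted dist.

# Example: "exactly one of three variables is on" as the validity constraint.
probs = torch.sigmoid(torch.randn(3, requires_grad=True))
loss = neuro_symbolic_entropy(probs, lambda y: sum(y) == 1)
loss.backward()  # can be added to a supervised or semantic loss and minimized
```

Minimizing this quantity pushes the conditional distribution over valid structures toward a confident (low-entropy) prediction, which is the behavior the abstract describes.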

