Deep Learning with Logical Constraints

05/01/2022
by Eleonora Giunchiglia, et al.

In recent years, there has been increasing interest in exploiting logically specified background knowledge in order to obtain neural models that (i) perform better, (ii) can learn from less data, and/or (iii) are guaranteed to comply with the background knowledge itself, e.g., in safety-critical applications. In this survey, we retrace such works and categorize them based on (i) the logical language they use to express the background knowledge and (ii) the goals they achieve.
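To make the idea concrete, a common way to exploit such background knowledge during training is to relax a logical constraint into a differentiable penalty on the network's output probabilities, e.g., via a fuzzy-logic (product t-norm) encoding. The sketch below is illustrative only; the function name and the specific constraint ("cat implies animal") are assumptions for this example, not notation from the survey.

```python
# Illustrative sketch: relaxing the propositional constraint
#   cat -> animal   (equivalently: NOT (cat AND NOT animal))
# into a differentiable penalty under the product t-norm, where
# AND is multiplication and NOT x is (1 - x).

def implication_loss(p_cat: float, p_animal: float) -> float:
    """Fuzzy degree to which the prediction violates cat -> animal.

    The penalty p_cat * (1 - p_animal) is zero whenever the network is
    certain the implication holds, and grows as the network becomes
    confident in "cat" while denying "animal". Added to the usual task
    loss, it nudges the model toward logically consistent outputs.
    """
    return p_cat * (1.0 - p_animal)

# A consistent prediction incurs no penalty...
consistent = implication_loss(0.9, 1.0)
# ...while a confident violation is penalized.
violation = implication_loss(0.9, 0.1)
print(consistent, violation)
```

Because the penalty is differentiable in the predicted probabilities, it can be minimized by gradient descent alongside the standard loss; note, however, that such soft penalties encourage compliance without guaranteeing it, which is exactly the distinction among goals (i)-(iii) that the survey draws.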


