Learning and T-Norms Theory

07/26/2019
by Giuseppe Marra, et al.

Deep learning has been shown to achieve impressive results in several domains, such as computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and therefore rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity as a way to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data; they can potentially learn competitive solutions with a significant reduction in the amount of supervised data. A large class of neuro-symbolic approaches represents prior knowledge in First-Order Logic, which is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined by the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification for the popular cross-entropy loss, which has been shown to speed up convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it extends to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions that the experimental results show to converge faster than other approaches previously proposed in the literature.
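To make the abstract's central claim concrete, here is a minimal sketch (not code from the paper) of how the additive generator of the product t-norm, g(x) = -log(x), maps a conjunction of supervision constraints into a loss. The function names and the clipping constant are illustrative assumptions; the point is only that, for one-hot targets, the resulting quantity coincides with the cross-entropy loss mentioned above.

```python
# Hedged sketch: a t-norm additive generator turning a logical conjunction
# of supervision constraints into a loss function.
#
# Assumptions (illustrative, not from the paper's implementation):
#   - product t-norm, whose additive generator is g(x) = -log(x)
#   - the fuzzy truth value of "sample i is correctly classified" is p_i,
#     the probability assigned to the target class
#   - the loss of AND_i phi_i is g(prod_i p_i) = sum_i g(p_i)

import numpy as np

def generator(x):
    """Additive generator of the product t-norm: g(x) = -log(x)."""
    return -np.log(np.clip(x, 1e-12, 1.0))  # clip only for numerical safety

def loss_from_conjunction(truth_values):
    """Loss of the conjunction of constraints under the product t-norm.
    Since g maps products to sums, this equals sum_i -log(p_i)."""
    return np.sum(generator(np.asarray(truth_values)))

# Example: probabilities of the correct class for three training samples.
p_correct = [0.9, 0.6, 0.8]
print(loss_from_conjunction(p_correct))  # cross-entropy with one-hot targets
print(-np.sum(np.log(p_correct)))        # same value, computed directly
```

Choosing a different generator (e.g., the one associated with the Lukasiewicz t-norm) would, under the same construction, yield a different loss function, which is the degree of freedom the paper studies.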

Related research

- On the relation between Loss Functions and T-Norms (07/18/2019)
- A Semantic Loss Function for Deep Learning with Symbolic Knowledge (11/29/2017)
- Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning (09/19/2022)
- Relaxed Earth Mover's Distances for Chain- and Tree-connected Spaces and their use as a Loss Function in Deep Learning (11/22/2016)
- Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning (08/14/2022)
- Injecting Prior Knowledge for Transfer Learning into Reinforcement Learning Algorithms using Logic Tensor Networks (06/15/2019)
- Artificial Neural Networks that Learn to Satisfy Logic Constraints (12/08/2017)
