Exploring End-to-End Differentiable Natural Logic Modeling

11/08/2020
by Yufei Feng, et al.

We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components. The proposed model adapts module networks to model natural logic operations and is enhanced with a memory component to capture contextual information. Experiments show that the proposed framework can effectively model monotonicity-based reasoning, compared to baseline neural network models without a built-in inductive bias for monotonicity-based reasoning. The proposed model also proves robust when transferred from upward to downward inference. We further analyze the model's performance on aggregation, showing that the proposed subcomponents help achieve better intermediate aggregation performance.
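As background for the monotonicity-based reasoning described above, the sketch below illustrates one way natural logic composition can be made differentiable: each local (lexical) relation is a probability distribution over MacCartney-style semantic relations, and sequential aggregation becomes a soft lookup in the relation-join table. This is a minimal illustration only, not the authors' model; the function name `soft_join`, the simplified join table, and the toy example are assumptions made for this sketch.

```python
# Minimal sketch (not the paper's implementation) of differentiable natural logic
# composition. Local relations are distributions over MacCartney & Manning's seven
# semantic relations; composition is a soft, fully differentiable join. The
# simplified join table and the helper name `soft_join` are assumptions.
import numpy as np

RELATIONS = ["eq", "fwd", "rev", "neg", "alt", "cov", "ind"]
R = {name: i for i, name in enumerate(RELATIONS)}

# Hard join table: JOIN[a, b] = relation obtained by composing a then b.
# Only a few well-known deterministic entries are filled in; the rest
# (including genuinely non-deterministic joins) default to independence.
JOIN = np.full((7, 7), R["ind"], dtype=int)
for a in RELATIONS:                       # equivalence is the identity element
    JOIN[R["eq"], R[a]] = R[a]
    JOIN[R[a], R["eq"]] = R[a]
JOIN[R["fwd"], R["fwd"]] = R["fwd"]       # forward entailment is transitive
JOIN[R["rev"], R["rev"]] = R["rev"]       # reverse entailment is transitive
JOIN[R["neg"], R["neg"]] = R["eq"]        # double negation
JOIN[R["fwd"], R["neg"]] = R["alt"]       # e.g. dog < animal, animal ^ non-animal => dog | non-animal
JOIN[R["rev"], R["neg"]] = R["cov"]
JOIN[R["neg"], R["fwd"]] = R["cov"]
JOIN[R["neg"], R["rev"]] = R["alt"]

def soft_join(p, q):
    """Differentiable relaxation of the join: marginalise over all relation pairs."""
    out = np.zeros(len(RELATIONS))
    for a in range(len(RELATIONS)):
        for b in range(len(RELATIONS)):
            out[JOIN[a, b]] += p[a] * q[b]
    return out

# Toy example: "Some dogs bark" vs. "Some animals bark". The edit dogs -> animals
# is predicted (softly) as forward entailment in this upward-monotone context;
# all other aligned tokens are (hard) equivalence.
p_edit = np.array([0.05, 0.85, 0.02, 0.02, 0.02, 0.02, 0.02])  # mostly "fwd"
p_keep = np.eye(len(RELATIONS))[R["eq"]]                       # exactly "eq"
final = soft_join(p_keep, soft_join(p_edit, p_keep))
print(RELATIONS[int(final.argmax())])  # -> "fwd": the premise entails the hypothesis
```

Because every step above is a sum of products of probabilities, gradients can flow through the composition, which is what allows a natural-logic backbone of this kind to be trained end to end together with neural components that produce the local relation distributions.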

Related research

NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic (07/06/2023)
Reasoning has been a central topic in artificial intelligence from the b...

DeepLogic: End-to-End Logical Reasoning (05/18/2018)
Neural networks have been learning complex multi-hop reasoning in variou...

A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference (11/15/2016)
In this work we use the recent advances in representation learning to pr...

Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation (07/28/2022)
Combining deep learning with symbolic logic reasoning aims to capitalize...

Learning Language Representations with Logical Inductive Bias (02/19/2023)
Transformer architectures have achieved great success in solving natural...

Augmenting Neural Networks with First-order Logic (06/14/2019)
Today, the dominant paradigm for training neural networks involves minim...

Conditional Information Gain Networks (07/25/2018)
Deep neural network models owe their representational power to the high ...
