Modeling Content and Context with Deep Relational Learning

by Maria Leonor Pacheco, et al.

Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods with the expressiveness of neural networks. However, most existing frameworks for combining neural and symbolic representations were designed for classic relational learning tasks that operate over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep relational models, designed to support a variety of NLP scenarios. Our framework supports easy integration with expressive language encoders and provides an interface for studying the interactions between representation, inference, and learning.
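To make the "declarative rules scored by neural networks" idea concrete, here is a minimal sketch of the general neural-symbolic pattern the abstract describes. It is not DRaiL's actual API: the `RuleTemplate` class, its linear scorer, and the exhaustive `map_inference` helper are all hypothetical stand-ins for a learned encoder and a real inference engine.

```python
import math
import random

random.seed(0)

class RuleTemplate:
    """A symbolic rule template, e.g. Mentions(d, t) => HasStance(d, s),
    paired with a (toy) neural scorer for its groundings."""

    def __init__(self, name, n_features):
        self.name = name
        # A single linear layer stands in for an expressive text encoder.
        self.w = [random.uniform(-1.0, 1.0) for _ in range(n_features)]
        self.b = 0.0

    def score(self, features):
        # Sigmoid of a linear score: probability that this grounding holds.
        z = sum(wi * xi for wi, xi in zip(self.w, features)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

def map_inference(groundings):
    """Exhaustive MAP inference over mutually exclusive candidate
    groundings: pick the (label, score) pair with the highest score."""
    return max(groundings, key=lambda g: g[1])

# Ground the rule for two competing stance labels and run inference.
rule = RuleTemplate("Mentions(d,t) => HasStance(d,s)", n_features=3)
candidates = [
    ("pro", rule.score([1.0, 0.2, 0.1])),
    ("con", rule.score([0.1, 0.9, 0.3])),
]
label, score = map_inference(candidates)
print(label, round(score, 3))
```

In a full framework, the linear scorer would be an arbitrary neural encoder over the rule's text arguments, and inference would search over assignments to many interdependent rules rather than a single one.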


