Faster-LTN: a neuro-symbolic, end-to-end object detection architecture

by Lia Morra, et al.

The detection of semantic relationships between objects represented in an image is one of the fundamental challenges in image interpretation. Neuro-symbolic techniques, such as Logic Tensor Networks (LTNs), combine semantic knowledge representation and reasoning with the efficient, example-driven learning typical of neural networks. Here we propose Faster-LTN, an object detector composed of a convolutional backbone and an LTN. To the best of our knowledge, this is the first attempt to combine both frameworks in an end-to-end training setting. The architecture is trained by optimizing a grounded theory that combines labelled examples with prior knowledge in the form of logical axioms. Experimental comparisons show competitive performance with respect to the traditional Faster R-CNN architecture.
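To make the training objective concrete, the following is a minimal sketch of the LTN idea the abstract describes: predicates are grounded as differentiable functions returning fuzzy truth degrees in [0, 1], axioms over labelled examples are aggregated into a satisfaction level, and the loss is the negated satisfaction of the grounded theory. The toy predicate, aggregator choices, and all names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ground_predicate(weights, features):
    """Ground a unary predicate P(x) as sigmoid(w . x): a fuzzy truth degree."""
    return sigmoid(features @ weights)

def forall(truth_values):
    """Universal quantifier approximated by the mean (one common aggregator)."""
    return float(np.mean(truth_values))

# Toy data: feature vectors of detected regions, binary labels for class "cat".
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])

w = np.zeros(4)  # untrained weights; gradient descent would update these
truths = ground_predicate(w, X)

# Axioms: "forall x in positives: Cat(x)" and "forall x in negatives: not Cat(x)".
sat_pos = forall(truths[y == 1])
sat_neg = forall(1.0 - truths[y == 0])

# Loss = negated satisfaction of the grounded theory (here: weakest axiom).
loss = 1.0 - min(sat_pos, sat_neg)
print(round(loss, 3))
```

In the full architecture, the predicate groundings would be the detector's classification heads over region features, and gradients of this satisfaction-based loss would flow back through the convolutional backbone end-to-end.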




