Graph Sequence Learning for Premise Selection

03/27/2023
by Edvard K. Holden, et al.

Premise selection is crucial for large theory reasoning, as the sheer size of the problems quickly leads to resource starvation. This paper proposes a premise selection approach inspired by the domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given problem. This is achieved by combining a pre-trained graph neural network with a language model. We evaluated different configurations of our method and observed a 17.7% improvement over the baseline.
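The encoder/decoder pairing described in the abstract can be sketched concretely. The following is a minimal, illustrative sketch rather than the authors' implementation: a small mean-aggregation graph network stands in for the pre-trained GNN and encodes the problem's formula graph into a fixed-size embedding, which then seeds a GRU decoder that emits axiom identifiers autoregressively, in direct analogy to an image-captioning pipeline. All class names, dimensions, and the choice of a GRU over a Transformer decoder are assumptions.

```python
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """Minimal mean-aggregation message passing over a formula graph
    (an illustrative stand-in for the paper's pre-trained GNN)."""
    def __init__(self, node_feat_dim: int, hidden_dim: int, num_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(node_feat_dim, hidden_dim)
        self.layers = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(num_layers)]
        )

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_feat_dim); adj: dense (num_nodes, num_nodes) 0/1 matrix.
        h = torch.relu(self.input_proj(node_feats))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for layer in self.layers:
            neighbour_mean = (adj @ h) / deg  # average over neighbour states
            h = torch.relu(layer(torch.cat([h, neighbour_mean], dim=-1)))
        return h.mean(dim=0)  # graph-level embedding, shape (hidden_dim,)

class AxiomDecoder(nn.Module):
    """GRU language model over axiom identifiers, conditioned on the graph embedding."""
    def __init__(self, num_axioms: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_axioms + 2, hidden_dim)  # +2 for start/end tokens
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_axioms + 2)

    def forward(self, axiom_ids: torch.Tensor, graph_emb: torch.Tensor) -> torch.Tensor:
        # axiom_ids: (batch, seq_len); the graph embedding seeds the GRU hidden state.
        h0 = graph_emb.expand(1, axiom_ids.size(0), graph_emb.size(0)).contiguous()
        states, _ = self.gru(self.embed(axiom_ids), h0)
        return self.out(states)  # logits over the axiom vocabulary at each step
```

At inference time, one would decode greedily or with beam search from the start token until the end token is produced, yielding an ordered sequence of axioms to hand to the prover alongside the conjecture.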

Related research

Cold Fusion: Training Seq2Seq Models Together with Language Models (08/21/2017)
Sequence-to-sequence (Seq2Seq) models with attention have excelled at ta...

Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion (05/13/2023)
Temporal Knowledge graph completion (TKGC) is a crucial task that involv...

Language Models for Image Captioning: The Quirks and What Works (05/07/2015)
Two recent approaches have achieved state-of-the-art results in image ca...

Exploring Diverse In-Context Configurations for Image Captioning (05/24/2023)
After discovering that Language Models (LMs) can be good in-context few-...

Chittron: An Automatic Bangla Image Captioning System (09/02/2018)
Automatic image caption generation aims to produce an accurate descripti...

Towards Few-shot Entity Recognition in Document Images: A Graph Neural Network Approach Robust to Image Manipulation (05/24/2023)
Recent advances of incorporating layout information, typically bounding ...

Knowing When to Stop: Evaluation and Verification of Conformity to Output-size Specifications (04/26/2019)
Models such as Sequence-to-Sequence and Image-to-Sequence are widely use...
