Constructive Type-Logical Supertagging with Self-Attention Networks

05/31/2019
by Konstantinos Kogkalidis, et al.

We propose a novel application of self-attention networks to grammar induction. We present an attention-based supertagger for a refined type-logical grammar, trained to construct types inductively. In addition to achieving high overall type accuracy, our model learns the syntax of the grammar's type system along with its denotational semantics. This lifts the closed-world assumption commonly made by lexicalized grammar supertaggers, greatly enhancing the model's generalization potential. This is evidenced both by its adequate accuracy over sparse word types and by its ability to correctly construct complex types never seen during training, which, to the best of our knowledge, had not been accomplished before.
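To make the "constructive" idea concrete, the sketch below (ours, not the authors' code) contrasts closed-set supertagging, where the tagger can only choose among types observed in training, with constructive supertagging, where a type is emitted symbol by symbol and then parsed back into a well-formed type. The atom set {np, s, pp}, the single arrow connective, and the prefix (Polish) linearization are illustrative assumptions; the paper's actual type grammar and decoding scheme are richer.

```python
# Minimal sketch (not the authors' implementation) of constructive vs.
# closed-set supertagging. Symbols and linearization are illustrative.

from typing import List, Optional, Tuple

# Closed-world view: the tagger can only ever pick from this fixed inventory.
KNOWN_SUPERTAGS = ["np", "s", "np -> s", "np -> (np -> s)"]

# Constructive view: a small symbol vocabulary from which any type can be built.
ATOMS = {"np", "s", "pp"}
ARROW = "->"


def parse_prefix(symbols: List[str], pos: int = 0) -> Tuple[Optional[str], int]:
    """Parse a type written in prefix (Polish) notation, e.g.
    ['->', 'np', 's'] denotes the function type (np -> s).
    Returns the pretty-printed type and the next position, or (None, pos) on failure."""
    if pos >= len(symbols):
        return None, pos
    head = symbols[pos]
    if head in ATOMS:
        return head, pos + 1
    if head == ARROW:
        arg, pos2 = parse_prefix(symbols, pos + 1)
        res, pos3 = parse_prefix(symbols, pos2)
        if arg is None or res is None:
            return None, pos3
        return f"({arg} {ARROW} {res})", pos3
    return None, pos


if __name__ == "__main__":
    # A symbol sequence a constructive decoder might emit for one word:
    decoded = ["->", "np", "->", "np", "s"]
    typ, end = parse_prefix(decoded)
    assert typ is not None and end == len(decoded), "ill-formed type sequence"
    print("constructed type:", typ)              # (np -> (np -> s))
    # The constructed type need not appear in the training inventory,
    # which is what lifts the closed-world assumption of classification-based taggers.
    print("seen during training:", typ in KNOWN_SUPERTAGS)
```

In this toy setting the decoder's output space is the set of all well-formed type strings rather than a fixed label set, so types absent from training remain reachable as long as the symbol grammar is respected.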


Related research

02/02/2016 · The Grail theorem prover: Type theory for syntax and semantics
As the name suggests, type-logical grammars are a grammar formalism base...

09/06/2019 · Extracting and Learning a Dependency-Enhanced Type Lexicon for Dutch
This thesis is concerned with type-logical grammars and their practical ...

04/28/2023 · A logical word embedding for learning grammar
We introduce the logical grammar embedding (LGE), a model inspired by pr...

10/25/2018 · Teaching Syntax by Adversarial Distraction
Existing entailment datasets mainly pose problems which can be answered ...

10/15/2020 · Montague Grammar Induction
We propose a computational modeling framework for inducing combinatory c...

07/06/2020 · A Mathematical Theory of Attention
Attention is a powerful component of modern neural networks across a wid...

08/16/2023 · Benchmarking Neural Network Generalization for Grammar Induction
How well do neural networks generalize? Even for grammar induction tasks...
