
Improving Semantic Composition with Offset Inference

by Thomas Kober, et al.
University of Sussex

Count-based distributional semantic models suffer from sparsity, because any text collection leaves many plausible co-occurrences unobserved. This problem is amplified for models such as Anchored Packed Trees (APTs), which take the grammatical type of a co-occurrence into account. We therefore introduce a novel form of distributional inference that exploits the rich type structure in APTs and infers missing data by the same mechanism that is used for semantic composition.
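The mechanism the abstract refers to can be illustrated with a toy sketch (this is not the authors' code; the path notation, the example vectors, and merging by summing counts are simplifying assumptions). An APT-style vector maps typed features — a dependency path paired with a context word — to counts. "Offsetting" a vector along a dependency relation re-anchors it at a neighbouring node by prepending that relation to every path, with adjacent inverse arcs cancelling. Composing "white clothes" then amounts to offsetting the adjective's vector along `amod` and merging it with the noun's vector; the same offset-and-merge step surfaces co-occurrences (here, `dress`) that were never observed for the noun itself, which is the inference the paper exploits:

```python
from collections import Counter

def reduce_path(path):
    """Cancel adjacent inverse arc pairs, e.g. ('amod', '_amod') -> ()."""
    out = []
    for arc in path:
        if out and (out[-1] == "_" + arc or arc == "_" + out[-1]):
            out.pop()
        else:
            out.append(arc)
    return tuple(out)

def offset(vector, relation):
    """Re-anchor a typed vector along `relation` by prepending the arc
    to every dependency path (arcs prefixed with '_' are inverse arcs)."""
    shifted = Counter()
    for (path, word), count in vector.items():
        shifted[(reduce_path((relation,) + path), word)] += count
    return shifted

def merge(u, v):
    """Combine two aligned vectors by summing counts (a crude stand-in
    for APT composition / inference over the union of features)."""
    return Counter(u) + Counter(v)

# Toy vectors (invented counts): 'white' was seen modifying 'dress'
# via amod; 'clothes' was seen as the object of 'wash'.
white = Counter({((), "white"): 1, (("_amod",), "dress"): 3})
clothes = Counter({((), "clothes"): 1, (("_dobj",), "wash"): 2})

# Composing 'white clothes': offset the adjective along 'amod' so its
# features land in the noun's coordinate space, then merge.
composed = merge(offset(white, "amod"), clothes)
print(composed[((), "dress")])  # 'dress' inferred as a direct co-occurrence
```

Note how the path `('amod', '_amod')` reduces to the empty path, so `dress` — never observed with `clothes` — becomes a direct co-occurrence of the composed phrase; offset inference applies this same machinery to fill in missing data for single words.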


Improving Sparse Word Representations with Distributional Inference for Semantic Composition

Distributional models are derived from co-occurrences in a corpus, where...

Aligning Packed Dependency Trees: a theory of composition for distributional semantics

We present a new framework for compositional distributional semantics in...

Modeling Semantic Plausibility by Injecting World Knowledge

Distributional data tells us that a man can swallow candy, but not that ...

Autoencoding Pixies: Amortised Variational Inference with Graph Convolutions for Functional Distributional Semantics

Functional Distributional Semantics provides a linguistically interpreta...

Composing and Embedding the Words-as-Classifiers Model of Grounded Semantics

The words-as-classifiers model of grounded lexical semantics learns a se...

Learning Distributional Programs for Relational Autocompletion

Relational autocompletion is the problem of automatically filling out so...

Code Repositories


Code and resources of the ACL 2017 paper "Improving Semantic Composition with Offset Inference"
