
Improving Semantic Composition with Offset Inference

04/21/2017
by   Thomas Kober, et al.
University of Sussex

Count-based distributional semantic models suffer from sparsity because many plausible co-occurrences go unobserved in any text collection. The problem is amplified for models such as Anchored Packed Trees (APTs), which take the grammatical type of a co-occurrence into account. We therefore introduce a novel form of distributional inference that exploits the rich type structure in APTs and infers missing data using the same mechanism that performs semantic composition.
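The shared mechanism the abstract alludes to can be sketched in miniature: an APT represents each word as counts over typed co-occurrence features keyed by dependency paths, and "offsetting" a word along a relation shifts those paths so that a dependent and its head share one feature space. The toy code below is an illustrative sketch of that path-shifting idea under assumed encodings (tuple paths, a leading underscore for a reversed edge); it is not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a toy APT-style lexicon maps
# each word to counts over typed features (dependency_path, context_word),
# where a path is a tuple of edge labels and "_rel" marks traversing the
# dependency relation `rel` in reverse.

def offset(vector, relation):
    """Shift a typed co-occurrence vector along a dependency relation.

    Features whose path begins with the inverse edge "_relation" have that
    edge stripped; all other paths gain it. This loosely mirrors how APTs
    align a dependent's feature space with its head's before composing,
    and it is the same shift that offset inference can reuse to supply
    plausible but unobserved co-occurrences.
    """
    inverse = "_" + relation
    shifted = {}
    for (path, word), count in vector.items():
        if path and path[0] == inverse:
            new_path = path[1:]            # the reversed edge cancels out
        else:
            new_path = (inverse,) + path   # prepend the reversed edge
        key = (new_path, word)
        shifted[key] = shifted.get(key, 0) + count
    return shifted

# Example: toy features for the adjective "white".
white = {(("_amod",), "clothes"): 2,   # "white" modifies the head "clothes"
         ((), "snow"): 1}              # direct co-occurrence with "snow"

# Offsetting by "amod" moves "white" into the noun's feature space.
print(offset(white, "amod"))
```

Because composition and inference both reduce to this one path shift, features inferred for a word automatically land in the right typed slots when that word is later composed with others.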




Code Repositories

acl2017

Code and resources of the ACL 2017 paper "Improving Semantic Composition with Offset Inference"

