Collaborative Training of Tensors for Compositional Distributional Semantics

07/08/2016
by Tamara Polajnar, et al.

Type-based compositional distributional semantic models present an interesting line of research into functional representations of linguistic meaning. One of the drawbacks of such models, however, is the lack of training data required to train each word-type combination. In this paper we address this by introducing training methods that share parameters between similar words. We show that these methods enable zero-shot learning for words that have no training data at all, and that they enable the construction of high-quality tensors from very few training examples per word.
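
The abstract does not spell out the sharing mechanism, but the sketch below illustrates one plausible reading of zero-shot tensor construction: the tensor for an unseen word is assembled from the tensors of distributionally similar trained words. The function zero_shot_tensor, the similarity-weighted averaging, and all toy data are illustrative assumptions, not the method from the paper.

```python
# Minimal sketch (not the paper's actual method): build a tensor for an
# unseen verb by combining the tensors of distributionally similar verbs,
# weighted by cosine similarity of their distributional vectors.
# All names and data below are hypothetical.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def zero_shot_tensor(target_vec, trained, vectors, k=3):
    """Average the tensors of the k most similar trained words.

    target_vec : distributional vector of the unseen word
    trained    : dict mapping word -> learned tensor (e.g. shape (d, d))
    vectors    : dict mapping word -> distributional vector
    """
    sims = {w: cosine(target_vec, vectors[w]) for w in trained}
    neighbours = sorted(sims, key=sims.get, reverse=True)[:k]
    weights = np.array([sims[w] for w in neighbours])
    weights /= weights.sum()
    return sum(w_i * trained[n] for w_i, n in zip(weights, neighbours))

# Toy usage: 4-dimensional noun space, three trained verbs, one unseen verb.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=4) for w in ["devour", "eat", "drink", "consume"]}
trained = {w: rng.normal(size=(4, 4)) for w in ["devour", "eat", "drink"]}
tensor_consume = zero_shot_tensor(vectors["consume"], trained, vectors)
print(tensor_consume.shape)  # (4, 4)
```

The same weighted combination can also serve as an initialisation or regularisation target when a word has only a handful of training examples, which is in the spirit of the few-shot setting the abstract describes.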

research · 05/05/2019
A Type-driven Vector Semantics for Ellipsis with Anaphora using Lambek Calculus with Limited Contraction
We develop a vector space semantics for verb phrase ellipsis with anapho...

research · 10/12/2020
Modelling Lexical Ambiguity with Density Matrices
Words can have multiple senses. Compositional distributional models of m...

research · 11/01/2018
Exploring Semantic Incrementality with Dynamic Syntax and Vector Space Semantics
One of the fundamental requirements for models of semantic processing in...

research · 10/15/2021
On The Ingredients of an Effective Zero-shot Semantic Parser
Semantic parsers map natural language utterances into meaning representa...

research · 05/11/2020
Towards logical negation for compositional distributional semantics
The categorical compositional distributional model of meaning gives the ...

research · 01/15/2019
Investigating Antigram Behaviour using Distributional Semantics
Language is an extremely interesting subject to study, each day presenti...

research · 12/14/2016
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection
The fundamental role of hypernymy in NLP has motivated the development o...