Explaining away ambiguity: Learning verb selectional preference with Bayesian networks

08/22/2000
by Massimiliano Ciaramita et al.

This paper presents a Bayesian model for unsupervised learning of verb selectional preferences. For each verb, the model builds a Bayesian network whose structure is determined by the lexical hierarchy of WordNet and whose parameters are estimated from a list of verb-object pairs extracted from a corpus. "Explaining away", a well-known property of Bayesian networks, lets the model handle word sense ambiguity in the training data in a natural fashion. On a word sense disambiguation test, our model outperformed other state-of-the-art systems for unsupervised learning of selectional preferences. Computational complexity issues, ways of improving this approach, and methods for implementing "explaining away" in other graphical frameworks are discussed.
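The inference pattern the abstract relies on can be sketched in a few lines. The toy model below is not the paper's WordNet-structured network; it is a minimal two-cause noisy-OR example (priors and link strengths are assumed values) showing how one candidate word sense, once supported by evidence, "explains away" an observation and lowers the posterior of a competing sense.

```python
# Toy illustration of "explaining away" in a two-cause noisy-OR
# Bayesian network. A and B play the role of two competing word senses;
# E is an observed verb-object pair either sense could explain.
from itertools import product

P_A, P_B = 0.3, 0.3          # hypothetical sense priors
W_A, W_B = 0.9, 0.9          # hypothetical noisy-OR link strengths

def p_effect(a, b):
    """P(E=1 | A=a, B=b) under a noisy-OR with no leak term."""
    return 1.0 - (1.0 - W_A * a) * (1.0 - W_B * b)

def posterior_A(evidence):
    """P(A=1 | evidence) by brute-force enumeration over A and B."""
    num = den = 0.0
    for a, b in product((0, 1), repeat=2):
        if "B" in evidence and b != evidence["B"]:
            continue  # clamp B to its observed value
        p = (P_A if a else 1 - P_A) * (P_B if b else 1 - P_B)
        if evidence.get("E") == 1:
            p *= p_effect(a, b)
        den += p
        if a == 1:
            num += p
    return num / den

p_given_e = posterior_A({"E": 1})           # E alone raises belief in A
p_given_eb = posterior_A({"E": 1, "B": 1})  # B=1 explains E, lowering A
print(f"P(A|E)      = {p_given_e:.3f}")     # ~0.595
print(f"P(A|E, B=1) = {p_given_eb:.3f}")    # ~0.320
```

Observing E pushes P(A) up from its 0.3 prior, but additionally observing the rival cause B drops it back almost to the prior: the competing sense has absorbed the evidence, which is the disambiguation effect the paper exploits.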

