Scalable Probabilistic Entity-Topic Modeling

09/02/2013
by Neil Houlsby, et al.

We present an LDA-based approach to entity disambiguation. Each topic is associated with a Wikipedia article, and topics generate either content words or entity mentions. Training such models is challenging because both the number of topics and the vocabulary size run into the millions. We tackle these problems with a novel distributed inference and representation framework based on a parallel Gibbs sampler guided by the Wikipedia link graph, together with pipelines of MapReduce jobs that allow fast, memory-frugal processing of large datasets. We report state-of-the-art performance on a public dataset.
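To make the sampling scheme concrete, here is a minimal Python sketch of one collapsed-Gibbs resampling step under the key constraint the abstract describes: a token or mention is only reassigned among a small candidate set of Wikipedia-article topics (e.g. those suggested by the link graph), so the full million-topic posterior never has to be enumerated. The hyperparameters, count tables, candidate sets, and example mention below are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

ALPHA, BETA = 0.1, 0.01  # symmetric Dirichlet priors (illustrative values)

def resample_token(doc_id, token, old_topic, candidates,
                   doc_topic, topic_word, topic_total, vocab_size):
    """Resample one token's topic, restricted to its candidate set."""
    # Remove the token's current assignment from the count tables.
    doc_topic[doc_id][old_topic] -= 1
    topic_word[old_topic][token] -= 1
    topic_total[old_topic] -= 1

    # Collapsed-Gibbs posterior, evaluated over candidate topics only:
    #   p(k) proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V * beta)
    weights = [
        (doc_topic[doc_id][k] + ALPHA)
        * (topic_word[k][token] + BETA)
        / (topic_total[k] + vocab_size * BETA)
        for k in candidates
    ]
    new_topic = random.choices(candidates, weights=weights)[0]

    # Record the new assignment.
    doc_topic[doc_id][new_topic] += 1
    topic_word[new_topic][token] += 1
    topic_total[new_topic] += 1
    return new_topic

# Toy usage: one mention "jaguar" in document 0, initially assigned to the
# topic for the Wikipedia article Jaguar_Cars. In the paper's setting the
# candidate topics would come from the Wikipedia link graph; here they are
# hard-coded for illustration.
doc_topic = defaultdict(lambda: defaultdict(int))
topic_word = defaultdict(lambda: defaultdict(int))
topic_total = defaultdict(int)

doc_topic[0]["Jaguar_Cars"] += 1
topic_word["Jaguar_Cars"]["jaguar"] += 1
topic_total["Jaguar_Cars"] += 1

new = resample_token(0, "jaguar", "Jaguar_Cars",
                     candidates=["Jaguar_Cars", "Jaguar"],
                     doc_topic=doc_topic, topic_word=topic_word,
                     topic_total=topic_total, vocab_size=1_000_000)
print("reassigned to:", new)
```

Restricting each update to the candidate set keeps the per-token cost independent of the total number of topics, which is presumably what makes sampling feasible at million-topic scale; the paper's MapReduce pipelines then distribute this work and the associated statistics across machines.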


Related research

04/12/2017 · Polya Urn Latent Dirichlet Allocation: a doubly sparse massively parallel sampler
Latent Dirichlet Allocation (LDA) is a topic model widely used in natura...

06/11/2015 · Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models
Topic models, and more specifically the class of Latent Dirichlet Alloca...

06/13/2019 · Topic Modeling via Full Dependence Mixtures
We consider the topic modeling problem for large datasets. For this prob...

12/12/2018 · Temporal Analysis of Entity Relatedness and its Evolution using Wikipedia and DBpedia
Many researchers have made use of the Wikipedia network for relatedness ...

06/29/2021 · TWAG: A Topic-Guided Wikipedia Abstract Generator
Wikipedia abstract generation aims to distill a Wikipedia abstract from ...

01/15/2020 · VSEC-LDA: Boosting Topic Modeling with Embedded Vocabulary Selection
Topic modeling has found wide application in many problems where latent ...

11/10/2014 · Model-Parallel Inference for Big Topic Models
In real world industrial applications of topic modeling, the ability to ...
