Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

07/08/2018
by Kamal Al-Sabahi, et al.

Since the amount of information on the Internet is growing rapidly, it is not easy for users to find information relevant to their queries. To tackle this issue, much attention has been paid to Automatic Document Summarization. The key point in any successful document summarizer is a good document representation. Traditional approaches based on word overlap mostly fail to produce that kind of representation. Word embeddings, distributed representations of words, have shown excellent performance by allowing words to be matched at the semantic level. However, naively concatenating word embeddings makes common words dominant, which in turn diminishes the representation quality. In this paper, we employ word embeddings to improve the weighting schemes used to calculate the input matrix of the Latent Semantic Analysis (LSA) method. Two embedding-based weighting schemes are proposed and then combined to compute the values of this matrix. The new schemes are modified versions of the augment weight and the entropy frequency, combining the strengths of the traditional weighting schemes with those of word embeddings. The proposed approach is experimentally evaluated on three well-known English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization. The proposed model performs comprehensively better than state-of-the-art methods, by at least 1%, indicating that it provides a better document representation and, as a result, a better document summary.
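The core of the approach is to re-weight the cells of the LSA term-by-sentence matrix using word embeddings before applying singular value decomposition and selecting sentences. The sketch below illustrates that pipeline in Python; it is a minimal approximation, not the paper's implementation: the function names are hypothetical, the embedding-based adjustment is a simple cosine similarity between a word vector and its sentence centroid rather than the paper's modified augment weight and entropy frequency, and random vectors stand in for pretrained embeddings such as word2vec or GloVe.

```python
# Minimal sketch of an LSA summarizer whose term-sentence weights are
# adjusted with word-embedding similarities. The weighting formula here is
# an illustrative stand-in, not the paper's exact schemes.
import numpy as np

def embedding_weighted_matrix(sentences, embeddings, dim):
    """Build a term-by-sentence matrix whose cells combine raw term frequency
    with the cosine similarity between a word vector and its sentence centroid."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(sentences)))
    for j, sent in enumerate(sentences):
        vecs = [embeddings.get(w, np.zeros(dim)) for w in sent]
        centroid = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
        for w in sent:
            v = embeddings.get(w, np.zeros(dim))
            denom = np.linalg.norm(v) * np.linalg.norm(centroid)
            sim = float(v @ centroid / denom) if denom > 0 else 0.0
            # Term frequency scaled up by semantic closeness to the sentence.
            A[index[w], j] += 1.0 + max(sim, 0.0)
    return A

def lsa_summarize(sentences, embeddings, dim, n_select=2):
    """Rank sentences by their weight in the top latent topics
    (SVD-based sentence selection) and return their indices."""
    A = embedding_weighted_matrix(sentences, embeddings, dim)
    # SVD of the weighted term-sentence matrix: A = U @ diag(S) @ Vt.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(n_select, Vt.shape[0])
    # Score each sentence by its singular-value-weighted length
    # in the top-k latent topics.
    scores = np.sqrt(((S[:k, None] * Vt[:k]) ** 2).sum(axis=0))
    ranked = np.argsort(scores)[::-1][:n_select]
    return sorted(ranked)  # keep original sentence order in the summary

if __name__ == "__main__":
    doc = [
        "word embeddings give a distributed representation of words".split(),
        "latent semantic analysis builds a term sentence matrix".split(),
        "the weighting scheme controls the quality of the representation".split(),
        "summaries are built from the highest scoring sentences".split(),
    ]
    rng = np.random.default_rng(0)
    dim = 50
    # Placeholder vectors; in practice these would be pretrained embeddings.
    emb = {w: rng.normal(size=dim) for s in doc for w in s}
    print(lsa_summarize(doc, emb, dim, n_select=2))
```

Running the script prints the indices of the selected sentences; with real pretrained embeddings and the paper's modified weighting formulas, the same SVD-based selection step would simply operate on a better input matrix.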


Related research

07/31/2018 · An Enhanced Latent Semantic Analysis Approach for Arabic Document Summarization
The fast-growing amount of information on the Internet makes the researc...

06/14/2015 · Leveraging Word Embeddings for Spoken Document Summarization
Owing to the rapidly growing multimedia content available on the Interne...

02/23/2019 · Vector of Locally-Aggregated Word Embeddings (VLAWE): A novel document-level embedding
In this paper, we propose a novel representation for text documents base...

07/08/2017 · Efficient Vector Representation for Documents through Corruption
We present an efficient document representation learning framework, Docu...

06/03/2019 · Contextually Propagated Term Weights for Document Representation
Word embeddings predict a word from its neighbours by learning small, de...

04/14/2020 · Extending Text Informativeness Measures to Passage Interestingness Evaluation (Language Model vs. Word Embedding)
Standard informativeness measures used to evaluate Automatic Text Summar...

05/12/2021 · Playing Codenames with Language Graphs and Word Embeddings
Although board games and video games have been studied for decades in ar...