Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

by Kamal Al-Sabahi et al.

Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to his/her query. To tackle this issue, much attention has been paid to automatic document summarization. The key to any successful document summarizer is a good document representation. Traditional approaches based on word overlap mostly fail to produce such a representation. Word embeddings, distributed representations of words, have shown excellent performance by allowing words to be matched at the semantic level. However, naively concatenating word embeddings makes common words dominant, which in turn diminishes representation quality. In this paper, we employ word embeddings to improve the weighting schemes used to calculate the input matrix of the Latent Semantic Analysis (LSA) method. Two embedding-based weighting schemes are proposed and then combined to compute the values of this matrix. The new schemes are modified versions of the augment weight and the entropy frequency, combining the strengths of the traditional weighting schemes with word embeddings. The proposed approach is evaluated experimentally on three well-known English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization for English. The proposed model performs comprehensively better than the state-of-the-art methods, indicating that it provides a better document representation and, as a result, a better document summary.
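To make the idea concrete, the sketch below computes an augment-weight term score scaled by the cosine similarity between a word's embedding and the centroid of its sentence. This is a minimal illustrative variant, not the paper's exact formulas: the function names, the centroid-based similarity, and the toy embeddings are assumptions introduced here for demonstration only.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two dense vectors (0.0 if either is zero)."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def embedding_augment_weight(sentence, word, embeddings):
    """Augmented term frequency (0.5 + 0.5 * tf / max_tf) scaled by the word's
    cosine similarity to the sentence's embedding centroid.

    NOTE: illustrative sketch only; the paper's embedding-based augment weight
    may combine these quantities differently.
    """
    words = sentence.split()
    tf = Counter(words)
    max_tf = max(tf.values())
    dim = len(next(iter(embeddings.values())))
    # Centroid of the embeddings of all words in the sentence.
    centroid = [sum(embeddings[w][d] for w in words) / len(words)
                for d in range(dim)]
    aug = 0.5 + 0.5 * tf[word] / max_tf
    # Clamp negative similarities to zero so weights stay non-negative.
    return aug * max(cosine(embeddings[word], centroid), 0.0)
```

A frequent word that is also semantically central to its sentence receives a high weight, while a frequent but semantically off-center word is damped, which is the intuition behind letting embeddings temper raw frequency.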




