Novel Word Embedding and Translation-based Language Modeling for Extractive Speech Summarization

07/22/2016
by Kuan-Yu Chen, et al.

Word embedding methods learn continuous, distributed vector representations of words with neural networks, which can capture semantic and/or syntactic cues and, in turn, be used to induce similarity measures among words, sentences, and documents in context. These methods can be categorized as prediction-based or count-based according to their training objectives and model architectures. Their pros and cons have been extensively analyzed and evaluated in recent studies, but relatively little work has continued this line of research to develop an enhanced learning method that brings together the advantages of the two model families. In addition, the interpretation of the learned word representations remains somewhat opaque. Motivated by these observations, this paper presents a novel method for learning word representations that not only inherits the advantages of classic word embedding methods but also offers a clearer and more rigorous interpretation of the learned representations. Built upon the proposed word embedding method, we further formulate a translation-based language modeling framework for the extractive speech summarization task. A series of empirical evaluations demonstrates the effectiveness of the proposed word representation learning and language modeling techniques in extractive speech summarization.
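To make the translation-based language-modeling idea concrete, here is a minimal sketch of how word embeddings can drive sentence ranking for extractive summarization. The vocabulary, hand-crafted embedding vectors, and softmax-based translation probability below are illustrative assumptions, not the paper's actual formulation: a candidate sentence is scored by the likelihood of "translating" its words into the words of the document.

```python
import numpy as np

# Toy vocabulary with hand-crafted unit embeddings: "economic" words share one
# direction, "weather" words another. (In the paper these representations are
# learned; the vectors here are illustrative assumptions only.)
vocab = ["economy", "market", "stocks", "weather", "rain", "growth"]
econ, weather = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E = np.stack([econ, econ, econ, weather, weather, econ])
idx = {w: i for i, w in enumerate(vocab)}

def translation_prob(w_from, w_to):
    """P(w_to | w_from): softmax over embedding similarities.
    (An assumed parameterization; the paper's exact estimator may differ.)"""
    sims = E[idx[w_from]] @ E.T
    p = np.exp(sims) / np.exp(sims).sum()
    return p[idx[w_to]]

def sentence_score(sentence, document):
    """Log-likelihood of 'translating' the sentence into the document:
    each document word is generated by averaging the translation
    probabilities over the sentence's words."""
    score = 0.0
    for d_word in document:
        p = np.mean([translation_prob(s_word, d_word) for s_word in sentence])
        score += np.log(p + 1e-12)
    return score

# Rank candidate sentences against the document; the higher-scoring
# sentence is the better extractive-summary candidate.
doc = ["economy", "market", "growth", "stocks"]
candidates = [["market", "stocks"], ["weather", "rain"]]
ranked = sorted(candidates, key=lambda s: sentence_score(s, doc), reverse=True)
```

With these toy vectors the topically matching sentence (`["market", "stocks"]`) outscores the unrelated one, since its words translate into the document's words with higher probability.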


