On Training Bi-directional Neural Network Language Model with Noise Contrastive Estimation

02/19/2016
by   Tianxing He, et al.

We propose to train a bi-directional neural network language model (NNLM) with noise contrastive estimation (NCE). Experiments are conducted on a rescoring task on the PTB data set. It is shown that the NCE-trained bi-directional NNLM outperformed the one trained by conventional maximum likelihood training. Regretfully, however, it still did not outperform the baseline uni-directional NNLM.
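For readers unfamiliar with NCE training of language models, the sketch below illustrates the standard per-token NCE objective: the observed word is treated as a positive example and k words drawn from a noise distribution q (e.g. the unigram distribution) as negatives, with the usual log(k·q(w)) correction term. Function and variable names here are hypothetical and only meant to convey the idea; this is not the authors' code.

```python
# Minimal sketch of a per-token NCE loss (hypothetical names and shapes).
# score_true:  model's unnormalized log-score for the observed word, shape (batch,)
# logq_true:   log q(w) for the observed word, shape (batch,)
# score_noise: scores for k noise samples per token, shape (batch, k)
# logq_noise:  log q(w~) for the noise samples, shape (batch, k)
import torch
import torch.nn.functional as F

def nce_loss(score_true, logq_true, score_noise, logq_noise, k):
    log_k = torch.log(torch.tensor(float(k)))
    # Positive term: classify the data word as "real".
    pos = F.logsigmoid(score_true - (log_k + logq_true))
    # Negative terms: classify each noise sample as "noise".
    neg = F.logsigmoid(-(score_noise - (log_k + logq_noise)))
    # Negative log-likelihood of the binary classification problem.
    return -(pos + neg.sum(dim=-1)).mean()
```

Because the loss only needs unnormalized scores, it avoids the softmax normalization over the full vocabulary, which is what makes NCE attractive for NNLM training.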


