Language Models with Pre-Trained (GloVe) Word Embeddings

10/12/2016
by Victor Makarenkov, et al.

In this work we implement the training of a Language Model (LM) using a Recurrent Neural Network (RNN) and GloVe word embeddings, introduced by Pennington et al. in [1]. The implementation follows the general idea of training RNNs for LM tasks presented in [2], but uses a Gated Recurrent Unit (GRU) [3] as the memory cell rather than the more commonly used LSTM [4].
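To make the setup concrete, the sketch below shows one way to wire a GRU-based language model on top of a pre-trained GloVe embedding matrix. It is a minimal illustration in PyTorch, not the authors' implementation: the vocabulary size, embedding dimension, hidden size, and the random stand-in for the GloVe matrix are all placeholder assumptions.

```python
# Minimal sketch (not the authors' code): a GRU language model whose embedding
# layer is initialized from pre-trained GloVe vectors. Sizes are illustrative.
import torch
import torch.nn as nn

class GRULanguageModel(nn.Module):
    def __init__(self, glove_weights, hidden_size=256, freeze_embeddings=True):
        super().__init__()
        vocab_size, embed_dim = glove_weights.shape
        # Initialize the embedding layer from the GloVe matrix (optionally frozen).
        self.embedding = nn.Embedding.from_pretrained(
            glove_weights, freeze=freeze_embeddings)
        # GRU memory cell instead of the more commonly used LSTM.
        self.gru = nn.GRU(embed_dim, hidden_size, batch_first=True)
        # Project hidden states back to vocabulary logits for next-word prediction.
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, token_ids, hidden=None):
        embedded = self.embedding(token_ids)          # (batch, seq, embed_dim)
        output, hidden = self.gru(embedded, hidden)   # (batch, seq, hidden_size)
        logits = self.decoder(output)                 # (batch, seq, vocab_size)
        return logits, hidden

# Usage example with a random stand-in for the GloVe matrix; real use would
# load e.g. glove.6B.300d vectors aligned with the corpus vocabulary.
glove_weights = torch.randn(10000, 300)
model = GRULanguageModel(glove_weights)
tokens = torch.randint(0, 10000, (8, 35))             # batch of token-id sequences
logits, _ = model(tokens)
# Standard next-word prediction loss: shift targets by one position.
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 10000),
                             tokens[:, 1:].reshape(-1))
```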
