Syntactically Informed Text Compression with Recurrent Neural Networks

08/08/2016
by David Cox, et al.

We present a self-contained system for constructing natural language models for use in text compression. Our system improves upon previous neural-network-based models by using recent advances in syntactic parsing, namely Google's SyntaxNet, to augment character-level recurrent neural networks (RNNs). RNNs have proven exceptional at modeling sequence data such as text, as their architecture allows them to capture long-term contextual information.
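The abstract does not spell out the architecture, but a minimal sketch of the idea, assuming PyTorch and a placeholder integer tag sequence standing in for SyntaxNet's parse output, is a character-level LSTM whose per-character inputs are concatenated with embedded syntactic tags; the predicted next-character distribution is the kind of model output an entropy coder would consume.

# Minimal sketch (assumptions: PyTorch; "tags" is a hypothetical per-character
# syntactic tag sequence standing in for SyntaxNet output).
import torch
import torch.nn as nn

class SyntaxAugmentedCharRNN(nn.Module):
    def __init__(self, vocab_size, num_tags, char_emb=64, tag_emb=16, hidden=256):
        super().__init__()
        self.char_embedding = nn.Embedding(vocab_size, char_emb)
        self.tag_embedding = nn.Embedding(num_tags, tag_emb)
        # LSTM input is the character embedding concatenated with the tag embedding
        self.lstm = nn.LSTM(char_emb + tag_emb, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, chars, tags, state=None):
        # chars, tags: (batch, seq_len) integer tensors
        x = torch.cat([self.char_embedding(chars), self.tag_embedding(tags)], dim=-1)
        out, state = self.lstm(x, state)
        logits = self.proj(out)  # (batch, seq_len, vocab_size)
        return logits, state

# Toy usage: next-character probabilities conditioned on the characters seen so
# far and their (hypothetical) syntactic tags; a compressor would hand this
# distribution to an arithmetic coder.
model = SyntaxAugmentedCharRNN(vocab_size=128, num_tags=32)
chars = torch.randint(0, 128, (1, 16))
tags = torch.randint(0, 32, (1, 16))
logits, _ = model(chars, tags)
next_char_probs = torch.softmax(logits[:, -1], dim=-1)

This is only an illustration of the general approach (syntactic features conditioning a character-level sequence model), not the authors' implementation.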


