Natural Language Processing with Small Feed-Forward Networks

08/01/2017
by Jan A. Botha, et al.

We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
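To make the architecture concrete, below is a minimal sketch of the kind of small feed-forward model the abstract describes: hashed character n-gram features are embedded, averaged into a bag-of-features vector, and passed through a single small hidden layer to a softmax. The bucket count, embedding and hidden dimensions, hash function, and task size are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's exact setup).
NUM_BUCKETS = 5000   # rows in the hashed embedding table
EMBED_DIM = 16       # small embedding dimension
HIDDEN_DIM = 64      # single small hidden layer
NUM_CLASSES = 10     # e.g. language IDs or tag inventory

rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (NUM_BUCKETS, EMBED_DIM))            # hashed embedding table
W1 = rng.normal(0, 0.1, (EMBED_DIM, HIDDEN_DIM)); b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(0, 0.1, (HIDDEN_DIM, NUM_CLASSES)); b2 = np.zeros(NUM_CLASSES)

def char_ngrams(text, n=3):
    """Extract character n-grams with simple boundary padding."""
    padded = "^" + text + "$"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def featurize(text):
    """Hash each n-gram into an embedding row and average them (bag of features)."""
    ids = [hash(g) % NUM_BUCKETS for g in char_ngrams(text)]
    return E[ids].mean(axis=0)

def predict(text):
    """One ReLU hidden layer followed by a softmax over classes."""
    x = featurize(text)
    h = np.maximum(0.0, x @ W1 + b1)
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

print(predict("bonjour tout le monde"))  # untrained weights, so roughly uniform output
```

Under these assumptions the memory footprint is dominated by the embedding table (NUM_BUCKETS x EMBED_DIM), so the budget question the abstract raises largely comes down to how many hashed buckets and embedding dimensions to spend versus hidden-layer width.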


