Natural Language Processing with Small Feed-Forward Networks

08/01/2017 · by Jan A. Botha, et al. (Google)

We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
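
To make the setup concrete, below is a minimal sketch of the kind of model the abstract describes: hashed character n-gram features mapped to tiny embeddings, averaged, and passed through a single small hidden layer. The bucket count, embedding and hidden sizes, n-gram order, and the CRC32 hashing are illustrative assumptions for this sketch, not the paper's actual per-task configurations.

```python
import zlib
import numpy as np

# Illustrative sizes only (assumptions); the paper tunes these per task.
NUM_BUCKETS = 5000   # hashed feature buckets
EMBED_DIM = 16       # per-feature embedding size
HIDDEN_DIM = 64      # single small hidden layer
NUM_CLASSES = 3      # toy label set

rng = np.random.default_rng(0)
embeddings = rng.normal(0.0, 0.1, (NUM_BUCKETS, EMBED_DIM))
W1 = rng.normal(0.0, 0.1, (EMBED_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(0.0, 0.1, (HIDDEN_DIM, NUM_CLASSES))
b2 = np.zeros(NUM_CLASSES)

def char_ngrams(text, n=3):
    """Character n-grams with boundary markers."""
    padded = f"^{text}$"
    return [padded[i:i + n] for i in range(max(1, len(padded) - n + 1))]

def featurize(text):
    """Hash n-grams into a fixed number of buckets and average their embeddings."""
    ids = [zlib.crc32(g.encode("utf-8")) % NUM_BUCKETS for g in char_ngrams(text)]
    return embeddings[ids].mean(axis=0)

def predict(text):
    """Forward pass: hashed-embedding average -> ReLU hidden layer -> softmax."""
    h = np.maximum(0.0, featurize(text) @ W1 + b1)
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

if __name__ == "__main__":
    # Weights are untrained, so the output is roughly uniform; the point is
    # the footprint: all parameters together are well under a megabyte.
    print(predict("resource-constrained"))
```

Because features are hashed into a fixed number of buckets, the memory footprint scales with the bucket count rather than the raw vocabulary, which is one simple way to stay within a small memory budget.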

Related Research

10/02/2015 · A Primer on Neural Network Models for Natural Language Processing
Over the past few years, neural networks have re-emerged as powerful mac...

05/01/2017 · Efficient Natural Language Response Suggestion for Smart Reply
This paper presents a computationally efficient machine-learned method f...

03/05/2020 · Predicting Memory Compiler Performance Outputs using Feed-Forward Neural Networks
Typical semiconductor chips include thousands of mostly small memories. ...

07/25/2018 · Conditional Information Gain Networks
Deep neural network models owe their representational power to the high ...

08/05/2020 · Working Memory for Online Memory Binding Tasks: A Hybrid Model
Working Memory is the brain module that holds and manipulates informatio...

05/19/2022 · Causal Discovery and Injection for Feed-Forward Neural Networks
Neural networks have proven to be effective at solving a wide range of p...

10/02/2021 · Recurrent circuits as multi-path ensembles for modeling responses of early visual cortical neurons
In this paper, we showed that adding within-layer recurrent connections ...

Code Repositories

feedforward-NLP: Natural Language Processing with Small Feed-Forward Networks
