Natural Language Processing with Small Feed-Forward Networks

08/01/2017
by Jan A. Botha, et al.

We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while requiring considerably less memory and computation than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
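To make the kind of model the abstract describes concrete, here is a minimal sketch of a small feed-forward network for a word-tagging task: hashed string features are mapped to low-dimensional embeddings, a window of embeddings feeds a single hidden layer, and a softmax layer predicts the tag. This is an illustrative assumption of the general architecture, not the authors' implementation; all sizes, the feature set, and the hashing scheme are placeholder choices.

```python
# Minimal sketch (assumed architecture, not the paper's code) of a compact
# feed-forward tagger: hashed features -> small embeddings -> one hidden
# layer -> softmax over tags. All dimensions here are illustrative.
import zlib
import numpy as np

rng = np.random.default_rng(0)

VOCAB_BUCKETS = 5000   # hashed feature buckets keep the embedding table small
EMBED_DIM = 16         # low-dimensional embeddings
HIDDEN_DIM = 64        # a single small hidden layer ("shallow")
NUM_TAGS = 12          # e.g., a coarse part-of-speech tag set (assumed)

# Randomly initialized parameters; a real system would train (and possibly
# compress) these weights.
E = rng.normal(0.0, 0.1, (VOCAB_BUCKETS, EMBED_DIM))
W1 = rng.normal(0.0, 0.1, (3 * EMBED_DIM, HIDDEN_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(0.0, 0.1, (HIDDEN_DIM, NUM_TAGS))
b2 = np.zeros(NUM_TAGS)

def bucket(feature: str) -> int:
    """Hash a string feature into a fixed number of embedding buckets."""
    return zlib.crc32(feature.encode("utf-8")) % VOCAB_BUCKETS

def predict_tag(words: list, i: int) -> int:
    """Tag word i from a 3-word window of hashed embeddings (pad at edges)."""
    window = [words[j] if 0 <= j < len(words) else "<pad>"
              for j in (i - 1, i, i + 1)]
    x = np.concatenate([E[bucket(w)] for w in window])  # input: 3 * EMBED_DIM
    h = np.maximum(0.0, x @ W1 + b1)                    # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))

print(predict_tag("the cat sat".split(), 1))
```

At these (assumed) sizes the model has roughly 5000×16 + 48×64 + 64×12 + 76 ≈ 84K parameters, almost all of them in the embedding table. This is the memory-allocation tradeoff the abstract refers to: under a fixed budget, bucket count, embedding width, and hidden size compete for the same bytes.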
