Smaller Text Classifiers with Discriminative Cluster Embeddings

06/23/2019
by Mingda Chen, et al.

Word embedding parameters often dominate overall model sizes in neural methods for natural language processing. We reduce deployed model sizes of text classifiers by learning a hard word clustering in an end-to-end manner. We use the Gumbel-Softmax distribution to maximize over the latent clustering while minimizing the task loss. We propose variations that selectively assign additional parameters to words, which further improves accuracy while still remaining parameter-efficient.
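As a rough illustration of the idea described in the abstract (not the authors' implementation), a hard cluster embedding layer can be sketched in PyTorch: the class name `ClusterEmbedding`, the sizes, and the temperature value below are hypothetical, and the only library call assumed is `torch.nn.functional.gumbel_softmax`.

```python
# Sketch: each word holds logits over K clusters; a Gumbel-Softmax sample
# picks a cluster (hard one-hot, differentiable via the straight-through
# estimator), and the word's vector is the chosen cluster embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterEmbedding(nn.Module):
    def __init__(self, vocab_size, num_clusters, embed_dim, tau=1.0):
        super().__init__()
        # Per-word logits over clusters (vocab_size x num_clusters).
        self.cluster_logits = nn.Parameter(torch.zeros(vocab_size, num_clusters))
        # Shared cluster vectors (num_clusters x embed_dim).
        self.cluster_vectors = nn.Parameter(0.1 * torch.randn(num_clusters, embed_dim))
        self.tau = tau

    def forward(self, token_ids):
        logits = self.cluster_logits[token_ids]              # (..., num_clusters)
        if self.training:
            # Hard sample; gradients flow through the soft relaxation.
            assign = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        else:
            # Deterministic argmax cluster at test time.
            assign = F.one_hot(logits.argmax(-1), logits.size(-1)).float()
        return assign @ self.cluster_vectors                 # (..., embed_dim)

# Usage: drop-in replacement for nn.Embedding in a text classifier.
emb = ClusterEmbedding(vocab_size=50000, num_clusters=256, embed_dim=100)
tokens = torch.randint(0, 50000, (8, 40))   # batch of 8 sequences, length 40
vectors = emb(tokens)                        # shape (8, 40, 100)
```

In a sketch like this, the per-word logits are only needed during training; at deployment each word can be reduced to a single stored cluster index, so the saved model is essentially the small table of cluster vectors, which is the parameter saving the abstract refers to.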
