Learning with Latent Structures in Natural Language Processing: A Survey

01/03/2022
by Zhaofeng Wu, et al.
University of Washington

While end-to-end learning with fully differentiable models has enabled tremendous success in natural language processing (NLP) and machine learning, there has been significant recent interest in learning with latent discrete structures to incorporate better inductive biases for improved end-task performance and better interpretability. This paradigm, however, is not straightforwardly amenable to mainstream gradient-based optimization methods. This work surveys three main families of methods for learning such models: surrogate gradients, continuous relaxation, and marginal likelihood maximization via sampling. We conclude with a review of applications of these methods and an inspection of the learned latent structures that they induce.
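
The three families named in the abstract can be seen in miniature on a toy categorical latent variable. The PyTorch sketch below is an illustration, not code from the survey: it shows a straight-through surrogate gradient, a Gumbel-Softmax continuous relaxation, and a score-function (REINFORCE) estimator for the sampling-based approach. The tensor names and the toy downstream objective are assumptions made here for concreteness.

```python
# A toy categorical latent variable with 5 possible values; names and the
# downstream objective are assumptions for illustration, not from the survey.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 5, requires_grad=True)  # batch of 4 unnormalized score vectors

# 1) Surrogate gradients (e.g., straight-through): the forward pass uses the
#    discrete argmax, while the backward pass flows through the softmax.
probs = F.softmax(logits, dim=-1)
hard = F.one_hot(probs.argmax(dim=-1), num_classes=5).float()
z_st = hard - probs.detach() + probs  # discrete in the forward, soft in the backward

# 2) Continuous relaxation: Gumbel-Softmax draws a differentiable "soft sample"
#    that approaches a one-hot vector as the temperature tau -> 0.
z_gs = F.gumbel_softmax(logits, tau=0.5, hard=False)

# 3) Marginal likelihood maximization via sampling: the score-function
#    (REINFORCE) estimator needs only log-probabilities of exact discrete
#    samples; it is unbiased but can have high variance.
dist = torch.distributions.Categorical(logits=logits)
z = dist.sample()                               # exact discrete samples
reward = -((z.float() - 2.0) ** 2)              # toy downstream objective
loss = -(dist.log_prob(z) * reward.detach()).mean()
loss.backward()                                 # gradients reach logits via log_prob
```

For structured latents such as trees or alignments, the same three choices recur, with the argmax replaced by structured MAP inference and the softmax by structured relaxations (e.g., SparseMAP, which appears in the related research below).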

Related research

06/10/2021 · Modeling Hierarchical Structures with Continuous Recursive Neural Networks
Recursive Neural Networks (RvNNs), which compose sequences according to ...

01/18/2023 · Discrete Latent Structure in Neural Networks
Many types of data from fields including natural language processing, co...

12/17/2020 · Continual Lifelong Learning in Natural Language Processing: A Survey
Continual learning (CL) aims to enable information systems to learn from...

09/03/2018 · Towards Dynamic Computation Graphs via Sparse Latent Structure
Deep NLP models benefit from underlying structures in the data---e.g., p...

10/05/2020 · Understanding the Mechanics of SPIGOT: Surrogate Gradients for Latent Structure Learning
Latent structure models are a powerful tool for modeling language data: ...

09/30/2020 · Learning Rewards from Linguistic Feedback
We explore unconstrained natural language feedback as a learning signal ...

02/12/2018 · SparseMAP: Differentiable Sparse Structured Inference
Structured prediction requires searching over a combinatorial number of ...