
Towards Dynamic Computation Graphs via Sparse Latent Structure

09/03/2018
by Vlad Niculae et al.
Unbabel Inc. · Cornell University

Deep NLP models benefit from underlying structures in the data---e.g., parse trees---typically extracted using off-the-shelf parsers. Recent attempts to jointly learn the latent structure encounter a tradeoff: either make factorization assumptions that limit expressiveness, or sacrifice end-to-end differentiability. Using the recently proposed SparseMAP inference, which retrieves a sparse distribution over latent structures, we propose a novel approach for end-to-end learning of latent structure predictors jointly with a downstream predictor. To the best of our knowledge, our method is the first to enable unrestricted dynamic computation graph construction from the global latent structure, while maintaining differentiability.
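
The key idea is worth unpacking: SparseMAP returns a posterior over latent structures in which all but a few structures receive exactly zero probability, so the downstream network only needs to build computation branches for the structures actually selected, while the mixture weights remain differentiable. The sketch below is a toy illustration of that mechanism, not the paper's implementation: it uses sparsemax (the unstructured projection that SparseMAP generalizes to structured problems) over five hypothetical candidate structures, with plain linear layers standing in for structure-conditioned predictors.

    import torch

    def sparsemax(scores):
        # Euclidean projection onto the probability simplex
        # (Martins & Astudillo, 2016); unlike softmax, the result
        # is typically sparse: many entries are exactly zero.
        z, _ = torch.sort(scores, descending=True)
        k = torch.arange(1, scores.numel() + 1, dtype=scores.dtype)
        cssv = torch.cumsum(z, dim=0) - 1.0
        support = z * k > cssv                  # prefix of coordinates kept
        tau = cssv[support][-1] / k[support][-1]
        return torch.clamp(scores - tau, min=0.0)

    torch.manual_seed(0)
    h = torch.randn(8)                                    # shared input encoding
    branches = [torch.nn.Linear(8, 4) for _ in range(5)]  # one branch per structure
    scores = torch.randn(5, requires_grad=True)           # candidate structure scores

    posterior = sparsemax(scores)  # sparse distribution over structures
    # Dynamic graph: only branches with nonzero probability are built.
    out = sum(p * f(h) for p, f in zip(posterior, branches) if p > 0)
    out.sum().backward()           # gradients flow through the sparse mixture
    print(posterior)               # most entries are exactly zero

Because the posterior concentrates on a handful of structures, the cost of the dynamic graph scales with the number of selected structures rather than with the (typically exponential) number of possible ones, which is what makes conditioning the downstream predictor on global latent structure tractable without sampling or exhaustive marginalization.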

Related Research

06/04/2018 · On estimation and inference in latent structure random graphs
We define a latent structure model (LSM) random graph as a random dot pr...

06/24/2019 · Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming
We treat projective dependency trees as latent variables in our probabil...

01/03/2022 · Learning with Latent Structures in Natural Language Processing: A Survey
While end-to-end learning with fully differentiable models has enabled t...

10/05/2020 · Understanding the Mechanics of SPIGOT: Surrogate Gradients for Latent Structure Learning
Latent structure models are a powerful tool for modeling language data: ...

07/07/2020 · GraphOpt: Learning Optimization Models of Graph Formation
Formation mechanisms are fundamental to the study of complex networks, b...

06/01/2020 · Latent Domain Learning with Dynamic Residual Adapters
A practical shortcoming of deep neural networks is their specialization ...

Code Repositories

sparsemap

SparseMAP: differentiable sparse structure inference

