Interpretable Structure-Evolving LSTM

03/08/2017
by Xiaodan Liang, et al.

This paper develops a general framework for learning interpretable data representations via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchical graph structures. Instead of learning LSTM models over pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging graph nodes with high compatibility along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. Candidate graph structures are accordingly generated by grouping nodes into cliques according to their merging probabilities. We then produce the new graph structure with a Metropolis-Hastings algorithm, which alleviates the risk of getting stuck in local optima by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is constructed by taking the partitioned cliques as its nodes. During the evolving process, the representation becomes more abstract at higher levels, where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of the structure-evolving LSTM on the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.
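The evolution step described in the abstract can be sketched in code. The following is a minimal illustration under stated assumptions, not the paper's implementation: `compatibility(u, v)` is a hypothetical stand-in for the merging probability the paper derives from the LSTM gate outputs of two connected nodes, and nodes and edges are plain Python objects.

```python
import math
import random


def evolve_graph(nodes, edges, compatibility):
    """One structure-evolving step: stochastically merge connected nodes
    into cliques; the cliques become the nodes of the next-level graph.

    `compatibility(u, v)` returns a merging probability in [0, 1]; it is a
    hypothetical stand-in for the score computed from LSTM gate outputs.
    """
    # Union-find tracks which clique each node currently belongs to.
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    # Stochastically merge each connected pair with its merging probability.
    for u, v in edges:
        if random.random() < compatibility(u, v):
            parent[find(u)] = find(v)

    # Group nodes by their clique root: the partitioned cliques form the
    # node set of the higher-level graph.
    cliques = {}
    for n in nodes:
        cliques.setdefault(find(n), []).append(n)
    return list(cliques.values())


def mh_accept(log_score_new, log_score_old):
    """Metropolis-Hastings acceptance: always accept an improvement,
    otherwise accept with probability exp(new - old). The stochastic
    acceptance is what lets the sampler escape local optima. The scores
    here are hypothetical log-posteriors of the two graph structures."""
    return random.random() < min(1.0, math.exp(log_score_new - log_score_old))
```

A full structure-evolving layer would propose a new graph with `evolve_graph`, score both the current and proposed structures, and keep the proposal only when `mh_accept` returns `True`; otherwise the current structure is retained and resampled.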


Related research

From Nodes to Networks: Evolving Recurrent Neural Networks (03/12/2018)
Gated recurrent networks such as those composed of Long Short-Term Memor...

Semantic Object Parsing with Graph LSTM (03/23/2016)
By taking the semantic object parsing task as an exemplar application sc...

Visualizing and Understanding Recurrent Networks (06/05/2015)
Recurrent Neural Networks (RNNs), and specifically a variant with Long S...

Semantic Object Parsing with Local-Global Long Short-Term Memory (11/14/2015)
Semantic object parsing is a fundamental task for understanding objects ...

Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay (05/12/2021)
Sequential information contains short- to long-range dependencies; howev...

Graph Generation with Variational Recurrent Neural Network (10/02/2019)
Generating graph structures is a challenging problem due to the diverse ...

Transformers for Capturing Multi-level Graph Structure using Hierarchical Distances (08/22/2023)
Graph transformers need strong inductive biases to derive meaningful att...
