Lossless Compression of Structured Convolutional Models via Lifting

07/13/2020
by   Gustav Sourek, et al.

Lifting is an efficient technique for scaling up graphical models generalized to relational domains by exploiting the underlying symmetries. Concurrently, neural models are continuously expanding from grid-like tensor data into structured representations, such as attributed graphs and relational databases. To address the irregular structure of the data, these models typically extrapolate on the idea of convolution, effectively introducing parameter sharing in their dynamically unfolded computation graphs. The computation graphs themselves then reflect the symmetries of the underlying data, similarly to lifted graphical models. Inspired by lifting, we introduce a simple and efficient technique to detect these symmetries and compress the neural models without any loss of information. We demonstrate through experiments that such compression can lead to significant speedups of structured convolutional models, such as various Graph Neural Networks, across tasks such as molecule classification and knowledge-base completion.


