Accelerating Message Passing for MAP with Benders Decomposition

05/13/2018
by Julian Yarkony, et al.

We introduce a novel mechanism to tighten the local polytope relaxation for MAP inference in Markov random fields whose variables have small state spaces. We consider a surjection of the variables onto a set of hyper-variables and apply the local polytope relaxation over these hyper-variables. The state space of each individual hyper-variable is constructed to be enumerable, while the product of the state spaces of a pair of hyper-variables is not easily enumerable, which makes message passing inference intractable. To circumvent the difficulty of enumerating this product space, we introduce a novel Benders decomposition approach. It describes each message by an upper envelope of affine functions of the individual variables composing the hyper-variable that receives the message. The envelope is tight at the minimizers, which are shared with the true message. Benders rows are constructed to be Pareto optimal and are generated by an efficient procedure targeted at binary problems.
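A generic Benders decomposition sketch (in Python, using NumPy and SciPy's HiGHS LP solver) may help make the mechanism concrete. It is not the paper's method: the covering-style subproblem, the cost data, and every name below are illustrative assumptions, and the cuts are plain textbook Benders rows rather than the Pareto-optimal rows the paper constructs. What it does show is the idea from the abstract: a "message" Q(b) = min_y f(y, b) over a large inner space is never tabulated jointly with the receiving binary variables b; it is replaced by the upper envelope (pointwise maximum) of affine Benders cuts in b, which is tight at the shared minimizer.

import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_b, n_y, n_con = 3, 4, 5          # 3 binary receiver variables, LP subproblem in y
c_b = rng.uniform(0.2, 1.0, n_b)   # unary cost on the receiving variables
d = rng.uniform(0.5, 1.5, n_y)     # subproblem cost
A = rng.uniform(0.2, 1.0, (n_con, n_y))
H = rng.uniform(0.0, 1.0, (n_con, n_b))
s = rng.uniform(1.0, 2.0, n_con)

def subproblem(b):
    # Q(b) = min_y d.y  s.t.  A y >= s - H b,  y >= 0   (a toy covering LP in y).
    # Written for linprog as  -A y <= H b - s;  the HiGHS marginals are the duals
    # lam <= 0 of those inequalities, so  Q(b') >= lam.(H b' - s)  for every b',
    # with equality at the queried b: a valid Benders cut (affine in b).
    res = linprog(d, A_ub=-A, b_ub=H @ b - s,
                  bounds=[(0, None)] * n_y, method="highs")
    return res.fun, res.ineqlin.marginals

cuts = []                          # each cut is an affine lower bound on Q(b)

def envelope(b):
    # upper envelope (pointwise max) of the Benders cuts collected so far
    return max((lam @ (H @ b - s) for lam in cuts), default=-np.inf)

states = [np.array(t, dtype=float) for t in itertools.product([0, 1], repeat=n_b)]
for _ in range(50):
    # Master step: the receiver's state space is small enough to enumerate,
    # so minimize c_b.b plus the current cut envelope by brute force.
    b_hat = min(states, key=lambda b: c_b @ b + envelope(b))
    q_val, lam = subproblem(b_hat)
    if q_val <= envelope(b_hat) + 1e-9:
        break                      # envelope is tight at the minimizer: done
    cuts.append(lam)               # otherwise add a new Benders row

print("b* =", b_hat, " objective =", c_b @ b_hat + q_val, " cuts:", len(cuts))

The master step mirrors the abstract's point: the state space of the receiving (hyper-)variable is enumerable, so only the envelope over b is ever evaluated, never the joint product of inner and outer state spaces. The paper goes further by engineering the rows to be Pareto optimal for binary problems, so that far fewer cuts are needed before the envelope becomes tight at the minimizer.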


