
Relaxed Scheduling for Scalable Belief Propagation

by Vitaly Aksenov et al.

The ability to leverage large-scale hardware parallelism has been one of the key enablers of the rapid recent progress in machine learning. Consequently, considerable effort has been invested into developing efficient parallel variants of classic machine learning algorithms. However, despite this wealth of knowledge on parallelization, some classic machine learning algorithms remain hard to parallelize efficiently while maintaining convergence. In this paper, we focus on efficient parallel algorithms for the key machine learning task of inference on graphical models, in particular on the fundamental belief propagation algorithm. We address the challenge of efficiently parallelizing this classic paradigm by showing how to leverage scalable relaxed schedulers in this context. We present an extensive empirical study, showing that our approach outperforms previous parallel belief propagation implementations both in scalability and in wall-clock convergence time, on a range of practical applications.
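The core idea can be illustrated concretely: residual belief propagation repeatedly updates the message with the largest residual, and a relaxed scheduler (e.g., a MultiQueue-style structure) serves these tasks only approximately in priority order, trading strict ordering for scalability. The following sequential Python sketch shows the relaxation mechanism; the class and its structure are illustrative assumptions, not the paper's actual implementation:

```python
import heapq
import random


class MultiQueue:
    """Relaxed priority queue sketch (MultiQueue-style): items are spread
    across several sequential heaps; pop inspects two randomly chosen
    heaps and removes the smaller top. This is an illustrative toy,
    not the authors' concurrent implementation."""

    def __init__(self, num_queues=4, seed=0):
        self.queues = [[] for _ in range(num_queues)]
        self.rng = random.Random(seed)

    def push(self, priority, item):
        # Insert into a uniformly random heap.
        heapq.heappush(self.rng.choice(self.queues), (priority, item))

    def pop(self):
        # Pick two random non-empty heaps, pop from the one whose
        # top has the smaller key (so ordering is only approximate).
        candidates = [q for q in self.queues if q]
        if not candidates:
            return None
        a = self.rng.choice(candidates)
        b = self.rng.choice(candidates)
        best = a if a[0] <= b[0] else b
        return heapq.heappop(best)


# Usage sketch: in residual BP, the priority would be the negated
# message residual, so high-residual updates are served (roughly) first.
mq = MultiQueue(num_queues=4)
for residual, edge in [(0.9, "a->b"), (0.1, "b->c"), (0.5, "c->a")]:
    mq.push(-residual, edge)
first = mq.pop()
```

In a concurrent setting, each thread would pop and process tasks from such a structure independently; the relaxation removes the sequential bottleneck of an exact priority queue at the cost of occasionally processing a lower-residual message early.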



