Exploring Parallelism in Learning Belief Networks

02/06/2013
by TongSheng Chu, et al.

It has been shown that a class of probabilistic domain models cannot be learned correctly by several existing algorithms that employ a single-link look-ahead search. When a multi-link look-ahead search is used instead, the computational complexity of the learning algorithm increases. We study how to use parallelism to tackle this increased complexity and to speed up learning in large domains. An algorithm is proposed to decompose the learning task for parallel processing. A further task decomposition is used to balance the load among processors and to increase speed-up and efficiency. For learning from very large datasets, we present a regrouping of the available processors so that slow file-based data access can be replaced by fast in-memory access. Our implementation on a parallel computer demonstrates the effectiveness of the algorithm.
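The paper's own decomposition is not reproduced in this abstract. As a rough illustration of the general idea — partitioning the candidate sets of a multi-link look-ahead step among processors, scoring each chunk independently, and combining the results — here is a minimal sketch. All names, the chunking scheme, and the toy scoring function are hypothetical stand-ins, not the authors' algorithm; a real learner would score candidates by a network quality measure such as entropy.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

def score_candidate(links):
    # Hypothetical placeholder score: a real belief-network learner would
    # evaluate the improvement in a model quality measure when this set
    # of links is added to the current structure.
    return -sum(abs(a - b) for a, b in links)

def best_in_chunk(chunk):
    # Each processor scores its share of candidates independently.
    return max(chunk, key=score_candidate)

def parallel_lookahead(nodes, k, workers=4):
    # Enumerate all k-link candidate sets (the multi-link look-ahead),
    # deal them round-robin into roughly equal chunks for load balance,
    # and return the globally best-scoring candidate.
    pairs = list(combinations(nodes, 2))
    candidates = list(combinations(pairs, k))
    chunks = [candidates[i::workers] for i in range(workers)]
    chunks = [c for c in chunks if c]  # drop empty chunks
    with ProcessPoolExecutor(max_workers=workers) as ex:
        winners = list(ex.map(best_in_chunk, chunks))
    return max(winners, key=score_candidate)

if __name__ == "__main__":
    best = parallel_lookahead(range(5), k=2, workers=2)
    print(best, score_candidate(best))
```

The round-robin split is one crude way to balance load; the paper's further task decomposition addresses exactly the situation where such static splits leave some processors idle.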

