Aggregating Dependent Gaussian Experts in Local Approximation

10/17/2020
by Hamed Jalali, et al.

Distributed Gaussian processes (DGPs) are prominent local-approximation methods for scaling Gaussian processes (GPs) to large datasets. Instead of estimating a single global model, they partition the training set into subsets and train a local expert on each, reducing the time complexity. This strategy rests on a conditional independence assumption, which amounts to assuming perfect diversity among the local experts. In practice, however, this assumption is often violated, and aggregating the experts then yields sub-optimal and inconsistent solutions. In this paper, we propose a novel approach for aggregating Gaussian experts by detecting strong violations of conditional independence. The dependency between experts is determined with a Gaussian graphical model, which yields a precision matrix. The precision matrix encodes the conditional dependencies between experts and is used to detect strongly dependent experts and construct an improved aggregation. Experiments on both synthetic and real datasets show that our method outperforms state-of-the-art (SOTA) DGP approaches while being substantially more time-efficient than SOTA methods that build on independent experts.
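The pipeline the abstract describes (train local experts on subsets, estimate a precision matrix over their predictions, flag strongly dependent experts, then aggregate) can be sketched in a few lines. This is a simplified illustration, not the paper's algorithm: the helper names `rbf` and `gp_mean`, the jittered covariance inverse standing in for a fitted Gaussian graphical model (e.g. graphical lasso), the 0.5 partial-correlation threshold, and the precision-mass weighting are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, length_scale=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_mean(X_tr, y_tr, X_te, noise=1e-2):
    # Posterior mean of a zero-mean GP regressor with an RBF kernel.
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    return rbf(X_te, X_tr) @ np.linalg.solve(K, y_tr)

# Synthetic 1-D task, partitioned across local experts.
X = rng.uniform(-3.0, 3.0, size=(300, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(300)
n_experts = 5
subsets = np.array_split(rng.permutation(len(X)), n_experts)

# Expert predictions on a shared grid: rows = inputs, columns = experts.
X_test = np.linspace(-3.0, 3.0, 100).reshape(-1, 1)
P = np.column_stack([gp_mean(X[i], y[i], X_test) for i in subsets])

# Estimate a precision matrix over the experts' predictions. The paper fits a
# Gaussian graphical model; a jittered inverse of the empirical covariance
# stands in here so the example stays well-posed and dependency-free.
cov = np.cov(P, rowvar=False)
precision = np.linalg.inv(cov + 1e-6 * np.eye(n_experts))

# Off-diagonal precision entries encode conditional dependence between
# experts; convert them to partial correlations and flag strong pairs.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
dependent_pairs = [(i, j)
                   for i in range(n_experts)
                   for j in range(i + 1, n_experts)
                   if abs(partial_corr[i, j]) > 0.5]

# Toy aggregation: weight experts by their (clipped) total precision mass, so
# experts that are redundant with others are down-weighted. Illustrative only;
# the paper's aggregation rule is more refined.
w = np.clip(precision.sum(axis=1), 0.0, None)
w = w / w.sum() if w.sum() > 0 else np.full(n_experts, 1.0 / n_experts)
y_agg = P @ w
```

Since the experts all approximate the same underlying function, their predictions are highly correlated and the resulting `dependent_pairs` list will typically be non-empty, which is exactly the regime where the paper argues that independence-based aggregation breaks down.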

Related research:

- Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes (02/07/2022)
- Gaussian Experts Selection using Graphical Models (02/02/2021)
- Expert Selection in Distributed Gaussian Processes: A Multi-label Classification Approach (11/17/2022)
- Correlated Product of Experts for Sparse Gaussian Process Regression (12/17/2021)
- Unsupervised Ensemble Learning with Dependent Classifiers (10/20/2015)
- Gaussian Process-Gated Hierarchical Mixtures of Experts (02/09/2023)
- Statistical Model Aggregation via Parameter Matching (11/01/2019)
