Probabilistic partition of unity networks: clustering based deep approximation

07/07/2021
by Nat Trask, et al.

Partition of unity networks (POU-Nets) have been shown capable of realizing algebraic convergence rates for regression and the solution of PDEs, but require empirical tuning of training parameters. We enrich POU-Nets with a Gaussian noise model to obtain a probabilistic generalization amenable to gradient-based minimization of a maximum likelihood loss. The resulting architecture provides spatial representations of both noiseless and noisy data as Gaussian mixtures with closed-form expressions for variance, which provide an estimator of local error. The training process yields remarkably sharp partitions of the input space based upon the correlation of function values. This classification of training points is amenable to a hierarchical refinement strategy that significantly improves the localization of the regression, allowing higher-order polynomial approximation to be used. The framework scales more favorably to large data sets than Gaussian process regression and allows for spatially varying uncertainty, leveraging the expressive power of deep neural networks while bypassing the expensive training associated with other probabilistic deep learning methods. Compared to standard deep neural networks, the framework demonstrates hp-convergence without the use of regularizers to tune the localization of partitions. We provide benchmarks quantifying performance in high and low dimensions, demonstrating that convergence rates depend only on the latent dimension of data within high-dimensional space. Finally, we introduce a new open-source data set of PDE-based simulations of a semiconductor device and perform unsupervised extraction of a physically interpretable reduced-order basis.
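The model described in the abstract lends itself to a compact illustration: a partition network produces softmax weights that form a partition of unity, each partition carries a low-order polynomial mean and a Gaussian noise variance, and the whole model is trained by gradient-based minimization of the mixture's negative log-likelihood. The sketch below is a minimal illustration of this idea, not the authors' implementation; the class name PPOUNet, the degree-one (affine) per-partition polynomials, and all hyperparameters are assumptions chosen for brevity.

```python
# Minimal sketch (not the authors' code) of a probabilistic partition-of-unity
# regressor: softmax partition weights, per-partition affine means and noise
# variances, trained by minimizing the Gaussian-mixture negative log-likelihood.
import math
import torch
import torch.nn as nn

class PPOUNet(nn.Module):
    """Hypothetical probabilistic POU-Net with K partitions and linear experts."""
    def __init__(self, dim_in=1, n_parts=8, hidden=64):
        super().__init__()
        # partition network: maps x to K unnormalized partition logits
        self.partition = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.Tanh(),
            nn.Linear(hidden, n_parts))
        # degree-one polynomial (affine) coefficients for each partition
        self.coeff = nn.Parameter(torch.zeros(n_parts, dim_in))
        self.bias = nn.Parameter(torch.zeros(n_parts))
        # log standard deviation of the Gaussian noise model, one per partition
        self.log_sigma = nn.Parameter(torch.zeros(n_parts))

    def forward(self, x):
        log_phi = torch.log_softmax(self.partition(x), dim=-1)  # (N, K), partition of unity
        mu = x @ self.coeff.T + self.bias                        # (N, K) per-partition means
        return log_phi, mu

    def nll(self, x, y):
        # negative log-likelihood of the mixture sum_k phi_k(x) N(y; mu_k(x), sigma_k^2)
        log_phi, mu = self(x)
        log_comp = -0.5 * (((y.unsqueeze(-1) - mu) / self.log_sigma.exp()) ** 2
                           + 2.0 * self.log_sigma + math.log(2.0 * math.pi))
        return -torch.logsumexp(log_phi + log_comp, dim=-1).mean()

# usage sketch: fit noisy samples of sin(2*pi*x), then read off per-partition noise levels
torch.manual_seed(0)
x = torch.rand(512, 1)
y = torch.sin(2.0 * math.pi * x[:, 0]) + 0.05 * torch.randn(512)
model = PPOUNet(dim_in=1, n_parts=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = model.nll(x, y)
    loss.backward()
    opt.step()
print(float(loss), model.log_sigma.exp().detach())
```

Because each partition's variance enters the likelihood in closed form, the learned log_sigma values give a spatially varying uncertainty estimate of the kind described above, and the softmax logits localize into sharp partitions as training proceeds.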


Related research

01/27/2021 · Partition of unity networks: deep hp-approximation
Approximation theorists have established best-in-class optimal approxima...

10/06/2022 · Probabilistic partition of unity networks for high-dimensional regression problems
We explore the probabilistic partition of unity network (PPOU-Net) model...

07/09/2019 · Convergence Rates for Gaussian Mixtures of Experts
We provide a theoretical treatment of over-specified Gaussian mixtures o...

11/10/2021 · Collocation approximation by deep neural ReLU networks for parametric elliptic PDEs with lognormal inputs
We obtained convergence rates of the collocation approximation by deep R...

05/17/2022 · Bagged Polynomial Regression and Neural Networks
Series and polynomial regression are able to approximate the same functi...

05/31/2022 · Improvements to Supervised EM Learning of Shared Kernel Models by Feature Space Partitioning
Expectation maximisation (EM) is usually thought of as an unsupervised l...
