Overestimation learning with guarantees

01/26/2021
by Adrien Gauffriau, et al.

We describe a complete method that learns a neural network guaranteed to overestimate a reference function on a given domain, so that the network can be used as a surrogate for that function. The method involves two steps. In the first step, we construct an adaptive set of Majoring Points. In the second step, we optimize a well-chosen neural network to overestimate the Majoring Points. To extend the guarantee from the Majoring Points to the whole domain, we must make an assumption on the reference function; in this study, we assume it is monotonic. We provide experiments on synthetic and real problems, which show that the density of the Majoring Points concentrates where the reference function varies. The learned over-estimations are both guaranteed to overestimate the reference function and shown empirically to approximate it well. Experiments on real data show that the method makes it possible to use the surrogate function in embedded systems where an underestimation would be critical and computing the reference function directly requires too many resources.
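To illustrate the core idea, here is a minimal NumPy sketch under simplifying assumptions: the reference function (here `np.sqrt`, chosen arbitrarily) is monotonically increasing, the Majoring Points come from a uniform rather than adaptive grid, and a piecewise-constant lookup stands in for the paper's trained neural network. The guarantee follows from monotonicity: evaluating the reference function at the right endpoint of each subinterval majorizes it over that whole subinterval.

```python
import numpy as np

def majoring_points(f, a, b, n):
    # For a monotonically increasing f on [a, b], f evaluated at the
    # right endpoint of each subinterval majorizes f on that subinterval.
    xs = np.linspace(a, b, n + 1)
    return xs[:-1], xs[1:], f(xs[1:])  # interval lefts, rights, majorant values

def surrogate(x, rights, vals):
    # Piecewise-constant overestimate: look up the majorant value of the
    # subinterval containing x.
    idx = np.clip(np.searchsorted(rights, x), 0, len(vals) - 1)
    return vals[idx]

f = np.sqrt  # hypothetical monotonic reference function on [0, 4]
lefts, rights, vals = majoring_points(f, 0.0, 4.0, 50)

# The overestimation guarantee holds everywhere on the domain.
x = np.linspace(0.0, 4.0, 1000)
assert np.all(surrogate(x, rights, vals) >= f(x))
```

Refining the grid (larger `n`) tightens the overestimate; the paper's adaptive construction instead concentrates Majoring Points where the reference function varies most.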

