A Universal Approximation Result for Difference of log-sum-exp Neural Networks

05/21/2019
by Giuseppe C. Calafiore, et al.

We show that a neural network whose output is obtained as the difference of the outputs of two feedforward networks with exponential activation function in the hidden layer and logarithmic activation function in the output node (LSE networks) is a smooth universal approximator of continuous functions over convex, compact sets. By using a logarithmic transform, this class of networks maps to a family of subtraction-free ratios of generalized posynomials, which we also show to be universal approximators of positive functions over log-convex, compact subsets of the positive orthant. The main advantage of Difference-LSE networks with respect to classical feedforward neural networks is that, after a standard training phase, they provide surrogate models for design that possess a specific difference-of-convex-functions form, which makes them optimizable via relatively efficient numerical methods. In particular, by adapting an existing difference-of-convex algorithm to these models, we obtain an algorithm for performing effective optimization-based design. We illustrate the proposed approach by applying it to data-driven design of a diet for a patient with type-2 diabetes.
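For concreteness, the networks in question have the form f(x) = log(sum_k exp(a_k' x + b_k)): an affine layer, exponential hidden activations, a sum, and a logarithmic output node. The difference-LSE model is f1(x) - f2(x), a difference of two smooth convex functions. Under the substitution y_i = e^{x_i}, exp(f(x)) becomes the posynomial sum_k e^{b_k} prod_i y_i^{a_{ki}}, which is where the ratio-of-posynomials view comes from. The sketch below is a minimal NumPy illustration of this structure; the function names and weight shapes are ours, not the paper's.

```python
import numpy as np

def lse_network(x, A, b):
    """LSE network: exponential activations in the hidden layer,
    logarithmic activation at the output node, i.e.
    f(x) = log(sum_k exp(A[k] @ x + b[k])). Convex and smooth in x."""
    z = A @ x + b                     # hidden-layer pre-activations, shape (K,)
    m = z.max()                       # log-sum-exp shift for numerical stability
    return m + np.log(np.exp(z - m).sum())

def dlse_network(x, A1, b1, A2, b2):
    """Difference-LSE network: the difference of two LSE networks,
    hence a smooth difference-of-convex function of x."""
    return lse_network(x, A1, b1) - lse_network(x, A2, b2)

# Illustrative usage with random weights (4 hidden units per network, 2 inputs).
rng = np.random.default_rng(0)
A1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
A2, b2 = rng.normal(size=(4, 2)), rng.normal(size=4)
print(dlse_network(np.array([0.5, -1.0]), A1, b1, A2, b2))
```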
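The optimization step exploits this difference-of-convex structure: with f = g - h and both g, h convex LSE functions, a standard DCA-style iteration linearizes h at the current iterate x_t and minimizes the convex surrogate g(x) - grad h(x_t)' x, using the fact that the gradient of an LSE function is a softmax-weighted combination of its rows. The paper adapts an existing DC algorithm to these models; the loop below is only a generic sketch of that idea (dca_minimize, the inner solver, and the fixed iteration count are illustrative assumptions), and it reuses lse_network from the snippet above.

```python
import numpy as np
from scipy.optimize import minimize

def softmax(z):
    e = np.exp(z - z.max())           # shifted for numerical stability
    return e / e.sum()

def dca_minimize(x0, A1, b1, A2, b2, iters=30):
    """Generic DCA loop for f(x) = LSE1(x) - LSE2(x): linearize the
    subtracted convex term LSE2 at the current iterate, then minimize
    the resulting convex surrogate LSE1(x) - grad_h @ x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad_h = A2.T @ softmax(A2 @ x + b2)   # gradient of LSE2 at x_t
        # Convex subproblem; any convex solver would do here.
        x = minimize(lambda y, g=grad_h: lse_network(y, A1, b1) - g @ y, x).x
    return x
```

Each iteration decreases f, so the loop converges to a critical point of the DC objective; this is the standard DCA guarantee, not a claim specific to the paper's adapted algorithm.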

Related research

06/20/2018
Log-sum-exp neural networks and posynomial models for convex and log-log-convex data
We show that a one-layer feedforward neural network with exponential act...

03/31/2021
CDiNN - Convex Difference Neural Networks
Neural networks with ReLU activation function have been shown to be univ...

01/17/2022
Parametrized Convex Universal Approximators for Decision-Making Problems
Parametrized max-affine (PMA) and parametrized log-sum-exp (PLSE) networ...

07/26/2019
Two-hidden-layer Feedforward Neural Networks are Universal Approximators: A Constructive Approach
It is well known that Artificial Neural Networks are universal approxima...

09/30/2021
Introducing the DOME Activation Functions
In this paper, we introduce a novel non-linear activation function that ...

06/25/2021
Ladder Polynomial Neural Networks
Polynomial functions have plenty of useful analytical properties, but th...

10/07/2021
Universal Approximation Under Constraints is Possible with Transformers
Many practical problems need the output of a machine learning model to s...
