On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

01/29/2023
by Shijun Zhang et al.

This paper studies the expressive power of deep neural networks from the perspective of function compositions. We show that repeated compositions of a single fixed-size ReLU network produce super expressive power. In particular, we prove by construction that ℒ_2 ∘ g^{∘r} ∘ ℒ_1 can approximate 1-Lipschitz continuous functions on [0,1]^d with an error 𝒪(r^{-1/d}), where g is realized by a fixed-size ReLU network, ℒ_1 and ℒ_2 are two affine linear maps matching the dimensions, and g^{∘r} denotes the r-fold composition of g. Furthermore, we extend this result to generic continuous functions on [0,1]^d, with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has good approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
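To make the architecture concrete, here is a minimal NumPy sketch of the composition ℒ_2 ∘ g^{∘r} ∘ ℒ_1. It is an illustration only, not the paper's construction: the weights are random placeholders, the hidden width n and composition count r are arbitrary choices, and a single ReLU layer stands in for the fixed-size network g (which in the paper may itself have several layers).

```python
import numpy as np

rng = np.random.default_rng(0)

d, n, r = 2, 16, 8  # input dim, fixed hidden width, number of compositions

# Affine maps L1: R^d -> R^n and L2: R^n -> R (random placeholders)
W1, b1 = rng.standard_normal((n, d)), rng.standard_normal(n)
W2, b2 = rng.standard_normal((1, n)), rng.standard_normal(1)

# One fixed-size ReLU block g: R^n -> R^n, reused at every step
Wg, bg = rng.standard_normal((n, n)) / np.sqrt(n), rng.standard_normal(n)

def g(z):
    """A stand-in for the fixed-size ReLU network; the same weights
    are reused at every one of the r composition steps."""
    return np.maximum(Wg @ z + bg, 0.0)

def model(x):
    """Compute L2( g^{r}( L1(x) ) ), i.e. L2 o g^{or} o L1."""
    z = W1 @ x + b1
    for _ in range(r):
        z = g(z)  # repeated composition of the same fixed-size network
    return W2 @ z + b2

print(model(np.array([0.3, 0.7])))
```

The structural point the sketch makes explicit: only the weights of g, ℒ_1, and ℒ_2 are stored, so increasing r deepens the network without adding any parameters; the paper's result says this parameter-free growth in depth alone already drives the approximation error down at rate 𝒪(r^{-1/d}).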


