On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

01/29/2023
by Shijun Zhang, et al.

This paper studies the expressive power of deep neural networks from the perspective of function compositions. We show that repeated compositions of a single fixed-size ReLU network can produce super expressive power. In particular, we prove by construction that ℒ_2 ∘ g^{∘r} ∘ ℒ_1 can approximate 1-Lipschitz continuous functions on [0,1]^d with an error of 𝒪(r^{-1/d}), where g is realized by a fixed-size ReLU network, ℒ_1 and ℒ_2 are two affine linear maps matching the dimensions, and g^{∘r} denotes the r-fold composition of g. Furthermore, we extend such a result to generic continuous functions on [0,1]^d, with the approximation error characterized by the modulus of continuity. Our results reveal that a continuous-depth network generated via a dynamical system has good approximation power even if its dynamics function is time-independent and realized by a fixed-size ReLU network.
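The architecture ℒ_2 ∘ g^{∘r} ∘ ℒ_1 is simple to instantiate: one affine map into a fixed width, the same small ReLU block applied r times with shared weights, and one affine map out. Below is a minimal PyTorch sketch of this structure, not the authors' construction; the width, depth of g, and r are hypothetical placeholders, and the weights are randomly initialized rather than the ones built in the paper's proof.

```python
# Sketch of L2 ∘ g^{∘r} ∘ L1 with a single fixed-size ReLU block g (weights shared
# across all r applications). Sizes here are illustrative, not from the paper.
import torch
import torch.nn as nn

class RepeatedComposition(nn.Module):
    def __init__(self, d_in=2, width=8, d_out=1, r=16):
        super().__init__()
        self.L1 = nn.Linear(d_in, width)   # affine map L1: matches the input dimension
        # g: one fixed-size ReLU block, reused r times
        self.g = nn.Sequential(nn.Linear(width, width), nn.ReLU())
        self.L2 = nn.Linear(width, d_out)  # affine map L2: matches the output dimension
        self.r = r

    def forward(self, x):
        h = self.L1(x)
        for _ in range(self.r):            # g^{∘r}: apply the same block r times
            h = self.g(h)
        return self.L2(h)

net = RepeatedComposition()
print(net(torch.rand(4, 2)).shape)         # torch.Size([4, 1])
```

Since the parameter count of g is fixed, increasing r deepens the composition without adding parameters, which is what makes the 𝒪(r^{-1/d}) rate a statement about composition depth rather than network size.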

