1 Introduction
Deep neural networks, in particular generative adversarial networks [Goodfellow et al., 2014], have recently been used to produce generative models for real world data that can capture very complex structures. This is especially true for natural images (see for instance [Nguyen et al., 2016]). These generative priors have been successfully used to efficiently solve classical inverse problems in signal processing, like super resolution ([Johnson et al., 2016]) and compressed sensing ([Bora et al., 2017]). The latter work numerically demonstrates that a generative prior can be exploited to solve the compressed sensing problem with ten times fewer measurements than classic compressed sensing theory requires. Follow-up work by [Hand and Voroninski, 2017] explained the success of local methods (namely empirical risk minimization) on the compressed sensing task by assuming a generative model given by a multilayer neural network with random weights and ReLU activation functions.
The aim of this paper is to propose a theoretical framework that allows us to analyze neural networks in the context of another classical inverse problem in signal processing: signal denoising. It has been experimentally established that deep neural networks can be used for image inpainting and denoising
[Xie et al., 2012]. We are interested in denoising in the high-noise regime, in which modern methods that do not rely on machine learning appear less capable. In this work we propose a simple model for generative networks in which linear maps are composed with nonlinear activation functions, and we study which mathematical properties of the activation function allow signal denoising with local methods. We assume our generative model can be expressed as the composition of simple neural network layers we call SUNLayers, and we use tools from harmonic analysis to understand which properties of the activation function are good for the denoising task. We perform numerical experiments to complement the theory.
1.1 Main contributions
The main contributions of this paper can be summarized in two points.

We introduce SUNLayer, a simple model for spherical uniform neural network layers (Section 2).

We prove performance guarantees for denoising with a generative network under the SUNLayer model. In particular, given a noisy observation of one SUNLayer with activation function $\sigma$, we show that all critical points of the associated least squares objective are close to the ground truth parameter, provided the activation function is well behaved and the noise is appropriately small (Section 4).
We believe the theoretical framework we introduce in this paper could be useful for providing mathematical intuition about neural networks in a more general context. See Section 6 for a more in-depth discussion.
2 SUNLayer: a neural network model
Let $x \in S^{d-1} \subset \mathbb{R}^d$ be an input signal. We consider the linear map $y \mapsto \langle x, y \rangle$ on $S^{d-1}$, where $\langle x, y \rangle$ is the inner product in $\mathbb{R}^d$ between $x$ and $y$. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be an activation function. We define one layer of the SUNLayer neural network to be
(1)  $\mathcal{S}_\sigma(x) := \sigma(\langle x, \cdot \rangle), \qquad \mathcal{S}_\sigma(x) : S^{d-1} \to \mathbb{R}.$
Note that if instead of the linear map we had considered, as one usually does in neural networks, a matrix $W \in \mathbb{R}^{m \times d}$ with rows $w_1, \ldots, w_m$, then the analogue of $\mathcal{S}_\sigma(x)$ is $\sigma(Wx)$, which can be seen as a function defined on the rows of $W$ as $w_i \mapsto \sigma(\langle x, w_i \rangle)$. The SUNLayer model heuristically generalizes this linear step to a continuum of possible rows.
We are interested in the case where $\mathcal{S}_\sigma(x) \in V$, where $V$ is a finite dimensional subspace of $L^2(S^{d-1})$ (and therefore locally compact). The finite dimensionality will allow us to compose several layers of the SUNLayer model. For all $x \in S^{d-1}$, we have that $\mathcal{S}_\sigma(x) \in L^2(S^{d-1})$. A very simple observation (see proof of Lemma 1) shows that $\|\mathcal{S}_\sigma(x)\|_{L^2(S^{d-1})} = c_{\sigma,d}$ for all $x \in S^{d-1}$, where $c_{\sigma,d}$ is a constant that depends on the activation function and on the dimension of the domain. Therefore the normalization step (which a priori may have resembled practice standards like batch normalization [Ioffe and Szegedy, 2015]) amounts to simple rescaling, and furthermore we even have $c_{\sigma,d} = 1$ when $\sigma$ is scaled appropriately (see Lemma 3). We then conclude that the composition of SUNLayers is well defined as long as the relevant subspace $V$ is finite dimensional. In Section 4 we observe that a necessary and sufficient condition for this to happen is that $\sigma$ is a polynomial.
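As a sanity check on this rescaling claim, one can discretize a SUNLayer by sampling finitely many directions on the sphere. The sketch below is our own illustration (the function name `sun_layer` and the Monte Carlo setup are not from the paper): the empirical $L^2$ norm of the layer output is approximately the same for every unit-norm input, as the rotation-invariance argument of Lemma 1 predicts.

```python
import numpy as np

def sun_layer(x, sigma, directions):
    """One discretized SUNLayer: evaluate sigma(<x, u>) at sampled directions u."""
    return sigma(directions @ x)

rng = np.random.default_rng(0)
d, m = 3, 5000

# Approximately uniform sample of directions on S^{d-1}.
U = rng.standard_normal((m, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)

sigma = np.tanh  # any reasonable activation

# The empirical L2 norm of the output is the same for every unit-norm
# input, up to Monte Carlo error.
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 0.6, 0.8])
n1 = np.linalg.norm(sun_layer(x1, sigma, U)) / np.sqrt(m)
n2 = np.linalg.norm(sun_layer(x2, sigma, U)) / np.sqrt(m)
print(abs(n1 - n2))  # small
```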
2.1 Denoising
Let us assume we have a generative model $G$ that given a parameter $x$ produces $G(x)$, an element of a target space (for instance an image). The generative model could have been produced, for instance, with a generative adversarial network (GAN) trained with a large set of images or, more generally, a structured dataset (that comes from an unknown latent distribution). The GAN consists of two neural networks: one known as the generator, which aims to construct new data plausibly coming from the latent distribution of the training set, and the other the discriminator, which aims to distinguish between instances from the true dataset and the candidates produced by the generator. Both networks get trained against each other. After training, the generator is a neural network with several layers. We assume the parameter space is normalized, so the generator defines a map $G : S^{d-1} \to \mathcal{T}$, where $\mathcal{T}$ is the target space. For all $x \in S^{d-1}$ we have that $G(x)$ is an element in the target space (for instance, an image) and $x$ is the vector of parameters that generates it.
The question we aim to answer is when it is possible to denoise an element $z$ to the closest element in the image of $G$ by using local methods like gradient descent. Figure 1 shows an example of the phenomenon we aim to explain. We assume our generative model is the composition of layers from the SUNLayer model defined in (1), and we solve the denoising problem one layer at a time. Fix $d$. Given $z = \sigma(\langle x_0, \cdot \rangle) + \nu$ for some $x_0 \in S^{d-1}$ and noise $\nu \in L^2(S^{d-1})$, denoising one SUNLayer corresponds with the least squares problem
(2)  $\min_{x \in S^{d-1}}\ \left\| \sigma(\langle x, \cdot \rangle) - z \right\|_{L^2(S^{d-1})}^2.$
There exists at least one minimizer for (2) due to compactness.
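The least squares problem above can be attacked with local methods. Below is a minimal sketch (our own illustration, not the paper's experiment) that discretizes the sphere with random directions and runs projected gradient descent on (2) for a hand-picked polynomial activation; all names and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 400

# Discretize the sphere with m random directions (a crude stand-in for
# integration over S^{d-1}).
U = rng.standard_normal((m, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)

sigma = lambda t: t + 0.3 * t**2   # a smooth polynomial activation
dsigma = lambda t: 1 + 0.6 * t     # its derivative

x0 = np.array([0.6, 0.8, 0.0])     # ground-truth parameter, ||x0|| = 1
z = sigma(U @ x0) + 0.05 * rng.standard_normal(m)  # noisy one-layer observation

def objective(x):
    return np.mean((sigma(U @ x) - z) ** 2)

# Projected gradient descent: Euclidean gradient step, then renormalize
# back onto the sphere.
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for _ in range(3000):
    r = sigma(U @ x) - z
    x = x - 0.2 * (2 / m) * (U.T @ (r * dsigma(U @ x)))
    x /= np.linalg.norm(x)

print(np.dot(x, x0))  # close to 1: the recovered parameter is near x0
```

In this benign setting the local method lands near the ground truth despite the nonconvexity, which is the phenomenon the paper sets out to explain.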
3 Preliminaries: spherical harmonics
To analyze denoising under the SUNLayer model, we leverage ideas from spherical harmonics. In this section we summarize some classical results about spherical harmonics that can be found in Chapter 2 of [Morimoto, 1998], focusing on the theorems and definitions we use in this paper. We refer the reader to [Morimoto, 1998] for a comprehensive review.
Let $\mathcal{P}_k(\mathbb{R}^d)$ be the space of homogeneous polynomials of degree $k$ in $d$ variables with real coefficients (one could also consider complex coefficients, but real coefficients are enough for the scope of this paper).
Definition 1 (Spherical harmonics).
The Laplacian is the differential operator defined as $\Delta := \sum_{i=1}^{d} \frac{\partial^2}{\partial x_i^2}$,
and the space of spherical harmonics is defined as:
(3)  $\mathcal{H}_k(d) := \left\{ P|_{S^{d-1}} \ : \ P \in \mathcal{P}_k(\mathbb{R}^d),\ \Delta P = 0 \right\}.$
In other words, $\mathcal{H}_k(d)$ is the restriction to $S^{d-1}$ of the degree-$k$ homogeneous polynomials with Laplacian 0.
Proposition 1.
$\mathcal{H}_k(d)$ is a finite dimensional space and
(4)  $\dim \mathcal{H}_k(d) = \binom{d+k-1}{k} - \binom{d+k-3}{k-2}.$
In the sequel, we let $a_{k,d}$ denote the dimension of $\mathcal{H}_k(d)$.
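The dimension formula is easy to evaluate numerically; for instance it recovers the familiar count of $2k+1$ spherical harmonics of degree $k$ when $d = 3$, and the two-dimensional spaces $\mathrm{span}\{\cos k\theta, \sin k\theta\}$ when $d = 2$. A small sketch (the function name is our own):

```python
from math import comb

def dim_spherical_harmonics(k, d):
    """Dimension of H_k(d): C(d+k-1, k) - C(d+k-3, k-2)."""
    second = comb(d + k - 3, k - 2) if k >= 2 else 0
    return comb(d + k - 1, k) - second

# d = 3 recovers the familiar 2k + 1 harmonics of degree k on the 2-sphere.
print([dim_spherical_harmonics(k, 3) for k in range(5)])  # [1, 3, 5, 7, 9]
```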
Definition 2.
For fixed $k$ and $d$, let $\{Y_{k,j}\}_{j=1}^{a_{k,d}}$ be an orthonormal basis of $\mathcal{H}_k(d)$. Define the bilinear form
$F_{k,d}(u, v) := \sum_{j=1}^{a_{k,d}} Y_{k,j}(u)\, Y_{k,j}(v).$
A simple computation shows that $F_{k,d}$ is independent of the choice of the orthonormal basis. The bilinear forms $F_{k,d}$ will be very useful in the analysis of the SUNLayer model. Some of their relevant properties are summarized in the following proposition.
Proposition 2.
The following statements hold.

Reproducing property: $\langle F_{k,d}(u, \cdot), Y \rangle_{L^2(S^{d-1})} = Y(u)$ for all $Y \in \mathcal{H}_k(d)$.

Zonal property: there exists $G_{k,d} : [-1, 1] \to \mathbb{R}$ so that $F_{k,d}(u, v) = G_{k,d}(\langle u, v \rangle)$. In particular $F_{k,d}(u, v)$ only depends on $\langle u, v \rangle$.

The function $G_{k,d}$ is the Gegenbauer polynomial of degree $k$ and dimension $d$. The set $\{G_{k,d}\}_{k \ge 0}$ is an orthogonal basis of polynomials over $[-1, 1]$ with respect to the measure
(5)  $d\mu(t) = \mathrm{vol}(S^{d-2})\, (1 - t^2)^{(d-3)/2}\, dt$
(here $dt$ is the standard Borel measure in $[-1, 1]$). Note that this is not a standard normalization for the Gegenbauer polynomials but we use it to simplify the results of this paper. In fact Chapter 2 of [Morimoto, 1998] considers the Legendre polynomials $P_{k,d}$, normalized so that $P_{k,d}(1) = 1$ (the term $\mathrm{vol}(S^{d-2})$ is the $(d-2)$-dimensional volume of the sphere and it does not show up in Morimoto's analysis since he uses the normalized measure on the spheres). In Chapter 5 Morimoto considers the Gegenbauer polynomials as a generalization of the Legendre polynomials in which the dimension parameter can be any real number, with a different normalization.

The discussion on pages 26–27 of [Morimoto, 1998] shows that $F_{k,d}(u, v) = \frac{a_{k,d}}{\mathrm{vol}(S^{d-1})}\, P_{k,d}(\langle u, v \rangle)$. This, together with the facts $F_{k,d}(u, u) = G_{k,d}(1)$
and $\int_{S^{d-1}} F_{k,d}(u, u)\, du = a_{k,d}$, allows us to identify the correct normalization for the Gegenbauer polynomials.

Using that and Theorems 2.29 and 2.34 of [Morimoto, 1998] one obtains the following identities:
(6)  $G_{k,d}(1) = \frac{a_{k,d}}{\mathrm{vol}(S^{d-1})},$  (7)  $\int_{-1}^{1} G_{k,d}(t)\, G_{l,d}(t)\, d\mu(t) = \delta_{kl}\, G_{k,d}(1).$
Using (5.1) and (5.3) of [Morimoto, 1998] (pages 97–98) one can express a relationship between $G_{k,d}$ and its derivative $G_{k,d}'$, namely
(8)  $G_{k,d}'(t) = c_{k,d}\, G_{k-1, d+2}(t)$ for an explicit constant $c_{k,d} > 0$.
Let $f \in L^2(S^{d-1})$ be a function; then one can decompose $f$ in the spherical harmonics as $f = \sum_{k \ge 0} f_k$ where $f_k \in \mathcal{H}_k(d)$. Theorem 2.45 of [Morimoto, 1998] in particular shows that for all $f_k \in \mathcal{H}_k(d)$ one has
(9)  $\Delta_{S^{d-1}}\, f_k = -k(k + d - 2)\, f_k,$
where $\Delta_{S^{d-1}}$ is the spherical Laplacian. In particular, if there exists an axis under which $f$ is rotationally invariant (i.e. $f(v) = \varphi(\langle u, v \rangle)$ for some fixed $u \in S^{d-1}$ and some $\varphi : [-1,1] \to \mathbb{R}$), then each component $f_k$ is a multiple of $G_{k,d}(\langle u, \cdot \rangle)$:
(10)  $f_k = \frac{\langle f, G_{k,d}(\langle u, \cdot \rangle) \rangle}{G_{k,d}(1)}\, G_{k,d}(\langle u, \cdot \rangle)$
(see for instance (2.9)).
Note that $F_{k,d}(u, \cdot) \in \mathcal{H}_k(d)$ for all $u \in S^{d-1}$, thus $G_{k,d}(\langle u, \cdot \rangle) \in \mathcal{H}_k(d)$. The reproducing property says that for all $Y \in \mathcal{H}_k(d)$
(11)  $\langle G_{k,d}(\langle u, \cdot \rangle), Y \rangle_{L^2(S^{d-1})} = Y(u).$
Observe that for all $t \in [-1, 1]$ there exist $u, v \in S^{d-1}$ such that $t = \langle u, v \rangle$. Then $G_{k,d}(t) = \langle F_{k,d}(u, \cdot), F_{k,d}(v, \cdot) \rangle$, which by the Cauchy–Schwarz inequality implies that $|G_{k,d}(t)| \le G_{k,d}(1)$.
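The orthogonality of the Gegenbauer polynomials can be checked numerically. The sketch below (our own illustration) uses SciPy's Gegenbauer polynomials, whose normalization differs from the one used here but which are orthogonal with respect to the same weight, with parameter $\alpha = (d-2)/2$. Taking $d = 4$ makes the weight exactly $\sqrt{1-t^2}$, so Gauss–Chebyshev quadrature of the second kind integrates the polynomial products exactly.

```python
import numpy as np
from scipy.special import gegenbauer

d = 4
alpha = (d - 2) / 2  # SciPy's Gegenbauer parameter for dimension d

# Gauss-Chebyshev quadrature of the second kind: exact for
# integral of p(t) * sqrt(1 - t^2) when deg(p) <= 2n - 1.
n = 40
j = np.arange(1, n + 1)
nodes = np.cos(j * np.pi / (n + 1))
weights = (np.pi / (n + 1)) * np.sin(j * np.pi / (n + 1)) ** 2

polys = [gegenbauer(k, alpha) for k in range(5)]
gram = np.array([[np.sum(weights * p(nodes) * q(nodes)) for q in polys]
                 for p in polys])

off_diag = gram - np.diag(np.diag(gram))
print(np.max(np.abs(off_diag)))  # ~ 0 up to floating point error
```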
4 Analysis
Given an activation function $\sigma$, since $\{G_{k,d}\}_{k \ge 0}$ form an orthogonal basis of polynomials over $[-1, 1]$ with respect to the measure $d\mu$, we can decompose $\sigma$ as
$\sigma = \sum_{k \ge 0} \hat\sigma_k\, G_{k,d}$
for some coefficients $\hat\sigma_k \in \mathbb{R}$. Then
(12)  $\sigma(\langle x, y \rangle) = \sum_{k \ge 0} \hat\sigma_k\, G_{k,d}(\langle x, y \rangle) = \sum_{k \ge 0} \hat\sigma_k\, F_{k,d}(x, y).$
In other words one layer of the SUNLayer neural network model (1) can be expressed as
$\mathcal{S}_\sigma(x) = \sum_{k \ge 0} \hat\sigma_k\, F_{k,d}(x, \cdot).$
Note that if $\sigma$ is a polynomial of degree $p$, then $\mathcal{S}_\sigma(x) \in \bigoplus_{k=0}^{p} \mathcal{H}_k(d)$, which is finite dimensional. Reciprocally, if the image of $\mathcal{S}_\sigma$ lies in a finite dimensional subspace of $L^2(S^{d-1})$, that subspace is included in $\bigoplus_{k=0}^{p} \mathcal{H}_k(d)$ for some finite $p$, which forces $\sigma$ to be a polynomial. This observation, combined with the remark from Section 2, suggests that polynomial activation functions are a useful model for studying the composition of multiple layers.
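This dichotomy is easy to visualize on the circle $S^1$, where the degree-$k$ harmonics are spanned by $\cos k\theta$ and $\sin k\theta$: a degree-$p$ polynomial activation produces a trigonometric polynomial of degree at most $p$, while a non-polynomial activation such as $\tanh$ has a full spectrum. A small sketch (our own illustration):

```python
import numpy as np

# On S^1 (taking x = (1, 0)) the layer output at angle theta is
# sigma(cos(theta)). For a polynomial sigma of degree p, the Fourier
# spectrum vanishes beyond frequency p; for non-polynomial sigma it does not.
N = 256
theta = 2 * np.pi * np.arange(N) / N

def spectrum(sigma):
    return np.abs(np.fft.rfft(sigma(np.cos(theta)))) / N

poly_spec = spectrum(lambda t: t**3)  # degree-3 polynomial activation
tanh_spec = spectrum(np.tanh)         # non-polynomial activation

print(np.max(poly_spec[4:]))  # ~ 0: no frequencies above 3
print(np.max(tanh_spec[4:]))  # clearly nonzero
```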
Lemma 1.
For all $x \in S^{d-1}$ we have $\|\mathcal{S}_\sigma(x)\|_{L^2(S^{d-1})} = c_{\sigma,d}$, where $c_{\sigma,d}$ depends only on $\sigma$ and $d$.
Proof.
Note that for all rotations $R \in O(d)$ we have
$\mathcal{S}_\sigma(Rx)(y) = \sigma(\langle Rx, y \rangle) = \sigma(\langle x, R^{-1} y \rangle) = \mathcal{S}_\sigma(x)(R^{-1} y),$
and so, since the uniform measure on $S^{d-1}$ is rotation invariant,
$\|\mathcal{S}_\sigma(Rx)\|_{L^2(S^{d-1})} = \|\mathcal{S}_\sigma(x)\|_{L^2(S^{d-1})}.$
Therefore $\|\mathcal{S}_\sigma(x)\|$ is constant for all $x \in S^{d-1}$, which implies the lemma since $O(d)$ acts transitively on $S^{d-1}$.
∎
Given $x_0 \in S^{d-1}$, according to Lemma 1 and equation (12) we need to find $x \in S^{d-1}$ that maximizes
(13)  $\langle \mathcal{S}_\sigma(x), \mathcal{S}_\sigma(x_0) \rangle = \sum_{k \ge 0} \hat\sigma_k^2\, \langle F_{k,d}(x, \cdot), F_{k,d}(x_0, \cdot) \rangle = \sum_{k \ge 0} \hat\sigma_k^2\, G_{k,d}(\langle x, x_0 \rangle).$
Note that the second equality is a consequence of the reproducing property (11). The function $g_\sigma$ defined next will be particularly useful in our analysis.
Definition 3.
Let $\sigma$ be an activation function, with Gegenbauer decomposition $\sigma = \sum_{k \ge 0} \hat\sigma_k\, G_{k,d}$. Then we define $g_\sigma : [-1, 1] \to \mathbb{R}$ as
$g_\sigma(t) := \sum_{k \ge 0} \hat\sigma_k^2\, G_{k,d}(t).$
Lemma 2.
If $\sigma \in L^2([-1,1], d\mu)$ and $\sigma = \sum_{k \ge 0} \hat\sigma_k\, G_{k,d}$ (convergence in $L^2(d\mu)$), then the functions $g_\sigma$ and $\mathcal{S}_\sigma$ are well-defined (and the convergence is also pointwise and absolute). Furthermore, if $\sigma$ is differentiable we also have that $g_\sigma'(t) = \sum_{k \ge 0} \hat\sigma_k^2\, G_{k,d}'(t)$ for all $t \in [-1, 1]$.
Proof.
See Appendix 7. ∎
Lemma 3.
If $x \in S^{d-1}$ then $\|\mathcal{S}_\sigma(x)\|_{L^2(S^{d-1})}^2 = g_\sigma(1)$.
Proof.
Take $x_0 = x$ in (13). ∎
4.1 Noiseless case
The following theorem provides a sufficient condition under which recovery is possible in the noiseless case.
Theorem 1.
Suppose $g_\sigma'(t) > 0$ for all $t \in (-1, 1)$. Then for each $x_0 \in S^{d-1}$, the only critical points of
$x \mapsto \|\mathcal{S}_\sigma(x) - \mathcal{S}_\sigma(x_0)\|_{L^2(S^{d-1})}^2, \qquad x \in S^{d-1},$
are $x_0$ and $-x_0$, with $x_0$ being the unique local minimizer.
4.2 Denoising
The following theorem is the main result of this paper.
Theorem 2.
Let $x_0 \in S^{d-1}$ and $\nu \in L^2(S^{d-1})$. We decompose $\nu$ as follows:
$\nu = \sum_{k \ge 0} \nu_k, \qquad \nu_k \in \mathcal{H}_k(d).$
Let $\eta(x) := \sum_{k \ge 0} \hat\sigma_k\, \nu_k(x)$ and let $\varepsilon := \sup_{x \in S^{d-1}} \|\nabla_{S^{d-1}}\, \eta(x)\|$. Then

Every critical point $\theta$ of (2) satisfies that
$g_\sigma'(\langle \theta, x_0 \rangle)\, \sqrt{1 - \langle \theta, x_0 \rangle^2} \le \varepsilon.$

Define $\varepsilon_k := |\hat\sigma_k|\, \sup_{x \in S^{d-1}} \|\nabla_{S^{d-1}}\, \nu_k(x)\|$; then $\varepsilon \le \sum_{k \ge 0} \varepsilon_k$.
Proof.
of Theorem 2 (a). According to Lemma 1 we need to solve
$\max_{x \in S^{d-1}}\ \langle \mathcal{S}_\sigma(x),\ \mathcal{S}_\sigma(x_0) + \nu \rangle.$
The reproducing property implies
$\langle \mathcal{S}_\sigma(x), \nu \rangle = \sum_{k \ge 0} \hat\sigma_k\, \langle F_{k,d}(x, \cdot), \nu_k \rangle = \sum_{k \ge 0} \hat\sigma_k\, \nu_k(x) =: \eta(x).$
Therefore the denoising objective is
(14)  $h(x) := g_\sigma(\langle x, x_0 \rangle) + \eta(x).$
For a critical point $\theta$ of (14), Lagrange multipliers give us
$g_\sigma'(\langle \theta, x_0 \rangle)\, x_0 + \nabla \eta(\theta) = \lambda\, \theta \quad \text{for some } \lambda \in \mathbb{R},$
and projecting onto the tangent space of $S^{d-1}$ at $\theta$ implies
$g_\sigma'(\langle \theta, x_0 \rangle)\, \big( x_0 - \langle \theta, x_0 \rangle\, \theta \big) = -\nabla_{S^{d-1}}\, \eta(\theta).$
By hypothesis we have $\|\nabla_{S^{d-1}}\, \eta(\theta)\| \le \varepsilon$, and $\|x_0 - \langle \theta, x_0 \rangle\, \theta\| = \sqrt{1 - \langle \theta, x_0 \rangle^2}$, therefore
$g_\sigma'(\langle \theta, x_0 \rangle)\, \sqrt{1 - \langle \theta, x_0 \rangle^2} \le \varepsilon.$
∎
The key parameter $\varepsilon$ in Theorem 2 depends on both the noise $\nu$ and the activation function $\sigma$. In order to understand the behavior of $\varepsilon$ in terms of the noise level $\|\nu\|$, and prove Theorem 2 (b), we choose points $u_1, \ldots, u_m \in S^{d-1}$ so that $\{G_{k,d}(\langle u_i, \cdot \rangle)\}_{i=1}^{m}$ forms a tight frame for $\mathcal{H}_k(d)$. To this end it suffices for $u_1, \ldots, u_m$ to form a spherical $2k$-design.
Definition 4 (Spherical design).
A spherical $t$-design is a sequence of points $u_1, \ldots, u_m \in S^{d-1}$ such that for every polynomial $P$ of degree at most $t$ we have
$\frac{1}{m} \sum_{i=1}^{m} P(u_i) = \frac{1}{\mathrm{vol}(S^{d-1})} \int_{S^{d-1}} P(u)\, du.$
Definition 5 (Tight frame).
Let $V$ be a vector space with an inner product. A tight frame is a sequence $\{v_i\}_{i=1}^{m} \subset V$ such that there exists a constant $A > 0$ so that for all $v \in V$
$\sum_{i=1}^{m} \langle v, v_i \rangle^2 = A\, \|v\|^2.$
Lemma 4.
If $u_1, \ldots, u_m$ form a spherical $t$-design with $t \ge 2k$, then $\{G_{k,d}(\langle u_i, \cdot \rangle)\}_{i=1}^{m}$ is a tight frame for $\mathcal{H}_k(d)$ with constant $A = m / \mathrm{vol}(S^{d-1})$.
Proof.
Let $\{Y_j\}_{j=1}^{a_{k,d}}$ be an orthonormal basis for $\mathcal{H}_k(d)$, and let $\delta_{jl} = 1$ if $j = l$ and 0 otherwise. By the reproducing property, $\langle f, G_{k,d}(\langle u_i, \cdot \rangle) \rangle = f(u_i)$ for every $f \in \mathcal{H}_k(d)$, so writing $f = \sum_j c_j Y_j$ it suffices to show
$\sum_{i=1}^{m} Y_j(u_i)\, Y_l(u_i) = A\, \delta_{jl},$
since then $\sum_i f(u_i)^2 = A \sum_j c_j^2 = A \|f\|^2$. Observe that if $Y_j, Y_l \in \mathcal{H}_k(d)$ then $Y_j\, Y_l$ is a polynomial of degree $2k$. Then using the design property we get
$\sum_{i=1}^{m} Y_j(u_i)\, Y_l(u_i) = \frac{m}{\mathrm{vol}(S^{d-1})} \int_{S^{d-1}} Y_j\, Y_l = \frac{m}{\mathrm{vol}(S^{d-1})}\, \delta_{jl} = A\, \delta_{jl},$
which proves the lemma. ∎
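Both definitions can be checked concretely on the circle $S^1$, where $m$ equally spaced points are classical spherical $t$-designs for every $t \le m - 1$ and the degree-$k$ harmonics are spanned by $\cos k\theta$ and $\sin k\theta$. The sketch below (our own illustration) verifies the design property on a few monomials and then checks that the frame energy of a degree-$k$ harmonic at the design points is a fixed multiple of its squared coefficient norm, as Lemma 4 predicts.

```python
import numpy as np

# m equally spaced points on S^1.
m = 8
phi = 2 * np.pi * np.arange(m) / m
pts = np.column_stack([np.cos(phi), np.sin(phi)])

# Design property: point averages of low-degree polynomials match the
# averages over the whole circle (1/2, 3/8 and 0 respectively).
print(np.mean(pts[:, 0] ** 2))  # 0.5
print(np.mean(pts[:, 0] ** 4))  # 0.375
print(np.mean(pts[:, 0] ** 3))  # 0.0

# Tight-frame consequence: for f(theta) = a cos(k theta) + b sin(k theta),
# the zonal function at u_i is proportional to cos(k(theta - phi_i)), so the
# frame coefficients are proportional to the point values f(phi_i).  The
# energy sum_i f(phi_i)^2 equals (m/2)(a^2 + b^2) for every choice of (a, b).
k = 2
ratios = []
for a, b in [(1.0, 0.0), (0.3, -1.2), (2.0, 0.5)]:
    f = a * np.cos(k * phi) + b * np.sin(k * phi)
    ratios.append(np.sum(f ** 2) / (a ** 2 + b ** 2))
print(ratios)  # all equal to m/2 = 4.0
```

Note that $(m/2)(a^2+b^2) = \frac{m}{2\pi} \cdot \pi (a^2+b^2)$, i.e. the frame constant relative to the $L^2$ norm on the circle is $m/\mathrm{vol}(S^1)$, matching Lemma 4.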
Proof of Theorem 2 (b).
We choose $u_1, \ldots, u_m$ so that $\{G_{k,d}(\langle u_i, \cdot \rangle)\}_{i=1}^{m}$ is a tight frame for $\mathcal{H}_k(d)$ with constant $A$. We write $\nu_k = \sum_{i=1}^{m} c_i\, G_{k,d}(\langle u_i, \cdot \rangle)$; such a decomposition exists since the frame spans $\mathcal{H}_k(d)$. In fact the coefficients can be chosen so that $\|c\| \le \|\nu_k\| / \sqrt{A}$. Following the notation in the proof of Theorem 2 (a) we have:
$\nu_k(\theta) = \sum_{i=1}^{m} c_i\, G_{k,d}(\langle u_i, \theta \rangle)$
and
$\nabla \nu_k(\theta) = \sum_{i=1}^{m} c_i\, G_{k,d}'(\langle u_i, \theta \rangle)\, u_i.$
Let $\theta \in S^{d-1}$. Since $|G_{k,d}'(t)| \le G_{k,d}'(1)$ for all $t \in [-1, 1]$ (by (8) and the bound $|G_{k,d}| \le G_{k,d}(1)$), we bound
$\|\nabla \nu_k(\theta)\| \le G_{k,d}'(1) \sum_{i=1}^{m} |c_i| \le G_{k,d}'(1)\, \sqrt{m}\, \|c\|,$
obtaining the bound
$\varepsilon_k \le |\hat\sigma_k|\, G_{k,d}'(1)\, \sqrt{m / A}\, \|\nu_k\|.$
Using Theorem 2 (a) we conclude that denoising is possible, in the sense that every critical point $\theta$ of (2) satisfies $|\langle \theta, x_0 \rangle| > 1 - \delta$, provided that
$\sum_{k \ge 0} |\hat\sigma_k|\, G_{k,d}'(1)\, \sqrt{m / A}\, \|\nu_k\| < \min_{|t| \le 1 - \delta} g_\sigma'(t)\, \sqrt{1 - t^2}.$
Note that the left hand side depends on the activation function and the noise whereas the right hand side depends only on the activation function and the target accuracy $\delta$. Using the frame properties and the Cauchy–Schwarz inequality one can further bound the left hand side in terms of $\|\nu\|$. ∎