Minimax Estimation of Neural Net Distance

11/02/2018 ∙ by Kaiyi Ji, et al. ∙ The Ohio State University

An important class of distance metrics proposed for training generative adversarial networks (GANs) is the integral probability metric (IPM), in which the neural net distance captures practical GAN training via two neural networks. This paper investigates the minimax estimation problem of the neural net distance based on samples drawn from the distributions. We develop the first known minimax lower bound on the estimation error of the neural net distance, and an upper bound tighter than an existing bound on the estimation error of the empirical neural net distance. Our lower and upper bounds match not only in the order of the sample size but also in terms of the norm of the parameter matrices of neural networks, which justifies the empirical neural net distance as a good approximation of the true neural net distance for training GANs in practice.


1 Introduction

Generative adversarial networks (GANs), first introduced by [9], have become an important technique for learning generative models from complicated real-life data. Training a GAN is performed via a min-max optimization, with the maximum and minimum taken respectively over a class of discriminators and a class of generators, both of which are modeled by neural networks. Given that the discriminator class is sufficiently large, [9] interpreted GAN training as finding a generator such that the generated distribution $\nu$ is as close as possible to the target true distribution $\mu$, as measured by the Jensen-Shannon distance $d_{JS}(\mu,\nu)$, as shown below:

$$\min_{\nu}\; d_{JS}(\mu,\nu), \qquad (1)$$

where the minimum is taken over the distributions $\nu$ that the generator class can produce.

Inspired by this idea, a large number of GAN models were subsequently proposed based on various distance metrics between a pair of distributions, in order to improve training stability and performance, e.g., [2, 3, 13, 15]. Among them, the integral probability metric (IPM) [19] arises as an important class of distance metrics for training GANs, which takes the following form:

$$d_{\mathcal{F}}(\mu,\nu) \;=\; \sup_{f\in\mathcal{F}}\;\Big|\,\mathbb{E}_{x\sim\mu}[f(x)] - \mathbb{E}_{x\sim\nu}[f(x)]\,\Big|. \qquad (2)$$

In particular, different choices of the function class $\mathcal{F}$ in (2) result in different distance metrics. For example, if $\mathcal{F}$ is the set of all 1-Lipschitz functions, then $d_{\mathcal{F}}(\mu,\nu)$ corresponds to the Wasserstein-1 distance, which is used in Wasserstein-GAN (WGAN) [2]. If $\mathcal{F}$ is a unit ball in a reproducing kernel Hilbert space (RKHS), then $d_{\mathcal{F}}(\mu,\nu)$ corresponds to the maximum mean discrepancy (MMD), which is used in MMD-GAN [7, 13].

Practical GAN training naturally motivates taking $\mathcal{F}$ in (2) to be a set of neural networks, which results in the so-called neural net distance $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ introduced and studied in [3, 28]. For computational feasibility, the empirical neural net distance $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ is typically adopted in practice as an approximation (i.e., an estimator) of the true neural net distance $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$, where $\hat\mu_n$ and $\hat\nu_m$ are the empirical distributions corresponding to $\mu$ and $\nu$, respectively, based on $n$ samples drawn from $\mu$ and $m$ samples drawn from $\nu$. Thus, one important question is how well $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ approximates $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$. If the two are close, then training a GAN to a small $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ also implies a small $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$, i.e., the generated distribution $\nu$ is guaranteed to be close to the true distribution $\mu$.
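To make the plug-in estimator concrete, the following sketch (PyTorch, not the authors' code) estimates the empirical neural net distance between two sample sets by gradient ascent over a small norm-constrained ReLU discriminator; the network width, step count, learning rate, and the weight-rescaling step that keeps the parameters in a compact set are illustrative choices rather than the setting analyzed in the paper.

```python
# Minimal sketch (illustrative, not the authors' implementation): estimate the
# empirical neural net distance
#     d(mu_hat, nu_hat) = sup_{f in F} | E_{mu_hat}[f(x)] - E_{nu_hat}[f(y)] |
# by gradient ascent over a small norm-constrained ReLU discriminator.
import torch
import torch.nn as nn

def empirical_nn_distance(x, y, width=64, steps=500, lr=1e-2, max_frob=5.0):
    """x: (n, k) samples from mu; y: (m, k) samples from nu. All values illustrative."""
    k = x.shape[1]
    f = nn.Sequential(nn.Linear(k, width, bias=False), nn.ReLU(),
                      nn.Linear(width, 1, bias=False))
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(steps):
        gap = f(x).mean() - f(y).mean()
        loss = -gap.abs()                      # ascend on |E f(x) - E f(y)|
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                  # keep each weight matrix in a Frobenius ball
            for p in f.parameters():
                norm = p.norm()
                if norm > max_frob:
                    p.mul_(max_frob / norm)
    with torch.no_grad():
        return (f(x).mean() - f(y).mean()).abs().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(500, 2)            # samples from mu
    y = torch.randn(500, 2) + 0.5      # samples from nu (shifted mean)
    print("estimated empirical neural net distance:", empirical_nn_distance(x, y))
```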

To answer this question, [3] derived an upper bound on the quantity $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$, and showed that $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ converges to $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ at the rate $\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}$ in the sample sizes. However, the following two important questions remain open. (a) Is this rate of convergence optimal? We certainly want to be assured that the empirical objective used in practice does not fall short in the first place. (b) The dependence of the upper bound in [3] on the neural networks is characterized by the total number of network parameters, which can be quite loose in view of recent work, e.g., [20, 27]. Thus, the goal of this paper is to address issue (a) by developing a lower bound on the minimax estimation error of $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ (see Section 2.2 for the precise formulation) and to address issue (b) by developing a tighter upper bound than that of [3].

In fact, the above problem can be viewed as a distance estimation problem, i.e., estimating the neural net distance $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ based on samples drawn i.i.d. from $\mu$ and $\nu$, respectively. The empirical distance $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ serves as its plug-in estimator (i.e., it substitutes the empirical distributions for the true ones). We are interested in the optimality of the convergence of this plug-in estimator, not only in terms of the sample sizes but also in terms of the parameters of the neural networks. We further note that the neural net distance can be used in a variety of other applications, such as the support measure machine [18] and anomaly detection [29]; hence the performance guarantees we establish here may also be of interest in those domains.

1.1 Our Contribution

In this paper, we investigate the minimax estimation of the neural net distance $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$, where the major challenge in the analysis lies in dealing with complicated neural network functions. This paper establishes a tighter upper bound on the convergence rate of the empirical estimator than the existing one in [3], and develops a lower bound that matches our upper bound not only in the order of the sample size but also in terms of the norm of the parameter matrices of the neural networks. Our specific contributions are summarized as follows:

  • In Section 3.1, we provide the first known lower bound on the minimax estimation error of $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ based on finite samples, which takes the form $c\,\big(\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}\big)$, where the constant $c$ depends only on the parameters of the neural networks. Such a lower bound further specializes to $c'\prod_{i=1}^{d} M(i)\,\big(\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}\big)$ for ReLU networks, where $c'$ is a constant, $d$ is the depth of the neural networks, and $M(i)$ can be either the Frobenius-norm or the $\|\cdot\|_{1,\infty}$-norm constraint on the parameter matrix in layer $i$. Our proof exploits Le Cam's method, together with the technical development of a lower bound on the difference of the expectations of a neural network function under two hypothesis distributions.

  • In Section 3.2, we develop an upper bound on the estimation error of $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ by $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$, which takes the form $C\,\big(\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}\big)$, where the constant $C$ depends only on the parameters of the neural networks. For ReLU networks, such an upper bound specializes to the same form as the lower bound up to an additional factor that grows only as the square root of the depth $d$; the dimension $k$ of the support of $\mu$ and $\nu$ enters only through the constant, and the exact depth-dependent factor depends on the distribution class and the norm of the weight matrices. Our proof includes the following two major technical developments, presented in Section 3.4.

    • A new concentration inequality: In order to develop an upper bound for the unbounded-support sub-Gaussian class, the standard McDiarmid inequality under the bounded-difference condition is not applicable. We thus first generalize a McDiarmid-type inequality [11] for unbounded functions of scalar sub-Gaussian variables to functions of sub-Gaussian vectors, which can be of independent interest for other applications. Such a development requires substantial machinery. We then apply this concentration inequality to upper-bound the estimation error of the neural net distance in terms of the Rademacher complexity.

    • Upper bound on Rademacher complexity: Although existing Rademacher complexity bounds [8, 22] for neural networks can be used for input data with bounded support, direct application of those bounds to unbounded sub-Gaussian input data yields order-level loose bounds. Thus, we develop a tighter bound on the Rademacher complexity that exploits the sub-Gaussianity of the input variables. Such a bound is also tighter than the existing bound of the same type in [23]. The details of the comparison are provided after Theorem 7.

  • In Section 3.3, comparison of the lower and upper bounds indicates that the empirical neural net distance (i.e., the plug-in estimator) achieves the optimal minimax estimation rate in terms of the sample sizes $n$ and $m$. Furthermore, for ReLU networks, the two bounds also match in terms of the norm product $\prod_{i=1}^{d} M(i)$, indicating that both the sample sizes and the layer-wise norm constraints are key quantities that capture the estimation accuracy. Such a result is consistent with the observations made in [20] for the generalization error of training deep neural networks. We note that there is still a depth-dependent gap between the bounds, which requires future efforts to address.

1.2 Related Work

Estimation of IPMs. [25] studied the empirical estimation of several IPMs, including the Wasserstein distance, MMD and the Dudley metric, and established the convergence rates of their empirical estimators. A recent paper [26] established that the empirical estimator of MMD achieves the minimax optimal convergence rate. [3] introduced the neural net distance, which also belongs to the IPM class, and established the convergence rate of its empirical estimator. This paper establishes a tighter upper bound for this distance metric, as well as a lower bound that matches our upper bound in the order of the sample sizes and in the norm of the parameter matrices.

Generalization error of GANs. In this paper, we focus on the minimax estimation error of the neural net distance, and hence the quantity of interest is $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$, on which our bound is tighter than that of the earlier study [3]. Such a quantity is related to, but different from, the generalization error recently studied in [14, 28] for training GANs. [28] studied the generalization error of the generated distribution that minimizes the empirical distance to the data, where the discriminator class was taken to be a class of neural networks. [14] studied the same type of generalization error but took the discriminator class to be a Sobolev space, and characterized how the smoothness of the Sobolev space helps GAN training.

Rademacher complexity of neural networks. Part of our analysis of the minimax estimation error of the neural net distance requires upper-bounding the average Rademacher complexity of neural networks. Although various bounds on the Rademacher complexity of neural networks, e.g., [1, 4, 8, 21, 22], can be used for distributions with bounded support, direct application of the best known existing bound to sub-Gaussian variables turns out to be order-level looser than the bound we establish here. [6, 23] studied the average Rademacher complexity of one-hidden-layer neural networks over Gaussian variables. Specializing our bound to the setting of [23] improves their bound, and specializing it to the setting of [6] matches their bound.

1.3 Notations

We use bold-faced lowercase and uppercase letters to denote vectors and matrices, respectively. Given a vector $\mathbf{v}$, $\|\mathbf{v}\|_2$ denotes its $\ell_2$ norm and $\|\mathbf{v}\|_1$ denotes its $\ell_1$ norm, where $v_i$ denotes the $i$-th coordinate of $\mathbf{v}$. For a matrix $\mathbf{A}$, we use $\|\mathbf{A}\|_F$ to denote its Frobenius norm, $\|\mathbf{A}\|_{1,\infty}$ to denote the maximal $\ell_1$ norm of the row vectors of $\mathbf{A}$, and $\|\mathbf{A}\|_2$ to denote its spectral norm. For a probability distribution $\mu$, we denote by $\hat\mu_n$ its empirical distribution, which places probability $1/n$ on each of the $n$ samples drawn i.i.d. from $\mu$.

2 Preliminaries and Problem Formulations

In this section, we first introduce the neural net distance and the specifications of the corresponding neural networks. We then introduce the minimax estimation problem that we study in this paper.

2.1 Neural Net Distance

The neural net distance between two distributions $\mu$ and $\nu$, introduced in [3], is defined as

$$d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu) \;=\; \sup_{f\in\mathcal{F}_{\mathrm{nn}}}\;\Big|\,\mathbb{E}_{x\sim\mu}[f(x)] - \mathbb{E}_{x\sim\nu}[f(x)]\,\Big|, \qquad (3)$$

where $\mathcal{F}_{\mathrm{nn}}$ is a class of neural networks. In this paper, given the domain $\mathcal{X}\subseteq\mathbb{R}^k$, we let $\mathcal{F}_{\mathrm{nn}}$ be the following set of depth-$d$ neural networks of the form:

$$f(\mathbf{x}) \;=\; \mathbf{w}_d^{\top}\,\sigma_{d-1}\Big(\mathbf{W}_{d-1}\,\sigma_{d-2}\big(\cdots\,\sigma_1(\mathbf{W}_1\mathbf{x})\big)\Big), \qquad (4)$$

where $\mathbf{W}_1,\dots,\mathbf{W}_{d-1}$ are parameter matrices, $\mathbf{w}_d$ is a parameter vector (so that the output of the neural network is a scalar), and each $\sigma_i$ denotes the entry-wise activation function of layer $i$ for $i = 1,\dots,d-1$, i.e., for an input $\mathbf{z} = (z_1,\dots,z_h)$, $\sigma_i(\mathbf{z}) = (\sigma_i(z_1),\dots,\sigma_i(z_h))$.
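For concreteness, the following sketch evaluates a network of the form (4) as reconstructed above, i.e., alternating matrix multiplications and entry-wise activations with a final linear layer producing a scalar; the layer sizes and the choice of ReLU activations are illustrative.

```python
import numpy as np

def forward(x, weights, activations):
    """Evaluate f(x) = w_d^T sigma_{d-1}( W_{d-1} ... sigma_1(W_1 x) ).

    weights: [W_1, ..., W_{d-1}, w_d], where the W_i are matrices and w_d is a vector;
    activations: [sigma_1, ..., sigma_{d-1}], each applied entry-wise.
    """
    h = x
    for W, sigma in zip(weights[:-1], activations):
        h = sigma(W @ h)                 # linear map followed by entry-wise activation
    return float(weights[-1] @ h)        # final layer is a vector, so the output is a scalar

relu = lambda z: np.maximum(z, 0.0)

# Example: a depth-3 ReLU network on R^4 with illustrative layer sizes.
rng = np.random.default_rng(0)
W1, W2, w3 = rng.normal(size=(8, 4)), rng.normal(size=(6, 8)), rng.normal(size=6)
print(forward(rng.normal(size=4), [W1, W2, w3], [relu, relu]))
```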

Throughout this paper, we adopt the following two assumptions on the activation functions in (4).

Assumption 1.

All activation functions $\sigma_i$ for $i = 1,\dots,d-1$ satisfy:

  • $\sigma_i$ is continuous and non-decreasing.

  • $\sigma_i$ is $L_i$-Lipschitz for some constant $L_i > 0$.

Assumption 2.

For all activation functions $\sigma_i$, there exist positive constants $a$ and $b$ such that for any $z \in \mathbb{R}$, $|\sigma_i(z)| \le a|z| + b$.

Note that Assumptions 1 and 2 hold for a variety of commonly used activation functions, including ReLU, sigmoid, softplus and tanh. In particular, in Assumption 2, the existence of the constants $a$ and $b$ is more important than the particular values they take, which affect only the constant terms in our bounds presented later. For example, Assumption 2 holds for ReLU with $a = 1$ and any $b \ge 0$, and holds for sigmoid with any $a \ge 0$ and $b = 1$.
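As a quick numerical sanity check (not a proof) of the linear-growth reading of Assumption 2 given above, the sketch below evaluates $|\sigma(z)| \le a|z| + b$ on a grid for several common activations and also estimates their Lipschitz constants by finite differences; the particular $(a, b)$ pairs are illustrative choices.

```python
import numpy as np

# Illustrative check of |sigma(z)| <= a*|z| + b on a grid, plus a finite-difference
# estimate of the Lipschitz constant; the (a, b) pairs below are assumed values.
acts = {
    "relu":     (lambda z: np.maximum(z, 0.0),                                1.0, 0.0),
    "sigmoid":  (lambda z: 1.0 / (1.0 + np.exp(-z)),                          0.0, 1.0),
    "tanh":     (np.tanh,                                                     1.0, 0.0),
    "softplus": (lambda z: np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0), 1.0, np.log(2.0)),
}
z = np.linspace(-20.0, 20.0, 100001)
for name, (sigma, a, b) in acts.items():
    growth_ok = bool(np.all(np.abs(sigma(z)) <= a * np.abs(z) + b + 1e-9))
    lipschitz = float(np.max(np.abs(np.diff(sigma(z))) / np.diff(z)))
    print(f"{name:8s}  growth condition holds: {growth_ok},  Lipschitz constant ~ {lipschitz:.3f}")
```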

As shown in [2], the practical training of GANs is conducted over neural networks with parameters lying in a compact space. Thus, we consider the following two compact parameter sets, as taken in [5, 8, 22, 24]:

$$\mathcal{W}_F := \big\{(\mathbf{W}_1,\dots,\mathbf{w}_d): \|\mathbf{W}_i\|_F \le M_F(i),\ i = 1,\dots,d\big\}, \qquad \mathcal{W}_{1,\infty} := \big\{(\mathbf{W}_1,\dots,\mathbf{w}_d): \|\mathbf{W}_i\|_{1,\infty} \le M_{1,\infty}(i),\ i = 1,\dots,d\big\}. \qquad (5)$$
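In practice, membership in the compact parameter sets of (5) can be maintained during training by rescaling each weight matrix back into its norm ball after every update. The sketch below shows such rescaling under a Frobenius-norm constraint and under a maximal-row-norm constraint; the latter reflects the $\|\cdot\|_{1,\infty}$ reading assumed in our reconstruction of (5).

```python
import numpy as np

def rescale_frobenius(W, M):
    """Rescale W so that ||W||_F <= M (projection onto the Frobenius-norm ball)."""
    norm = np.linalg.norm(W)
    return W if norm <= M else W * (M / norm)

def rescale_max_row_l1(W, M):
    """Rescale each row so that max_j ||W_{j,:}||_1 <= M (the assumed ||.||_{1,inf} constraint)."""
    row_norms = np.abs(W).sum(axis=1, keepdims=True)
    scale = np.minimum(1.0, M / np.maximum(row_norms, 1e-12))
    return W * scale

W = np.random.default_rng(0).normal(size=(4, 3))
print(np.linalg.norm(rescale_frobenius(W, 1.0)))            # <= 1.0
print(np.abs(rescale_max_row_l1(W, 1.0)).sum(axis=1).max()) # <= 1.0
```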

2.2 Minimax Estimation Problem

In this paper, we study the minimax estimation problem defined as follows. Suppose $\mathcal{P}$ is a subset of Borel probability measures of interest. Let $\hat d_{n,m}$ denote any estimator of the neural net distance $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ constructed from the $n$ samples and the $m$ samples generated i.i.d. from $\mu$ and $\nu$, respectively. Our goal is to first find a lower bound $\ell(n,m)$ on the minimax estimation error, i.e., a rate below which no estimator $\hat d_{n,m}$ can estimate $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ uniformly over $\mathcal{P}$ with constant probability:

(6)

where $P_{\mu,\nu}$ is the probability measure with respect to the random samples drawn from $\mu$ and $\nu$. We then focus on the empirical estimator $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ and are interested in finding an upper bound $u(n,m,\delta)$ on the estimation error that holds with probability at least $1-\delta$ for any arbitrarily small $\delta > 0$:

(7)

Clearly, such an upper bound also holds if the left-hand side of (7) is defined in the same minimax sense as in (6).

It can be seen that the minimax estimation problem is defined with respect to the set $\mathcal{P}$ of distributions that $\mu$ and $\nu$ belong to. In this paper, we consider the set of all sub-Gaussian distributions over $\mathbb{R}^k$. We further divide this set into two subsets and analyze them separately, because the required technical tools are very different. The first set, denoted by $\mathcal{P}_{\mathrm{sub}}$, contains all sub-Gaussian distributions with unbounded support and with bounded mean and variance. Specifically, we assume that there exist constants $\mu_B > 0$ and $\sigma > 0$ such that for any probability measure $\mu \in \mathcal{P}_{\mathrm{sub}}$ and any unit vector $\mathbf{u} \in \mathbb{R}^k$,

(8)

i.e., the mean satisfies $\|\mathbb{E}_{x\sim\mu}[x]\|_2 \le \mu_B$ and the projection $\langle\mathbf{u}, x - \mathbb{E}[x]\rangle$ is sub-Gaussian with parameter $\sigma$. The second class, denoted by $\mathcal{P}_B$, contains all sub-Gaussian distributions whose support is contained in a bounded set (note that this set in fact includes all distributions with bounded support). These two mutually exclusive classes cover most probability distributions encountered in practice.
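The following sketch empirically probes the sub-Gaussian condition (8) as described above, i.e., a bounded mean norm together with a Gaussian-like moment generating function along any direction; the two example distributions, the direction, and the value of $\lambda$ are illustrative, and for both examples the variance along the chosen direction serves as a valid sub-Gaussian parameter, so the empirical MGF should not exceed the Gaussian reference by more than Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)
k, N = 3, 200000
samples = {
    "gaussian (unbounded support)": rng.normal(loc=0.3, scale=1.0, size=(N, k)),
    "uniform  (bounded support)":   rng.uniform(-1.0, 1.0, size=(N, k)),
}
u = rng.normal(size=k)
u /= np.linalg.norm(u)            # a random unit direction
lam = 1.5                         # illustrative value of lambda
for name, x in samples.items():
    mean = x.mean(axis=0)
    proj = (x - mean) @ u         # centered projection along u
    mgf = np.mean(np.exp(lam * proj))
    ref = np.exp(lam**2 * proj.var() / 2.0)   # Gaussian reference with the projection's variance
    print(f"{name}: ||E x||_2 = {np.linalg.norm(mean):.3f},  "
          f"E exp(lam<u, x - Ex>) = {mgf:.3f},  exp(lam^2 sigma^2 / 2) = {ref:.3f}")
```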

3 Main Results

3.1 Minimax Lower Bound

We first develop the following minimax lower bound for the unbounded-support sub-Gaussian distribution class $\mathcal{P}_{\mathrm{sub}}$.

Theorem 1 (unbounded-support sub-Gaussian class $\mathcal{P}_{\mathrm{sub}}$).

Let $\mathcal{F}_{\mathrm{nn}}$ be the set of neural networks defined by (4). For the parameter set $\mathcal{W}_F$ defined in Section 2.1, if the sample sizes $n$ and $m$ are sufficiently large, then

(9)

where

(10)

and $\Phi(\cdot)$ is the cumulative distribution function (CDF) of the standard Gaussian distribution, and the constants in (10) are given by the following recursion

(11)

The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing the Frobenius-norm constraints in (10) with the corresponding $\|\cdot\|_{1,\infty}$-norm constraints.

Theorem 1 implies that $d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)$ cannot be estimated at a rate faster than $\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}$ by any estimator over the class $\mathcal{P}_{\mathrm{sub}}$. The proof of Theorem 1 is based on Le Cam's method. Such a technique was also used in [26] to derive the minimax lower bound for estimating MMD. However, our technical development is quite different from that in [26]. Specifically, one major step of Le Cam's method is to lower-bound the difference of the distance functional between two hypothesis distributions. In the MMD case in [26], MMD can be expressed in closed form for the chosen distributions, and hence the lower bound in Le Cam's method can be derived from this closed form. In comparison, the neural net distance does not admit a closed-form expression for the chosen distributions. As a result, our derivation involves lower-bounding the difference of the expectations of a neural network function under the two corresponding distributions. Such a development requires substantial machinery to deal with the complicated multi-layer structure of neural networks. See Appendix A.1 for more details.
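To convey the intuition behind the Le Cam argument, the toy computation below (which is not the construction used in the actual proof) fixes $\nu = N(0,1)$ and compares two hypotheses for $\mu$ whose means differ by $2\epsilon$ with $\epsilon = 1/\sqrt{n}$: the KL divergence between the $n$-sample product measures stays bounded, so the hypotheses cannot be reliably distinguished, while for a norm-bounded linear function class (a depth-one stand-in for $\mathcal{F}_{\mathrm{nn}}$) the distances already differ by order $M/\sqrt{n}$.

```python
import numpy as np

# Toy two-point (Le Cam) computation; NOT the construction used in the actual proof.
# Fix nu = N(0, 1) and consider two hypotheses for mu: P1 = N(0, 1), P2 = N(2*eps, 1),
# with eps = 1 / sqrt(n). For the linear class F = {x -> w*x : |w| <= M}, the IPM has
# the closed form d_F(P, Q) = M * |E_P[x] - E_Q[x]|.
for n in (100, 1000, 10000):
    M = 3.0
    eps = 1.0 / np.sqrt(n)
    separation = M * 2 * eps                 # |d_F(P2, nu) - d_F(P1, nu)| = 2M / sqrt(n)
    kl_per_sample = (2 * eps) ** 2 / 2.0     # KL(N(2*eps, 1) || N(0, 1)) = (2*eps)^2 / 2
    kl_n_samples = n * kl_per_sample         # equals 2 for every n: the hypotheses stay close
    print(f"n = {n:6d}:  distance separation = {separation:.4f} (~ M/sqrt(n)),  "
          f"n-sample KL = {kl_n_samples:.2f}")
# Because the two product measures remain statistically close, any estimator must err by
# order M / sqrt(n) on at least one hypothesis, which is the 1/sqrt(n) behavior in Theorem 1.
```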

For general neural networks, the constant in (10) takes a complicated form. We next specialize to ReLU networks to illustrate how this constant depends on the neural network parameters.

Corollary 1.

Under the setting of Theorem 1, suppose that each activation function is the ReLU, i.e., $\sigma_i(z) = \max\{z, 0\}$. For the parameter set $\mathcal{W}_F$ and all sufficiently large $n$ and $m$, the lower bound of Theorem 1 scales as $\prod_{i=1}^{d} M_F(i)\,\big(\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}\big)$ up to a constant factor.

The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing the Frobenius-norm constraints $M_F(i)$ with the $\|\cdot\|_{1,\infty}$-norm constraints $M_{1,\infty}(i)$.

Next, we provide the minimax lower bound for the distribution class $\mathcal{P}_B$ with bounded support. The proof (see Appendix B) is also based on Le Cam's method, but with a construction of distributions having bounded support sets, which differ from those used for Theorem 1.

Theorem 2 (bounded-support class $\mathcal{P}_B$).

Let $\mathcal{F}_{\mathrm{nn}}$ be the set of neural networks defined by (4). For the parameter set $\mathcal{W}_F$, we have

(12)

where

(13)

where all constants in the second term on the right side of (13) carry negative signs. The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing the Frobenius-norm constraints in (13) with the $\|\cdot\|_{1,\infty}$-norm constraints.

Corollary 2.

Under the setting of Theorem 2, suppose that each activation function is the ReLU. For the parameter set $\mathcal{W}_F$, the lower bound of Theorem 2 scales as $\prod_{i=1}^{d} M_F(i)\,\big(\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}\big)$ up to a constant factor.

The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing the Frobenius-norm constraints $M_F(i)$ with the $\|\cdot\|_{1,\infty}$-norm constraints $M_{1,\infty}(i)$.

3.2 Rademacher Complexity-based Upper Bound

In this subsection, we provide an upper bound on $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$, which serves as an upper bound on the minimax estimation error. Our main technical development lies in deriving the bound for the unbounded-support sub-Gaussian class $\mathcal{P}_{\mathrm{sub}}$, which requires a number of new technical tools. We discuss the proof in Section 3.4.

Theorem 3 (unbounded-support sub-Gaussian class $\mathcal{P}_{\mathrm{sub}}$).

Let $\mathcal{F}_{\mathrm{nn}}$ be the set of neural networks defined by (4), and suppose that the two distributions $\mu, \nu \in \mathcal{P}_{\mathrm{sub}}$ and that $\hat\mu_n$ and $\hat\nu_m$ are their empirical measures. For a constant $\delta$ satisfying $0 < \delta < 1$, we have

(I) If the parameter set is $\mathcal{W}_F$ and each activation function $\sigma_i$ is positively homogeneous, i.e., $\sigma_i(\alpha z) = \alpha\,\sigma_i(z)$ for all $\alpha \ge 0$ (e.g., ReLU or leaky ReLU), then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$, the estimation error $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$ is bounded at the rate $\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}$, with a constant determined by the depth $d$, the norm constraints $M_F(i)$, the Lipschitz constants $L_i$, the dimension $k$, the distribution parameters $\mu_B$ and $\sigma$, and $\delta$.

(II) If the parameter set is $\mathcal{W}_{1,\infty}$ and each activation function satisfies $\sigma_i(0) = 0$ (e.g., ReLU, leaky ReLU or tanh), then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$, the same conclusion holds with the constraints $M_F(i)$ replaced by $M_{1,\infty}(i)$.

Corollary 3.

Theorem 3 is directly applicable to ReLU networks, for which the Lipschitz constant $L_i = 1$ for all $i = 1,\dots,d-1$.

We next present an upper bound on $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$ for the bounded-support class $\mathcal{P}_B$. In this case, each data sample lies in a bounded set, and hence we can apply the standard McDiarmid inequality [16] and the Rademacher complexity bounds in [8] to upper-bound the estimation error. The detailed proof can be found in Appendix D.

Theorem 4 (bounded-support class $\mathcal{P}_B$).

Let $\mathcal{F}_{\mathrm{nn}}$ be the set of neural networks defined by (4), and suppose that the two distributions $\mu, \nu \in \mathcal{P}_B$. Then, we have

(I) If the parameter set is $\mathcal{W}_F$ and each activation function $\sigma_i$ is positively homogeneous, i.e., $\sigma_i(\alpha z) = \alpha\,\sigma_i(z)$ for all $\alpha \ge 0$, then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$, the estimation error $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$ is bounded at the rate $\tfrac{1}{\sqrt{n}}+\tfrac{1}{\sqrt{m}}$, with a constant determined by the depth $d$, the norm constraints $M_F(i)$, the Lipschitz constants $L_i$, the support bound of $\mathcal{P}_B$, and $\delta$.

(II) If the parameter set is $\mathcal{W}_{1,\infty}$ and each activation function satisfies $\sigma_i(0) = 0$, then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$, the same conclusion holds with the constraints $M_F(i)$ replaced by $M_{1,\infty}(i)$.

Corollary 4.

Theorem 4 is applicable to ReLU networks, for which the Lipschitz constant $L_i = 1$ for all $i = 1,\dots,d-1$.

As a comparison, the upper bound derived in [3] scales linearly with the total number of parameters of the neural networks, whereas our bound in Theorem 4 scales only with the square root of the depth (and the other terms in Theorem 4 match the lower bound in Corollary 2), which is much smaller.

3.3 Optimality of Minimax Estimation and Discussions

We compare the lower and upper bounds and make the following remarks on the optimality of minimax estimation of the neural net distance.

  • For the unbounded-support sub-Gaussian class $\mathcal{P}_{\mathrm{sub}}$, comparison of Theorems 1 and 3 indicates that the empirical estimator $d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m)$ achieves the optimal minimax estimation rate in terms of the sample sizes $n$ and $m$.

  • Furthermore, for ReLU networks, comparison of Corollaries 1 and 3 implies that the lower and upper bounds further match in terms of the norm product $\prod_{i=1}^{d} M(i)$, where $M(i)$ can be $M_F(i)$ or $M_{1,\infty}(i)$, indicating that both the sample sizes and the layer-wise norm constraints capture the estimation accuracy. Such an observation is consistent with those made in [20] for the generalization error of training deep neural networks. Moreover, the mean norm $\mu_B$ and the variance parameter $\sigma$ of the distributions also determine the estimation accuracy, as the bounds match in these quantities as well.

  • The same observations hold for the bounded-support class $\mathcal{P}_B$, by comparing Theorems 2 and 4 as well as Corollaries 2 and 4.

We further note that for ReLU networks, for both the unbounded-support sub-Gaussian class $\mathcal{P}_{\mathrm{sub}}$ and the bounded-support class $\mathcal{P}_B$, there remains a gap between the lower and upper bounds that grows with the square root of the depth (its exact form depends on the distribution class and the norm of the weight matrices). To close this gap, the size-independent bound on the Rademacher complexity in [8] appears appealing. However, such a bound is applicable only to the bounded-support class $\mathcal{P}_B$, and it removes the dependence on the depth at the cost of a slower rate in the sample size. Consequently, such an upper bound matches the lower bound in Corollary 2 for ReLU networks in terms of the network parameters, but not in terms of the sample size, and is of interest only in the regime where the depth is large relative to the sample size. It thus remains an open problem, calling for future efforts, to close the depth-dependent gap for estimating the neural net distance.

3.4 Proof Outline for Theorem 3

In this subsection, we briefly explain the three major steps to prove Theorem 3, because some of these intermediate steps correspond to theorems that can be of independent interest. The detailed proof can be found in Appendix C.

Step 1: A new McDiarmid-type inequality. To establish an upper bound on $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$, the standard McDiarmid inequality [16], which requires the bounded-difference condition, is not applicable here, because the input data have unbounded support and so the functions in $\mathcal{F}_{\mathrm{nn}}$ can be unbounded (e.g., ReLU neural networks). Such a challenge can be addressed by a generalized McDiarmid inequality for scalar sub-Gaussian variables established in [11]. However, the input data are vectors in our setting. Thus, we further generalize the result in [11] and establish the following new McDiarmid-type concentration inequality for unbounded sub-Gaussian random vectors and Lipschitz (possibly unbounded) functions. This development turns out to be nontrivial and requires further machinery and tail-bound inequalities (see the detailed proof in Appendix C.1).

Theorem 5.

Let $\{\mathbf{x}_i\}_{i=1}^{n}$ and $\{\mathbf{y}_j\}_{j=1}^{m}$ be two collections of independent random vectors, where the $\mathbf{x}_i$ and $\mathbf{y}_j$ are drawn from two unbounded-support sub-Gaussian distributions over $\mathbb{R}^k$. Suppose that $g$ is a function of $(\mathbf{x}_1,\dots,\mathbf{x}_n,\mathbf{y}_1,\dots,\mathbf{y}_m)$ that is coordinate-wise Lipschitz, i.e., for any single argument,

(14)

Then, for all $\epsilon > 0$,

(15)
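As a numerical sanity check of the flavor of Theorem 5 (not of its exact constants, which are not reconstructed here), the sketch below simulates an unbounded Lipschitz statistic of sub-Gaussian vectors, namely the empirical mean of a ReLU feature, and compares its empirical deviation probabilities with a Gaussian-tail reference; the feature direction and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 200, 5, 20000
w = rng.normal(size=k)
w /= np.linalg.norm(w)

def statistic(X):
    # g(x_1, ..., x_n) = (1/n) sum_i relu(<w, x_i>): unbounded, but changing a single
    # x_i to x_i' changes g by at most ||x_i - x_i'||_2 / n (coordinate-wise Lipschitz).
    return np.maximum(X @ w, 0.0).mean()

vals = np.array([statistic(rng.normal(size=(n, k))) for _ in range(trials)])
dev = np.abs(vals - vals.mean())
for t in (0.02, 0.04, 0.06):
    empirical = np.mean(dev >= t)
    reference = 2.0 * np.exp(-t**2 / (2.0 * vals.var()))   # Gaussian tail with matched variance
    print(f"P(|g - E g| >= {t:.2f}):  empirical {empirical:.4f}  vs  reference {reference:.4f}")
```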

Step 2: Upper bound based on Rademacher complexity. By applying Theorem 5, we derive an upper bound on $|d_{\mathcal{F}_{\mathrm{nn}}}(\hat\mu_n,\hat\nu_m) - d_{\mathcal{F}_{\mathrm{nn}}}(\mu,\nu)|$ in terms of the average Rademacher complexity defined below.

Definition 1.

The average Rademacher complexity of $\mathcal{F}_{\mathrm{nn}}$ corresponding to the distribution $\mu$ with $n$ samples is defined as $\mathcal{R}_n(\mathcal{F}_{\mathrm{nn}}) := \mathbb{E}\big[\sup_{f\in\mathcal{F}_{\mathrm{nn}}} \tfrac{1}{n}\sum_{i=1}^{n} \epsilon_i f(\mathbf{x}_i)\big]$, where $\mathbf{x}_1,\dots,\mathbf{x}_n$ are generated i.i.d. by $\mu$ and $\epsilon_1,\dots,\epsilon_n$ are independent Rademacher random variables chosen uniformly from $\{-1,+1\}$.
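Definition 1 can be probed numerically for a function class in which the supremum has a closed form. The sketch below uses norm-bounded linear functions, a depth-one special case of (4), for which $\sup_{\|\mathbf{w}\|_2\le M} \tfrac{1}{n}\sum_i \epsilon_i\,\mathbf{w}^\top\mathbf{x}_i = \tfrac{M}{n}\,\|\sum_i \epsilon_i\mathbf{x}_i\|_2$, and checks that the resulting average Rademacher complexity decays at the $1/\sqrt{n}$ rate; the dimension, norm bound, and trial count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_rademacher_linear(n, k=5, M=1.0, trials=2000):
    """Monte Carlo estimate of R_n for F = {x -> w^T x : ||w||_2 <= M}.

    For this class the supremum in Definition 1 has the closed form
        sup_{||w|| <= M} (1/n) sum_i eps_i w^T x_i = (M/n) ||sum_i eps_i x_i||_2.
    """
    vals = []
    for _ in range(trials):
        x = rng.normal(size=(n, k))                    # i.i.d. (sub-Gaussian) inputs
        eps = rng.choice([-1.0, 1.0], size=(n, 1))     # independent Rademacher signs
        vals.append(M / n * np.linalg.norm((eps * x).sum(axis=0)))
    return float(np.mean(vals))

for n in (100, 400, 1600):
    r = avg_rademacher_linear(n)
    print(f"n = {n:5d}:  R_n ~ {r:.4f},  sqrt(n) * R_n ~ {np.sqrt(n) * r:.3f}")
```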

Then, we have the following result, with the proof provided in Appendix C.2. Recall that $L_i$ is the Lipschitz constant of the activation function $\sigma_i$.

Theorem 6.

Let $\mathcal{F}_{\mathrm{nn}}$ be the set of neural networks defined by (4). For the parameter set $\mathcal{W}_F$ defined in Section 2.1, suppose that $\mu$ and $\nu$ are two sub-Gaussian distributions satisfying (8) and that $\hat\mu_n$ and $\hat\nu_m$ are the empirical measures of $\mu$ and $\nu$. Then, for any $\delta \in (0,1)$, with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$,

(16)

The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing the Frobenius-norm constraints in (16) with the $\|\cdot\|_{1,\infty}$-norm constraints.

Step 3: Average Rademacher complexity bound for unbounded sub-Gaussian variables. We derive an upper bound on the average Rademacher complexity $\mathcal{R}_n(\mathcal{F}_{\mathrm{nn}})$. In particular, as we explain next, our bound is tighter than what would follow from directly applying the existing bounds in [8, 22]. To see this, [8, 22] provided upper bounds on the data-dependent (empirical) Rademacher complexity of neural networks. For the parameter set $\mathcal{W}_F$, [22] established a bound whose dependence on the depth is exponential, and [8] further improved the depth dependence to the square root of the depth. Directly applying the latter result to unbounded sub-Gaussian inputs yields

(17)

We next show that, by exploiting the sub-Gaussianity of the input data, we can obtain an improved bound on the average Rademacher complexity. The detailed proof can be found in Appendix C.3.

Theorem 7.

Let $\mathcal{F}_{\mathrm{nn}}$ be the set of neural networks defined by (4), and let $\mathbf{x}_1,\dots,\mathbf{x}_n$ be i.i.d. samples generated by an unbounded-support sub-Gaussian distribution $\mu$. Then,

(I) If the parameter set is $\mathcal{W}_F$ and each activation function $\sigma_i$ is positively homogeneous, i.e., $\sigma_i(\alpha z) = \alpha\,\sigma_i(z)$ for all $\alpha \ge 0$, then

(18)

(II) If the parameter set is $\mathcal{W}_{1,\infty}$ and each activation function satisfies $\sigma_i(0) = 0$, then

(19)

Theorem 7 indicates that, for the parameter set $\mathcal{W}_F$, our upper bound in (18) improves the order of the dependence in (17), and hence our proof yields an order-level improvement over directly using the existing bounds. The same observation can be made for the parameter set $\mathcal{W}_{1,\infty}$. Such improvement arises because our proof takes advantage of the sub-Gaussianity of the inputs, whereas the bounds in [8, 22] must hold for any data input (and hence for the worst-case input).

We also note that [23] provided an upper bound on the Rademacher complexity of one-hidden-layer neural networks with Gaussian inputs. Casting Lemma 3.2 in [23] to our setting of (18) yields

(20)

where $h$ is the number of neurons in the hidden layer. Compared with (20), our bound has an order-level improvement.

Summary. Theorem 3 then follows by combining Theorems 6 and 7, i.e., by substituting the Rademacher complexity bounds of Theorem 7 into the bound (16) of Theorem 6.

4 Conclusion

In this paper, we developed both lower and upper bounds for the minimax estimation of the neural net distance based on finite samples. Our results establish the minimax optimality of the empirical estimator in terms of not only the sample size but also the norm of the parameter matrices of the neural networks, which justifies its use for training GANs.

Acknowledgments

The work was supported in part by U.S. National Science Foundation under the grant CCF-1801855.

References

  • [1] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 2009.
  • [2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In Proc. International Conference on Machine Learning (ICML), pages 214–223, 2017.
  • [3] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Proc. International Conference on Machine Learning (ICML), 2017.
  • [4] P. L. Bartlett, D. J. Foster, and M. J. Telgarsky. Spectrally-normalized margin bounds for neural networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 6241–6250, 2017.
  • [5] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, Nov 2002.
  • [6] S. S. Du and J. D. Lee. On the power of over-parametrization in neural networks with quadratic activation. arXiv preprint arXiv:1803.01206, 2018.
  • [7] G. Dziugaite, D. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Proc. of the 31st Conference on Uncertainty in Artificial Intelligence (UAI), pages 258–267, 2015.
  • [8] N. Golowich, A. Rakhlin, and O. Shamir. Size-independent sample complexity of neural networks. In Proc. Conference on Learning Theory (COLT), 2018. To appear.
  • [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
  • [10] D. Hsu, S. Kakade, and T. Zhang. A tail inequality for quadratic forms of subGaussian random vectors. Electronic Communications in Probability, 17, 2012.
  • [11] A. Kontorovich. Concentration in unbounded metric spaces and algorithmic stability. In Proc. International Conference on Machine Learning (ICML), pages 28–36, 2014.
  • [12] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer, 1991.
  • [13] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In Proc. International Conference on Machine Learning (ICML), pages 1718–1727, 2015.
  • [14] T. Liang. How well can generative adversarial networks (GAN) learn densities: a nonparametric view. arXiv preprint arXiv:1712.08244, 2017.
  • [15] S. Liu, O. Bousquet, and K. Chaudhuri. Approximation and convergence properties of generative adversarial learning. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5551–5559, 2017.
  • [16] C. McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141(1):148–188, 1989.
  • [17] S. J. Montgomery-Smith. The distribution of Rademacher sums. Proceedings of the American Mathematical Society, 109(2):517–522, 1990.
  • [18] K. Muandet, K. Fukumizu, F. Dinuzzo, and B. Schölkopf. Learning from distributions via support measure machines. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 10–18, 2012.
  • [19] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
  • [20] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5949–5958, 2017.
  • [21] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. In Proc. International Conference on Learning Representations (ICLR), 2018.
  • [22] B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In Proc. Conference on Learning Theory (COLT), pages 1376–1401, 2015.
  • [23] S. Oymak. Learning compact neural networks with regularization. arXiv preprint, arXiv:1802.01223, 2018.
  • [24] T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 901–909, 2016.
  • [25] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550–1599, 2012.
  • [26] I. O. Tolstikhin, B. K. Sriperumbudur, and B. Schölkopf. Minimax estimation of maximum mean discrepancy with radial kernels. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 1930–1938, 2016.
  • [27] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In Proc. International Conference on Learning Representations (ICLR), 2017.
  • [28] P. Zhang, Q. Liu, D. Zhou, T. Xu, and X. He. On the discrimination-generalization tradeoff in GANs. In Proc. International Conference on Learning Representations (ICLR), 2018.
  • [29] S. Zou, Y. Liang, and H. V. Poor. Nonparametric detection of geometric structures over networks. IEEE Transactions on Signal Processing, 65(19):5034–5046, 2015.

Appendix A Proofs of Theorem 1 and Corollary 1

A.1 Proof of Theorem 1

The proof is based on a variant of Le Cam's method (Theorem 3 in [26]). Our major technical developments lie in properly choosing two hypothesis distributions as well as a neural network, and then lower-bounding the difference of the expectations of the chosen neural network function under the two distributions. We first present Le Cam's method (Theorem 3 in [26]).

Lemma 1 (Le Cam’s method).

Let $\theta$ be a functional defined on a space $\mathcal{P}$ of probability measures, and suppose the data samples are distributed according to an unknown element $P \in \mathcal{P}$. Assume that there exist $P_1, P_2 \in \mathcal{P}$ such that $|\theta(P_1) - \theta(P_2)| \ge 2\delta$ and $\mathrm{KL}(P_1 \| P_2) \le \alpha < \infty$. Then,

(21)

where $\mathrm{KL}(\cdot\|\cdot)$ denotes the Kullback-Leibler divergence.