Graph-based Discriminators: Sample Complexity and Expressiveness

by Roi Livni, et al.

A basic question in learning theory is to decide whether two distributions are identical when we have access only to examples sampled from them. This task arises, for example, in the context of Generative Adversarial Networks (GANs), where a discriminator is trained to distinguish between a real-life distribution and a synthetic one. In the classical setting, we use a hypothesis class H and declare the two distributions distinct if for some h ∈ H the expected value of h on the two distributions is (significantly) different. Our starting point is the following fundamental question: is it beneficial to let the hypothesis depend on more than a single random example? To address this question we define k-ary discriminators, given by a family G of Boolean k-ary functions. Each function g ∈ G naturally defines a hyper-graph, indicating whether a given hyper-edge exists. A function g ∈ G distinguishes between two distributions if the expected value of g on a k-tuple of i.i.d. examples differs (significantly) between the two distributions. We study the expressiveness of families of k-ary functions compared to the classical hypothesis class H, which corresponds to k = 1. We show a separation in expressiveness between (k+1)-ary and k-ary functions; this demonstrates the great benefit of using k ≥ 2 examples per distinguisher. For k ≥ 2 we introduce a notion similar to the VC-dimension and show that it controls the sample complexity. We then give upper and lower bounds on the sample complexity as a function of this extended notion of VC-dimension.
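As an illustration of the idea behind k-ary discriminators, the following is a minimal sketch (not from the paper; the function g, the two distributions, and all names are hypothetical choices for illustration). It Monte Carlo estimates E[g(x_1, ..., x_k)] on k i.i.d. draws and compares the estimates under two distributions. The binary (k = 2) function g(x, y) = 1[|x − y| < 0.1] defines a "closeness" edge; two Gaussians with the same mean but different spread give it visibly different expected values, even though their means agree.

```python
import random

def estimate_g(sample_fn, g, k, n_trials=10000, seed=0):
    """Monte Carlo estimate of E[g(x_1, ..., x_k)] over k i.i.d. draws."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        xs = [sample_fn(rng) for _ in range(k)]
        total += g(*xs)
    return total / n_trials

# A binary (k=2) discriminator: the hyper-edge g(x, y) = 1 iff |x - y| < 0.1.
g = lambda x, y: 1 if abs(x - y) < 0.1 else 0

# Two distributions with identical means but different spread.
p = lambda rng: rng.gauss(0.0, 1.0)  # N(0, 1)
q = lambda rng: rng.gauss(0.0, 3.0)  # N(0, 9)

ep = estimate_g(p, g, k=2)  # pairs from the tighter distribution collide more
eq = estimate_g(q, g, k=2)
# A (significant) gap |ep - eq| flags the two distributions as distinct.
```

Under p, pairs of samples are close far more often than under q, so this single pairwise g separates the two distributions; a comparison of first moments alone would not, since both means are 0.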


