Generative adversarial networks (GANs) are an example of generative models. Specifically, the model takes a training set, consisting of samples drawn from a probability distribution, and learns how to represent an estimate of that distribution. GANs focus primarily on sample generation, but it is also possible to design GANs that can estimate the probability distribution explicitly.
The subject has recently received considerable attention, especially because it has many practical applications in various fields. For instance, GANs can be used for medical purposes, e.g., to improve the diagnostic performance of the low-dose computed tomography method , or to enhance images taken in unfavourable weather conditions (such as rain or snow) . Other applications range from speech and language recognition to playing chess and computer vision .
The idea behind GANs is to train the generative model via an adversarial process in which the opponent is trained simultaneously. Therefore, there are two neural network classes: a generative model that captures the data distribution, and a discriminative model that estimates the probability that a sample came from the training data rather than from the generator. The generative model can be thought of as a team of counterfeiters trying to produce fake currency, while the discriminative model, i.e., the police, tries to detect the counterfeit money. The competition drives both teams to improve their methods until the counterfeit currency is indistinguishable from the original. To succeed in this game, the counterfeiter must learn to make money that is indistinguishable from genuine currency; correspondingly, the generator network must learn to create samples that are drawn from the same distribution as the training data .
Since each agent's payoff depends on the variables of the other agent, this problem can be described as a game; hence, these networks are called adversarial. However, GANs can also be thought of as a game with cooperative players, since the two networks share information with each other . Since there are only the generator and the discriminator, the problem is an instance of a two-player game. Moreover, depending on the cost functions, it can also be considered a zero-sum game. From a mathematical perspective, the class of games that suits the GAN problem is that of stochastic Nash equilibrium problems (SNEPs), where each agent aims at minimizing its expected-value cost function, which is approximated via a number of samples of the random variable.
Given their connection with robust optimization and game theory, GANs have received theoretical attention as well, both for modelling them as Nash equilibrium problems [6, 7] and for designing algorithms that improve the training process [8, 7].
From a game-theoretic perspective, an elegant approach to compute an SNE is to cast the problem as a stochastic variational inequality (SVI)  and to use an iterative algorithm to find a solution. The two methods for SVIs most studied in the GAN literature  are the gradient method , known in monotone operator theory as the forward–backward (FB) algorithm , and the extragradient (EG) method [12, 13]. Each iteration of the FB algorithm involves one evaluation of the pseudogradient and one projection step. These iterates are known to converge if the pseudogradient mapping is cocoercive or strongly monotone [14, 15]. However, such technical assumptions are quite strong if we consider that in GANs the mapping is rarely monotone. In contrast, the EG algorithm converges for merely monotone operators, but it takes two projections onto the local constraint set (and two pseudogradient evaluations) per iteration, which makes the algorithm slow and computationally expensive. Other algorithms for VIs that can be applied to GANs can be found in .
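To make the contrast concrete, the following sketch (a hypothetical toy problem, not taken from this paper) runs the plain FB iteration and the EG method on the monotone but non-cocoercive bilinear game $\min_x \max_y xy$, for which FB is known to spiral away from the solution while EG converges:

```python
import numpy as np

# Monotone, non-cocoercive pseudogradient of the bilinear game
# min_x max_y x*y:  F(x, y) = (y, -x).  Its unique zero is (0, 0).
def F(z):
    x, y = z
    return np.array([y, -x])

def forward_backward(z0, step=0.1, iters=200):
    # FB iteration: one F-evaluation per step, z <- z - step * F(z).
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z = z - step * F(z)
    return z

def extragradient(z0, step=0.1, iters=200):
    # EG iteration: an extrapolation step followed by the actual update,
    # i.e. two F-evaluations per step.
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)
        z = z - step * F(z_half)
    return z

z0 = [1.0, 1.0]
print(np.linalg.norm(forward_backward(z0)))  # grows: FB spirals outward here
print(np.linalg.norm(extragradient(z0)))     # shrinks toward the solution
```

On this example the FB iterate moves strictly away from the origin at every step, while the EG iterate contracts, which illustrates why mere monotonicity is not enough for the FB algorithm.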
In this paper, we propose a stochastic relaxed FB (SRFB) algorithm for GANs, inspired by . A first analysis of the algorithm for stochastic (generalized) NEPs is currently under review . The SRFB requires only a single projection and a single evaluation of the pseudogradient mapping per iteration. The advantage of our proposed algorithm is that it is less computationally demanding than the EG algorithm, even though it converges under the same assumptions. Indeed, we prove its convergence under mere monotonicity of the pseudogradient mapping when a huge number of samples is available. Alternatively, if only a finite number of samples is accessible, we prove that averaging can be used to converge to a neighbourhood of the solution.
Notation. $\mathbb{R}$ indicates the set of real numbers. $\langle \cdot, \cdot \rangle$ denotes the standard inner product and $\|\cdot\|$ represents the associated Euclidean norm. For a closed set $C \subseteq \mathbb{R}^n$, the mapping $\operatorname{proj}_C$ denotes the projection onto $C$, i.e., $\operatorname{proj}_C(x) = \operatorname{argmin}_{y \in C} \|y - x\|$.
II Generative Adversarial Networks
The basic idea of generative adversarial networks (GANs) is to set up a game between two players: the generator and the discriminator. The generator creates samples that are intended to come from the same distribution as the training data. The discriminator examines the samples to determine whether they are real or fake. The generator is therefore trained to fool the discriminator. Typically, a deep neural network is used to represent the generator and the discriminator. Accordingly, the two players are denoted by two functions, each of which is differentiable both with respect to its inputs and with respect to its parameters.
The generator is represented by a differentiable function $G$, that is, a neural network class with parameter vector $\theta_G$. The (fake) output of the generator is denoted by $G(z; \theta_G)$, where the input $z$ is random noise drawn from the model prior distribution $p_z$, which the generator uses to create the fake output. The actual strategies of the generator are the parameters $\theta_G$ that allow it to produce the fake output.
The discriminator is a neural network class as well, with parameter vector $\theta_D$ and a single output $D(x; \theta_D)$ that indicates the accuracy of the input $x$. We interpret the output as the probability that the discriminator assigns to an element being real. Similarly to the generator, the strategies of the discriminator are the parameters $\theta_D$.
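As an illustration only, the two players can be coded as parameterised differentiable functions; the affine generator and logistic discriminator below are hypothetical stand-ins for the deep networks described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy parameterisations: an affine generator and a
# logistic discriminator, standing in for deep neural networks.
theta_g = rng.normal(size=2)   # generator parameters (scale, shift)
theta_d = rng.normal(size=2)   # discriminator parameters (weight, bias)

def generator(z, theta_g):
    # Maps noise z ~ p_z to a fake sample; the strategies are the parameters.
    scale, shift = theta_g
    return scale * z + shift

def discriminator(x, theta_d):
    # Single output in (0, 1): the probability assigned to x being real.
    w, b = theta_d
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

z = rng.normal()                       # noise from the model prior
x_fake = generator(z, theta_g)
p_real = discriminator(x_fake, theta_d)
assert 0.0 < p_real < 1.0              # a valid probability
```

The parameter vectors, not the network outputs, are the decision variables of the two players, which is what allows the game-theoretic formulation of the next section.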
The problem can be cast as a two-player game or, depending on the cost functions, as a zero-sum game. Specifically, in the latter case the two cost functions should satisfy the following relation (the cost of one player is the opposite of the cost of the other)
In the context of zero-sum games, the problem can be rewritten as a minmax problem
In words, (3) means that the generator aims at minimizing the distance from the real data, while the discriminator wants to maximize it, i.e., to recognize the fake data.
III Stochastic Nash equilibrium problems
In this section, we formalize the two-player game in a more general form that will support our analysis. Specifically, we consider the problem as a general stochastic Nash equilibrium problem, since our analysis is independent of the choice of the cost functions.
We consider a set of two agents $\mathcal{I} = \{1, 2\}$, representing the two neural network classes. The local cost function of agent $i \in \mathcal{I}$ is defined as
$\mathbb{J}_i(x_i, x_{-i}) = \mathbb{E}_\xi[J_i(x_i, x_{-i}, \xi)]$, for some measurable function $J_i$. The cost function of agent $i$ depends on the local variable $x_i \in \Omega_i \subseteq \mathbb{R}^{n_i}$, the decision of the other player $x_{-i}$, and the random variable $\xi$, which expresses the uncertainty. Such uncertainty arises in practice when it is not possible to have access to the exact mapping, i.e., when only a finite number of estimates is available. $\mathbb{E}_\xi$ represents the mathematical expectation with respect to the distribution of the random variable $\xi$ in the probability space. We assume that $\mathbb{J}_i$ is well defined for all feasible $x_i$. For our theoretical analysis, we postulate the following assumptions on the cost functions and on the feasible sets, which are standard in game theory [19, 18].
For each $i \in \mathcal{I}$ and each $x_{-i} \in \Omega_{-i}$, the function $\mathbb{J}_i(\cdot, x_{-i})$ is convex and continuously differentiable.
For each $i \in \mathcal{I}$, the set $\Omega_i$ is nonempty, compact and convex.
Given the decision variables of the other agent, each player aims at choosing a strategy $x_i \in \Omega_i$ that solves its local optimization problem, i.e.,
A stochastic Nash equilibrium (SNE) is a collective strategy $x^* = (x_1^*, x_2^*)$ such that, for all $i \in \mathcal{I}$ and all $x_i \in \Omega_i$, $\mathbb{J}_i(x_i^*, x_{-i}^*) \leq \mathbb{J}_i(x_i, x_{-i}^*)$.
Thus, an SNE is a set of strategies where no agent can decrease its cost function by unilaterally deviating from its decision.
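The no-unilateral-deviation property can be checked numerically on a simple quadratic two-player game (our own toy example, not from the text), whose best responses intersect at the origin:

```python
import numpy as np

# Toy deterministic two-player game (an assumption for illustration):
#   J1(x1, x2) = x1^2 - x1*x2,   J2(x2, x1) = x2^2 - x1*x2.
# The best responses x1 = x2/2 and x2 = x1/2 intersect at (0, 0).
def J1(x1, x2):
    return x1**2 - x1 * x2

def J2(x2, x1):
    return x2**2 - x1 * x2

x_star = (0.0, 0.0)

# No unilateral deviation can decrease either player's cost.
for d in np.linspace(-1.0, 1.0, 101):
    assert J1(x_star[0] + d, x_star[1]) >= J1(*x_star)
    assert J2(x_star[1] + d, x_star[0]) >= J2(x_star[1], x_star[0])
print("(0, 0) is a Nash equilibrium")
```

At the equilibrium each deviation $d$ yields the cost $d^2 \geq 0$, so the defining inequality of an SNE holds for both players.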
To guarantee that an SNE exists, we make further assumptions on the cost functions [18, Ass. 1].
For each $i \in \mathcal{I}$ and each $x_{-i}$, the function $J_i(\cdot, x_{-i}, \xi)$ is convex, Lipschitz continuous, and continuously differentiable. The function $J_i(x_i, x_{-i}, \cdot)$ is measurable and, for each $(x_i, x_{-i})$, its Lipschitz constant is integrable in $\xi$.
To seek a Nash equilibrium, we rewrite the coupled optimization problems as a stochastic variational inequality. To this aim, let us denote the pseudogradient mapping as
$$\mathbb{F}(x) = \operatorname{col}\left(\mathbb{E}_\xi[\nabla_{x_1} J_1(x_1, x_2, \xi)],\; \mathbb{E}_\xi[\nabla_{x_2} J_2(x_2, x_1, \xi)]\right). \qquad (7)$$
IV Stochastic relaxed forward–backward with averaging
Iteration $k$: Agent $i$ receives $x_{-i}^k$ and $\xi_i^k$, then updates:
$$\bar{x}_i^k = (1 - \delta)\,\bar{x}_i^{k-1} + \delta\, x_i^k, \qquad \text{(9a)}$$
$$x_i^{k+1} = \operatorname{proj}_{\Omega_i}\!\left[\bar{x}_i^k - \alpha\, F_i(x^k, \xi_i^k)\right]. \qquad \text{(9b)}$$
We note that the averaging step
$$X^K = \frac{\sum_{k=1}^{K} \lambda_k x^k}{\sum_{k=1}^{K} \lambda_k},$$
where $\lambda_k > 0$ for all $k$, was first proposed for VIs in , and it can be implemented in an online fashion as
$$X^k = (1 - \eta_k)\, X^{k-1} + \eta_k\, x^k, \qquad \text{(11)}$$
where $\eta_k = \lambda_k / \sum_{j=1}^{k} \lambda_j$. Even if they look similar, (11) is different from (9a). Indeed, in Algorithm 1, (9a) is a convex combination of the two previous iterates $\bar{x}_i^{k-1}$ and $x_i^k$ with a fixed parameter $\delta$, while the averaging in (11) is a weighted cumulative sum over all the decision variables $x^k$, for all $k$, with time-varying weights $\eta_k$. The weights $\lambda_k$ can be tuned to obtain uniform, geometric or exponential averaging . The relaxation parameter $\delta$, instead, should satisfy the following assumption.
In Algorithm 1, .
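A minimal sketch of the SRFB iteration with online uniform averaging, on a hypothetical noisy bilinear game with a box constraint (all parameter values below are illustrative, not the ones prescribed by the assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def F_sample(z):
    # Noisy pseudogradient of the bilinear game min_x max_y x*y:
    # the exact mapping is (y, -x); zero-mean noise mimics a
    # single-sample stochastic approximation.
    x, y = z
    return np.array([y, -x]) + 0.05 * rng.normal(size=2)

delta, alpha = 0.5, 0.1          # relaxation and step size (illustrative)
z = np.array([1.0, 1.0])         # current iterate
z_bar = z.copy()                 # relaxed iterate
Z_avg = z.copy()                 # running uniform average

for k in range(1, 2001):
    z_bar = (1 - delta) * z_bar + delta * z           # relaxation step
    # One pseudogradient evaluation and one projection (onto a box):
    z = np.clip(z_bar - alpha * F_sample(z), -2.0, 2.0)
    eta = 1.0 / k                                     # uniform weights
    Z_avg = (1 - eta) * Z_avg + eta * z               # online averaging

print(np.linalg.norm(z), np.linalg.norm(Z_avg))  # both approach the solution (0, 0)
```

Note that each iteration performs a single evaluation of the noisy mapping and a single projection, in contrast with the two of each required by EG.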
$\mathbb{F}$ as in (7) is monotone, i.e., $\langle \mathbb{F}(x) - \mathbb{F}(y), x - y \rangle \geq 0$ for all $x, y \in \Omega$.
Next, let us define the stochastic approximation of the pseudogradient  as
$$F(x, \xi) = \operatorname{col}\left(\nabla_{x_1} J_1(x_1, x_2, \xi),\; \nabla_{x_2} J_2(x_2, x_1, \xi)\right). \qquad (12)$$
$F$ uses one realization, or a finite number of realizations (called a mini-batch), of the random variable. Given the approximation, we postulate the following assumption, which is quite strong yet reasonable in our game-theoretic framework . Let us first define the filtration $\mathcal{F} = (\mathcal{F}_k)_{k \in \mathbb{N}}$, that is, a family of $\sigma$-algebras such that $\mathcal{F}_0 = \sigma(x^0)$ and $\mathcal{F}_k = \sigma(x^0, \xi^0, \dots, \xi^{k-1})$ for all $k \geq 1$, so that $\mathcal{F}_k \subseteq \mathcal{F}_{k+1}$ for all $k$.
$F$ in (12) is bounded, i.e., there exists $B > 0$ such that, for all $x \in \Omega$, $\mathbb{E}[\|F(x, \xi)\|^2] \leq B$.
For the sake of our analysis, we postulate an explicit bound on the feasible set.
The local constraint set $\Omega_i$ is such that $\|x_i\| \leq R$ for all $x_i \in \Omega_i$, for some $R > 0$.
For all $k \geq 0$, we define the stochastic error as
$$\epsilon_k = F(x^k, \xi^k) - \mathbb{F}(x^k), \qquad (13)$$
that is, the difference between the approximation and the exact expected value. Then, we postulate that the stochastic error satisfies the following assumption.
The stochastic error in (13) is such that, for all $k \geq 0$, $\mathbb{E}[\epsilon_k \mid \mathcal{F}_k] = 0$ and $\mathbb{E}[\|\epsilon_k\|^2 \mid \mathcal{F}_k] \leq \sigma^2$ a.s.
Essentially, Assumption 8 states that the error has zero mean and bounded variance, as is usual in SVIs [8, 13, 17].
As a measure of the quality of a candidate solution, we define the following error
$$\operatorname{err}(x) = \sup_{y \in \Omega}\, \langle \mathbb{F}(x), x - y \rangle,$$
which is known as the gap function and is equal to 0 if and only if $x$ is a solution of the (S)VI in (8) [9, Eq. 1.5.2]. Another measure function, specific to the zero-sum game, and other possible measures can be found in .
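For a box-constrained VI, the supremum in the gap function is attained coordinate-wise at a vertex of the box, so it can be evaluated in closed form; a small sketch (with an assumed bilinear mapping, not the paper's problem) follows:

```python
import numpy as np

def gap(F, x, lo, hi):
    # Gap function for a VI on the box [lo, hi]^n:
    #   gap(x) = max_{y in Omega} <F(x), x - y>.
    # Nonnegative for feasible x, and zero iff x solves the VI.
    g = F(x)
    # <F(x), y> is minimised coordinate-wise at a vertex of the box.
    y_min = np.where(g >= 0, lo, hi)
    return float(g @ (x - y_min))

# Bilinear example: F(x, y) = (y, -x), solution (0, 0) in [-1, 1]^2.
F = lambda z: np.array([z[1], -z[0]])
print(gap(F, np.array([0.0, 0.0]), -1.0, 1.0))  # 0.0 at the solution
print(gap(F, np.array([0.5, 0.5]), -1.0, 1.0))  # positive elsewhere
```

This makes the gap a cheap convergence monitor whenever the feasible set has a simple support function, as is the case for the box constraints used in the simulations.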
We are now ready to state our first result.
See Appendix B.
V Sample average approximation
If a huge number of samples is available or it is possible to compute the exact expected value, one can consider using a different approximation scheme or a deterministic algorithm. We discuss these two situations in this section.
In the SVI framework, using a finite, fixed number of samples is called stochastic approximation (SA). It is widely used in the literature, but it often requires conditions on the step sizes to control the stochastic error. Usually, the step size sequence should be diminishing, with the result that the iterations slow down considerably. The approach used instead to keep a fixed step size is the sample average approximation (SAA) scheme. In this case, an increasing number of samples is taken at each iteration, which helps to obtain a diminishing error.
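The variance-reduction effect of larger batches can be illustrated directly: the mean-squared error of a mini-batch estimator of an expectation scales like $1/N$ (a toy Monte Carlo check, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(2)

def minibatch_error(batch_size, trials=2000):
    # Empirical mean-squared error of a mini-batch estimator of
    # E[xi] = 0 with unit-variance samples: it should scale like 1/N.
    errs = rng.normal(size=(trials, batch_size)).mean(axis=1)
    return float(np.mean(errs**2))

e1 = minibatch_error(1)       # SA-style: one sample per iteration
e100 = minibatch_error(100)   # SAA-style: larger batch, smaller error
print(e1, e100)
```

Increasing the batch size along the iterations therefore plays the role that a diminishing step size plays in the SA scheme: the stochastic error itself vanishes.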
With the SAA scheme, it is possible to prove convergence to the exact solution without the averaging step. We show this result in Theorem 2, but first we provide more details on the approximation scheme and state some assumptions. The algorithm that we propose is presented in Algorithm 2. The differences with respect to Algorithm 1 are the absence of the averaging step and the use of the approximation $F^{\text{SAA}}$.
Iteration $k$: Agent $i$ receives $x_{-i}^k$ and $\xi_i^k$, then updates:
Formally, the approximation that we use is given by
$$F^{\text{SAA}}(x^k, \xi^k) = \frac{1}{N_k} \sum_{s=1}^{N_k} F(x^k, \xi_s^k),$$
where $N_k$ is the batch size, which should be increasing .
The batch size sequence is such that for some .
With a slight abuse of notation, let us denote the stochastic error also in this case as
$$\epsilon_k = F^{\text{SAA}}(x^k, \xi^k) - \mathbb{F}(x^k).$$
Using the SAA scheme, it is possible to prove that the second moment of the error is bounded by a quantity proportional to $1/N_k$, i.e., the error diminishes as the size of the batch increases. Details on how to obtain this result can be found in .
$\mathbb{F}$ as in (7) is $\ell$-Lipschitz continuous for some $\ell > 0$, i.e., $\|\mathbb{F}(x) - \mathbb{F}(y)\| \leq \ell \|x - y\|$ for all $x, y \in \Omega$.
The relaxation parameter should not be too small.
In Algorithm 2, .
Conveniently, with the SAA scheme we can take a constant step size, as long as it is small enough.
The step size is constant and sufficiently small, with an upper bound depending on the Lipschitz constant $\ell$ of $\mathbb{F}$ as in Assumption 10.
We can finally state our convergence result.
See Appendix C.
If one is able to compute the exact expected value, the problem is equivalent to the deterministic case. Convergence follows under the same assumptions made for the SAA scheme with the exception of those on the stochastic error.
VI Numerical simulations
In this section, we present some numerical experiments to validate the analysis. We propose two comparisons between the most widely used algorithms for GANs . In both examples, we simulate our SRFB algorithm, the SpFB algorithm , the EG algorithm , the EG algorithm with extrapolation from the past (PastEG) , and Adam, a typical algorithm for GAN training .
All the simulations are performed in Matlab R2019b on a 2.3 GHz Intel Core i5 with 8 GB LPDDR3 RAM.
VI-A Illustrative example
To make a comparison, we consider the following zero-sum game, which is a problematic example for, e.g., the FB algorithm [8, Prop. 1].
We suppose that the two players aim at solving the minmax problem in (3) with cost function
where the matrix $A$ is the stochastic part, which we approximate with the SAA scheme. $A$ is an antidiagonal matrix, i.e., the entry $a_{ij}$ is nonzero if and only if $i + j = n + 1$, and its nonzero entries are sampled from a normal distribution with mean 1 and finite variance. The resulting mapping is monotone. The problem is constrained so that the feasible set is compact, and the optimal solution is known. The step sizes are taken to be the highest possible.
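A possible construction of the random antidiagonal matrix (the dimension and variance below are assumptions for illustration), together with an SAA-style batch average that approaches the expected matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_antidiagonal(n=5):
    # One realisation of the random antidiagonal matrix: entry (i, j) is
    # nonzero iff i + j = n + 1 (1-based indexing), drawn from N(1, 0.01).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, n - 1 - i] = rng.normal(loc=1.0, scale=0.1)
    return A

# SAA-style estimate of the expected matrix: mean over a batch of samples.
batch = [sample_antidiagonal() for _ in range(1000)]
A_hat = np.mean(batch, axis=0)
print(np.round(A_hat, 2))  # close to the antidiagonal matrix of ones
```

As the batch grows, the averaged matrix concentrates around its expectation, which is exactly the mechanism the SAA scheme exploits.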
As one can see from Fig. 0(a), the SpFB does not converge. From Fig. 0(c), instead, we note that the SRFB algorithm is the least computationally expensive. Interestingly, the averaging tends to smooth the convergence to a solution.
VI-B Classic GAN zero-sum game
A classic cost function proposed for the GAN zero-sum game  reads as
This cost function is hard to optimize because it is concave-concave . Here, the parameters are chosen so that the equilibrium is known. In Figures 1(a), 1(b) and 1(c), we show the distance from the solution, the distance of the average from the solution, and the computational cost, respectively. Interestingly, all the considered algorithms converge, even though there are no theoretical guarantees.
The stochastic relaxed forward–backward algorithm can be applied to generative adversarial networks. Given a fixed mini-batch, under monotonicity of the pseudogradient, averaging can be used to reach a neighbourhood of the solution. If, on the other hand, a huge number of samples is available, convergence to the exact solution holds under the same assumptions.
Appendix A Preliminary results
We here recall some facts about norms, some properties of the projection operator and a preliminary result.
We start with the norms. We use the cosine rule
$$\|a + b\|^2 = \|a\|^2 + 2\langle a, b \rangle + \|b\|^2,$$
and the following property of the norm [21, Corollary 2.15]: for all $a, b \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}$,
$$\|\alpha a + (1 - \alpha) b\|^2 = \alpha \|a\|^2 + (1 - \alpha) \|b\|^2 - \alpha (1 - \alpha) \|a - b\|^2.$$
Concerning the projection operator, by [21, Proposition 12.26], it satisfies the following inequality: let $C$ be a nonempty closed convex set; then, for all $x \in \mathbb{R}^n$ and all $y \in C$,
$$\langle x - \operatorname{proj}_C(x),\; y - \operatorname{proj}_C(x) \rangle \leq 0.$$
The Robbins–Siegmund lemma is widely used in the literature to prove the a.s. convergence of sequences of random variables.
Lemma 1 (Robbins-Siegmund Lemma, )
Let $(\mathcal{F}_k)_{k \in \mathbb{N}}$ be a filtration. Let $(\alpha_k)_{k \in \mathbb{N}}$, $(\theta_k)_{k \in \mathbb{N}}$, $(\eta_k)_{k \in \mathbb{N}}$ and $(\chi_k)_{k \in \mathbb{N}}$ be nonnegative sequences of $\mathcal{F}_k$-measurable random variables such that $\sum_k \eta_k < \infty$ and $\sum_k \chi_k < \infty$ a.s., and let
$$\mathbb{E}[\alpha_{k+1} \mid \mathcal{F}_k] \leq (1 + \chi_k)\,\alpha_k - \theta_k + \eta_k \quad \text{a.s. for all } k \in \mathbb{N}.$$
Then $\sum_k \theta_k < \infty$ and $(\alpha_k)_{k \in \mathbb{N}}$ converges a.s. to a nonnegative random variable.
The next lemma collects some properties that follow from the definition of the SRFB algorithm.
Given Algorithm 1, the following hold.
Straightforward from Algorithm 1.
Appendix B Proof of Theorem 1
[Proof of Theorem 1] We start by using the fact that the projection is firmly quasinonexpansive.
Then, we can rewrite the inequality as
Applying Young's inequality, we obtain
Then (22) becomes
Reordering, adding and subtracting and using Lemma 2, we obtain
Then, by the definition of , reordering leads to
Summing over all the iterations, (25) becomes
Using Assumption 5 and resolving the sums, we obtain
Now we notice that We define and , thus
Therefore, Including this in (27) and doing the sum, we obtain
Let us define , . Then,
Noticing that if is constant and