1 Introduction
Mean-field-type game theory studies a class of games in which the payoffs and/or state dynamics depend not only on the state-action pairs but also on their distribution. In mean-field-type games, (i) a single decision-maker may have a strong impact on the mean-field terms, (ii) the expected payoffs are not necessarily linear with respect to the state distribution, and (iii) the number of decision-makers (“true decision-makers”) is not necessarily infinite.
Games with a nonlinearly distribution-dependent quantity-of-interest [1, 2, 3]
are very attractive in terms of applications because the nonlinear dependence of the payoff functions on the state distribution allows us to capture risk measures that are functionals of variance, inverse quantile, and/or higher moments. In the past, a significant amount of research on mean-field-type games has been performed
[4, 5, 6, 8, 9, 10]. In the time-dependent case, the analysis of mean-field-type games is not without challenges. Previous works have devoted tremendous effort to partial integro-differential systems of equations (PIDEs), in infinite dimensions, of Liouville, Boltzmann, or McKean-Vlasov type. At the same time, an important set of numerical tools has been developed to address the master equilibrium system. However, the current state-of-the-art numerical schemes are problem-specific and need to be adjusted depending on the underlying problem. To date, the question of the computation of the master system in the general setting remains open. This work provides explicit solutions of a class of master systems. These explicit solutions can be used to build reference trajectories, and several numerical schemes developed to solve PIDEs can be tested beyond the linear-quadratic setting.
1.1 Direct Method
The direct method consists of five elementary steps. The first step starts by setting the mean-field terms of the problem. The second step consists of the identification of a partial guess functional whose coefficient functionals are random and regime-switching dependent. The third step uses the stochastic integration formula. The fourth step uses a completion of terms in a one-shot optimization over both the control actions and the conditional expected values of the control actions of all decision-makers. The fifth and last step uses an algebraic basis of linearly independent processes to identify the coefficients. The identification leads to a (possibly stochastic) differential system of equations, providing a semi-explicit representation of the solution. These five elementary steps of the direct method are displayed in Figure 1.
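As a stylized, mean-field-free illustration of steps two through five (a toy example of ours, not one of the models treated below; the scalar coefficients $a, b, q, r, q_T$ and the quadratic ansatz are assumptions), consider minimizing $J = \int_0^T (q x^2 + r u^2)\,dt + q_T x(T)^2$ subject to $\dot x = a x + b u$, $x(0) = x_0$, with $r > 0$:

```latex
% Step 2: quadratic guess functional f(t,x) = \alpha(t)\, x^2.
% Step 3 (integration formula, here deterministic): the gap between cost and guess is
J - f(0,x_0) = \int_0^T \Big[\dot\alpha\, x^2 + 2\alpha\, x\,(a x + b u) + q x^2 + r u^2\Big]\,dt
             + \big(q_T - \alpha(T)\big)\, x(T)^2 .
% Step 4: completion of squares in u:
r u^2 + 2 b \alpha\, x u
   = r\Big(u + \tfrac{b\alpha}{r}\, x\Big)^2 - \tfrac{b^2 \alpha^2}{r}\, x^2 .
% Step 5: identify coefficients of the independent terms x^2 and x(T)^2 to zero:
\dot\alpha = -2 a \alpha - q + \tfrac{b^2}{r}\,\alpha^2, \qquad \alpha(T) = q_T,
\qquad u^*(t,x) = -\tfrac{b\,\alpha(t)}{r}\, x .
```

With these identifications, $J = f(0,x_0) + \int_0^T r\big(u + \tfrac{b\alpha}{r}x\big)^2 dt \ge \alpha(0)x_0^2$, with equality at $u^*$, which is the pattern each proof below follows.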
1.2 Direct Method for LQ-MFTG
In the current literature, only relatively few examples of explicitly solvable mean-field-type game problems are available. The most notable examples are (i) linear-quadratic mean-field-type games (LQ-MFTG) [6], (ii) linear exponentiated-quadratic mean-field-type games (LEQ-MFTG) [7], and (iii) adversarial linear-quadratic mean-field-type games (min-max LQ-MFTG, min-max LEQ-MFTG) [6]. In LQ-MFTG the base state dynamics has two components: drift and noise.

the drift is an affine function of the state, the expected value of the state, the control action, and the expected values of the control actions of all decision-makers. The coefficients are regime-switching dependent.

the noise is a combination of diffusion, Gauss-Volterra, jump, and regime-switching processes, where the noise coefficients are affine functions of the state, the expected value of the state, the control action, and the expected values of the control actions of all decision-makers. The coefficients are regime-switching and jump dependent.
To the state dynamics, one can add a common noise, which is a diffusion-Gauss-Volterra-jump-regime-switching process. The cost functions are polynomials of degree two and include weighted conditional variances and covariances between the states and control actions of all decision-makers. In addition, the cost functional is not measured perfectly: only a noisy cost is available.
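As a minimal sketch of this state structure (a scalar drift with constant coefficients `a`, `abar`, `b`, a fixed linear feedback `u = -k*x` standing in for an equilibrium control, and an empirical mean over a finite population as a proxy for the expected state; none of these specific choices are prescribed by the text), a population version of the affine drift can be simulated with Euler-Maruyama:

```python
import math
import random

def simulate_mean_field_state(x0, a, abar, b, k, sigma, T, n_steps, n_particles, seed=0):
    """Euler-Maruyama simulation of a population of states with drift
        dx_i = (a*x_i + abar*m + b*u_i) dt + sigma dW_i,   u_i = -k*x_i,
    where m is the empirical mean of the population (a proxy for E[x]).
    All coefficients are illustrative constants; in the text they may be
    regime-switching dependent. Returns the empirical mean trajectory."""
    rng = random.Random(seed)
    h = T / n_steps
    x = [x0] * n_particles
    means = [x0]
    for _ in range(n_steps):
        m = sum(x) / n_particles
        x = [xi + (a * xi + abar * m + b * (-k * xi)) * h
             + sigma * rng.gauss(0.0, math.sqrt(h))
             for xi in x]
        means.append(sum(x) / n_particles)
    return means
```

With `sigma = 0` and identical initial states, the empirical mean follows the deterministic flow with rate `a + abar - b*k`, which gives a quick sanity check on the discretization.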
This basic model of LQ mean-field-type games captures several interesting features such as heterogeneity, risk-awareness, and empathy of the decision-makers.
To solve LQ-MFTG problems, one can use the direct method proposed in Figure 1. This solution approach does not require solving the Bellman-Kolmogorov equations or backward-forward stochastic differential equations of Pontryagin’s type. The proposed direct method can be easily implemented by beginners and engineers who are new to the emerging field of mean-field-type game theory.
For this broader class of LQ-MFTG problems, one can derive a semi-explicit solution under sufficient conditions. The existence of a solution to the master system corresponding to the LQ-MFTG problem can be converted into the existence of a solution to a system of ordinary differential equations driven by common noises. In some particular cases, these systems are stochastic Riccati systems and extensions of Riccati systems that include fractional-order terms.
1.3 Direct Method beyond LQ-MFTG
The direct method is not limited to the linear-quadratic case. It can be extended to a class of LEQ-MFTG, min-max LQ-MFTG, and min-max LEQ-MFTG problems. In this article, we present several examples to illustrate how the direct method addresses nonlinear and/or non-quadratic mean-field-type games. The examples below go beyond LQ-MFTG, LEQ-MFTG, and min-max LQ problems.
Our contribution can be summarized as follows. We provide semi-explicit solutions for the classes of mean-field-type game problems presented in Table 2. Several noises are examined: Brownian motion, regime switching, jump processes, and Gauss-Volterra processes. The Gauss-Volterra noise processes are obtained from the integral of a Brownian motion against a suitable kernel function. In addition, several types of common noise are considered:
Problem  Noise
Prop. 1  Brownian, jump, regime switching
Prop. 2  Brownian
Prop. 3  Brownian, regime switching
Prop. 4  Brownian, common noise, jump, common jump, regime switching, common Gauss-Volterra, Gauss-Volterra
Prop. 5  Regime switching
Prop. 6  Brownian, jump
Prop. 7  Brownian, regime switching
Prop. 8  Brownian, regime switching
Prop. 9  Brownian, regime switching, common noise
Prop. 10  Brownian, jump, regime switching, Gauss-Volterra
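To make the Gauss-Volterra construction concrete, the following sketch discretizes X_t = ∫₀ᵗ K(t, s) dW_s on a uniform grid. The Riemann-Liouville kernel K(t, s) = (t - s)^(H - 1/2) is just one admissible choice of kernel, and the function name and midpoint rule are ours, not prescribed by the text:

```python
def gauss_volterra_path(dW, h, H):
    """Discretize X_t = int_0^t K(t, s) dW_s with the illustrative
    Riemann-Liouville kernel K(t, s) = (t - s)**(H - 0.5), H in (0, 1).

    dW : Brownian increments over consecutive intervals of length h.
    Returns X evaluated at the grid points t_i = i*h, with X[0] = 0.
    """
    n = len(dW)
    X = [0.0] * (n + 1)
    for i in range(1, n + 1):
        t = i * h
        # Evaluate the kernel at interval midpoints to avoid the
        # singularity at s = t when H < 1/2.
        X[i] = sum((t - (j + 0.5) * h) ** (H - 0.5) * dW[j] for j in range(i))
    return X
```

Brownian increments can be drawn as `random.gauss(0.0, math.sqrt(h))`; H = 1/2 recovers the driving Brownian motion itself, since the kernel is then identically one.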
To the best of the authors’ knowledge, this is the first work to provide semi-explicit solutions of mean-field-type games beyond LQ and under Gauss-Volterra processes.
Structure
The rest of the article is structured as follows. Section 2 presents semi-explicit solutions to some nonlinear, non-quadratic stochastic differential games. In Section 3 we formulate and solve various mean-field-type games with non-quadratic quantities-of-interest and provide semi-explicit solutions using a direct method. Section 4 presents semi-explicit solutions to some non-quadratic mean-field-type games driven by Gauss-Volterra processes. Numerical examples are presented in Section 5. The last section summarizes the work.
Notation  Description

Brownian motion
common Brownian motion
common Gauss-Volterra process
Gauss-Volterra process
set of jump sizes
Radon measure over
compensated jump process
common compensated jump process
state
trend
delayed state
conditional state
regime-switching process
set of decision-makers
control action of decision-maker
conditional control action of
Notations
We introduce the following notation (see Table 3). Let be a fixed time horizon and
be a given filtered probability space. The filtration
is the natural filtration of the union augmented by the null sets of . In practice, is used to capture smaller disturbances, is used for larger jumps of the system, and is used for Gauss-Volterra processes (including sub- or super-diffusion). Let be the set of measurable functions such that . is the set of adapted valued processes such that . The stochastic quantity denotes the conditional expectation of the random variable
with respect to the filtration . Note that is a random process. Below, by abuse of notation, we use for the values inside the jump processes or the regime-switching process . The set of decision-makers is denoted by . An admissible control strategy of decision-maker is an adapted and square-integrable process with values in a nonempty subset . We denote the set of all admissible controls by . Decision-maker chooses a control strategy to optimize its performance functional. The information structure of the problem is under perfect state observation and under common noise observation.
2 Some Solvable Mean-Field-Free Games
We start with mean-field-free settings in which logarithmic, logarithm-square, Legendre-Fenchel duality, and power payoffs are presented. The cost functions are not necessarily quadratic, and the state dynamics is not necessarily linear.
2.1 Logarithmic Scale
Consider a set of decision-makers interacting in the following nonlinear, non-quadratic mean-field-free game:
(1) 
and with a given initial condition , where and is an integer, and
Proposition 1
The nonlinear, non-quadratic mean-field-free Nash equilibrium and the corresponding equilibrium cost are given by:
where and solve the following differential equations:
(2) 
where , and .
Proof. Consider the following guess functional:
By applying Itô’s formula for jump-diffusion-regime-switching processes, the gap between the cost and the guess functional can be computed; it is given by
(3) 
Noting that
with equality iff , the announced result follows.
Remark 1
For , the system reduces to the following ordinary differential equations:
(4) 
2.2 Logarithm Square
Consider the following nonlinear, non-quadratic mean-field-free game:
(5) 
and with a given initial condition .
Proposition 2
Assume that . The nonlinear, non-quadratic mean-field-free Nash equilibrium and the corresponding optimal cost are given by:
where solves the following differential equation:
where .
Proof. Consider the following guess functional:
Applying Itô’s formula yields
Thus, the gap is given by
By performing a square completion, one obtains
then,
Finally, the announced result is obtained by minimizing the terms.
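The square-completion step used above is an instance of the elementary identity (written here with generic coefficients $r > 0$ and $s$, not the paper’s specific terms):

```latex
r u^2 + 2 s u \;=\; r\left(u + \frac{s}{r}\right)^2 - \frac{s^2}{r},
\qquad
\min_{u}\,\big\{\, r u^2 + 2 s u \,\big\} \;=\; -\frac{s^2}{r}
\quad \text{attained at } u^* = -\frac{s}{r}.
```

In the proofs, $s$ collects the cross terms between the control and the state produced by Itô’s formula, so minimizing the gap term by term yields the equilibrium control in feedback form.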
2.3 Legendre-Fenchel
We consider a convex running loss function
(6) 
where and . Recall that the Legendre-Fenchel transform of
Proposition 3
Assume that are positive. Then, the game problem (6) has a solution:
(7) 
with
(8) 
where