1 Introduction
The nature of the logreturns of a financial asset is characterized by heavy tails, significant skewness and kurtosis. Parametric modelling of the data, as explained in [13, 19, 20], concentrates mainly on fitting a normal distribution, generalized hyperbolic distributions, or a heavy-tailed distribution from the extreme value family. This requires us to obtain maximum likelihood estimates (MLEs) for the parameters of the distribution, which, besides suffering from overfitting, are also inconsistent estimates for the expected return. The usual binomial asset pricing model is a discrete-time analogue to the continuous-time geometric Brownian motion (GBM) [19, 20]. However, empirical studies [16] indicate that financial logreturns are characterized by heavy-tailed distributions. Using the geometric Brownian motion to model stock/asset prices adversely affects the estimation of quantities like Value at Risk (VaR), Conditional Value at Risk (CVaR) and other "coherent" risk measures [15, 16]. A single normal distribution fails to account for the heavy tails, whereas a mixture of normal distributions often performs better. In this paper, we consider a mixture of normal distributions as the starting point for modelling the logreturn; naturally, this motivates a mixture of GBMs for the stock prices. In this context, a nonparametric Bayesian (NPB) [9] approach to modelling has three advantages. The first is data adaptivity: we present a methodology, accompanied by an algorithm, that takes the data as its only input. Secondly, we use a Dirichlet process (DP) to model the data; the DP essentially fits a finite mixture model with regard to the choice of the base measure, and the number of components fitted to the data is learned by augmenting a stochastic process to the algorithm. Finally, the tails are accounted for by the fitted finite mixture model; since the components of the mixture model vary, the tail behaviour is better explained by changes in the precision parameter of the base measure.
We develop a multivariate distribution to model the returns of multiple assets using a multivariate copula, which models each marginal distribution using the DP prior. In the marginal distribution, the DP prior takes care of the heavy-tail behaviour of the single-asset return, and similarly the multivariate copula incorporates the heavy-tail behaviour of the joint distribution of the multiple-asset returns.
In Section 2, we present the motivation for mixture models and show that the no-arbitrage condition holds if the market follows a mixture of GBMs; the mixture of GBMs can incorporate heavy-tail behaviour. In Section 3, we establish the framework for the DP prior across subsections (3.1) and (3.2): the first deals with modelling a single asset using DP priors, and the second extends the approach to higher dimensions using an elliptical copula for multiple-asset returns. In Section 4, a gradual development towards an optimal risk measure is provided, indicating the advantages and disadvantages of the suggested measures; the performance of these risk measures is assessed based on the suggested probability model for the logreturns of assets. In Section 5 we elaborate on the computational details needed to implement the modelling in both univariate and multivariate cases. Finally, Section 6 deals with the application aspects of the suggested modelling approach to univariate as well as multivariate datasets. Section 7 concludes the paper.
2 Motivation for Mixture Models
Suppose $S(t)$ is the price of a stock at time point $t$, which follows a geometric Brownian motion (GBM). The stochastic differential equation (SDE) corresponding to the GBM is:
$$dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t),$$
where $W(t)$ is Brownian motion, $\mu$ is the drift parameter, and $\sigma$ is the volatility. The solution of this SDE, namely the geometric Brownian motion or stock price model, is:
$$S(t) = S(0)\exp\Big\{\big(\mu - \tfrac{\sigma^2}{2}\big)t + \sigma W(t)\Big\}.$$
The logreturn is $r(t) = \log\{S(t)/S(0)\}$, with expectation $(\mu - \sigma^2/2)t$ and variance $\sigma^2 t$.
So we have $r(t) \sim N\big((\mu - \sigma^2/2)t,\ \sigma^2 t\big)$. With this background in mind, we consider the logreturn as a finite mixture of such components:
$$r(t) \sim \sum_{k=1}^{K} \pi_k\, N\Big(\big(\mu_k - \tfrac{\sigma_k^2}{2}\big)t,\ \sigma_k^2 t\Big),$$
where $\pi_k \ge 0$ and $\sum_{k=1}^{K}\pi_k = 1$, with expectation $\sum_k \pi_k(\mu_k - \sigma_k^2/2)t$ and variance given by the usual mixture formula. So the logreturn can be expressed in terms of the component processes
$$r_k(t) = \big(\mu_k - \tfrac{\sigma_k^2}{2}\big)t + \sigma_k W_k(t),$$
where the $W_k$'s are independent Brownian motions for $k = 1, \dots, K$ on the same complete probability space, whose filtration is denoted by $\mathcal{F}_t$.
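As a numerical illustration of the heavy tails such a mixture produces, the following sketch simulates daily logreturns from a two-component mixture of GBMs. The weights and drift/volatility parameters are illustrative assumptions, not estimates from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_gbm_logreturns(pi, mu, sigma, t, n, rng):
    """Draw n logreturns r(t) from a K-component mixture of GBMs.

    Component k contributes a N((mu_k - sigma_k^2/2) t, sigma_k^2 t) logreturn.
    """
    k = rng.choice(len(pi), size=n, p=pi)          # latent component labels
    mean = (mu[k] - 0.5 * sigma[k] ** 2) * t
    sd = sigma[k] * np.sqrt(t)
    return rng.normal(mean, sd)

# Illustrative parameters: a calm regime and a turbulent regime.
pi = np.array([0.85, 0.15])
mu = np.array([0.05, 0.00])
sigma = np.array([0.15, 0.60])
r = mixture_gbm_logreturns(pi, mu, sigma, t=1 / 252, n=100_000, rng=rng)

# Excess kurtosis of the mixture exceeds the Gaussian value of 0,
# which is exactly the heavy-tail behaviour the mixture captures.
z = (r - r.mean()) / r.std()
print((z ** 4).mean() - 3.0)
```

A single normal fitted to these draws would match the variance but badly underestimate the tail mass, which is the point made in the Introduction.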
Theorem 2.1.
If then
Remark 2.1.
The process is a martingale. If the drift equals $r$, where $r$ is the risk-free interest rate, then the process can be interpreted as the discounted logreturn.
Remark 2.2.
This theorem implies that if the market follows a finite mixture of GBMs, then there is no arbitrage opportunity in the market. However, the market is incomplete.
3 Methodology for Modelling Asset Return
The seminal paper by [18] provided a constructive definition of the DP prior and its mathematical properties. Recent advancements in Monte Carlo techniques make it possible to implement the DP prior for constructing various kinds of generalized mixture models. The DP is obtained as an infinite-dimensional generalization of the finite Dirichlet distribution. Mixture models consider a kernel on the sample space: a measurable function such that, for each parameter value, it is a density with respect to some finite measure. A prior on the mixing distribution then induces a prior on the density via the mixture map. The DP priors are useful in providing infinite-dimensional extensions to finite-dimensional mixture models and consequently assign priors on unknown distributions. The predictive distribution for the problem is then given by,
(3.1) 
which essentially approximates the density, given the data. Furthermore, if we equip the space of densities with natural topologies, like the weak or norm topology, issues related to posterior consistency of estimates obtained using (3.1) can be found in [12]. [26] introduced an approach to model Value at Risk (VaR) by assigning DP priors directly on the logreturn of a financial security/asset. According to [11], DPs are not used directly to model data. In this paper, we use a mathematical formulation similar to [12], with the univariate normal distribution as the kernel and a DP prior with a normal base measure. This is an alternative attempt to model financial logreturns via a nonparametric Bayesian approach. The assumption of normality on the base measure does not affect the composition of the data in terms of modelling around the locations.
Let us consider the sample space and the corresponding measurable space. Let the measurable parametric space also be given; a prior is then a probability measure on it. If a DP prior is placed on this parametric space, then,
which holds for all [9]. It follows that,
(3.2)  
(3.3) 
3.1 Modelling a Single Asset:
Throughout this paper we shall refer to the observed price series as the pricepath, and to its corresponding logreturn. The volatility measure used is the standard deviation of the increment process. Then $r_t = \log(S_t/S_{t-1})$, where $S_t$ is the price of the security at time $t$. The model we consider is:
(3.4) 
According to [9], we see that if, is a sample of size 1, then for , , therefore,
We refrain from making any assumption on the kernel in this section, to keep the discussion of the ensuing consequences of (3.1) as generic as possible.
Note 3.1.
We can consider our base measure to be a Wiener measure, in particular Brownian motion ([20]), when modelling logreturns. Then the Bayes' risk for our approach is the same as that for the volatility parameter of the geometric Brownian motion. Given the sample, the DP prior probability structure allows us to adaptively update the estimate of our base measure. The hierarchy then induces a probability structure which models the underlying stock/asset price.
Therefore, to explicitly obtain the path of the asset price, we integrate the above over all possible partitions. The stick-breaking construction of [18] is then used to provide us with an induced map on
(3.5)  
(3.6) 
A desirable property of the DP prior is conjugacy, as mentioned in [9] and [18]. Let a Dirichlet process be defined on the parametric space with a given base parameter. Then the conjugacy property states that the posterior is again a Dirichlet process, with the base measure updated by point masses at the observations, where each point mass assigns probability 1 to the observed value. In other words, if we sample parameters corresponding to logreturns of the asset, the posterior map given the parameters is also a DP with suitably altered parameters. We now aim to derive the form of the distribution function of the induced probability map on the logreturns.
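The stick-breaking construction referenced above can be sketched as a truncated simulation. The base measure $N(0,1)$, the concentration parameter $\alpha = 2$, and the truncation level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, base_draw, n_atoms, rng):
    """Truncated stick-breaking draw G = sum_k w_k * delta_{theta_k} from DP(alpha, G0).

    base_draw(rng, n) samples n atoms from the base measure G0.
    """
    v = rng.beta(1.0, alpha, size=n_atoms)         # stick fractions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    theta = base_draw(rng, n_atoms)
    return w, theta

# Base measure G0 = N(0, 1), an illustrative choice.
w, theta = stick_breaking(alpha=2.0,
                          base_draw=lambda r, n: r.normal(0.0, 1.0, n),
                          n_atoms=500, rng=rng)
print(w.sum())   # close to 1: the truncated tail carries negligible mass
```

Each draw is an almost-surely discrete random probability measure; mixing it with a continuous kernel, as in (3.4), is what yields a continuous RPM on the logreturns.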
Theorem 3.1.
For a stochastic process , on the measurable space , with parameters, , if , is a probability measure assigned on , in particular if , then the distribution function induced on is,
where, is a realization of size from , , and
The last equality is obtained by noting that the integral commutes with the finite sum.
Note 3.2.
Typically, we know that the volatility is location invariant. In that context, clustering mainly affects the volatility parameter of the process; therefore, the clusters may be interpreted as volatility regimes. On the other hand, the existence of bull- and bear-market trends affects the location of the process. Altogether, a location-scale kernel mixture would therefore do justice to both of these facts in conjunction when modelling the increment process.
The development of the DP priors is fundamentally based on Polya urn processes and Chinese restaurant processes. In this paper we consider a collection of urns, and the data points as balls that are given at the start of the experiment. The urns can be thought of as a partition; theoretically the partition size can be infinite. We have a prior and a base measure, which correspond to the prior knowledge regarding the urn occupancy and the distribution of the occupied urns. We then perform the random experiment of throwing the balls. The concentration parameter serves as the tuning parameter controlling the concentration of balls in urns, while the base measure serves as the probability assigned to the urns/partition. With respect to the base measure we should have an idea regarding the furthest expected urn occupancy in our throw. Thus one throw produces a number of urns that are occupied; note that the number of balls is the maximum number of urns that can be occupied.
Lemma 3.1.
For a base measure, a tuning parameter, and the resulting set, for a given sample, the urn modelling occurs almost surely in a bounded number of urns across iterations.
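The urn scheme described above can be simulated directly. With $n$ balls and concentration $\alpha$, the expected number of occupied urns grows like $\alpha \log n$, far slower than $n$, which is the behaviour the lemma appeals to. The values of $\alpha$ and $n$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_partition(alpha, n, rng):
    """Sequential Polya-urn (Chinese restaurant) seating of n balls."""
    counts = []                                    # occupancy of each urn so far
    for i in range(n):
        # Ball i joins urn k with prob n_k/(i+alpha), or a new urn with prob alpha/(i+alpha).
        probs = np.array(counts + [alpha]) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                       # ball opens a new urn
        else:
            counts[k] += 1
    return counts

alpha, n = 1.0, 500
k_occ = [len(crp_partition(alpha, n, rng)) for _ in range(100)]
# E[K_n] = sum_{i<n} alpha/(alpha+i), roughly alpha*log(n): far fewer urns than balls.
print(np.mean(k_occ))
```

For $\alpha = 1$ and $n = 500$ the expectation is the harmonic number $H_{500} \approx 6.8$, so the partition concentrates on a handful of urns.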
3.2 Modelling Multiple Assets through Copula
Given a collection of assets, , a portfolio ([7]) is a
vector consisting of the appropriate weights. In this section we assume that we have a given portfolio, that is a set of appropriate weights. We consider the portfolio in terms of the associated logreturn for the concerned
assets. We also assume that an investor allocates a fixed sum according to the portfolio, and that the prices are observable over the chosen time horizon; the associated logreturn of each asset is what we aim to model in this section, using the DP priors. The correlations between the assets can be modelled using a multivariate probability distribution. In the previous Section (3.1), we presented the methodology to model the return of a single asset using a DP prior. In this section we use the copula technique: the marginal distributions are modelled using the DP prior, while the correlation structure is modelled using an elliptical copula, such as the multivariate Gaussian or $t$-copula. Elliptical copulas correspond to the class of elliptical distributions through Sklar's theorem. Let the multivariate CDF of an elliptical distribution be given, along with the marginal of each component and its corresponding inverse. Then, using Sklar's theorem, the elliptical copula is determined via
The uniqueness of the copula obtained using Sklar's theorem ([21]) relies on the assumption that the marginals all have continuous CDFs. This facilitates the application of the probability integral transform on the marginals to formulate the copula. In Section (3.1) we showed that the induced CDF is continuous. Consequently, modelling the assets using a DP prior and using the induced CDFs as marginals ensures the existence of a unique copula.
The copula is defined given a covariance matrix . For instance, the Gaussian Copula [25] has the following structure,
where the measure of association between the assets is denoted by the entries of the matrix. In this paper we consider an appropriate measure of concordance [17] to obtain these entries. An interesting property of the family of concordance measures [5] is consistency: for a sequence of continuous random variables with associated copulas, if the copulas converge (pointwise) then the measures of concordance also converge. In this paper we use Kendall's $\tau$ to model the association between the assets under consideration. Considering this with respect to the results in Section (3.1) we have the following lemma.
Lemma 3.2.
Let and be the associated copula. Then by uniqueness of the fitted copula we have,
then,
where denotes, almost surely.
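A small sketch of the concordance-based calibration described above: for an elliptical copula, Kendall's $\tau$ determines the copula correlation via $\rho = \sin(\pi\tau/2)$, and $\tau$ itself is invariant under the strictly increasing marginal transforms that DP-fitted marginals induce. The target $\tau$ and the stand-in marginal maps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def kendall_tau(x, y):
    """Sample Kendall's tau via all O(n^2) concordant/discordant pairs."""
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    n = len(x)
    return (dx * dy).sum() / (n * (n - 1))

# For an elliptical copula the concordance measure fixes the correlation:
tau_target = 0.5
rho = np.sin(np.pi * tau_target / 2.0)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=1000)

# Kendall's tau is invariant under strictly increasing marginal transforms,
# so plugging in DP-fitted marginal quantile functions leaves it unchanged.
x = np.exp(z[:, 0])          # stand-in for an arbitrary monotone marginal map
y = z[:, 1] ** 3
print(kendall_tau(x, y))     # approximately tau_target
```

This invariance is why the copula correlation matrix can be estimated from concordance before, and independently of, the DP fit of the marginals.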
4 Coherent Risk Measures
In this section we present coherent risk measures and consider their performance with reference to the logreturns of the marginal components of an asset portfolio modelled using DP priors. First, we present a discussion regarding the development of coherent risk measures, followed by a discussion of how the induced probability structure can be incorporated to evaluate portfolios.
Let the sample space be given, and let the map denote the loss or gain for an asset portfolio over the time horizon; this map is termed the risk associated with the portfolio. For instance,
(4.1) 
Then this quantity can be interpreted as the loss or gain (risk), in terms of return from the asset portfolio. In general, given a probability space and the set of all such risk maps, a risk measure is defined as a real-valued function on the risk maps.
Remark 4.1.
According to [4], a positive value assigned by the measure to the risk is the minimum amount of capital that must be added to the position, by investing at the risk-free rate, to surpass any level of risk. Conversely, a negative value implies that capital can be cashed out from the current position without any risk.
Commonly used measures of risk, such as the Value at Risk, suffer from a variety of deficiencies. These have been identified ([23], [3], [2]) in formulating a more robust class given by the "coherent" risk measures. [2] states that coherent measures of risk should satisfy four properties: (i) subadditivity, (ii) positive homogeneity, (iii) translation invariance and (iv) monotonicity. [23] changed how portfolio risk was quantified by establishing a generalized theory of coherent measures of risk using distortion functions. Distortion functions are always defined using a probability measure associated with the risk. [23] also established the Choquet integral expressions [8] for the commonly used measures of risk. A distortion function is defined with respect to a probability measure over a measure/probability space: it is an increasing concave function $g$ on $[0,1]$ with $g(0)=0$ and $g(1)=1$. Then,
The dual distortion function being,
The riskmeasure is defined through the Choquet integral,
(4.2)  
It is evident that the nature of the distortion function affects the "coherence" of the obtained risk measure. Also, if we assume that the risk is integrable, then the distorted risk measure is the expectation under the re-weighted probabilities. By using this construction we obtain risk measures that are coherent.
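The statement that a distorted risk measure is an expectation under re-weighted probabilities can be made concrete for a discrete loss sample. The loss values and the concave distortion below are illustrative, not taken from the paper.

```python
import numpy as np

def distorted_measure(losses, g):
    """Choquet integral of a discrete loss sample under distortion g.

    Sorting losses in decreasing order, the distorted measure is the
    expectation under re-weighted probabilities g(i/n) - g((i-1)/n).
    """
    x = np.sort(losses)[::-1]                      # largest loss first
    n = len(x)
    s = np.arange(n + 1) / n                       # survival levels 0, 1/n, ..., 1
    w = g(s[1:]) - g(s[:-1])                       # re-weighted probabilities
    return float((w * x).sum())

losses = np.array([5.0, 1.0, 0.0, -2.0, 3.0])     # illustrative losses
identity = lambda s: s
concave = lambda s: np.sqrt(s)                     # a concave distortion
print(distorted_measure(losses, identity))        # plain mean: 1.4
print(distorted_measure(losses, concave))         # larger: tail-weighted
```

With the identity distortion the Choquet integral reduces to the ordinary mean; a concave distortion shifts weight onto the largest losses, which is exactly the re-weighting described above.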
Remark 4.2.
The VaR is not a coherent risk measure. The class of coherent distortion risk measures can be further extended to formulate exhaustive distortion risk measures that are both coherent and complete. Completeness ([4]) of a distortion risk measure relates primarily to the ability of the distortion function to utilise information from the original loss distribution associated with the risk triplet.
Formally, if be the associated risk variable over , then is a complete distortion risk measure generated by if,
where . In conjunction to this definition, it is important to state two theorems from ([4]).
Theorem 4.1.
For a distorted probability defined by a distortion function, the risk measure is complete if and only if the distortion function is strictly increasing.
The proof follows immediately from the definition of completeness for distorted risk measures.
Theorem 4.2.
If a risk measure is obtained through a distortion function, then it is an exhaustive distortion risk measure if and only if the distortion function is concave and strictly increasing; equivalently, it is exhaustive if and only if the distortion function is concave and dominates the identity at every point of the unit interval.
The proof can be found in [4]. These theorems mainly establish that for a risk measure to be complete, the distortion function should effectively incorporate all the information in the loss distribution.
Remark 4.3.
It is clear that the formulation of a risk measure calls for exercising caution on two fronts: the selection of an appropriate loss distribution to model the risk function, and the choice of an appropriate distortion function to incorporate all of the information in the associated probability measure when assessing the risk of a position.
With reference to the methodology developed above for modelling the logreturn of an asset portfolio using DP priors, we now proceed to look at the performance of the aforesaid risk measures. The DP acts as a hyperprior, as it assigns a DP prior to the parameter space. At the beginning of this section we showed how the logreturn is a valid risk (measure of gain or loss in net worth) associated with a portfolio. In Section (3.1), we saw that the DP prior induces a probability measure on the logreturn. Let us consider a collection of such logreturns corresponding to the respective assets, each modelled using DP priors, with their covariance accounted for by an appropriate copula as shown in Section (3.2). Then we have an induced probability measure on the probability space through the DP prior. The associated risk is given by equation (4.1), on which the DP induces a probability structure; it is more appropriate to consider equation (4.1) conditionally. Here the DP is a multivariate dependent Dirichlet process with DP priors as marginals, together with the associated copula and an appropriate concordance measure.
Now we consider the commonly used measures of risk. It is important to note that, given a filtration up to a given iteration, the probability associated with the loss for a single asset is given by the following equation (the iteration count being the number of iterations until the mixing RPM estimate is obtained):
(4.3) 
where the quantities are Bayes' estimates obtained after successful convergence for the parameters of the DP prior.
4.1 Risk Measures
Value at Risk: VaR
For an appropriate risk, the Value at Risk is defined as,
It is obtained as a Choquet integral ([8], [4]) by setting,
Using (4.2), the resulting measure is simply the $\alpha\%$ quantile of the loss distribution associated with the risk. By assigning the DP prior on the parameter space, the induced probability distribution on the logreturn variable is given by Theorem (3.1). The Value at Risk for a single-asset portfolio becomes an $\alpha\%$ quantile of the logreturn distribution, and the problem of estimating VaR is then equivalent to estimating the $\alpha\%$ quantile. Assuming that the market assumptions for the model hold, we have the GBM model for the logreturn over an investment horizon:
(4.4) 
When comparing the models (4.3) and (4.4) in terms of estimating quantiles, we have a clear picture regarding the importance of the loss distribution. The $\alpha\%$ quantile for (4.4) is obtained by solving,
whereas, for (4.3) we solve the following equation for ,
Here the number of mixture components is finite. Since the sum is finite, we have a finite mixture of standard normal distribution functions, where $\Phi$ denotes the distribution function of the standard normal. However, there does not exist any closed-form expression for the solution of the above equation. It is evident that the estimates of VaR will be different in the two cases, the difference being a direct consequence of (4.2), which results in a significant change in the estimate. The VaR has been known to suffer from numerous deficiencies, the foremost being the lack of subadditivity. It is not a convex measure of risk, and therefore diversification in terms of assets does not provide room for optimization [10]. Despite these discrepancies, it is widely used as a risk measure due to its simplicity of interpretation.
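Since the mixture quantile equation has no closed form, the VaR must be found numerically. A bisection sketch under an assumed two-regime loss distribution (weights, means, and standard deviations are illustrative):

```python
from math import erf, sqrt

def mixture_cdf(x, w, mu, sd):
    """CDF of a finite normal mixture: sum_k w_k * Phi((x - mu_k)/sd_k)."""
    return sum(wk * 0.5 * (1.0 + erf((x - m) / (s * sqrt(2.0))))
               for wk, m, s in zip(w, mu, sd))

def mixture_var(alpha, w, mu, sd, lo=-10.0, hi=10.0, tol=1e-10):
    """alpha-quantile of the mixture by bisection: no closed form exists."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, w, mu, sd) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative two-regime parameters (not fitted values from the paper):
w, mu, sd = [0.9, 0.1], [0.0, -0.05], [0.01, 0.05]
q = mixture_var(0.01, w, mu, sd)
print(q)   # the 1% quantile sits far below a single-normal estimate
```

For these parameters the pooled single-normal 1% quantile is roughly -0.06, while the mixture quantile is near -0.11, illustrating how (4.3) and (4.4) yield materially different VaR estimates.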
The reason behind considering quantile estimation for a single asset is to elucidate the significance of the loss distribution. For modelling the risk of an asset portfolio, we consider univariate modelling of the assets using equation (4.3), with the covariance structure specified by fitting an appropriate copula. The distortion function for VaR remains constant over an interval, which results in VaR not being a complete risk measure; this follows from the equivalent condition stated in Theorem (4.1). Furthermore, the form of the distortion function over that interval does not make VaR suitable for being an exhaustive distortion risk measure either.
ESF/CVAR: Expected Shortfall/Conditional Value at Risk
This measure of risk was first introduced by [3]. For an appropriate risk, the Expected Shortfall, or CVaR, is defined as,
The associated distortion function is given by,
The measure depends on the VaR, and therefore significant changes are expected when considering variations in the distribution from (4.4) to (4.3). The discussion regarding the improvements in estimation of risk when using equation (4.3) as the associated loss distribution holds true for CVaR as well. In this case the distortion function is better in terms of information content from the loss distribution. Furthermore, it is easy to see that it is non-decreasing and concave in nature. Consequently, CVaR is a coherent risk measure.
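An empirical sketch of the VaR/CVaR relationship on simulated heavy-tailed returns; the mixture parameters are illustrative, not estimates from the data analysed later.

```python
import numpy as np

rng = np.random.default_rng(4)

def var_cvar(returns, alpha):
    """Empirical VaR and CVaR (expected shortfall) at level alpha.

    VaR is the alpha-quantile of the return distribution; CVaR averages
    the returns at or below that quantile, so it reflects tail severity.
    """
    q = np.quantile(returns, alpha)
    tail = returns[returns <= q]
    return q, tail.mean()

# Returns drawn from an illustrative heavy-tailed two-component mixture.
n = 100_000
turbulent = rng.random(n) < 0.1
r = np.where(turbulent, rng.normal(0.0, 0.05, n), rng.normal(0.0, 0.01, n))
v, c = var_cvar(r, alpha=0.01)
print(v, c)    # CVaR lies below VaR: it penalises severity, not just frequency
```

Because CVaR averages the whole tail beyond the VaR cut-off, changing the loss distribution from (4.4) to (4.3) changes it even more than it changes the quantile itself.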
WT: Wang’s Transform
Despite being coherent, CVaR suffers in terms of sensitivity towards the severity of loss in final net worth, that is, higher risk below the $\alpha\%$ point of the loss distribution. This serves as the major downside for CVaR; moreover, its distortion function being non-decreasing (but not strictly increasing over the whole interval) does not qualify the risk measure to be a complete one. The Wang Transform is a valid measure belonging to the class of complete risk measures. [24] draws heavily on the general principles established in [23] to suggest a distortion function concentrated on a symmetric parametric family, viz. the normal class of probability measures.
The advantage of considering a symmetric family is reflected in the dual distortion function. The suggested distortion is given by,
where $\lambda$ is the corresponding market price of risk. It follows from the definition that for $\lambda > 0$ the distortion is concave, and for $\lambda < 0$ it is convex. Therefore, with reference to (4.2), one can easily derive that the distorted risk measure corresponding to the Wang Transform with $\lambda > 0$ is complete and exhaustive. Coupled with the stated properties, using equation (4.3) as an alternative to (4.4) provides better estimates for the associated risk measure.
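Wang's distortion composes the standard normal CDF with a shift, $g(u) = \Phi(\Phi^{-1}(u) + \lambda)$; the stdlib `NormalDist` suffices to sketch it. The value $\lambda = 0.5$ is an illustrative market price of risk.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def wang_transform(u, lam):
    """Wang's distortion g(u) = Phi(Phi^{-1}(u) + lambda)."""
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return N.cdf(N.inv_cdf(u) + lam)

lam = 0.5   # illustrative market price of risk
# For lam > 0, g is strictly increasing and concave, and g(u) > u on (0, 1):
# it lifts tail probabilities, re-weighting adverse outcomes.
for u in (0.01, 0.05, 0.5, 0.95):
    print(u, wang_transform(u, lam))
```

Strict monotonicity gives completeness (Theorem 4.1), and concavity for $\lambda > 0$ gives exhaustiveness (Theorem 4.2), which is why the Wang Transform improves on both VaR and CVaR in this taxonomy.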
5 Computational Issues
In this section we consider computational aspects of applying the suggested approach to model the data. We use the blocked Gibbs sampler as the MCMC algorithm to update the cluster-specific parameters. The advantages of using blocked Gibbs sampling can be summarized in two factors. Firstly, instead of using just a scale prior we can now use location-scale families of DPs to model the data; the blocked Gibbs sampler is suited specifically for simultaneously updating multiple parameters. Secondly, we have a conjugate prior for
the blocked Gibbs in general. This makes the application more data-adaptive and generalized in nature. This section is divided into two parts. The first discussion is about the alterations proposed in the case of irregular clusters; it is preceded by a short digression explaining what regular clusters are with respect to the current theoretical setup. The second discussion mainly features an MCMC algorithm to implement the procedure. We make the following assumptions,
(5.1)  
(5.2) 
In light of the (5.2), a subtle yet serious issue is the formation of improper clusters. Broadly we are faced with the following cases: (i) , (ii) , and (iii)
. The second case shows the presence of an improper cluster, for which second-order moments lose interpretability.
Remark 5.1.
Let the number of points allocated to a cluster in a particular iteration be counted. If a cluster contains exactly one point, we say that for that iteration the cluster is irregularly occupied; such a cluster loses its usual interpretability in terms of moments.
Let us assume that, for a particular iteration, , are the unique values of . This characterizes the data where and . Then for , such that , we have the distribution function from (3.1),
If ,
(5.3) 
which clearly shows that the data will tend to cluster into groups characterized by the fitted parameters. Thus, according to [11], fitting such a prior on the parameter space should favour clustering in the financial logreturn data.
5.1 The Algorithm
Here we present the MCMC algorithm that is used to implement the approach presented in the previous sections. The algorithm is presented using the assumptions made in (5.1) and (5.2); the steps are as follows:
(i) Setting Hyperparameters

Select an appropriate ; consequently, and and initialize .

Set hyperprior values for .

Set the base measure as the Normal-Inverse conjugate prior for the normal kernel (5.1).
(ii) MCMC Posterior Updates

For and update the parameters for the multinomial sampling by,
Then draw a sample of size from . Assign to to formulate , with and .

Update the precision parameter using the conjugacy of the blocked Gibbs,

Calculate the cluster occupancy using,

For ,

, resample from .

, then the posterior update of prior .

If , then the posterior update of prior

, then the posterior update for prior is,
These are the posterior Gibbs updates, using the previous iteration's posterior as the prior for the next. Note that the initial value is just the starting value for the parameters.


Repeat this until stabilizes to obtain the mixing RPM.
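A minimal runnable sketch of the steps above: a truncated blocked Gibbs sampler for a DP mixture of unit-variance normal kernels under an assumed $N(0, \tau_0^2)$ base measure. This deliberately compresses the full hierarchy (it fixes the kernel variance and the precision parameter rather than updating them), so it is an illustration of steps (i)-(ii), not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def blocked_gibbs(y, K=20, alpha=1.0, tau0=3.0, iters=100, rng=rng):
    """Truncated blocked Gibbs for a DP mixture of N(theta_k, 1) kernels."""
    theta = rng.normal(0.0, tau0, K)               # cluster locations from G0
    w = np.full(K, 1.0 / K)                        # initial weights
    for _ in range(iters):
        # (1) multinomial allocation of observations to clusters
        logp = np.log(w) - 0.5 * (y[:, None] - theta[None, :]) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=row) for row in p])
        counts = np.bincount(z, minlength=K)
        # (2) conditional Beta updates of the stick-breaking weights
        tail = counts[::-1].cumsum()[::-1]         # tail[k] = sum_{l >= k} n_l
        v = rng.beta(1.0 + counts[:-1], alpha + tail[1:])
        w = np.append(v, 1.0) * np.append(1.0, np.cumprod(1.0 - v))
        # (3) conjugate normal update of each cluster location
        for k in range(K):
            prec = counts[k] + 1.0 / tau0 ** 2
            theta[k] = rng.normal(y[z == k].sum() / prec, 1.0 / np.sqrt(prec))
    return z, theta, w

# Two well-separated groups: only a few sticks should carry real weight.
y = np.concatenate((rng.normal(-4, 1, 150), rng.normal(4, 1, 150)))
z, theta, w = blocked_gibbs(y)
print(np.sum(w > 0.05))
```

Even with a truncation level of $K = 20$, the fitted weights concentrate on a handful of components, which is the data-adaptive cluster count discussed above.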
5.2 Cluster Regularization
In the previous section we saw that the estimate(s) of the RPM and related quantities are simply bootstrap estimates. We can estimate the augmented information when considered for a fixed iteration; for instance, we have an estimate of the expected cluster occupancy given the filtration. Thus, for a fixed iteration, if an irregular cluster is located we do not halt the MCMC procedure, since immediate inference from the posterior in terms of interpretability is not required; in a collective manner over all iterations the process remains regular. The interpretation makes sense when considering the possibility of a sample of size 1 from a cluster. In particular, this is the case for extreme observations or outliers. This approach allows us to make room for heavy tails in the RPM, indicated by sparsely occupied extreme clusters.
6 Application
In this section, we present the application of the proposed methodology to two different datasets. The section is broken into two parts. First, we consider univariate modelling of an asset using the DP prior; we present the estimates along with their respective confidence intervals for three different assets, namely IBM, Intel, and the NASDAQ index. Second, we consider multivariate modelling of a dataset consisting of a portfolio of IBM, Intel, and NASDAQ. Following this, a multivariate application is carried out on a much larger dataset consisting of an optimized portfolio over 51 assets from the National Stock Exchange of India (NSEI). Note that these 51 stocks make up the "Nifty 50" index for the Indian stock markets. For optimizing the portfolio, we use mean-variance optimization [14] to select a suitable portfolio, and we use an appropriately fitted elliptical copula to account for the correlation structure amongst the 51 stocks in the portfolio.
6.1 Risk and Return Analysis for Single Asset:
The logreturns of IBM, Intel, and the NASDAQ index are modelled using the DP prior, and Figure (1) shows the DP fit against Black-Scholes for the Intel, IBM and NASDAQ daily logreturns. The data are collected for these three assets for one year starting from 1 July, 2015; overall there are 246 days of logreturns for the three assets. Simple visual inspection is enough to conclude that the DP fit models the logreturn much better in comparison to Black-Scholes. Table (2) presents the comparative capabilities of the different methods for density estimation and return-path modelling of the three assets. A comparison with the default DPdensity and Polya tree priors from the DPpackage in R [1] is shown in Figure (5). We also calculate the Highest Posterior Density (HPD) intervals in Table (4). It can be seen that the DP prior results in posterior intervals of shortest length with a probability of 1 of containing the mean return and volatility estimate for the underlying asset. In the comparative fits shown in Figure (5), we see that the tendency to detect modes and changes in tail behaviour is increased in the mDP prior. The default DPdensity fails to identify modes completely, while the PTdensity shows modes in the central interval while remaining neutral to changes in the tail behaviour.
Remark 6.1.
Considering the kernel density estimate as a benchmark, we compare the performances of the Black-Scholes and DP simulated returns based on mean square deviations; the corresponding table (in Figures and Tables) presents a comparative study for the same. Table (3) compares the estimates of the various risk measures discussed in Section 4, obtained under the empirical method of modelling logreturns using the kernel density estimate and under the multivariate methodology presented.
6.2 Risk and Return Analysis for Multiple Asset:
Here we present the application of the multivariate modelling of logreturns using the method discussed in Section (3.2). We conducted two exercises: first we apply the methodology to the three assets considered in (6.1), and then we apply the same to the dataset of 51 stocks from the Indian stock market. We estimate the covariance matrix for the copula using the approach of [6, 22]. After modelling the three assets using a copula, we simulate the individual returns with respect to the modelled correlation structure. The figures in (2) show the scatter plots of the observed and fitted daily logreturns for the three assets. The performance of the t-copula in modelling the observed correlation structure is compared in Figure (3). Visual inspection reveals that the observed correlation structure is preserved in the simulated logreturns.
Next, while modelling the 51 assets, we compare the performance of the methodology over the first two principal components of the observed and simulated returns. Figure (4) shows the scatter plots of the first two principal components in the observed and simulated data respectively. The returns were simulated from a t-copula with 10 degrees of freedom, where the marginals were modelled using the DP prior approach discussed above. Visual inspection indicates that the proposed methodology is able to model the variation in the 51 stocks along the first two major directions of the correlation structure in the data.
7 Conclusion
In this paper, we presented the Dirichlet Process (DP) prior for modelling the logreturns of single assets. In comparison to the approach of [26] we have proposed the following alterations. First, we assign a DP prior over the parameter space, which helps us avoid inducing a discrete RPM over the logreturn; note that [26] induces a discrete RPM over the logreturn almost surely. Consequently, we face the problem of quantile estimation for the logreturns, which was avoided by [26]. To deal with this problem, we develop the necessary results that ensure the fitting of an RPM which is almost surely a continuous finite mixture over the logreturn, all the while preserving the hierarchy. The proposed results rely heavily on the urn-scheme approach to interpreting the DP. For a given set of observations over a fixed time horizon, theoretically assigning a DP involves infinite mixture modelling. We used the conjugate structure provided by the blocked Gibbs sampler to augment stochastic processes unique to each question. This augmentation technique helps us avoid reversible-jump MCMC.
We extend this approach to introduce a multivariate distribution that models the return on multiple assets via a copula, with the marginals modelled using the DP prior. This allows us to keep the existing nonparametric univariate approach unchanged even in multivariate applications. The application of this methodology comprises fitting RPMs over univariate series, and over collections of univariate series joined by a copula, in two different datasets. We compare different risk measures, such as Value at Risk (VaR) and Conditional VaR (CVaR), on both datasets.
References
 [1] Alejandro Jara, Timothy Hanson, Fernando Quintana, Peter Mueller, and Gary Rosner. DPpackage: Bayesian semi- and nonparametric modeling in R. Journal of Statistical Software, 40(5):1–30, 2011.
 [2] P. Artzner, F. Delbaen, J. M. Eber, and D. Heath. Thinking coherently. Risk, 10:68–71, 1997.
 [3] P. Artzner, F. Delbaen, J. M. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9(3):203–228, 1999.
 [4] A. Balbas, J. Garrido, and S. Mayoral. Properties of distortion risk measure. Methodology and Computing in Applied Probability, 11(3):385–403, 2009.
 [5] U. Cherubini, E. Luciano, and W. Vecchiato. Copula Methods in Finance. Wiley Finance Series., 1 edition, 2006.

 [6] Sourish Das and Dipak K. Dey. On Bayesian inference for generalized multivariate gamma distribution. Statistics and Probability Letters, 80:1492–1499, 2010.
 [7] F. Delbaen and W. Schachermayer. The Mathematics of Arbitrage. Springer Finance, 3rd edition, 2006.
 [8] D. Denneberg. Non-Additive Measure and Integral. Springer–Netherlands, Theory and Decision Library 27, 1st edition, 1994.
 [9] T. S. Ferguson. A bayesian analysis of some nonparametric problems. The Annals of Statistics, 1:209–230, 1973.
 [10] H. Follmer and A. Shield. Convex measures of risk and trading constraints. Finance Stoch., 6(4):429–447, 2002.
 [11] A. Gelman, John B. Carlin, Hal S. Stern, David B. Dunson, A. Vehtari, and Donald B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 3rd edition, 2013.
 [12] S. Ghoshal, Jayanta. K. Ghosh, and R. V. Ramamoorthi. Posterior consistency of dirichlet mixtures in density estimation. The Annals of Statistics, 27:143–158, 1999.
 [13] A. Habib. Calculus of Finance. University Press, 2011.
 [14] Harry M. Markowitz. Portfolio selection. Journal of Finance, 7(1):77–91, 1952.

 [15] Alexander J. McNeil and Rüdiger Frey. Estimation of tail-related risk measures for heteroscedastic financial time series: An extreme value approach. Journal of Empirical Finance, 7:271–300, 2000.
 [16] S. T. Rachev. Handbook of Heavy Tailed Distributions in Finance, volume 1. Elsevier–North Holland, 2003.
 [17] M. Scarsini. On measures of concordance. Stochastica, 8, 1984.
 [18] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
 [19] Steven E. Shreve. Stochastic Calculus for Finance I. Springer, 1 edition, 2004.
 [20] Steven E. Shreve. Stochastic Calculus for Finance II. Springer, 1 edition, 2004.
 [21] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. de l’Institut de Statistique de L’Université de Paris., 8:229–231, 1959.
 [22] Das. Sourish, Halder. Aritra, and K. Dey Dipak. Regularizing portfolio risk analysis: A bayesian approach. Methodology and Computing in Applied Probability, 19:865–889, 2017.
 [23] S. Wang. Premium calculation by transforming the layer premium density. ASTIN Bulletin, 26(1):71–92, 1996.
 [24] Shaun S. Wang. A class of distortion operators for financial and insurance risks. Journal of Risk and Insurance, 67:15–36, 2000.
 [25] J. Yan. Enjoy the joy of copulas: With a package copula. Journal of Statistical Software, 21(4), 2007.
 [26] M. Zarepour, T. Bedard, and A. R. Dabrowski. Return and value at risk using the Dirichlet process. Applied Mathematical Finance, 15:205–218, 2008.
Appendix: Proof
Proof of Theorem 2.1