Modeling and Decoupling Systemic Risk

07/21/2021
by Jingyu Ji, et al.

Identifying systemic risk patterns in geopolitical, economic, financial, environmental, transportation, and epidemiological systems, and their impacts, is key to risk management. This paper proposes a new nonlinear time series model, the autoregressive conditional accelerated Fréchet (AcAF) model, and introduces two new endopathic and exopathic competing risk measures for better learning of risk patterns, decoupling of systemic risk, and improved risk management. The paper establishes the probabilistic properties of stationarity and ergodicity of the AcAF model. Simulation studies demonstrate the efficiency of the proposed estimators and the AcAF model's flexibility in modeling heterogeneous data. Empirical studies on stock returns in the S&P 500 and on cryptocurrency trading show the superior performance of the proposed model in terms of the identified risk patterns (endopathic and exopathic competing risks), which are informative and more interpretable, enhance the understanding of the systemic risks of a market and their causes, and make better risk management possible.

1 Introduction

Systemic risk refers to the risk of collapse of an entire complex system due to the actions taken by the individual component entities or agents that comprise the system. Systemic risk may occur in almost every area, for example, financial crises, floods, forest fires, earthquakes, market crashes, economic crises, and global disease pandemics (like COVID-19), among many others (see [29, 30]). Typically, a system contains a number of risk sources, and once one of them collapses first, the whole system is affected immediately, i.e., the risk sources are competing. When a disaster event (systemic risk) occurs, it may not be known what caused the event, i.e., which risk source was responsible. In such a scenario, it is important to decompose systemic risk into competing risks for learning risk patterns and for better risk management.

Internal risk refers to the risk from shocks that are generated and amplified within the system. It stands in contrast to external risk, which relates to shocks that arrive from outside the system. Many systems (e.g., social, political, geopolitical, economic, financial, market, regional, global, environmental, transportation, epidemiological, material, chemical, and physical systems) are subject to both types of risk. For instance, the cargo ship MV Ever Given, stuck in the Suez Canal on March 26, 2021, faced two major sources of risk: its internal operational errors corresponded to internal risk, while strong winds and weather factors contributed to external risk. For more examples of risk decoupling, we refer the readers to [6].

The occurrence of systemic risk is strongly correlated with extreme events. Modeling systemic risk through modeling extreme events is one of the essential topics in risk management. Many extreme events in history have been associated with systemic risk. Over the past two decades, extreme financial events have repeatedly shown their dramatic and adverse effects on the global economy, including the Asian financial crisis in 1998, the subprime mortgage crisis in the United States in 2008, the European sovereign debt crisis in 2013, and the “crash” of the Chinese stock market in 2015. Failure to recognize the probability of such extreme events leaves regulators and practitioners without effective methods to deal with and prevent financial crises. As such, measuring and monitoring the risk of extreme financial events is essential in financial risk management.

Extreme value theory (EVT) has been a powerful tool in risk analysis and is widely applied to model extreme events in finance, insurance, health, climate, and environmental studies (e.g., [9], [22], [24], [8]). In finance, extreme events often appear dynamically and in clusters. In the literature, [25], [1], [5], [16], [21], [31], [20], [17], and [15] investigate the overall dynamical tail risk structures. In financial applications, [4] offer an extreme value theory-based statistical approach for modeling operational risk losses by taking into account the dependence of the parameters on covariates and time; [27] study multivariate maxima of moving maxima (M4) processes and apply the methodology to model jumps in returns; [7] use extreme expectiles to measure Value-at-Risk (VaR) and marginal expected shortfall; [12] studies volatility clustering behavior, which implies that the behavior and structure of extreme events may also change over time.

In the era of Big Data, data may come from multiple sources, and the data from each source has its own generating process, i.e., its own probability distribution. The models mentioned above for overall tail risk cannot accurately capture the sources of tail risk. To model extreme values observed from different data sources, there exist some recent studies, e.g., [13], [23], [26], [19], [28] and [14]. However, these models do not provide insights into risk sources, i.e., they do not differentiate between competing risks. Most recently, the accelerated max-stable distribution has been proposed by [3] to fit the extreme values of data generated from a mixture process (i.e., from different sources), whose mixture patterns vary with time or sample size. The accelerated max-stable distributions form a new family of extreme value distributions for modeling maxima of maxima. They provide new probability foundations and statistical tools for modeling competing risks, e.g., the endopathic and exopathic competing risks in this paper. The introduction of endopathic and exopathic competing risks is motivated by the widely used endogenous and exogenous variables in economic modeling. However, endopathic and exopathic competing risks in our model settings are tail-index implied; they show clear paths when clustered disaster events occur, and their interpretations as internal risks and external risks, respectively, are meaningful both quantitatively and qualitatively in a time series context.
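As a point of reference for the competing-risk reading of maxima of maxima, the display below records the underlying probability fact (a sketch only; the exact accelerated max-stable parametrization of [3] is given in that reference, and the scales and tail indices here are illustrative):

```latex
% Two independent source maxima M_1, M_2 with Fréchet-type c.d.f.s
% (illustrative scales s_1, s_2 > 0 and tail indices a_1, a_2 > 0):
\[
  P\bigl(\max(M_1, M_2) \le x\bigr)
  = P(M_1 \le x)\, P(M_2 \le x)
  = \exp\!\bigl(-s_1 x^{-a_1} - s_2 x^{-a_2}\bigr), \qquad x > 0 .
\]
% When a_1 \neq a_2 this is no longer a single Fréchet c.d.f.: the component
% with the smaller tail index (heavier tail) governs the far tail, while the
% other component can dominate the bulk -- the two sources compete.
```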

This paper develops an endopathic and exopathic dynamic competing risks model that provides a new tool for more informative and rigorous tail risk analysis. The advantage of our model is that it decouples systemic risk into endopathic and exopathic competing risks and measures them. Such a decomposition methodology is new to the literature. Our model does not distinguish the data sources a priori; instead, it refines the data’s information, characterizes the dynamic tail risk behavior of extreme events through estimated parameter dynamics, and explicitly distinguishes the risk sources, i.e., endopathic risk and exopathic risk. The implementation uses autoregressive conditional accelerated Fréchet (AcAF) distributions to model systemic risks from different sources dynamically. The AcAF model can be applied to financial markets and many other areas where endopathic risk and exopathic risk are intertwined.

This paper makes the following contributions to the growing literature on tail risk measurement in financial markets and to the literature in probability theory, time series, and many applied sciences. First and foremost, we propose a new decoupling risk framework to handle systemic risk. We decouple systemic risk into endopathic risk and exopathic risk, which, to the best of our knowledge, is the first such decomposition in the field. The AcAF model has two unique features: 1) although we do not know which data sources the observations come from, the risks from different sources can be reconstructed through the estimated results; 2) the reconstructed parameter dynamics accurately capture the behavior of different risks. Second, the empirical analysis shows our model’s superior performance in two financial markets: the U.S. stock market and the Bitcoin trading market. For the U.S. stock market, we find that exopathic risks are more volatile than endopathic risks. Under normal market conditions, endopathic risks dominate stock market price fluctuations, while under turbulent market conditions, exopathic risks dominate. For the Bitcoin trading market, endopathic risks are more volatile than exopathic risks. Exopathic risks dominate cryptocurrency market price fluctuations under normal market conditions, while under turbulent market conditions, endopathic risks dominate. The apparently opposite phenomena in these two markets are consistent with the actual market structures. Third, our technical proofs are non-trivial and do not follow directly from the existing literature; they can be applied to other scenarios involving tail processes and parameter dynamics.

The rest of the paper is organized as follows. In Section 2, we introduce the AcAF model and investigate its probabilistic properties. In Section 3, we construct the conditional maximum likelihood estimators (cMLE) and provide a theory for the estimators’ consistency and asymptotic normality. Simulation studies are presented in Section 4, which evaluate the performance of the cMLE and the AcAF model’s superior performance relative to existing dynamic generalized extreme value (GEV) models for heterogeneous data. In Section 5, we apply our model to three time series of maxima of maxima of negative log-returns in the stock market and the Bitcoin market: one on the cross-sectional maximum losses (i.e., negative log-returns) of stocks in the S&P 500, one on the intra-day maximum losses of high-frequency trading of GE stock, and one on the intra-day maximum losses of high-frequency Bitcoin trading. Section 6 gives concluding remarks and discussions. The real data results show that our model has a strong ability to portray the endopathic and exopathic risks of a market and to capture the market’s dynamic endopathic and exopathic structure. All technical details are given in the Appendix.

2 Autoregressive conditional accelerated Fréchet model

2.1 Background and motivation

In the era of Big Data, data generated from multiple sources are commonplace. For instance, trading behavior in a market can differ from time to time, e.g., between the morning and the afternoon, and trading behavior in two different markets can differ at the same time. The recorded maximal signal strengths in a brain region can be dynamic, and their source origins can differ from time to time. The maximal precipitations/snowfalls/temperatures in a large area can be dynamic, and their exact locations can differ from time to time. In these examples, the available data are often in a summarized format, e.g., mean, median, low, high, i.e., not all details are given. As a result, the observed extreme values at a given time often come from different latent data sources with different populations. Certainly, the maxima resulting from each individual source have their own data generating processes, i.e., their limiting extreme value distributions are unique. As such, the classical extreme value theory cannot be directly applied to model maxima drawn from different populations mixed together. The new EVT for maxima of maxima introduced by [3] provides the probabilistic foundation of accelerated max-stable distributions for studying extreme values of cross-sectional heterogeneous data. We will perform statistical modeling of extreme time series on the basis of this new EVT framework.
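As a toy numerical illustration of maxima of maxima from heterogeneous latent sources (a sketch only; the group sizes and tail indices below are arbitrary and are not the paper's data-generating process):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000            # number of "days" (blocks)
n1, n2 = 400, 100   # sizes of the two latent groups (arbitrary)
alpha1, alpha2 = 4.0, 2.0   # tail indices: group 2 is heavier-tailed

observed_max = np.empty(T)
source = np.empty(T, dtype=int)   # which latent group attained the maximum

for t in range(T):
    # Pareto(alpha) samples: heavy-tailed, in the Fréchet domain of attraction
    x1 = rng.pareto(alpha1, n1) + 1.0
    x2 = rng.pareto(alpha2, n2) + 1.0
    m1, m2 = x1.max(), x2.max()
    observed_max[t] = max(m1, m2)
    source[t] = 1 if m1 >= m2 else 2

# Only the overall maximum is observed; the dominant latent source
# varies from block to block, so the observed series mixes both sources.
print("share of blocks dominated by the heavier-tailed group:",
      np.mean(source == 2))
```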

The autoregressive conditional Fréchet (AcF) model in [31] portrays time series of maxima well. Nevertheless, it does not directly model heterogeneous data driven by two different risk factors, i.e., endopathic risk dynamics and exopathic risk dynamics. To further advance the new EVT of maxima of maxima and the AcF model, we propose the AcAF model to characterize different sources of tail risks in the financial market, under which a conditional evolution scheme is designed for the parameters of the accelerated Fréchet distribution, so that time dependency and different risk sources of maxima of maxima can be captured.

2.2 Model specification

Suppose , are latent processes, where each is again the maximum of many time series at time . Following [31] and [20], we assume

(1)

where , and are the location, scale, and shape parameters, with being a unit Fréchet random variable whose distribution function is exp(-1/x), x > 0. Specifically, we consider two latent processes and to represent maximum negative log-returns across a group of stocks or of a particular stock’s high-frequency trading, whose price changes are driven by normal trading behavior and by external information (e.g., sentiments), respectively. For example, under normal trading behavior the price changes of a particular stock can be larger around the market opening and closing times, while price changes driven by external information can deviate from normal trading patterns, i.e., they can occur at any time. The resulting maximum negative log-returns across that group of stocks or of that particular stock’s high-frequency trading can be expressed as , where each , is a set of time series whose price changes are due to the corresponding price-change driving factors, respectively.

Note that and are the numbers of transactions; they can differ from each other and can vary from time to time, and the corresponding causes of the price changes (negative log-returns) of and cannot be fully determined, i.e., and are unobservable latent processes. Here and do not correspond to the price changes during the market opening and closing times, which were only used as a motivating example. They should be understood as coexisting at all times, with the dominant one observed at any given time.

For model parsimony, we assume , , and follow the literature in assuming to be a constant, focusing on the dynamics of , and , which are the pivotal parameters for modeling systemic risk and identifying risk sources. For the rest of the paper, we consider the following model:

(2)
(3)
(4)
(5)

where and are sequences of independent and identically distributed (i.i.d.) unit Fréchet random variables. and can be considered as the normal trading driving factor and the external information driving factor, respectively, as mentioned earlier; they compete against each other. The distribution of in equation (2) is called the accelerated Fréchet distribution by [3]. In addition, , and are assumed so that the model is stationary and certain technical requirements hold.
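To fix ideas, the following sketch simulates a two-tail-index dynamic Fréchet series in the spirit of this description. Since the displayed equations (2)-(5) are not reproduced in this extraction, the observation equation (the larger of two unit-Fréchet components with a shared scale) and the GARCH-style recursions driven by exp(-Q_{t-1}) are illustrative assumptions modeled on the AcF dynamics of [31], not the paper's exact specification; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_acaf_sketch(T=2000, mu=0.0,
                         b=(0.01, 0.7, 0.2),   # (b0, b1, b2): log-scale recursion
                         g1=(0.5, 0.6, 0.8),   # (g10, g11, g12): first tail index
                         g2=(0.3, 0.5, 1.2)):  # (g20, g21, g22): second tail index
    """Hypothetical two-tail-index dynamic Fréchet simulator (NOT the paper's
    exact equations (2)-(5)).  The recursions are chosen so that an extreme
    observation increases the scale and decreases the tail indices."""
    Q = np.empty(T)
    alpha1 = np.empty(T)
    alpha2 = np.empty(T)
    log_sigma, a1, a2 = 0.0, 2.0, 2.0          # arbitrary initial values
    for t in range(T):
        sigma = np.exp(log_sigma)
        # unit Fréchet draws via Y = 1/E with E ~ Exponential(1)
        y1, y2 = 1.0 / rng.exponential(size=2)
        q = mu + sigma * max(y1 ** (1.0 / a1), y2 ** (1.0 / a2))
        Q[t], alpha1[t], alpha2[t] = q, a1, a2
        shock = np.exp(-q)                     # close to 0 after an extreme event
        log_sigma = b[0] + b[1] * log_sigma + b[2] * (1.0 - shock)
        a1 = g1[0] + g1[1] * a1 + g1[2] * shock
        a2 = g2[0] + g2[1] * a2 + g2[2] * shock
    return Q, alpha1, alpha2

Q, alpha1, alpha2 = simulate_acaf_sketch()
print("largest simulated maxima:", np.sort(Q)[-3:])
```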

Remark 1.

Note that although , , are all assumed to be greater than zero, they can be set to zero. If any of them is set to zero, all theories and estimation methods developed here can be easily adjusted, because the corresponding dynamic equations become constants. For example, assuming , the dynamic equation (4) results in a stationary solution of , where . This paper does not separately develop additional theoretical results for any of , , and being zero, as we will have a simplified model with the corresponding as a constant in Section 5.2.

Note that and share the same dynamic structure in (4) and (5). The following remarks resolve the identifiability problem and give a unique solution in the estimation.

Remark 2.

When one of and is zero, we set as a constant equal to for model identifiability, where or , depending on which of and is zero. When both and are zero, we set to correspond to the smaller one. When both and are greater than zero, for model identifiability and under the stationarity and ergodicity of the process, we assume , which will be determined by the sample variances in real data applications.

Remark 3.

When both and (or and ) are zero, we set as a constant equal to for model identifiability, where or , depending on which group ( and , or and ) is zero. As in Remark 1, we have a simplified model with the corresponding as a constant in Section 5.2.

We now introduce our proposed endopathic risk and exopathic risk.

Definition 2.1.

When one of and is zero, we refer to as the tail-index-implied endopathic risk (for simplicity, the endopathic risk) and to as the tail-index-implied exopathic risk (for simplicity, the exopathic risk). When both and are zero, we define as the endopathic risk, while the exopathic risk is not defined. When both and are greater than zero, we refer to as the endopathic risk and to as the exopathic risk. When and (or and ) are zero, we define as the endopathic risk and as the exopathic risk.

The following two remarks justify the definitions of these two new risk measures, the endopathic risk and the exopathic risk.

Remark 4.

When is a constant over time, it means that the corresponding tail index information is inherently built in and does not vary with time (even during clustered extreme events, such as a financial crisis due to external risks), i.e., it corresponds to internal risk, or endopathic risk.

Remark 5.

For both and greater than zero, we define endopathic risks and exopathic risks using the idea behind endogenous and exogenous variables in economic modeling. It can be shown that, under a correctly specified model (e.g., a regression model), the variance of the error term in a model with all exogenous variables included is the smallest compared to the variances of the error terms in incorrectly specified models (e.g., with missing covariates). By assuming , we define to be the exopathic risks and to be the endopathic risks. Note that endopathic risks and exopathic risks are defined for time series with clustered extreme events, and their interpretations are very different from those of endogenous and exogenous variables.

In the real data section, we use three examples to empirically justify the validity of these definitions.

Remark 6.

In the model (2)-(5), we set , . A natural question is why not take with . Of course, taking can be done theoretically, and the probabilistic properties of the model still hold. However, it would increase the complexity of statistical inference and reduce estimation efficiency, e.g., in the optimization problems. In the economic modeling literature, risks are often decoupled into two main risks, i.e., endogenous (internal) and exogenous (external), for easy interpretability. Certainly, internal (external) risks can be further decoupled into more specific risks, which can be challenging. Following the economic literature, we set in this paper.

We note that the autoregressive structures used in , and can be traced back to the GARCH model in [2], the autoregressive conditional density model in [11], and the autoregressive conditional duration model in [10]. The clustering of extreme events in time is a significant feature of extreme value series in many applications, especially financial time series. Empirical evidence has shown that extreme observations tend to occur around the same period in many applications. Translating this phenomenon into our model, an extreme event observed at time causes the distribution of to have a larger scale (large ) and a heavier tail (small tail index), resulting in a larger tail risk of . Here, a smaller tail index implies a larger tail risk. In Section 2.4, we present a class of factor models and show that the limiting distribution of the maxima of maxima of the response variables is of the accelerated Fréchet type. We next prove the stationarity and ergodicity of the AcAF model.

2.3 Stationarity and ergodicity

The evolution schemes (3)-(5) can be written as

(6)
(7)
(8)

Hence forms a homogeneous Markov chain in . The following theorem provides a sufficient condition under which the process is stationary and ergodic.

Theorem 2.2.

For the AcAF model with , , and , the latent process is stationary and geometrically ergodic.

The proof of Theorem 2.2 can be found in the Appendix. Since is a coupled process of through (2), is also stationary and ergodic.

2.4 AcAF model under a factor model setting

In this section, we show that the limiting distribution of maxima under a factor model framework coincides with the distribution of an AcAF model. We assume both and follow general factor models,

where and are two latent time series at time , consists of observed and unobserved factors, and are two i.i.d. random noises that are independent of each other and independent of the factors , and and are the conditional volatilities of and , respectively. The functions are Borel functions. When there is no risk of confusion, we use and to denote and , respectively.

One fundamental characteristic of many financial time series is that they are often heavy-tailed. To incorporate this observation, we make the common assumption that the random noises and are i.i.d. random variables in the Domain of Attraction of the Fréchet distribution ([18]). Hereafter, for two positive functions and , means as . Specifically, we adopt the following definition.

Definition 2.3 (Domain of Attraction of Fréchet distribution).

A random variable is in the Domain of Attraction of the Fréchet distribution with tail index if and only if and , , where is the cumulative distribution function (c.d.f.) of , is a slowly varying function, and .

The Domain of Attraction of the Fréchet distribution includes a broad class of distributions, such as the Cauchy, Burr, Pareto, and distributions. To facilitate the algebraic derivation, we further assume that, for the slowly varying functions corresponding to and respectively, and as , where are two positive constants. This is a rather weak assumption, satisfied by all the aforementioned distributions. Since and can be incorporated into each , without loss of generality we set both and equal to 1 in the following. Under a dynamic model, we assume that the conditional tail indices and of and , respectively, evolve through time according to certain dynamics (e.g., (4) and (5)) and , .
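To make the domain-of-attraction condition concrete, the short sketch below draws Pareto samples (which satisfy the regular-variation condition with a known tail index) and recovers the tail index with the standard Hill estimator; this is a generic illustration, not part of the paper's methodology, and the sample size, true index, and number of order statistics are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha_true = 3.0                            # true tail index
x = rng.pareto(alpha_true, 50_000) + 1.0    # Pareto(alpha): Fréchet domain of attraction

# Hill estimator based on the k largest order statistics
k = 500
order = np.sort(x)[::-1]                    # descending order statistics
hill = 1.0 / np.mean(np.log(order[:k]) - np.log(order[k]))

print(f"true tail index: {alpha_true}, Hill estimate (k={k}): {hill:.2f}")
```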

We also assume that

and that

Note that the supremum here is taken over or , with the number of latent factors fixed. This is a mild assumption, and it covers all commonly encountered factor models. For example, if the factor model takes a linear form, , a sufficient condition for the assumption to hold is , where . We further assume that there exist positive constants and such that for any , and .

Based on Proposition 1 in [31], given , we have, as , , where , , , and denote the distributions of Fréchet-type random variables with tail indices and , respectively.
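A quick Monte Carlo check of the Fréchet limit behind this statement (a standard EVT fact, shown here for i.i.d. Pareto noise; the tail index, block length, number of replications, and the normalization by n^{1/alpha} are illustrative choices, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(3)

alpha, n, reps = 2.5, 1_000, 5_000
# normalized block maxima of i.i.d. Pareto(alpha) samples
samples = rng.pareto(alpha, size=(reps, n)) + 1.0
norm_max = samples.max(axis=1) / n ** (1.0 / alpha)

# compare the empirical c.d.f. with the Fréchet(alpha) limit exp(-x^(-alpha))
for x in (0.5, 1.0, 2.0, 4.0):
    empirical = np.mean(norm_max <= x)
    limit = np.exp(-x ** (-alpha))
    print(f"x={x:>4}: empirical {empirical:.3f}  vs  Fréchet limit {limit:.3f}")
```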

Recall that . The form of the limiting distribution of requires some discussion of the relative sizes of the two tail indices and the orders of and .

Proposition 2.4.

Given , under the assumptions in this section, the limiting distribution of as can be determined in the following cases:

Case 1.

.

  • If , then and

  • If and , then

Case 2.

.

  • If and , then

  • If , then and

  • If , then and

Case 3.

.

  • If , then and

  • If and , then

The proof of Proposition 2.4 can be found in the Appendix.

Under a particular setup, we assume and denote as the underlying return values of the th stock, (or ) as the unobserved value of the th stock when the endopathic shock is stronger (weaker) than the exopathic shock. Under this setting, we can rewrite the observed time series as

Corollary 2.5 gives the general asymptotic conditional distribution of maxima when goes to infinity.

Corollary 2.5.

Denote , and for . Given , the limiting distribution of as can be determined in the following cases:

  • If , then , and

  • If , then , and

  • If , then , and

Both Proposition 2.4 and Corollary 2.5 show that, under the framework of the general factor model and some mild conditions, the conditional distribution of the maxima can be well approximated by an accelerated Fréchet distribution. In terms of the stochastic representation, the observed maximum value can be rewritten as , where and are two independent unit Fréchet random variables and depends on the sizes of and . More specifically, if , then ; if , then ; if , then . To be more flexible and accurate in finite samples, a location parameter can be included. That is,

where are time-varying parameters. Setting for parsimonious modeling, we obtain the dynamic structure of specified in (2).
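For completeness, the display below records what such a stochastic representation implies under an assumed parsimonious parametrization (shared scale, common location, independent unit-Fréchet components); since the paper's displayed formulas are not reproduced in this extraction, this is a hedged reconstruction rather than the exact form in (2).

```latex
% Assumed stochastic representation (shared scale \sigma_t, common location \mu):
\[
  Q_t = \mu + \sigma_t \max\!\bigl(Y_{1,t}^{1/\alpha_{1,t}},\; Y_{2,t}^{1/\alpha_{2,t}}\bigr),
  \qquad Y_{1,t},\,Y_{2,t}\ \text{independent unit Fr\'echet}.
\]
% Since P(Y^{1/\alpha} \le y) = \exp(-y^{-\alpha}) and the two components are
% independent, the implied conditional c.d.f. is a two-factor (accelerated) form:
\[
  P\bigl(Q_t \le x \mid \mathcal{F}_{t-1}\bigr)
  = \exp\!\Bigl\{-\Bigl(\tfrac{x-\mu}{\sigma_t}\Bigr)^{-\alpha_{1,t}}
                 -\Bigl(\tfrac{x-\mu}{\sigma_t}\Bigr)^{-\alpha_{2,t}}\Bigr\},
  \qquad x > \mu .
\]
```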

3 Parameter estimation

We denote all the parameters in the model by and the parameter space by . In the following, we assume that all allowable parameters are in and that the true parameter is .

The conditional probability density function (p.d.f.) of given is

(9)

By conditional independence, the log-likelihood function with observations is

(10)

where can be obtained recursively through (3)-(5), with an initial value .
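The sketch below shows how such a conditional log-likelihood is evaluated by filtering the latent parameters forward from an initial value. It uses the same assumed shared-scale, two-tail-index parametrization as the earlier simulation sketch, so it is an illustrative stand-in for (9)-(10), not the paper's exact density or recursions.

```python
import numpy as np

def neg_log_lik(theta, Q, mu=0.0):
    """Negative conditional log-likelihood for the hypothetical two-tail-index
    dynamic Fréchet model sketched earlier (assumed form, not equations (2)-(5)).

    theta = (b0, b1, b2, g10, g11, g12, g20, g21, g22)
    """
    b0, b1, b2, g10, g11, g12, g20, g21, g22 = theta
    log_sigma, a1, a2 = 0.0, 2.0, 2.0           # arbitrary initial values
    nll = 0.0
    for q in Q:
        sigma = np.exp(log_sigma)
        z = (q - mu) / sigma
        if z <= 0 or a1 <= 0 or a2 <= 0:
            return np.inf                       # outside the support / parameter space
        # conditional density: f(q) = exp(-z^{-a1} - z^{-a2})
        #                             * (a1*z^{-a1-1} + a2*z^{-a2-1}) / sigma
        log_f = (-z ** (-a1) - z ** (-a2)
                 + np.log(a1 * z ** (-a1 - 1) + a2 * z ** (-a2 - 1))
                 - np.log(sigma))
        nll -= log_f
        # recursive parameter updates driven by the current observation
        shock = np.exp(-q)
        log_sigma = b0 + b1 * log_sigma + b2 * (1.0 - shock)
        a1 = g10 + g11 * a1 + g12 * shock
        a2 = g20 + g21 * a2 + g22 * shock
    return nll

# Usage: minimize neg_log_lik over theta on an observed maxima series Q,
# e.g., with scipy.optimize.minimize (Nelder-Mead, or L-BFGS-B with bounds).
```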

Denote the log-likelihood function based on an arbitrary initial value as . Theorems 3.1 and 3.2 show that there always exists a sequence , which is a local maximizer of , such that is consistent and asymptotically normal, regardless of the initial value .

Theorem 3.1 (Consistency).

Assume is a compact subset of . Suppose the observations are generated by a stationary and ergodic model with true parameter , and is in the interior of . Then there exists a sequence of local maximizers of such that and , where , . Hence is consistent.

Theorem 3.1 shows that there exists a sequence which is not only consistent for but also a local maximizer of . Next, we derive the asymptotic distributions of our estimators in Theorem 3.2.

Theorem 3.2 (Asymptotic normality).

Under the conditions in Theorem 3.1, we have , where is given in Theorem 3.1 and is the Fisher information matrix evaluated at . Further, the sample variance-covariance matrix of the plug-in estimated score functions is a consistent estimator of .
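A generic sketch of the plug-in score-covariance idea in the last sentence, illustrated on a simple i.i.d. Fréchet log-density rather than the full AcAF conditional likelihood; the function names, finite-difference step, and the use of true parameter values in place of the cMLE are illustrative assumptions.

```python
import numpy as np

def per_obs_loglik(theta, x):
    """Log-density of Fréchet(alpha=a, scale=s) at each observation x
    (stand-in for the conditional log-densities in the AcAF setting)."""
    a, s = theta
    z = x / s
    return np.log(a / s) - (a + 1.0) * np.log(z) - z ** (-a)

def score_based_se(theta_hat, x, eps=1e-5):
    """Standard errors from the sample covariance of numerically estimated
    per-observation score functions (the plug-in estimator described above)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    p, n = theta_hat.size, x.size
    scores = np.empty((n, p))
    for j in range(p):
        e = np.zeros(p); e[j] = eps
        scores[:, j] = (per_obs_loglik(theta_hat + e, x)
                        - per_obs_loglik(theta_hat - e, x)) / (2.0 * eps)
    info_hat = np.cov(scores, rowvar=False)   # estimate of the Fisher information
    return np.sqrt(np.diag(np.linalg.inv(info_hat)) / n)

# Example: Fréchet(alpha=2, s=1) data via x = s * E^{-1/alpha}, E ~ Exp(1).
# In practice theta_hat is the cMLE; the true values are plugged in here for brevity.
rng = np.random.default_rng(4)
x = (1.0 / rng.exponential(size=5_000)) ** (1.0 / 2.0)
print(score_based_se([2.0, 1.0], x))
```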

Although the consistency of and the asymptotic distributions are shown in Theorems 3.1 and 3.2, respectively, the uniqueness of the cMLE remains open due to the complexity brought by . Proposition 3.3 provides a partial answer to the uniqueness of the cMLE.

Proposition 3.3 (Asymptotic uniqueness).

Denote , where . Under the conditions in Theorem 3.1, for any fixed , there exists a sequence of such that , where with , and

The proofs of Theorems 3.1 and 3.2 and Proposition 3.3 can be found in the Appendix.

4 Simulation study

4.1 Performance of the conditional maximum likelihood estimator

In this section, we study the finite sample performance of the cMLE. We generate data from the AcAF model with the following parameters: . This set of parameters is obtained from the real data analysis of the S&P 500 daily negative log-returns using the AcAF model. Under this setting, the typical range of is , the typical range of is , and the typical range of is .

We investigate the performance of the cMLE with sample sizes . For each sample size, we conduct 100 experiments. The results for parameter estimation are in Table 1, including the average of the estimates and the standard deviation over the 100 experiments. From Table 1, we can see that both the bias and the variance of the cMLE decrease as the sample size increases, demonstrating the consistency of the cMLE under correct model specification. We find that the performance of the cMLE is already satisfactory when .

Table 1: True parameter values and the mean and standard deviation (S.D.) of the cMLE over 100 experiments, reported for each of the four sample sizes.
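The qualitative pattern in Table 1 (bias and spread shrinking as the sample size grows) can be reproduced in miniature with a much simpler experiment; the sketch below fits a plain i.i.d. Fréchet model by maximum likelihood rather than the AcAF model, so the true values, sample sizes, and numbers are only illustrative, not the authors' Table 1.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
alpha_true, scale_true = 2.0, 1.5

def neg_loglik(theta, x):
    a, s = theta
    if a <= 0 or s <= 0:
        return np.inf
    z = x / s
    return -np.sum(np.log(a / s) - (a + 1.0) * np.log(z) - z ** (-a))

for n in (500, 2_000, 8_000):
    estimates = []
    for _ in range(100):                      # 100 replications per sample size
        # Fréchet(alpha, scale) data via x = scale * E^{-1/alpha}, E ~ Exp(1)
        x = scale_true * (1.0 / rng.exponential(size=n)) ** (1.0 / alpha_true)
        fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
        estimates.append(fit.x)
    est = np.array(estimates)
    print(f"n={n}: mean={est.mean(axis=0).round(3)}  s.d.={est.std(axis=0).round(3)}")
```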