1 Introduction
1.1 Stablecoins and Cryptocurrency
The rise of digital markets, whether with respect to the digitalization of equity markets in the early 2010s or the increasing adoption of digital assets in the early 2020s, always brings with it new dynamics that are important for market participants to identify and understand.
Stablecoins are digital assets, predominantly supported by public blockchain networks, which are designed to maintain a stable value relative to a reference asset. These reference assets are usually national currencies or commodities. Stablecoins are primarily used to facilitate trading and lending of other digital assets, as well as to move funds easily between digital asset platforms. In the U.S., most stablecoins are considered “convertible virtual currency,” meaning that they serve as substitutes for a specific reference asset [10]. In the case where a stablecoin is not convertible into an underlying reference asset, it achieves its desired fixed value using a seigniorage algorithm, which buys and sells in the open market to maintain this value [12]. In either case, due to external market forces, a stablecoin’s price may deviate considerably from its desired value—an event referred to as a depegging.
The use of stablecoins in the buying and selling of major cryptocurrencies has increased dramatically in recent years. For example, some blockchain research firms estimate that stablecoins, namely Tether (USDT), were involved in over 70 percent of Bitcoin (BTC) transactions as of Fall 2021 [2]. Thus, during depegging events, we expect there to be ripple effects in the microstructure of cryptocurrency markets.
1.2 Hawkes Process
We assume that stablecoin depeggings can be characterized as excitatory events. More rigorously, this means that an occurrence of a depegging increases the probability of another depegging in the near term. Additionally, since we are interested in modeling the interactions between stablecoin depeggings and disruption events in the price of a given cryptocurrency, we assume that the occurrence of a depegging also increases the probability of a disruption event in the cryptocurrency's price in the near term. Based on these assumptions, any model proposed for this phenomenon must be capable of capturing the above, as well as any potential effects on stablecoin depeggings due to disruption events in the cryptocurrency price (although we anticipate these interactions to be fairly one-sided).
Since we are interested in studying this interaction through the scope of ‘events’, i.e. depeggings and disruption events in cryptocurrency price (granular price jumps), we look to a class of stochastic point processes called counting processes. Given that we would like to capture excitatory dynamics between these two types of events, we employ an excitatory counting process that generalizes easily to the multivariate case, which is of particular interest to us, called a Hawkes process. Originally used for applications in seismology, Hawkes processes have found numerous applications in high-frequency trading, social network theory, and many other fields [7]. These processes incorporate parameters that can capture excitation dynamics pertaining to both the magnitude and duration of self- and cross-excitation between different dimensions (types) of events.
For this reason, we choose to construct our model using a multivariate Hawkes process, which will be defined below.
Definition 1.1 (Stochastic Process).
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, with sample space $\Omega$, $\sigma$-algebra $\mathcal{F}$ on $\Omega$, and probability measure $\mathbb{P}$. Additionally, for each $t \geq 0$, let $X(t) : \Omega \to \mathbb{R}$ be a real-valued random variable. Then we call the collection of random variables $\{X(t)\}_{t \geq 0}$ a stochastic process.
For a random variable $X$ defined on the space $(\Omega, \mathcal{F}, \mathbb{P})$, we refer to a collection $\{\mathcal{F}_t\}_{t \geq 0}$ of sub-$\sigma$-algebras of $\mathcal{F}$ with the property $\mathcal{F}_s \subseteq \mathcal{F}_t$ iff $s \leq t$ as a filtration of $\mathcal{F}$. Furthermore, any specific $\mathcal{F}_t$ is referred to as the filtration of $X$ up to time $t$.
Definition 1.2 (Counting Process).
Let $N(t)$, $t \geq 0$, be an almost surely finite stochastic process where $N(0) = 0$ and whose jump trajectories are right-continuous step functions with increments of one unit. Then, we call $N(t)$ a counting process.
We say $\{t_1, t_2, \ldots\}$ is the set of arrival times, where each $t_i$ is the time of the $i$-th jump. With this notation, we provide an alternate formulation for random variables in a counting process:
$$N(t) = \sum_{i \geq 1} \mathbf{1}\{t_i \leq t\}.$$
Definition 1.3 (Conditional Intensity Function).
Take $N(t)$ to be a counting process and $\mathcal{H}_t$ to be the filtration of $N$ up to time $t$. We define the conditional intensity function of $N(t)$ as
$$\lambda(t) = \lim_{h \downarrow 0} \frac{\mathbb{E}\left[N(t+h) - N(t) \mid \mathcal{H}_t\right]}{h}.$$
For the class of counting processes we are interested in for this paper, our conditional intensity functions take the form
$$\lambda(t) = \mu + \sum_{t_i < t} \gamma(t - t_i),$$
where $\mu > 0$ is a background intensity constant and $\gamma$ is an excitation function, also referred to as a kernel function.
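To make these quantities concrete, the following minimal sketch (our own illustration, not from the paper: the exponential kernel anticipates the choice made in Section 2.1, and the parameter values are made up) evaluates a conditional intensity of this form at a given time:

```python
import numpy as np

def intensity(t, mu, alpha, beta, arrivals):
    """Evaluate lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    past = arrivals[arrivals < t]  # only events strictly before t contribute
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

# three past arrival times; the intensity at t = 2.0 sits above the background mu
arrivals = np.array([0.5, 1.2, 1.3])
lam = intensity(2.0, mu=0.8, alpha=0.6, beta=1.5, arrivals=arrivals)
```

Each past event adds a jump of size $\alpha$ to the intensity, which then decays at rate $\beta$; long after the last event the intensity relaxes back to the background level $\mu$.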
Definition 1.4 (Multivariate Hawkes Process).
Consider the collection of counting processes $\{N_m(t)\}_{m=1}^{M}$, for some $M \in \mathbb{N}$, with a set of arrival times $\{t_{m,1}, t_{m,2}, \ldots\}$ for each counting process and a set of filtrations $\{\mathcal{H}^m_t\}_{m=1}^{M}$. Suppose that each counting process is adapted to the joint filtration of the collection, with conditional intensities
$$\lambda_m(t) = \mu_m + \sum_{n=1}^{M} \sum_{t_{n,k} < t} \gamma_{mn}(t - t_{n,k})$$
for some $\mu_m > 0$ and kernel functions $\gamma_{mn}$. Then, $\{N_m(t)\}_{m=1}^{M}$ is a multivariate Hawkes process (also referred to as an $M$-variate Hawkes process).
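For intuition, a process satisfying this definition can be simulated directly with Ogata's thinning algorithm. The sketch below is a minimal illustration under the exponential kernels adopted later in Section 2.1; the function name and parameter values are ours, not the paper's:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Simulate an M-variate Hawkes process with exponential kernels on [0, T]
    by Ogata's thinning algorithm. mu: (M,); alpha, beta: (M, M)."""
    mu = np.asarray(mu, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    M = len(mu)
    arrivals = [[] for _ in range(M)]

    def lam_at(s):
        lam = mu.copy()
        for n in range(M):
            for tn in arrivals[n]:
                lam += alpha[:, n] * np.exp(-beta[:, n] * (s - tn))
        return lam

    t = 0.0
    while True:
        # intensities only decay between events, so the current total is a valid bound
        lam_bar = lam_at(t).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return [np.array(a) for a in arrivals]
        lam = lam_at(t)
        u = rng.uniform(0.0, lam_bar)
        if u < lam.sum():  # accept, and attribute the event to a dimension
            m = int(np.searchsorted(np.cumsum(lam), u))
            arrivals[m].append(t)

rng = np.random.default_rng(7)
usdt, btc = simulate_hawkes([0.5, 0.5], [[0.3, 0.1], [0.4, 0.3]],
                            [[2.0, 2.0], [2.0, 2.0]], 100.0, rng)
```

Each candidate time is drawn from an exponential clock at the current total intensity and accepted with probability proportional to the (decayed) intensity at that time, which is exactly the thinning construction for self- and cross-exciting processes.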
2 Proposed Model
Related Work
There is a notable breadth of work revolving around applications of Hawkes processes to the microstructure of asset markets, e.g. modeling order book dynamics [5]. There has also been work applying Hawkes processes to equity and cryptocurrency time series to capture self- and cross-excitation dynamics [1], even generalizing these models to frameworks designed to predict market crashes [6]. However, to the best of our knowledge, there is little to no work utilizing Hawkes processes to model the market microstructure dynamics between stablecoins and cryptocurrencies.
2.1 Model Construction
We begin by specifying magnitude thresholds for the stablecoin depeggings and cryptocurrency price disruption events that we want to track. Note that it may be prudent to use different thresholds for each type of event. We then track these events, recorded as the sets of arrival times $\{t_{s,i}\}$ and $\{t_{c,j}\}$, using a collection of counting processes $N_s(t)$ and $N_c(t)$, for depeggings and price jump events in the stablecoin and cryptocurrency, respectively.
A brief note on notation: we use the subscripts $s$ and $c$ to denote whether a variable pertains to the time series of the stablecoin or the cryptocurrency, respectively. In cases where variables represent cross-behavior between the two time series, we use the subscript $sc$ to denote an effect on the stablecoin time series from the cryptocurrency time series, and vice versa for $cs$.
To allow this model to capture excitatory behavior between events in the two time series, we choose a common self-exciting kernel function and apply it to the multivariate case. Let $\alpha \geq 0$ and $\beta > 0$; we define the self-exciting (exponential) kernel function as
$$\gamma(t - t_i) = \alpha e^{-\beta (t - t_i)}.$$
Here, $\alpha$ captures the sensitivity of the conditional intensity function to new events in terms of magnitude, and $\beta$ captures information about how the conditional intensity function decays after an event. Applying this to the multivariate case of our model, for $m \in \{s, c\}$, we get
$$\lambda_m(t) = \mu_m + \sum_{n \in \{s, c\}} \sum_{t_{n,k} < t} \alpha_{mn} e^{-\beta_{mn}(t - t_{n,k})}.$$
Note that this construction can be generalized to any number of stablecoins and cryptocurrencies, not just two.
2.2 Log-Likelihood
The parameters for Hawkes process models are found through maximum likelihood estimation. We present the log-likelihood function for the general $M$-variate case [4], as the 2-variate case of our model can be inferred from it.
Proposition 1.
Let $\{N_m(t)\}_{m=1}^{M}$ be an $M$-variate Hawkes process with arrival times $\{t_{m,k}\}$ observed on $[0, T]$. Additionally, let
$$\theta = \left( \mu_m, \alpha_{mn}, \beta_{mn} \right)_{m,n=1}^{M}$$
be a vector of model parameters such that $\mu_m > 0$, $\alpha_{mn} \geq 0$, and $\beta_{mn} > 0$. Then, we define the log-likelihood function of $\theta$ to be
$$\ln L(\theta) = \sum_{m=1}^{M} \left( -\mu_m T + \sum_{n=1}^{M} \sum_{k \,:\, t_{n,k} \leq T} \frac{\alpha_{mn}}{\beta_{mn}} \left[ e^{-\beta_{mn}(T - t_{n,k})} - 1 \right] + \sum_{k \,:\, t_{m,k} \leq T} \ln\left[ \mu_m + \sum_{n=1}^{M} \alpha_{mn} R_{mn}(k) \right] \right),$$
where
$$R_{mm}(k) = e^{-\beta_{mm}(t_{m,k} - t_{m,k-1})} \left( 1 + R_{mm}(k-1) \right),$$
$$R_{mn}(k) = e^{-\beta_{mn}(t_{m,k} - t_{m,k-1})} R_{mn}(k-1) + \sum_{t_{m,k-1} \leq t_{n,j} < t_{m,k}} e^{-\beta_{mn}(t_{m,k} - t_{n,j})}, \quad n \neq m,$$
with the convention $t_{m,0} = 0$ and the initial condition that $R_{mn}(0) = 0$.
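A direct transcription of this proposition (a sketch only; the function and variable names are our own, and the kernel is the exponential one from Section 2.1) might look as follows:

```python
import numpy as np

def log_likelihood(mu, alpha, beta, arrivals, T):
    """Log-likelihood of an M-variate Hawkes process with exponential kernels,
    computed with the recursion R_mn(k) so the double sum over event pairs is
    avoided. mu: (M,); alpha, beta: (M, M); arrivals: list of M sorted arrays."""
    mu, alpha, beta = (np.asarray(a, dtype=float) for a in (mu, alpha, beta))
    arrivals = [np.asarray(a, dtype=float) for a in arrivals]
    M = len(arrivals)
    total = 0.0
    for m in range(M):
        tm = arrivals[m]
        ll = -mu[m] * T
        # compensator contributions from every dimension n
        for n in range(M):
            ll += np.sum(alpha[m, n] / beta[m, n]
                         * (np.exp(-beta[m, n] * (T - arrivals[n])) - 1.0))
        R = np.zeros(M)
        for k, t in enumerate(tm):
            if k == 0:
                # initialize cross terms with all foreign events before t_{m,1}
                for n in range(M):
                    if n != m:
                        past = arrivals[n][arrivals[n] < t]
                        R[n] = np.sum(np.exp(-beta[m, n] * (t - past)))
            else:
                dt = t - tm[k - 1]
                for n in range(M):
                    decay = np.exp(-beta[m, n] * dt)
                    if n == m:
                        R[n] = decay * (1.0 + R[n])
                    else:
                        tn = arrivals[n]
                        new = tn[(tn >= tm[k - 1]) & (tn < t)]
                        R[n] = decay * R[n] + np.sum(np.exp(-beta[m, n] * (t - new)))
            ll += np.log(mu[m] + np.sum(alpha[m] * R))
        total += ll
    return total
```

The recursion makes the cost roughly linear in the number of events per evaluation, which matters once the likelihood sits inside an optimization loop.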
3 Numerical Example
To provide further insight into the model described above, as well as to illustrate some of the benefits and drawbacks of its use in practice, we apply it to an examination of the depegging dynamics between the stablecoin Tether (USDT) and the cryptocurrency Bitcoin (BTC).
3.1 Discussion of Data
Because depeggings of stablecoins often occur rapidly and are swiftly corrected (either by a seigniorage algorithm or by buyers and sellers in the open market), this model is most appropriately applied to highly granular data. Thus, tick data summarized into one-minute increments is used for both USDT and BTC. Given that we are interested in price disruption events that occur within each minute, we use the price movement within each minute interval, standardized into a percentage change.
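A hypothetical sketch of such an event-extraction step (the function name and the quantile-band thresholding scheme are our own illustration, not the paper's exact procedure) might look like:

```python
import numpy as np

def extract_arrival_times(pct_changes, lower_q, upper_q):
    """Return arrival times, in hours, of the minutes whose absolute
    percentage change falls within the [lower_q, upper_q] quantile band."""
    abs_moves = np.abs(np.asarray(pct_changes, dtype=float))
    lo, hi = np.quantile(abs_moves, [lower_q, upper_q])
    idx = np.where((abs_moves >= lo) & (abs_moves <= hi))[0]
    return idx / 60.0  # minute index converted to hour units

# e.g. keep only the top 20 percent of absolute per-minute moves
times = extract_arrival_times([0.01, -0.02, 0.5, 0.03, -0.6, 0.02], 0.8, 1.0)
```

Fitting the model to one quantile band at a time is what allows the parameter estimates in the Appendix to be compared across percentile ranges of BTC price jumps.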
The granularity of this data introduces computational challenges, and so for this analysis we focus on a specific period of interest, January 19, 2018, during which 96 high-percentile depeggings of USDT (with the percentile threshold calculated over the previous 5-year period) occurred. The arrival times for these price jump events are formatted into hour units.
Another concern with respect to data selection and formatting has to do with the magnitude of price jump events in the BTC time series that correspond to (or are caused by) depeggings in USDT. Stablecoin depeggings are events that likely affect the microstructure of cryptocurrency markets. Since the market value of BTC is subject to a plethora of variables, many of which will likely have a significantly larger effect on BTC value than depeggings of USDT will have, we fit our model to a range of percentiles of BTC price jump events throughout our period of examination, while using only high-percentile USDT depegging events (calculated from the data on the date of examination), and analyze the fitted parameter values for each.
Given that January 19, 2018 was a fairly quiet day in terms of global and national news, we do not anticipate any notable bias in our sample toward outside events that may significantly impact our results.
3.2 Results
The estimates for the parameters of our model, for each percentile range of price jumps in the BTC time series, are largely as expected (see Appendix). We see that estimates for cross-excitation parameters based on our data sample are largely asymmetric. For lower percentiles of BTC price jumps, $\alpha_{cs} > \alpha_{sc}$ and $\beta_{cs} < \beta_{sc}$. This would suggest that, in our data sample, depeggings in USDT have a larger excitatory effect on BTC price jump events than BTC price jump events have on USDT depeggings, and that a depegging in USDT has a longer-duration excitation effect (the conditional intensity decays more slowly) on BTC price jump events than vice versa.
The above intuitive behaviors seem to waver at higher percentile ranges of our data. While further analysis is encouraged, this could point to behavior where BTC price jump events caused by USDT depegging events are more commonly of lower magnitude; in the case of our data sample, the lower percentile ranges.
3.3 Optimization
In order to approximate the parameter values for the model, we must maximize its likelihood function, or equivalently its log-likelihood function. We employed the Nelder–Mead simplex algorithm [9] to do so. While this method may converge more slowly than a gradient method, and may converge to non-stationary points in some instances, we felt it to be most appropriate for two reasons. The first is that calculating the log-likelihood function, let alone any of its derivatives, is costly in both time and space at the scale of our data; a derivative-free procedure that requires only function evaluations helps to lessen the computational burden. The second is that during optimization, domain constraints must be placed on the parameters to ensure the log-likelihood remains real-valued. While there are techniques to deal with this sort of issue in gradient-based optimization, they complicate the optimization procedure.
3.4 Remarks
Maximum likelihood estimation for Hawkes processes poses a particularly challenging computational problem, as even a 2-variate mutually exciting Hawkes process model relies on 10 individual parameters. In the most convenient case, where our model is fitted to two strikingly similar time series, the generous assumptions of approximate symmetry in cross-excitation (i.e. $\alpha_{sc} \approx \alpha_{cs}$ and $\beta_{sc} \approx \beta_{cs}$) and approximate equivalence in background intensities ($\mu_s \approx \mu_c$) could be justified. Even in this case, however, the model still relies on 7 individual parameters; and given the granularity of time series data that this model should be fit to, estimation requires significant computing power. Additionally, bootstrapping methods for initial parameter estimation that are effective in decreasing computation time for many other statistical models are inappropriate for Hawkes processes, as these processes rely on their entire history [7]. In light of this, there are a few areas of interest for further exploration in terms of improved application of the proposed model.
The first has to do with the accuracy of parameter estimation. While the quantity of data involved in fitting these models is admittedly very large, a bootstrap inference method may be employed to improve finite-sample performance. Bootstrap inference procedures for Hawkes processes can consist of asymptotic confidence intervals or, in cases where arrival times occur on an interval $[0, T]$ where $T$ is not sufficiently large, polyhedral confidence sets [11]. There are also resampling techniques such as Fixed- or Recursive-Intensity Bootstrap procedures [3].
Another area of interest, which is more relevant if one were to generalize this model to be $M$-variate for $M$ sufficiently large, includes methods for low-rank approximations of the kernel matrix [7]. This can significantly decrease the runtime of maximum likelihood estimation procedures, as the dimensionality of the kernel matrix is collapsed with minimal loss of information. In the above case, we have low dimensionality ($M = 2$) and the issue lies more in the amount of data and arrival times that need to be considered in parameter estimation. In this case, there are various expectation-maximization (EM) schemes [8] that may be appropriate and worth further exploration.
Acknowledgement
I would especially like to thank my advisor Liyang Zhang for his constant feedback, support, and encouragement toward creativity during this research. I would also like to thank Renato Mirollo, Paul Garvey, Elisenda Grigsby, and the Boston College Mathematics Department for enabling me to pursue this research and providing feedback at critical points of the process. Finally, I would like to thank my good friend Patrick Bjornstad for his suggestions regarding research direction and advice on convergence, runtime, and data formatting.
References
 [1] (2015) Hawkes processes in finance. Market Microstructure and Liquidity 1 (01), pp. 1550005.
 [2] (2021) Bitcoin pair market share. Kaiko.
 [3] (2022) Bootstrap inference for Hawkes and general point processes. Journal of Econometrics.
 [4] (2016) Likelihood function for multivariate Hawkes processes.
 [5] (2017) Modelling limit order book dynamics using Hawkes processes. Ph.D. Thesis, The Florida State University.
 [6] (2015) Interpreting financial market crashes as earthquakes: a new early warning system for medium term crashes. Journal of Banking & Finance 56, pp. 123–139.
 [7] (2017) Multivariate Hawkes processes for large-scale inference. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31.
 [8] (2011) A nonparametric EM algorithm for multiscale Hawkes processes. Journal of Nonparametric Statistics 1 (1), pp. 1–20.
 [9] (1975) The Nelder–Mead simplex procedure for function minimization. Vol. 17, Taylor & Francis.
 [10] (2021) Report on stablecoins. U.S. Treasury Department.
 [11] (2020) Uncertainty quantification for inferring Hawkes networks. Advances in Neural Information Processing Systems 33, pp. 7125–7134.
 [12] (2021) Understand volatility of algorithmic stablecoin: modeling, verification and empirical analysis. In International Conference on Financial Cryptography and Data Security, pp. 97–108.
Appendix
Appendix A
Parameter Estimates
Percentile Range  log-Likelihood  

0.00.1  318.88  6.331  0.000  0.000  4.920 
0.10.2  288.02  6.180  0.018  0.040  1.653 
0.20.3  280.83  6.139  0.000  0.170  1.502 
0.30.4  262.59  5.167  0.030  0.094  2.808 
0.40.5  275.00  5.062  0.000  0.000  7.649 
0.50.6  339.13  3.467  0.000  0.000  5.535 
0.60.7  292.60  3.321  0.062  0.001  8.841 
0.70.8  296.92  3.293  0.000  0.000  7.160 
0.80.9  307.83  3.293  0.000  0.000  5.782 
0.91.0  382.86  3.688  0.000  0.000  9.611 
Percentile Range  

0.00.1  9.141  7.242  9.192  6.119 
0.10.2  8.671  6.256  0.000  5.188 
0.20.3  8.554  5.504  1.259  5.382 
0.30.4  8.572  1.168  0.286  3.393 
0.40.5  8.409  10.0+  8.863  9.993 
0.50.6  6.649  6.574  2.509  5.928 
0.60.7  6.577  8.137  9.929  9.457 
0.70.8  6.536  9.999  9.616  7.611 
0.80.9  6.532  8.131  1.660  5.942 
0.91.0  6.705  8.081  2.466  10.0+ 
Percentile Range  

0.00.1  1.392  3.566 
0.10.2  1.320  8.561 
0.20.3  1.271  9.326 
0.30.4  1.285  0.771 
0.40.5  1.531  2.324 
0.50.6  1.585  0.419 
0.60.7  1.607  0.398 
0.70.8  1.650  0.362 
0.80.9  1.648  0.164 
0.91.0  1.510  0.267 
Appendix B
Maximum Likelihood Estimation Code (Mathematica)
Matrix of arrival times for each time series (BTC and USDT).
The below code is broken out explicitly for the 2-variate case.
Note that the runtime of this construction is by no means maximally efficient.
Begin by specifying the recursive functions in the log-likelihood equation.
Execute the minimization procedure with domain constraints.
Specify the Nelder–Mead optimization algorithm.