MicrostructureNoise.jl
Nonparametric Bayesian volatility learning under microstructure noise
Aiming at financial applications, we study the problem of learning the volatility under market microstructure noise. Specifically, we consider noisy discrete time observations from a stochastic differential equation and develop a novel computational method to learn the diffusion coefficient of the equation. We take a nonparametric Bayesian approach, where we model the volatility function a priori as piecewise constant. Its prior is specified via the inverse Gamma Markov chain. Sampling from the posterior is accomplished by incorporating the Forward Filtering Backward Simulation algorithm in the Gibbs sampler. Good performance of the method is demonstrated on two representative synthetic data examples. Finally, we apply the method on the EUR/USD exchange rate dataset.
Let the one-dimensional stochastic differential equation (SDE)

(1)  $dX_t = b(t, X_t) \, dt + \sigma(t) \, dW_t, \quad X_0 = x_0, \quad t \in [0, T],$

be given. Here $W$ is a standard Wiener process, and $b$ and $\sigma$ are referred to as the drift function and the volatility function, respectively. We assume (not necessarily uniformly spaced) observation times $0 = t_0 < t_1 < \dots < t_n = T$ and observations $Y_1, \dots, Y_n$, where
(2)  $Y_i = X_{t_i} + V_i, \quad i = 1, \dots, n,$

and $\{V_i\}$ is a sequence of independent and identically distributed random variables, independent of $W$. Our aim is to nonparametrically learn the volatility $\sigma$ using the noisy observations $Y_1, \dots, Y_n$. Knowledge of the volatility is of paramount importance in financial applications, specifically in pricing financial derivatives (see, e.g., Musiela and Rutkowski (2005)) and in risk management.

The quantity $\Delta_i = t_i - t_{i-1}$ is referred to as the observation frequency; small values of $\Delta_i$ correspond to high frequency, dense-in-time data. Intraday financial data are commonly thought of as high frequency data. In this high frequency setting, which is the one of interest in this work, the measurement errors $V_i$ are referred to as microstructure noise. Their inclusion in the model aims to reflect such features of observed financial time series as their discreteness, or approximation errors due to market frictions. Whereas for low frequency financial data these can typically be neglected without much ensuing harm, empirical evidence shows that this is not the case for high frequency data; cf. Mykland and Zhang (2012).
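As an illustration, noisy discrete-time observations from the model (1)-(2) can be generated with the Euler-Maruyama scheme. The following is a minimal Python sketch (the paper's own code is in Julia; the function name and all parameter values here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def simulate_noisy_sde(b, sigma, x0, times, eta_v, rng):
    """Euler-Maruyama path of dX_t = b(t, X_t) dt + sigma(t) dW_t on the
    given time grid, observed with iid N(0, eta_v) measurement noise."""
    x = np.empty(len(times))
    x[0] = x0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        x[i] = (x[i - 1] + b(times[i - 1], x[i - 1]) * dt
                + sigma(times[i - 1]) * np.sqrt(dt) * rng.standard_normal())
    # noisy observations Y_i = X_{t_i} + V_i, with V_i iid and independent of W
    y = x + np.sqrt(eta_v) * rng.standard_normal(len(times))
    return x, y

rng = np.random.default_rng(0)
times = np.linspace(0.0, 1.0, 1001)
x, y = simulate_noisy_sde(lambda t, x: 0.0, lambda t: 1.0 + 0.5 * t,
                          0.0, times, 0.01, rng)
```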
There exists a large body of statistics and econometrics literature on nonparametric volatility estimation under microstructure noise; see, e.g., Hoffmann et al. (2012), Jacod et al. (2009), Mykland and Zhang (2009), Reiß (2011), and Sabel et al. (2015); a recent overview is Mykland and Zhang (2012). The literature predominantly deals with estimation of the integrated volatility $\int_0^T \sigma^2(s) \, ds$, although inference on $\sigma$ itself has also been studied in several of these references. The various methods proposed in the above-mentioned works share the property of being frequentist in nature.

In this paper, our main and novel contribution is the development of a practical nonparametric Bayesian approach to volatility learning under microstructure noise. Specifically, we propose a novel Bayesian method, demonstrate its good performance on two representative simulation examples, and apply it to the EUR/USD exchange rate dataset.
In general, a nonparametric approach reduces the risk of model misspecification, which may lead to distorted inferential conclusions. The presented nonparametric method is not only useful for an exploratory analysis of the problem at hand (cf. Silverman (1986)), but also allows honest representation of inferential uncertainties (cf. Müller and Mitra (2013)). Attractive features of the Bayesian approach include its internal coherence, automatic uncertainty quantification in parameter estimates via Bayesian credible sets, and the fact that it is a fundamentally likelihood-based method. For a modern monographic treatment of nonparametric Bayesian statistics see Ghosal and van der Vaart (2017); an applied perspective is found in Müller et al. (2015).

The paper is organised as follows: in Section 2 we introduce our approach in detail. In Section 3 we test its practical performance on synthetic data examples. Section 4 applies our method to a real data example. Section 5 summarises our findings. Finally, Appendices A, B, C and D collect the frequently used notation and give further implementational details.
In this section we introduce our methodology for inferring the volatility. We first recast the model into a simpler form amenable to computational analysis, next specify a nonparametric prior on the volatility, and finally describe an MCMC method for sampling from the posterior.
By equation (1), we have

(3)  $X_{t_i} = X_{t_{i-1}} + \int_{t_{i-1}}^{t_i} b(s, X_s) \, ds + \int_{t_{i-1}}^{t_i} \sigma(s) \, dW_s.$
We derive our method under the assumption that the "true", data-generating volatility is a deterministic function of time. If the "true" volatility is in fact a stochastic process, we apply our procedure without further changes, as if it were deterministic. As shown in the example of Subsection 3.2, this works well in practice.
Over short time intervals $[t_{i-1}, t_i]$, the term $\int_{t_{i-1}}^{t_i} \sigma(s) \, dW_s$ in (3), roughly speaking, dominates the term $\int_{t_{i-1}}^{t_i} b(s, X_s) \, ds$, as the former scales as $\Delta_i^{1/2}$, whereas the latter scales as $\Delta_i$ (due to the properties of Wiener process paths). As our emphasis is on learning $\sigma$ rather than $b$, following Gugushvili et al. (2017) and Gugushvili et al. (2018a) we act as if the process $X$ had zero drift, $b \equiv 0$. A similar idea is often used in frequentist volatility estimation procedures in the high frequency financial data setting; see Mykland and Zhang (2012), Section 2.1.5, for an intuitive exposition. Formal results explaining why this works in specific settings rely on Girsanov's theorem; see, e.g., Hoffmann et al. (2012), Mykland and Zhang (2009), Gugushvili et al. (2017). Further reasons to set $b \equiv 0$ are that $b$ is a nuisance parameter, that in specific applications its appropriate parametric form might be unknown, and finally, that the observed time series might simply not be long enough to learn $b$ consistently (see Ignatieva and Platen (2012)).
We thus assume $b \equiv 0$, so that $X_t = x_0 + \int_0^t \sigma(s) \, dW_s$. Note that then

(4)  $X_{t_i} - X_{t_{i-1}} \sim N\left(0, \int_{t_{i-1}}^{t_i} \sigma^2(s) \, ds\right),$

and also that $\{X_{t_i} - X_{t_{i-1}}\}$ is a sequence of independent random variables. To simplify our notation, write $x_i = X_{t_i}$, $y_i = Y_i$, $u_i = x_i - x_{i-1}$, $w_i = \int_{t_{i-1}}^{t_i} \sigma^2(s) \, ds$. The preceding arguments and (2) allow us to reduce our model to the linear state space model

(5)  $x_i = x_{i-1} + u_i, \quad y_i = x_i + v_i,$
where $u_i \sim N(0, w_i)$. The first equation in (5) is the state equation, while the second equation is the observation equation. We assume that $\{v_i\}$ is a sequence of independent $N(0, \eta_v)$ distributed random variables, independent of the Wiener process $W$ in (1), so that $\{u_i\}$ is independent of $\{v_i\}$. For justification of such assumptions on the noise sequence from a practical point of view, see Sabel et al. (2015), page 229. We endow the initial state $x_0$ with a $N(\mu_0, C_0)$ prior distribution. Under our assumptions, (5) is a Gaussian linear state space model. This is very convenient computationally: had we not followed this route, we would have had to deal with an intractable likelihood, which constitutes the main computational bottleneck for Bayesian inference in SDE models; see, e.g., van der Meulen and Schauer (2017) and Papaspiliopoulos et al. (2013) for discussion.

For the measurement error variance $\eta_v$, we assume a priori $\eta_v \sim \mathrm{IG}(\alpha_v, \beta_v)$. The construction of the prior for $\sigma$ is more complex and follows Gugushvili et al. (2018a), which in turn relies on Cemgil and Dikmen (2007). Fix an integer $N \le n$. Then we have a unique decomposition $n = mN + r$ with $0 \le r < N$, where $m = \lfloor n/N \rfloor$. Now define bins $B_k = [t_{m(k-1)}, t_{mk})$, $k = 1, \dots, N-1$, and $B_N = [t_{m(N-1)}, T]$. We model $\sigma$ as

(6)  $\sigma = \sum_{k=1}^{N} \xi_k \mathbf{1}_{B_k},$
where $N$ (the number of bins) is a hyperparameter. Then

$w_i = \theta_k \Delta_i \quad \text{whenever } [t_{i-1}, t_i] \subseteq B_k,$

where $\theta_k = \xi_k^2$. We complete the prior specification for $\sigma$ by assigning a prior distribution to the coefficients $\theta_1, \dots, \theta_N$. For this purpose, we introduce auxiliary variables $\zeta_2, \dots, \zeta_N$, and suppose the sequence $\theta_1, \zeta_2, \theta_2, \zeta_3, \dots, \zeta_N, \theta_N$ forms a Markov chain (in this order of variables). The transition distributions of this chain are defined by

(7)  $\zeta_{k+1} \mid \theta_k \sim \mathrm{IG}(\alpha, \alpha \theta_k^{-1}), \quad \theta_{k+1} \mid \zeta_{k+1} \sim \mathrm{IG}(\alpha, \alpha \zeta_{k+1}^{-1}), \quad k = 1, \dots, N-1,$
where $\alpha > 0$ is a hyperparameter. We refer to this chain as an inverse Gamma Markov chain; see Cemgil and Dikmen (2007). The corresponding prior on $\theta_1, \dots, \theta_N$ will be called the inverse Gamma Markov chain (IGMC) prior. The definition in (7) ensures that $\theta_1, \dots, \theta_N$ are positively correlated, which imposes smoothing across different bins $B_k$. Simultaneously, it ensures partial conjugacy in the Gibbs sampler that we derive below, leading to simple and tractable MCMC inference. In our experience, an uninformative choice for the initial distribution of $\theta_1$ performs well in practice. We also endow $\alpha$ with a prior distribution, with hyperparameters chosen so as to render the hyperprior on $\alpha$ diffuse.

As explained in Gugushvili et al. (2017) and Gugushvili et al. (2018a), the hyperparameter $N$ (or equivalently $m$) can be considered both as a smoothing parameter and as the resolution at which one wants to learn the volatility function. Obviously, given the limited amount of data, this resolution cannot be made arbitrarily fine. On the other hand, as shown in Gugushvili et al. (2018a) (see also Gugushvili et al. (2018b)), inference with the IGMC prior is quite robust with respect to a wide range of values of $N$, as the corresponding Bayesian procedure has an additional regularisation parameter $\alpha$ that is learned from the data. Statistical optimality results in Munk and Schmidt-Hieber (2010) suggest that in our setting $N$ should be chosen considerably smaller than in the case of an SDE observed without noise (which was studied via the IGMC prior in Gugushvili et al. (2018a)).

Although an expression for the posterior can be written down in closed form, it is not amenable to computations. This problem is alleviated by following a data augmentation approach, in which the states $x_0, x_1, \dots, x_n$ are treated as missing data; cf. Tanner and Wong (1987). An expression for the joint density of all random quantities involved is easily derived from the prior specification and (5). We have

$p(x_{0:n}, y_{1:n}, \theta, \zeta, \alpha, \eta_v) = p(\alpha) \, p(\eta_v) \, p(\theta, \zeta \mid \alpha) \, p(x_{0:n} \mid \theta) \prod_{i=1}^{n} p(y_i \mid x_i, \eta_v).$
Except for $p(x_{0:n} \mid \theta)$, all the densities have been specified directly in the previous subsections. To obtain an expression for the latter, let $\phi(\cdot\,; \mu, w)$ denote the density of the $N(\mu, w)$ distribution, and set $p(x_i \mid x_{i-1}, \theta) = \phi(x_i; x_{i-1}, w_i)$ for $i = 1, \dots, n$, and $p(x_0) = \phi(x_0; \mu_0, C_0)$. Then

$p(x_{0:n} \mid \theta) = p(x_0) \prod_{i=1}^{n} p(x_i \mid x_{i-1}, \theta).$
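As a quick sanity check on the prior, the IGMC transitions in (7) can be simulated forward. A hedged Python sketch, assuming the common Cemgil-Dikmen form of the chain, $\zeta_{k+1} \mid \theta_k \sim \mathrm{IG}(\alpha, \alpha/\theta_k)$ and $\theta_{k+1} \mid \zeta_{k+1} \sim \mathrm{IG}(\alpha, \alpha/\zeta_{k+1})$, and an assumed $\mathrm{IG}(\alpha_1, \beta_1)$ initial distribution for $\theta_1$:

```python
import numpy as np

def sample_igmc(N, alpha, alpha1, beta1, rng):
    """Draw one realisation theta_1,...,theta_N of an inverse Gamma
    Markov chain via auxiliary variables zeta_2,...,zeta_N.
    An IG(a, b) draw (shape a, scale b) is obtained as b / Gamma(a, 1)."""
    ig = lambda a, b: b / rng.gamma(a)
    theta = np.empty(N)
    theta[0] = ig(alpha1, beta1)
    for k in range(1, N):
        zeta = ig(alpha, alpha / theta[k - 1])   # zeta_{k+1} | theta_k
        theta[k] = ig(alpha, alpha / zeta)       # theta_{k+1} | zeta_{k+1}
    return theta

rng = np.random.default_rng(1)
theta = sample_igmc(10, alpha=10.0, alpha1=2.0, beta1=2.0, rng=rng)
```

Larger values of $\alpha$ tighten the coupling between consecutive $\theta_k$, which is the smoothing effect described above.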
We use the Gibbs sampler to sample from the joint distribution of $\theta, \zeta, \alpha, \eta_v, x_{0:n}$. The full conditionals of $\theta_k$, $\zeta_k$ and $\eta_v$ are easily derived from Section 2.3 and recognised to be of the inverse Gamma type. The parameter $\alpha$ can be updated via a Metropolis-Hastings step. For updating $x_{0:n}$, conditional on all other parameters, we use the standard Forward Filtering Backward Simulation (FFBS) algorithm for Gaussian state space models (cf. Section 4.4.3 in Petris et al. (2009)). The resulting Gibbs sampler is summarised in Algorithm 1. For details, see Appendix B.

In this section we study the practical performance of our method on challenging synthetic data examples. In each of them we take the time horizon $[0, 1]$ and generate observations as follows. First, using a fine grid of time points sampled from the uniform distribution, conditional on including 0 and 1, a realisation of the process $X$ is obtained via Euler's scheme. The observation times are then taken as a random subsample of those times, conditional on including the endpoint. The settings used for the Gibbs sampler are given in Appendix C. In each example below, we plot the posterior mean and a marginal credible band (obtained from the central posterior intervals for the coefficients in (6)). The former gives an estimate of the volatility, while the latter provides a means for uncertainty quantification. All computations are performed in the programming language Julia, see Bezanson et al. (2017), and we provide the code used in our examples, see Schauer and Gugushvili (2018). The hardware and software specifications for the MacBook used to perform the simulations are: CPU: Intel(R) Core(TM) M-5Y31 CPU @ 0.90GHz; OS: macOS (x86_64-apple-darwin14.5.0).
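The data-generation recipe just described (a fine uniform grid conditioned to include the endpoints, an Euler scheme with zero drift, then a random subsample of observation times) can be sketched in Python as follows; the grid sizes and the volatility function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# fine grid: uniform draws on [0, 1], conditioned to include 0 and 1
fine = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0.0, 1.0, 4998))))

# Euler scheme for dX_t = sigma(t) dW_t (zero drift, cf. Section 2)
sigma = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)
dt = np.diff(fine)
increments = sigma(fine[:-1]) * np.sqrt(dt) * rng.standard_normal(len(dt))
x = np.concatenate(([0.0], np.cumsum(increments)))

# observation times: random subsample of the fine grid, keeping the endpoint
idx = np.sort(rng.choice(len(fine) - 1, size=499, replace=False))
idx = np.append(idx, len(fine) - 1)
t_obs, x_obs = fine[idx], x[idx]
```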
Suppose the volatility function is given by

(8)  $\sigma(t) = \frac{3}{2} + \sin\left(2(4t - 2)\right) + 2 \exp\left(-16 (4t - 2)^2\right).$
This is a popular benchmark in nonparametric regression, which we call the Fan & Gijbels function (see Fan and Gijbels (1995)). The function was already used in the volatility estimation context in Gugushvili et al. (2017). We generated data using a nonzero drift coefficient $b$. For the noise level $\eta_v$ we took a substantial value. Given the general difficulty of learning the volatility from noisy observations, one cannot expect to infer it at a very fine resolution (cf. the remarks in Subsection 2.2), and we thus opted for a small number $N$ of bins.
Inference results are reported in Figure 1. It can be seen from the plot that we succeed in learning the volatility function. Note that the credible band does not cover the entire volatility function, but this is to be expected, given that it is a marginal band. Quick mixing of the Markov chains can be visually assessed via the trace plots in Figure 5 in Appendix D. The algorithm took ca. 11 minutes to complete.
The Heston model (see Heston (1993)) is a widely used stochastic volatility model, in which the price process of an asset, denoted by $S$, is assumed to evolve over time according to the SDE

$dS_t = \mu S_t \, dt + \sqrt{Z_t} \, S_t \, dW_t,$

where the variance process $Z$ follows the CIR or square root process (see Cox et al. (1985)),

$dZ_t = \kappa (\bar{\theta} - Z_t) \, dt + \xi \sqrt{Z_t} \, dB_t.$

Here $W$ and $B$ are correlated Wiener processes with correlation $\rho$. Following a usual approach in quantitative finance, we work not with $S$ directly, but with its logarithm (according to Itô's formula, it obeys a diffusion equation with volatility $\sqrt{Z_t}$). We assume high-frequency observations on the log-price process with additive noise. In this setup the volatility is a random function. We assume no further knowledge of the data generation mechanism. This setting is extremely challenging and puts our method to a serious test. To generate data, we used parameter values that mimic those obtained by fitting the Heston model to real data (see Table 5.2 in van der Ploeg (2006)). Furthermore, the noise variance $\eta_v$ was chosen so as to ensure a sufficiently large signal-to-noise ratio in the model (5), which can be quantified via the ratio of the intrinsic noise level $w_i$ to the external noise level $\eta_v$. Finally, the chosen value of $\mu$ corresponds to a reasonable log-return rate.
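For instance, Heston log-price and variance paths can be generated with a basic Euler scheme, using full truncation to keep the CIR variance nonnegative. A Python sketch with assumed, illustrative parameter names and values (not those of van der Ploeg (2006)):

```python
import numpy as np

def simulate_heston(mu, kappa, theta_bar, xi, rho, s0, z0, n, T, rng):
    """Euler scheme for the Heston model with full truncation of the
    CIR variance; returns the log-price and variance paths."""
    dt = T / n
    logs = np.empty(n + 1)
    z = np.empty(n + 1)
    logs[0], z[0] = np.log(s0), z0
    for i in range(n):
        dw = np.sqrt(dt) * rng.standard_normal()
        db = rho * dw + np.sqrt(1.0 - rho ** 2) * np.sqrt(dt) * rng.standard_normal()
        zp = max(z[i], 0.0)  # full truncation: negative variance clipped to 0
        # Ito: d log S_t = (mu - Z_t / 2) dt + sqrt(Z_t) dW_t
        logs[i + 1] = logs[i] + (mu - 0.5 * zp) * dt + np.sqrt(zp) * dw
        z[i + 1] = z[i] + kappa * (theta_bar - zp) * dt + xi * np.sqrt(zp) * db
    return logs, z

rng = np.random.default_rng(4)
logs, z = simulate_heston(0.05, 5.0, 0.04, 0.5, -0.7, 100.0, 0.04, 1000, 1.0, rng)
```

Adding iid Gaussian noise to `logs` at subsampled times then yields synthetic observations of the kind used in this subsection.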
We report inference results with $N$ bins in Figure 2. These are surprisingly accurate, given the general difficulty of the problem and the number of assumptions that went into the learning procedure. The Markov chains mix rapidly, as evidenced by the trace plots in Figure 6 in Appendix D. The simulation run took ca. 12 minutes to complete. Finally, we note that the Heston model does not formally fall within the framework under which our Bayesian method was derived (a deterministic volatility function). We expect that a theoretical validation of why our approach also works in this case requires intricate technical arguments, which lie beyond the scope of the present, practically oriented paper.
Unlike daily stock quotes or exchange rate series, which can easily be obtained via several online resources (e.g., Yahoo Finance), high frequency financial data are rarely freely accessible for academia. In this section, we apply our methodology to infer the volatility of high frequency foreign exchange rate data made available by Pepperstone Limited, a London based forex broker (see https://pepperstone.com/uk/client-resources/historical-tick-data, accessed on 25 April 2018). The data are stored as csv files that contain the dates and times of transactions and the bid and ask prices. Specifically, we use the EUR/USD tick data (bid prices) for 2 March 2015. As the independent additive measurement error model (2) becomes harder to justify for very densely spaced data, we work with subsampled data, retaining only a fraction of the observations; subsampling is a common strategy in this context. This results in a reduced number of observations over one day. As in Subsection 3.2, we apply the log-transformation to the observed time series and assume the additive measurement error model (2). The data are displayed in Figure 3.
Financial time series often contain jumps, accurate detection of which is a delicate task. As this section serves illustrative purposes only, for simplicity we ignore jumps in our analysis. For high frequency data, volatility estimates are expected to recover quickly after infrequent jumps in the underlying process. This should in particular be the case for our learning procedure, given that we model the volatility as a piecewise constant function, which can be viewed as an estimate of the "histogramised" true volatility. Indeed, effects of infrequent jumps in the underlying process are likely to be smoothed out by averaging (we make no claim of robustness of our procedure with respect to the possible presence of large jumps in the process $X$, or densely clustered small jumps). An alternative would have been to use a heuristic jump removal technique, such as the ones discussed in Sabel et al. (2015); one could then have applied our Bayesian procedure to the resulting "cleaned" data.

We report inference results in Figure 3, where time is rescaled to the interval $[0, 1]$, so that 0 corresponds to midnight and 0.5 to noon. As seen from the plot, there is considerable uncertainty in the volatility estimates. Understandably, the volatility is lower during the night hours. It also displays two peaks, corresponding to the early morning and late afternoon parts of the day. Finally, we give several trace plots of the generated Markov chains in Figure 7. The algorithm took ca. 33 minutes to complete. In Figure 4 we give inference results obtained via further subsampling of the data, retaining a smaller fraction of the observations. The posterior mean is quite similar to that in Figure 3, whereas the wider credible band reflects the greater inferential uncertainty due to a smaller sample size. The figure provides a partial validation of the model we use.
In this paper we studied a practical nonparametric Bayesian approach to volatility learning under microstructure noise. From the point of view of statistical theory, the problem is much more difficult than volatility learning from noiseless observations. Hence, accurate inference on the volatility under microstructure noise requires large amounts of data, which, fortunately, are available in financial applications. On the other hand, it then becomes important to devise a learning method that scales well with the size of the data. This is indeed the case for the technique we study in this work. Our prior specification and a deliberate, but asymptotically harmless, misspecification of the drift by taking $b \equiv 0$ are choices that enable us to combine our earlier work in Gugushvili et al. (2018a) with the FFBS algorithm for Gaussian linear state space models. This directly yields a conceptually simple algorithm (a Gibbs sampler) for obtaining samples from the posterior, from which measures of inferential uncertainty can be extracted in a straightforward way. Given the promising results we have obtained, we plan to further pursue the current research direction, exploring for example online Bayesian learning of the volatility. This may require the use of sequential Monte Carlo methods (see, e.g., Cappé et al. (2007)). Another very interesting topic is to explicitly account for the possible presence of jumps in financial time series within the statistical model.
We denote the inverse Gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$ by $\mathrm{IG}(\alpha, \beta)$. Its density is

$p(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{-\alpha - 1} e^{-\beta / x}, \quad x > 0.$

By $N(\mu, \sigma^2)$ we denote a normal distribution with mean $\mu$ and variance $\sigma^2$. The uniform distribution on an interval $[a, b]$ is denoted by $U([a, b])$. For a random variate $X$, the notation $X \sim p$ stands for the fact that $X$ is distributed, or is drawn, according to a density $p$. Conditioning of a random variate $X$ on a random variate $Y$ is denoted by $X \mid Y$. By $\lfloor x \rfloor$ we denote the integer part of a real number $x$. The notation $q \propto p$ for a density $p$ denotes the fact that a positive function $q$ is an unnormalised density corresponding to $p$: $p$ can be recovered from $q$ as $p(x) = q(x) / \int q(y) \, dy$. Finally, we use the shorthand notation $x_{0:n} = (x_0, \dots, x_n)$.

We first describe how to draw the state vector $x_{0:n}$ conditional on all other parameters in the model and the data $y_{1:n}$. Note that for $u_i$ in (5) we have by (4) that $u_i \sim N(0, w_i)$, where

(9)  $w_i = \sum_{k=1}^{N} \theta_k \, \mathrm{Leb}\left([t_{i-1}, t_i] \cap B_k\right).$
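Under the piecewise constant model for the volatility, each $w_i$ reduces to a sum of bin contributions. A hedged Python sketch of one plausible reading of (9), treating the bin edges and the coefficients $\theta_k$ as given (the function name is an illustrative assumption):

```python
import numpy as np

def transition_variances(t, edges, theta):
    """w_i = sum_k theta_k * Leb([t_{i-1}, t_i] ∩ B_k), where bin B_k
    spans [edges[k-1], edges[k])."""
    w = np.zeros(len(t) - 1)
    for i in range(1, len(t)):
        for k in range(len(theta)):
            lo = max(t[i - 1], edges[k])
            hi = min(t[i], edges[k + 1])
            w[i - 1] += theta[k] * max(hi - lo, 0.0)  # overlap length
    return w

w = transition_variances([0.0, 0.25, 0.5, 1.0], [0.0, 0.5, 1.0], [1.0, 4.0])
# w -> [0.25, 0.25, 2.0]
```

When each interval $[t_{i-1}, t_i]$ falls entirely inside one bin $B_k$, this reduces to $w_i = \theta_k \Delta_i$.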
By equation (4.21) in Petris et al. (2009), write (we omit dependence on the parameters in our notation, as they stay fixed in this step)

$p(x_{0:n} \mid y_{1:n}) = p(x_n \mid y_{1:n}) \prod_{i=0}^{n-1} p(x_i \mid x_{i+1}, y_{1:i}),$

where the first factor on the righthand side is the filtering density $p(x_n \mid y_{1:n})$. This distribution is in fact $N(m_n, C_n)$, with the mean and variance obtained from the Kalman recursions

$m_i = m_{i-1} + K_i e_i, \quad C_i = (C_{i-1} + w_i)(1 - K_i), \quad i = 1, \dots, n.$

Here

$K_i = \frac{C_{i-1} + w_i}{C_{i-1} + w_i + \eta_v}$

is the Kalman gain. Furthermore, $e_i = y_i - m_{i-1}$ is the one-step ahead prediction error. See Petris et al. (2009), Section 2.7.2. This constitutes the forward pass of the FFBS algorithm.
Next, in the backward pass, one draws $x_n \sim N(m_n, C_n)$ and then, backwards in time, $x_i$ from the densities $p(x_i \mid x_{i+1}, y_{1:i})$ for $i = n-1, \dots, 0$. It holds that $p(x_i \mid x_{i+1}, y_{1:i}) \propto p(x_{i+1} \mid x_i) \, p(x_i \mid y_{1:i})$, and the latter distribution is $N(h_i, H_i)$, with

$h_i = m_i + \frac{C_i}{C_i + w_{i+1}} (x_{i+1} - m_i), \quad H_i = C_i - \frac{C_i^2}{C_i + w_{i+1}}.$

For every $i$, these expressions depend on the previously generated $x_{i+1}$ and otherwise known quantities only. The resulting sequence $x_0, x_1, \dots, x_n$ is a sample from $p(x_{0:n} \mid y_{1:n})$. See Section 4.4.1 in Petris et al. (2009) for details on the FFBS algorithm.
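Putting the forward and backward passes together, the FFBS update of the latent path in model (5) can be sketched in Python as follows; this is a hedged re-implementation of the recursions above (not the paper's Julia code), with the prior mean and variance of $x_0$ denoted `mu0`, `C0`:

```python
import numpy as np

def ffbs_local_level(y, w, eta_v, mu0, C0, rng):
    """Forward Filtering Backward Simulation for the local level model
      x_i = x_{i-1} + u_i,  u_i ~ N(0, w_i),
      y_i = x_i + v_i,      v_i ~ N(0, eta_v),
    with x_0 ~ N(mu0, C0); returns one draw of (x_0, ..., x_n)."""
    n = len(y)
    m = np.empty(n + 1)
    C = np.empty(n + 1)
    m[0], C[0] = mu0, C0
    # forward pass: Kalman filter
    for i in range(1, n + 1):
        R = C[i - 1] + w[i - 1]   # one-step prediction variance
        K = R / (R + eta_v)       # Kalman gain
        e = y[i - 1] - m[i - 1]   # one-step prediction error
        m[i] = m[i - 1] + K * e
        C[i] = R * (1.0 - K)
    # backward pass: sample x_n, then x_{n-1}, ..., x_0
    x = np.empty(n + 1)
    x[n] = m[n] + np.sqrt(C[n]) * rng.standard_normal()
    for i in range(n - 1, -1, -1):
        R = C[i] + w[i]           # here w[i] plays the role of w_{i+1}
        h = m[i] + C[i] / R * (x[i + 1] - m[i])
        H = C[i] - C[i] ** 2 / R
        x[i] = h + np.sqrt(H) * rng.standard_normal()
    return x

rng = np.random.default_rng(3)
x = ffbs_local_level(np.array([0.1, -0.2, 0.3]),
                     np.array([0.5, 0.5, 0.5]), 1.0, 0.0, 1.0, rng)
```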
Using the likelihood expression from Subsection 2.3 and the fact that a priori $\eta_v \sim \mathrm{IG}(\alpha_v, \beta_v)$, one sees that the full conditional distribution of $\eta_v$ is given by

$\eta_v \mid \cdot \sim \mathrm{IG}\left(\alpha_v + \frac{n}{2}, \; \beta_v + \frac{1}{2} \sum_{i=1}^{n} (y_i - x_i)^2\right).$
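This conjugate update amounts to one draw per Gibbs iteration. A Python sketch, with an $\mathrm{IG}(a, b)$ draw obtained as $b$ divided by a $\mathrm{Gamma}(a, 1)$ draw (the function and parameter names are assumptions):

```python
import numpy as np

def draw_eta_v(y, x, alpha_v, beta_v, rng):
    """Draw the noise variance eta_v from its inverse Gamma full
    conditional IG(alpha_v + n/2, beta_v + 0.5 * sum (y_i - x_i)^2);
    x holds the current states x_1, ..., x_n at the observation times."""
    n = len(y)
    a = alpha_v + 0.5 * n
    b = beta_v + 0.5 * np.sum((y - x) ** 2)
    return b / rng.gamma(a)

rng = np.random.default_rng(5)
eta_v = draw_eta_v(np.array([1.0, 2.0]), np.array([0.9, 2.1]), 2.0, 1.0, rng)
```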
The settings for the Gibbs sampler used in Section 3 are as follows: we used a vague specification for the prior on the initial state $x_0$, as well as vague inverse Gamma priors on $\eta_v$ and $\theta_1$ (with an adjusted specification for the Heston model). The Metropolis-within-Gibbs step to update the hyperparameter $\alpha$ was performed via a Gaussian random walk proposal (with a correction as in Wilkinson (2012)), scaled so as to ensure a moderate acceptance rate. The Gibbs sampler was run with the first third of the samples dropped as burn-in.
This appendix gives several trace plots for data examples we analysed in the main body of the paper.
M. Musiela and M. Rutkowski. Martingale Methods in Financial Modelling. Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2nd edition, 2005. ISBN 3-540-20966-2.