
Multiresolution Signal Processing of Financial Market Objects

10/28/2022
by   Ioana Boier, et al.
Nvidia

Financial markets are among the most complex entities in our environment, yet mainstream quantitative models operate at a predetermined scale, rely on linear correlation measures, and struggle to recognize non-linear or causal structures. In this paper, we combine neural networks, known to capture non-linear associations, with a multiscale decomposition approach to facilitate a better understanding of financial market data substructures. Quantization keeps our decompositions calibrated to market at every scale. We illustrate our approach in the context of a wide spectrum of applications.



1 Introduction

Financial markets are prototypical examples of systems exhibiting multiple scales of behavior. Signal processing (SP) techniques have been adapted from engineering to finance in search of excess returns. Their main challenge lies in dealing with the high dimensionality of the data [1]. Machine learning (ML) applied to finance uses pattern learning paradigms closely connected to traditional statistical and numerical approaches [1]. ML has its own challenges: financial data is highly non-stationary, noisy, and in most cases, insufficient given the high dimensionality of the space to which it belongs.

Figure 1: Multiresolution decomposition of a US swap curve.

One way to reduce dimensionality is to consider the intrinsic structure present in the data. For instance, bond yields, swap rates, inflation, and foreign exchange (FX) rates can be thought of as having one-dimensional term structures (e.g., zero-coupon, spot, forward, or basis curves). Similarly, volatilities implied by option prices are organized as (hyper-)surfaces in two or more dimensions. These structures carry information that can help reduce complexity. A typical spot swap curve could have as many as fifty tenors; studying the corresponding time series means working in a 50-dimensional space, and a volatility grid puts us in a space of even higher dimension. Knowing that these data lie, in fact, on manifolds of much smaller dimension $d$ greatly reduces the burden of the learning task. We refer to these market data structures as market objects and focus on learning market behaviors from the shape and dynamics of these representations.

We propose a multiresolution decomposition of market objects generated with a novel architecture (FinQ-VAE) consisting of a pipeline of variational autoencoders (VAEs) with latent space quantizations guided by financially meaningful constraints, e.g., market liquidity or trading views. Figure 1 shows a learned multiresolution decomposition of a US swap curve (a March snapshot).

2 Related Work

Financial data, i.e., time series of prices, are typically non-stationary [5]. Hence, most SP and ML approaches operate on returns, i.e., changes in price from one time stamp to the next. A time series of returns is typically more “stable” and likely to pass stationarity tests [20]. However, financial data also suffers from low signal-to-noise ratios; hence, any signal found is wiped out by differencing. Multiresolution methods can be traced all the way back to Fourier transforms and wavelets [8, 6, 14]. They balance signal preservation with stationarity: a base shape acts as a noise smoother, retaining an average signal level, and a hierarchy of residuals adds refinements to the base shape while exhibiting desirable statistical properties like stationarity. In finance, multiscale approaches have primarily focused on scaling along the time dimension (minute-by-minute, daily, weekly, etc.).

We propose a novel way of learning multiresolution decompositions of financially meaningful sub-structures in the data. The inspiration comes from multiresolution geometry processing techniques for fairing and interactive editing [4]. The analogy with finance lies not only in the need to disentangle or denoise a fair shape from a noisy representation, but also in the need for scenario generation through controlled movements of key points. In modeling physical interactions, the type of material dictates the influence of a single edit on the surrounding shape. In modeling financial scenarios, the influence of certain points on the dynamics of their neighbors is subject to financial constraints. Points on a yield curve or on a volatility surface don’t move in isolation: they are correlated with the points around them. The region of influence of such a move depends on the use case. Some moves have far-reaching implications; others are more localized. The nature of the deformation is also constrained by laws of arbitrage and financial plausibility.

Autoencoders, with their variational and conditional flavors [16, 11, 17], have been adopted in finance mostly for single-resolution latent learning and its applications [12, 2, 19, 18, 13, 7]. We extend these ideas in two significant ways: (a) we define a new architecture of cascading VAEs to learn hierarchical decompositions of market objects and illustrate how to leverage them in a variety of applications, and (b) we introduce a quantization step [15] that takes financially meaningful constraints into account to ensure calibration to market at every scale.

3 FinQ-VAE

3.1 Modeling Background

VAEs facilitate the learning of probabilistic generative models from a set of observations in $\mathbb{R}^D$ presumed to lie on a manifold of dimension $d \ll D$. The loss function to be optimized during training is, according to [10]:

$$\mathcal{L}(\theta, \phi; x, z, \beta) = \mathbb{E}_{q_{\phi}(z|x)}\big[\log p_{\theta}(x|z)\big] - \beta \, D_{KL}\big(q_{\phi}(z|x) \,\|\, p(z)\big)$$

where $q_{\phi}(z|x)$ is the encoder distribution, $p_{\theta}(x|z)$ is the decoder likelihood, and $\beta$ weighs the KL divergence term.
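A minimal PyTorch sketch of this objective for curve-shaped inputs follows; the layer sizes, the 50-tenor input, and the Gaussian reconstruction term are illustrative assumptions rather than the paper's actual configuration.

    import torch
    import torch.nn as nn

    class CurveVAE(nn.Module):
        """Minimal VAE over fixed-length curves (sizes are assumptions)."""
        def __init__(self, n_tenors=50, latent_dim=2, hidden=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_tenors, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, latent_dim)
            self.logvar = nn.Linear(hidden, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_tenors))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.dec(z), mu, logvar

    def beta_vae_loss(x, x_hat, mu, logvar, beta=1.0):
        """Reconstruction + beta * KL(q(z|x) || N(0, I)), as in [10]."""
        recon = ((x - x_hat) ** 2).sum(dim=-1).mean()  # Gaussian likelihood up to a constant
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return recon + beta * kl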

With its roots in signal processing theory, quantization helps compress continuous sets of values to a countable or even finite number of options beyond which the variability in values makes little or no contribution to modeling. For example, images can be viewed as containing discrete objects along with a discrete set of qualifiers such as color, shape, texture. Vector-quantized VAEs (VQ-VAEs) have been developed to support discrete learning [15, 21]. Market objects also contain redundant information. Moreover, like higher-level objects in images, they are, by definition, discrete collections of sub-objects with qualifiers such as steepness, curvature, skews, smiles, and liquid and illiquid regions. Therefore, it makes sense to consider the benefits of quantizing their latent spaces. By default, however, VAE outputs are not faithful to specific data. In finance, it is important to calibrate models to market-observed data or to enforce desired constraints. Instead of using a full-fledged VQ-VAE, we employ a simpler, non-learned quantization process that snaps encoder outputs to optimal latent locations via optimization with constraints. These dynamically quantized points are passed to the decoder to produce calibrated reconstructions, which are used to compute residuals for the next layer. Consequently, our multiresolution decompositions are calibrated to market at each scale.

3.2 Problem Statement

Given a market object $M$, our goal is to learn a multiresolution decomposition such that:

  1. It consists of a base shape object $B$ and a number $L$ of residual layer objects $R_1, \dots, R_L$ such that $M = B + \sum_{l=1}^{L} R_l$.

  2. Anchor points $a_i^l$ are selected on each of the base and residual layers, where $i$ is the index of the anchor and $l$ is the layer index. The selection criteria reflect financial considerations: e.g., some points are more liquid or tradeable than others.

  3. Each intermediate reconstruction $M_l = B + \sum_{j \le l} R_j$ with $l < L$ layers is respectively calibrated to the anchor points on $M$. The residuals in the last layer are computed to recover the input market object exactly: $M_L = M$.
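As a sanity check on items 1 and 3, here is a small NumPy sketch of the decomposition invariant: intermediate reconstructions add residual layers to the base, and the last residual layer is defined so that the input is recovered exactly. The polynomial fits below are mere stand-ins for what trained, quantized layers would output.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(50)
    curve = np.cumsum(rng.normal(0.02, 0.05, size=50))      # stand-in for a 50-tenor swap curve

    # Stand-ins for calibrated layer outputs (a real FinQ pipeline would
    # produce these with quantized VAE decoders).
    base = np.full_like(curve, curve.mean())                # coarse base shape B
    r1 = np.poly1d(np.polyfit(x, curve - base, 3))(x)       # coarse residual layer
    r2 = np.poly1d(np.polyfit(x, curve - base - r1, 9))(x)  # finer residual layer
    r_last = curve - (base + r1 + r2)                       # final residuals: exact recovery

    m1 = base + r1                                          # intermediate reconstruction M_1
    assert np.allclose(base + r1 + r2 + r_last, curve)      # M = B + sum of residual layers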

Figure 2 shows examples of user-defined anchors distributed according to market liquidity. We note that this technique applies to market objects of different dimensions with scattered constraints that need not be regularly spaced.

Figure 2: User-specified anchor points for (a) a swap curve, (b) an FX forward curve, (c) a swaption volatility surface.

3.3 The FinQ-VAE Architecture

Our FinQ-VAE architecture is shown in Figure 3. Without loss of generality, we illustrate it in the context of swap curves, but the same concept applies to other types of market objects. The training inputs are market objects and anchors (Figure 1 (a)). Each layer is a VAE neural net consisting of an encoder, a decoder, and a latent space with a quantization step that maps encoded latent vectors into constraint-optimized vectors, which are then passed to the decoder for reconstruction.

The base layer learns a coarse general shape (Figure 1 (b)). In this example, the base-layer encoder takes as input the full swap curve, specified by a set of swap tenors, together with a set of base anchor points at selected tenors (Figure 2). The output of the encoder is a latent vector in a latent space of fixed, low dimension. Without quantization, the decoder maps the latent vector into a base output curve, i.e., a reconstruction representing the “learned” global shape.

This representation can be likened to a PCA reconstruction using the same number of principal components. Unlike a PCA output, the VAE-generated curves are better fitted to the anchors by design. We have modified the VAE loss function to include an anchor calibration term:

$$\mathcal{L}' = \mathcal{L} + \lambda \sum_{i} \big( \hat{M}(a_i) - M(a_i) \big)^2$$

where $\lambda$ weighs the calibration term and the anchors $a_i$ serve as constraints.
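In code, this modified objective might look as follows, reusing beta_vae_loss from the Section 3.1 sketch; the quadratic penalty form and the weight lam are assumptions consistent with the description above, not the paper's exact term.

    import torch

    def anchored_vae_loss(x, x_hat, mu, logvar, anchor_idx, beta=1.0, lam=10.0):
        """beta-VAE objective plus an anchor-calibration penalty (assumed form).

        anchor_idx lists the tenor indices of this layer's anchor points;
        lam is a hypothetical weight on the calibration term.
        """
        loss = beta_vae_loss(x, x_hat, mu, logvar, beta)   # from the Section 3.1 sketch
        anchor_err = ((x[..., anchor_idx] - x_hat[..., anchor_idx]) ** 2).sum(dim=-1).mean()
        return loss + lam * anchor_err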

While the reconstructed base curve fits the overall shape, we would like the anchor points to be fitted even better. We quantize the latent vector $z$ to $z^*$ via optimization:

$$z_l^* = \arg\min_z \sum_i \big( D_l(z)(a_i^l) - M_l(a_i^l) \big)^2 \qquad (1)$$

where $l$ is the layer index and $D_l$ denotes the layer's decoder. The base curve reconstructed from the quantized vector is shown in Figure 1 (b). Figure 4 shows curves with the same base shape before and after quantization. Given a set of artificially generated input curves passing through the same anchors in (a), the corresponding embeddings of their base shapes in the latent space are shown in (b): blue points are the embeddings before quantization; orange points (overlapping) are the embeddings after quantization. The base curve reconstructions without quantization are not well calibrated to the anchors, as shown in (c). Reconstructions with quantization produce a well-fitted base shape in (d).
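One way to implement the quantization of Eq. (1) with an off-the-shelf optimizer; decode, target, and anchor_idx stand for the assumed interfaces of a trained layer. A gradient-free method is adequate here because the latent space is low-dimensional.

    import numpy as np
    from scipy.optimize import minimize

    def quantize_latent(z0, decode, target, anchor_idx):
        """Snap an encoded latent z0 to z* minimizing squared anchor error (Eq. 1).

        decode     : maps a latent vector to a reconstructed curve (NumPy in/out)
        target     : the market object the current layer should calibrate to
        anchor_idx : indices of this layer's anchor points
        """
        def objective(z):
            curve = decode(z)
            return np.sum((curve[anchor_idx] - target[anchor_idx]) ** 2)

        return minimize(objective, z0, method="Nelder-Mead").x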

The first residual layer takes as input the difference object $R_1 = M - \hat{B}$, and another VAE is trained to learn the residuals. The resulting quantized residual $\hat{R}_1$ is applied to the base object to produce our market object reconstruction $\hat{M}_1 = \hat{B} + \hat{R}_1$; see Figure 1 (c).

The second residual layer takes as input the difference object $R_2 = M - \hat{M}_1$, and the process is repeated. Figures 1 (d)-(e) show the reconstructions on the subsequent layers. The final residuals are computed to recover the input exactly (Figure 1 (f)).
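Putting the pieces together, the cascade might be organized as sketched below; train_vae, encode, and decode are hypothetical helpers, and quantize_latent is the Eq. (1) sketch above.

    import numpy as np

    def finq_decompose(curves, anchor_sets, n_layers=3):
        """Sketch of the FinQ cascade: each layer fits the previous layer's residuals.

        curves      : (n_samples, n_tenors) array of training market objects
        anchor_sets : anchor-index arrays per layer, ordered coarse to fine
        """
        layers, target = [], curves.copy()
        for l in range(n_layers):
            vae = train_vae(target)                    # hypothetical training helper
            recon = np.array([
                decode(vae, quantize_latent(encode(vae, x),
                                            lambda z: decode(vae, z),
                                            x, anchor_sets[l]))
                for x in target
            ])
            layers.append((vae, recon))                # calibrated reconstructions per layer
            target = target - recon                    # residuals feed the next layer
        return layers, target                          # `target` holds the exact final residuals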

Figure 3: FinQ-VAE with financially quantized latent spaces.
Figure 4: Reconstructions w/ and w/o quantization.

4 Applications and Results

Objects reconstructed with FinQ are not only plausible, as in [9], but also calibrated to market. The choice of anchors at different scales is fully configurable. We trained two FinQ models, one on daily USD spot swap curves and one on bond curves [3], with training running from a January start through a year-end cutoff and testing on a subsequent January-to-July window. The model appears robust to high-stress market conditions like COVID-19 and the latest yield curve inversion.

Hierarchical Factor Analysis

In contrast to PCA, which typically operates on returns and ignores market levels, our multiresolution decomposition learns a global shape level on its base layer and residuals at various scales. The variational feature of the model ensures smooth navigation through the latent space. The quantization feature ensures that outputs are calibrated to user-specified important points. The latter is a powerful property: traditionally, VAEs have been used in finance only to generate data “similar” to some learned distribution, which could be rather different from market-observed values. Figure 5 illustrates the hierarchy of latent spaces in our learned model.

Figure 5: Hierarchy of latent spaces (color-coded by year): (a) base shape, (b)-(d) residual layers at successive scales; panel (c) contains an outlier.

Scenario Generation

Standard practice in financial risk management is to shock market objects either in absolute or percentage terms. There are two main considerations: the shape of a scenario and its overall size. Traditional techniques to generate plausible scenarios revolve around historical deformations and/or artificial stresses. Unfortunately, history doesn’t often repeat itself, and the generation of artificial scenarios is rather empirical: one may have a view of the movement in certain regions of the market, but the dependence of the rest of the market on movements in those regions may be difficult to ascertain. PCA-based techniques are appealing because of their simplicity; however, proper calibration of scenario size is challenging, and the global dependence on non-intuitively weighted linear combinations of points is difficult to interpret.

Our multiresolution framework splits responsibilities: base shapes account for market levels and are under the control of a few key drivers that are easier for human experts to intuit. Their views define anchor points and desired stresses. Dependencies that are more difficult to synthesize through human experience are generated algorithmically (Figure 6); a sketch of this conditioning mechanism follows.
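A hedged sketch of how such a conditional scenario could be produced with the pieces defined earlier: move one anchor, then re-run the Eq. (1) quantization so the whole curve responds along the learned manifold. encode_fn and decode_fn are assumed wrappers around a trained layer.

    def anchor_scenario(curve, encode_fn, decode_fn, anchor_idx, move_idx, shock_bp):
        """Generate a full-curve scenario from a single anchor move (illustrative).

        Shifts one anchor by shock_bp basis points, then re-quantizes so the
        decoded curve honors the moved anchor while its neighbors respond
        according to the learned correlations.
        """
        shocked = curve.copy()
        shocked[move_idx] += shock_bp / 1e4                # rates quoted as decimals
        z_star = quantize_latent(encode_fn(curve), decode_fn, shocked, anchor_idx)
        return decode_fn(z_star)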

Figure 6: Full curve scenarios conditional on user moves; panels (a)-(c) show individual anchor points shifted up or down by a given number of basis points. Coarse-to-fine anchor points have global-to-local impacts.

Synthetic Data Generation

Synthetic market objects are composable in hierarchical fashion. This can be done artificially by sampling the latent spaces of the VAEs from coarse to fine, or by using historical or trader-specified moves of anchor points for the lower layers in the hierarchy and randomly sampling the higher ones. Figure 7 illustrates synthetic samples on each level.
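A minimal sketch of the coarse-to-fine sampling path, assuming per-layer decode functions and standard-normal latent priors; real usage could instead pin the lower layers to historical or trader-specified anchor moves and sample only the higher ones.

    import numpy as np

    def sample_synthetic(layer_decoders, latent_dims, n=100, seed=0):
        """Compose synthetic market objects by sampling each layer's prior.

        layer_decoders : decode functions, ordered coarse to fine
        latent_dims    : matching latent dimensions per layer
        """
        rng = np.random.default_rng(seed)
        samples = 0.0
        for decode, d in zip(layer_decoders, latent_dims):
            z = rng.standard_normal((n, d))                      # z ~ N(0, I) at this scale
            samples = samples + np.stack([decode(zi) for zi in z])
        return samples                                           # (n, n_tenors) synthetic curves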

Figure 7: Synthetic samples generated at various scales: (a) base curves, (b)-(d) residuals at successive layers.

Nowcasting

Nowcasting is the “forecasting” of the present or the near future in the absence of complete information about the current state of the market. It has two main components: an understanding of what is already priced in the market and a view on future conditions. We allow for such views to be incorporated. For example, option portfolios require full implied volatility surfaces to price, yet option prices may not be available for all strike/expiry pairs. The most liquid points can be used as anchors, while missing values can be sampled from the latent distributions. Unlike conditional models, our approach does not require training with conditional labels.
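One simple way to realize this, sketched under the same assumed interfaces: draw latent candidates, keep the one whose decoded object best matches the observed liquid points, and read the missing values off the decoded object. The Eq. (1) quantization step could refine the best draw further.

    import numpy as np

    def nowcast(partial_obs, obs_idx, decode, latent_dim, n_draws=256, seed=0):
        """Nowcast a full market object from partially observed points (sketch).

        partial_obs : full-length vector whose entries are trusted only at obs_idx
        obs_idx     : indices of the observed (liquid) points
        """
        rng = np.random.default_rng(seed)
        best, best_err = None, np.inf
        for _ in range(n_draws):
            z = rng.standard_normal(latent_dim)        # sample the latent prior
            full = decode(z)
            err = np.sum((full[obs_idx] - partial_obs[obs_idx]) ** 2)
            if err < best_err:
                best, best_err = full, err
        return best                                    # missing values read off `best`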

Residuals as Signals

Residual time series could be used as signals for systematic strategies. Anchor points should have residuals that are close to zero on the layers where they serve as constraints. If this is not the case, their deviation from zero may be used as a signal that the shapes being reconstructed are difficult to fit to the constraints. Such difficulties are harbingers of unusual market conditions.

Figure 8 depicts residuals of the Y swap rate with respect to the FinQ-generated curves (the vertical line marks the boundary between train and test data). The residuals are small with respect to the finest reconstruction, since this tenor is an anchor on that layer; they are larger, but mean-reverting, with respect to the coarser reconstructions.
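A minimal sketch of turning such a residual series into a tradable signal via a rolling z-score; the 60-day lookback is an arbitrary choice, not from the paper.

    import numpy as np

    def residual_signal(residual_ts, lookback=60):
        """Rolling z-score of an anchor's residual time series (sketch).

        Persistent deviations from zero flag shapes that are hard to
        calibrate, i.e., potentially unusual market conditions.
        """
        out = np.full(len(residual_ts), np.nan)
        for t in range(lookback, len(residual_ts)):
            window = residual_ts[t - lookback:t]
            out[t] = (residual_ts[t] - window.mean()) / (window.std() + 1e-12)
        return out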

Figure 8: Y residuals vs. reconstructions.

Relative Value Analysis

We applied FinQ to learning US Treasury bond curves. While bonds and interest rate swaps capture similar macroeconomic developments, swap curves tend to be smoother. Hence, swap curve reconstructions could be used to identify viable swap spread trades. Figure 9 indicates that buying Y bonds and selling same-maturity swaps might be a good strategy.

Figure 9: Relative value analysis of bond yields vs. swap rates.

Outlier Detection

The variational aspect of autoencoders ensures relatively compact clusters of latent encodings. Samples that stand out from their clusters are likely to be outliers. Figure 5 (c) singles out such a point corresponding to May 6, 2010; upon closer inspection of the data, we notice that on this date the Y value appears stale, causing a non-smooth curve compared to the learned shapes (the “flash crash” occurred intraday, which may have contributed to noise in the end-of-day data).
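A simple latent-space outlier score consistent with this observation, using the Mahalanobis distance of each day's quantized embedding from its cluster; the threshold of 3 is a conventional choice, not from the paper.

    import numpy as np

    def latent_outliers(latents, threshold=3.0):
        """Flag latent embeddings far from their cluster (sketch).

        latents : (n_days, d) array of quantized latent vectors for one layer.
        Returns a boolean mask of candidate outlier days.
        """
        mu = latents.mean(axis=0)
        cov = np.cov(latents, rowvar=False) + 1e-9 * np.eye(latents.shape[1])
        centered = latents - mu
        dist = np.sqrt(np.einsum("nd,de,ne->n", centered, np.linalg.inv(cov), centered))
        return dist > threshold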

5 Conclusions

FinQ-VAE is a novel architecture for multiresolution signal processing of market objects. Market-calibrated representations are learned using a layered approach. User-specified constraints can be incorporated at different scales to generate quantized embeddings that lead to calibrated reconstructions of market objects. To our knowledge, this is the first time multiresolution analysis is combined with quantized VAEs and applied to financial modeling in a way that accommodates constraints such as trading views, liquidity, etc. We showed that the resulting decompositions could serve in a variety of use cases. Our technique applies across asset classes and different dimensionality structures.

References

  • [1] A. N. Akansu, S. R. Kulkarni, and D. M. Malioutov (2016) Financial signal processing and machine learning. John Wiley & Sons. Cited by: §1.
  • [2] M. Bergeron, N. Fung, J. Hull, Z. Poulos, and A. Veneris (2022) Variational autoencoders: a hands-off approach to volatility. J. of Financial Data Science 4 (2), pp. 125–138. Cited by: §2.
  • [3] Board of Governors of the Federal Reserve System (US) Fitted yield data from FRED, Federal Reserve Bank of St. Louis. Note: https://fred.stlouisfed.org/series/THREEFY5 Cited by: §4.
  • [4] I. Boier-Martin, R. Ronfard, and F. Bernardini (2005) Detail-preserving variational surface design with multiresolution constraints. Cited by: §2.
  • [5] R. Cont (2001) Empirical properties of asset returns: stylized facts and statistical issues. Quantitative Finance 1 (2), pp. 223–236. Cited by: §2.
  • [6] I. Daubechies (1988) Time-frequency localization operators: a geometric phase space approach. IEEE Trans. on Infor. Theory 34 (4), pp. 605–612. Cited by: §2.
  • [7] S. Gu, B. Kelly, and D. Xiu (2021) Autoencoder asset pricing models. Journal of Econometrics 222 (1), pp. 429–450. Cited by: §2.
  • [8] A. Haar (1909) Zur Theorie der orthogonalen Funktionensysteme. Georg-August-Universität, Göttingen. Cited by: §2.
  • [9] P. Henry-Labordere (2019) Generative models for financial data. Available at SSRN 3408007. Cited by: §4.
  • [10] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner (2017) Beta-VAE: learning basic visual concepts with a constrained variational framework. In Intl. Conf. on Learning Repr., External Links: Link Cited by: §3.1.
  • [11] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §2.
  • [12] A. Kondratyev (2018) Learning curve dynamics with artificial neural networks. SSRN 3041232. Cited by: §2.
  • [13] B. Lim, S. Zohren, and S. Roberts (2020) Detecting changes in asset co-movement using the autoencoder reconstruction ratio. arXiv preprint arXiv:2002.02008. Cited by: §2.
  • [14] S. G. Mallat (1989) Multiresolution approximations and wavelet orthonormal bases of $L^2(\mathbb{R})$. Transactions of the American Mathematical Society 315 (1), pp. 69–87. Cited by: §2.
  • [15] A. Razavi, A. Van den Oord, and O. Vinyals (2019) Generating diverse high-fidelity images with vq-vae-2. Advances in neural information processing systems 32. Cited by: §2, §3.1.
  • [16] D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1985) Learning internal representations by error propagation. Technical report, University of California, San Diego, Institute for Cognitive Science. Cited by: §2.
  • [17] K. Sohn, H. Lee, and X. Yan (2015) Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28. Cited by: §2.
  • [18] A. Sokol (2022) Autoencoder market models for interest rates. Technical report CompatibL Workshop. Cited by: §2.
  • [19] Y. Suimon, H. Sakaji, K. Izumi, and H. Matsushima (2020) Autoencoder-based three-factor model for the yield curve of japanese government bonds and a trading strategy. Journal of Risk and Financial Management 13 (4), pp. 82. Cited by: §2.
  • [20] R. S. Tsay (2005) Analysis of financial time series. John Wiley & Sons. Cited by: §2.
  • [21] A. Van Den Oord, O. Vinyals, et al. (2017) Neural discrete representation learning. Advances in neural information processing systems 30. Cited by: §3.1.