Predicting Daily Trading Volume via Various Hidden States

by Shaojun Ma, et al.
Georgia Institute of Technology

Predicting intraday trading volume plays an important role in trading alpha research. Existing methods, such as rolling means (RM) and a two-state Kalman filtering method, have been proposed for this task. We extend the two states to a variable number of states within the Kalman Filter framework to improve prediction accuracy. Specifically, for each stock we use cross validation to determine the best number of states by minimizing the mean squared error of the trading volume. We demonstrate the effectiveness of our method through a series of comparison experiments and numerical analysis.





1 Introduction

Trading volume is the total quantity of shares or contracts traded for specified securities such as stocks, bonds, options contracts, futures contracts and all types of commodities. It can be measured for any type of security traded during a trading day or a specified time period. In our case, daily trading volume is measured on stocks. Trading volume is an essential component in trading alpha research since it tells investors about the market's activity and liquidity. Over the past decade, along with the improved accessibility of ultra-high-frequency financial data, evolving data-driven computational technologies have attracted much attention in the financial industry. Meanwhile, the development of algorithmic and electronic trading has highlighted the importance of trading volume, since many trading models require intraday volume forecasts as a key input. As a result, there is growing interest in developing models that precisely predict intraday trading volume.

Researchers aim to propose various strategies to trade efficiently in electronic financial markets while minimizing transaction costs and market impact. The study of trading volume generally falls into two lines of work toward these goals. One line focuses on giving the optimal trading sequence and amounts, while the other investigates the relationships between trading volume and other financial variables or market activities such as bid-ask spread, return volatility and liquidity. Thus a precise model that provides insights into trading volume can serve as a basis for both lines of work.

There are several existing methods for estimating future trading volume. As a fundamental approach, the rolling means (RM) model predicts intraday volume during a time interval by averaging the volume traded within the same interval over the past days. The concept of the RM model is straightforward, but it fails to adequately capture intraday regularities. One classical publicly available intraday volume prediction model decomposes trading volume into three components, namely, a daily average component, an intraday periodic component, and an intraday dynamic component, then adopts the Component Multiplicative Error Model (CMEM) to estimate the three terms

(cmem). Though this model outperforms RM, limitations such as high sensitivity to noise and initial parameters complicate its practical implementation. kffortrading propose a new model that works with the logarithm of intraday volume to simplify the multiplicative model into an additive one. The model is constructed within the scope of a two-state (intraday and overday features) Kalman Filter (kforiginal) framework, and the authors adopt the expectation-maximization (EM) algorithm for parameter estimation. Though the model provides a novel view of intraday and overday factors, its flexibility is limited since it fixes the number of hidden states at two for all stocks, which may cause information loss. Moreover, our experiments show that the dominant term in the model is actually the daily seasonality, and the parameter learning process is not robust.

As an extension of the two-state Kalman Filter, our new model offers higher prediction precision, better stability and a simple structure. In general our contributions are:

  • Firstly, we develop a new method that combines cubic smoothing splines with cross validation to determine the best degrees of freedom (DOFs) for different stocks.

  • Secondly, by choosing suitable DOFs, we provide a smooth prediction of traded volume.

  • Finally, we demonstrate that our model outperforms RM and the two-state Kalman Filter through experiments on 978 stocks.

2 Methodologies

We denote the $i$-th observation on day $d$ as $y_{d,i}$, with local indices $i = 1, \dots, I$ and $d = 1, \dots, D$; we set the global index $t = (d-1)I + i$, thus $y_t = y_{d,i}$.

2.1 Two-state Kalman Filter Model

Before introducing our method, we review the two-state Kalman Filter model. Within the model, the volume is defined as the number of shares traded normalized by daily outstanding shares.

Figure 1: A graphical representation of the Kalman Filter model: each vertical slice represents a time instance; the top node in each slice is the hidden state variable corresponding to the underlying volume components; and the bottom node in each slice is the observed volume in the market

This ratio is one way of normalization (kffortrading), since the normalization helps correct the low-frequency variation caused by changes of traded volume. Log-volume refers to the natural logarithm of traded volume. The researchers train their model on log-volume and evaluate the predictive performance on volume. The reason for using log-volume is that it converts the multiplicative terms (cmem) into additive relationships and makes the model naturally fit the Kalman Filter framework; moreover, the logarithmic transformation helps reduce the skewness of volume data (behaviour). kffortrading's model is built within the Kalman Filter framework as shown in Figure 1. $x_t$ represents the hidden state that is not observable, and $y_t$ represents the logarithm of the observed traded volume. The mathematical updates are:


$$x_{t+1} = A x_t + w_t, \qquad y_t = C x_t + \phi_t + v_t, \qquad (2)$$

for $t = 1, \dots, T$, where

$x_t = [\eta_t, \mu_t]^\top$ is the hidden state vector containing two parameters, namely, the daily average part $\eta_t$ and the intraday dynamic part $\mu_t$;

$A$ is the state transition matrix; the observation matrix is fixed as $C = [1, 1]$; $w_t \sim \mathcal{N}(0, Q)$ and $v_t \sim \mathcal{N}(0, r)$ are the state and observation noise terms; $\phi_t$ is treated as the seasonality; the initial state is $x_1 \sim \mathcal{N}(\pi_1, \Sigma_1)$. The unknown system parameters are estimated by closed-form equations derived from the expectation-maximization (EM) algorithm. For more details of the two-state model, we refer readers to the original paper.
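To make the two-state generative model above concrete, the following minimal sketch simulates log-volume from a two-state linear-Gaussian model with an additive seasonal term. The AR coefficients, noise scales, and the toy seasonality curve are hypothetical illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
BINS, DAYS = 78, 10              # 78 five-minute bins per trading day

# Hypothetical parameters (illustrative only, not fitted values).
A = np.diag([0.99, 0.5])         # transition for [daily average, intraday dynamic]
C = np.array([1.0, 1.0])         # observation matrix fixed as [1, 1]
# Toy U-shaped seasonality: higher log-volume near the open and close.
phi = 0.5 * np.cos(np.linspace(0.0, 2.0 * np.pi, BINS))

x = np.zeros(2)                  # hidden state x_t = [eta_t, mu_t]
log_volume = []
for t in range(BINS * DAYS):
    x = A @ x + rng.normal(scale=[0.01, 0.10])            # x_{t+1} = A x_t + w_t
    y = C @ x + phi[t % BINS] + rng.normal(scale=0.05)    # y_t = C x_t + phi_t + v_t
    log_volume.append(y)
log_volume = np.array(log_volume)
```

The dominance of the seasonal term in this toy setup mirrors the observation above that the seasonality drives most of the prediction in the two-state model.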

2.2 Our Model: Various-states Kalman Filter

In the two-state model mentioned above, the DOF of the hidden state variable is two, since it has two factors: intraday and overday. However, there is no systematic way to determine the correct DOF of the hidden state variable, especially across different stocks. Our concern is how to find a better DOF for each stock and predict more precisely. Our new method thus still falls within the Kalman Filter framework shown in Figure 1; however, we change Equation 2 to the most common Kalman Filter update equations:


$$x_{d+1} = A x_d + w_d, \qquad y_d = C x_d + v_d. \qquad (3)$$

The differences between Equation 2 and Equation 3 are as follows: $x_d$ represents the hidden state, whose dimension $n$, as the DOF of the hidden state variable, will be determined in Section 2.2.1; the state transition matrix $A$ is an $n \times n$ matrix, while the observation matrix $C$ is a $78 \times n$ matrix; the transition covariance matrix $Q$ of $w_d \sim \mathcal{N}(0, Q)$ is an $n \times n$ matrix; the observation covariance matrix $R$ of $v_d \sim \mathcal{N}(0, R)$ is a $78 \times 78$ matrix; the initial state is $x_1 \sim \mathcal{N}(\pi_1, \Sigma_1)$, and the observation $y_d$ is a 78-dimensional vector. Notice that $A$, $C$, $Q$ and $R$ are uniquely determined by the training data. The reason that we use $d$ as the subscript is that every time we predict one day's traded volume.

Within the framework of our model, the data we use is historical daily trading volume. We define the observation as the product of traded volume and the Volume Weighted Average Price (VWAP):

$$v_{d,i} = \text{shares}_{d,i} \times \text{VWAP}_{d,i}. \qquad (4)$$

Furthermore, we model and evaluate performance with the percentage ratio:

$$p_{d,i} = \frac{v_{d,i}}{\sum_{j=1}^{I} v_{d,j}} \times 100\%. \qquad (5)$$
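As an illustration of Equations 4 and 5, the following minimal sketch computes the dollar-volume observation and the intraday percentage for one toy day. The interpretation (observation = shares traded times VWAP; percentage = each bin's share of the daily total) is our reading of the text, and the numbers are made up.

```python
import numpy as np

def dollar_volume(shares, vwap):
    """Eq. (4): observation = shares traded x VWAP (dollar volume per bin)."""
    return np.asarray(shares, dtype=float) * np.asarray(vwap, dtype=float)

def intraday_percentage(day_dollar_volume):
    """Eq. (5): each bin's share of the day's total dollar volume, in percent."""
    v = np.asarray(day_dollar_volume, dtype=float)
    return 100.0 * v / v.sum()

# One toy trading day with four bins (real days have 78).
shares = [1000, 2000, 1500, 500]
vwap = [10.0, 10.0, 10.0, 10.0]
pct = intraday_percentage(dollar_volume(shares, vwap))
```

By construction the percentages of a day sum to 100, which is what makes them comparable across stocks with very different absolute volumes.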
1: Training data, number of shuffles $S$ and number of cross-validation folds $K$
2: for DOF = 1, 2, …, I do
3:     for i = 1, 2, …, S do
4:         shuffle the data
5:         for j = 1, 2, …, K do
6:             split into training set and test set
7:             compute cubic smoothing spline on training set
8:             compute and store MSE on test set
9:             compute and store standard error (SE) of each MSE
10:        end for
11:        find best DOF by the "one standard error rule"
12:    end for
13: end for
Algorithm 1 Find DOF by Cross Validation
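The selection loop of Algorithm 1 can be sketched as follows. This is a simplified stand-in: it uses a polynomial fit of degree DOF−1 in place of a cubic smoothing spline (the real pipeline fits splines, e.g. via R's smooth.spline), and synthetic data in place of intraday percentages.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 78)                     # 78 intraday bins
y = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=t.size)

def cv_mse(t, y, dof, n_folds=5):
    """K-fold CV MSE for one candidate DOF (polynomial fit as a spline stand-in)."""
    idx = rng.permutation(t.size)                 # shuffle, then split into folds
    folds = np.array_split(idx, n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        coef = np.polyfit(t[train], y[train], deg=dof - 1)
        pred = np.polyval(coef, t[test])
        errs.append(np.mean((y[test] - pred) ** 2))
    return np.mean(errs), np.std(errs) / np.sqrt(n_folds)

scores = {dof: cv_mse(t, y, dof) for dof in range(2, 10)}
best = min(scores, key=lambda d: scores[d][0])
# One-standard-error rule: smallest DOF whose MSE is within one SE of the best.
mse_best, se_best = scores[best]
one_se_dof = min(d for d, (m, _) in scores.items() if m <= mse_best + se_best)
```

The one-standard-error rule deliberately prefers the simplest model that is statistically indistinguishable from the minimum-MSE one, which keeps the state dimension small.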

2.2.1 DOF of State Space

In our assumption, different stocks have distinct numbers of parameters in the hidden state $x$. For a specific stock, we call the number of elements in $x$ its degrees of freedom (DOF); the DOF may thus differ from stock to stock. The key concern is how to determine the DOF for each stock. By experiment, we find that seasonality dominates the prediction of traded volume in the two-state model, and in both the two-state model and our model each day for each stock has $I = 78$ observations, so the seasonality term has 78 parameters. We seek to absorb the seasonality into the hidden states and drop the separate seasonality term to avoid a dominant term. To avoid overfitting and reduce computation, we use a cubic spline to fit the observations smoothly within each day. Given a series of observations $(t_1, y_1), \dots, (t_n, y_n)$, the cubic spline approach finds the function $g$ that minimizes:

$$\sum_{i=1}^{n} \big(y_i - g(t_i)\big)^2 + \lambda \int g''(t)^2 \, dt, \qquad (6)$$
where, in our case, $\lambda \ge 0$ is a nonnegative tuning parameter. The function $g$ that minimizes (6) is known as a smoothing spline. The first term encourages $g$ to fit the data well, and the second term is a penalty that controls the smoothness of the spline. By solving (6) we have:

$$\hat{g}_\lambda = S_\lambda y, \qquad (7)$$

where $\hat{g}_\lambda$, the solution to (6) for a particular choice of $\lambda$, is an $n$-vector containing the fitted values of the smoothing spline at the training points $t_1, \dots, t_n$. Equation (7) indicates that the vector of fitted values can be written as a matrix $S_\lambda$ times the response vector $y$. The DOF is then defined as the trace of the matrix $S_\lambda$. Our first purpose is to specify a DOF and obtain the corresponding spline. Thanks to the work of B. D. Ripley and Martin Maechler (the smooth.spline implementation in R), we are able to fit splines at given reasonable DOFs. After the fitting process, we use cross validation to find the DOF that achieves the lowest mean squared error (MSE). Algorithm 1 outlines the mechanism of finding the DOF for each stock. We analyze the DOFs of 978 stocks; examples of the distribution of DOFs and the best DOFs of some stocks are shown in Figure 2.
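The trace definition of DOF applies to any linear smoother $\hat{y} = S y$, not only to splines. As a compact numerical illustration we use a ridge smoother as a stand-in (forming the cubic-spline smoother matrix requires a spline basis, which would lengthen the example); the data here are random.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))

def effective_dof(X, lam):
    """Effective DOF of the linear smoother S = X (X'X + lam*I)^{-1} X', i.e. tr(S)."""
    p = X.shape[1]
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(S)

dof_ols = effective_dof(X, 0.0)    # no penalty: DOF equals the number of columns
dof_pen = effective_dof(X, 10.0)   # DOF shrinks below that as the penalty grows
```

Exactly the same trace computation, applied to $S_\lambda$ from Equation (7), yields the spline DOF searched over in Algorithm 1.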

(a) Distribution of best DOFs
(b) Best DOF of AAPL
(c) Best DOF of JPM
Figure 2: DOFs of states

2.2.2 Kalman Filter

Given the best DOF from our method, we use the Kalman Filter to make predictions. The Kalman Filter is an online algorithm that precisely estimates the mean and covariance matrix of the hidden states. Suppose the parameters in Equation 3 are known; Algorithm 2 outlines the mechanism of Kalman filtering. We model the distribution of the hidden state conditional on all the percentage observations up to time $t$. Since we suppose the noise terms in Equation 3 are Gaussian, all hidden states follow Gaussian distributions, and it is only necessary to characterize the conditional mean and the conditional covariance, as shown in lines 4 and 5. Then, given a new observation, we correct the mean and covariance in lines 8 and 9.

1: Input: estimated parameters $A$, $C$, $Q$, $R$
2: initialize state mean $\hat{x}_{0|0}$ and covariance $P_{0|0}$
3: for t = 0, 1, 2, …, T-1 do
4:     predict next state mean: $\hat{x}_{t+1|t} = A \hat{x}_{t|t}$
5:     predict next state covariance: $P_{t+1|t} = A P_{t|t} A^\top + Q$
6:     obtain measurement $y_{t+1}$
7:     compute Kalman gain: $K_{t+1} = P_{t+1|t} C^\top (C P_{t+1|t} C^\top + R)^{-1}$
8:     update current state mean: $\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1}(y_{t+1} - C \hat{x}_{t+1|t})$
9:     update current state covariance: $P_{t+1|t+1} = (I - K_{t+1} C) P_{t+1|t}$
10: end for
Algorithm 2 Kalman Filtering
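A minimal NumPy sketch of Algorithm 2 follows; these are the standard Kalman filter recursions, with variable names of our choosing rather than the paper's.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, mu0, P0):
    """Algorithm 2: alternate predict and correct steps over observations y (T x m)."""
    n = mu0.shape[0]
    mu, P = mu0, P0
    means, covs = [], []
    for obs in y:
        mu_pred = A @ mu                          # predict next state mean
        P_pred = A @ P @ A.T + Q                  # predict next state covariance
        S = C @ P_pred @ C.T + R                  # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
        mu = mu_pred + K @ (obs - C @ mu_pred)    # correct mean with new measurement
        P = (np.eye(n) - K @ C) @ P_pred          # correct covariance
        means.append(mu)
        covs.append(P)
    return np.array(means), np.array(covs)

# Toy usage: track a constant level of 5.0 from noise-free repeated readings.
A = np.eye(1); C = np.eye(1); Q = 0.1 * np.eye(1); R = np.eye(1)
obs = np.full((50, 1), 5.0)
means, covs = kalman_filter(obs, A, C, Q, R, np.zeros(1), np.eye(1))
```

Starting from a wrong prior mean of 0, the filtered mean converges to the observed level as evidence accumulates.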

Our ultimate goal is to make predictions of the hidden state mean and covariance, so we need to estimate the parameters precisely. The method we use to calibrate the parameters is the expectation-maximization (EM) algorithm. The smoothing process infers past states conditional on all the observations in the training set, which is a necessary step in model calibration because it provides more accurate information about the unobservable states. We outline the Kalman smoothing process as Algorithm 3.

1: Filtered moments $\hat{x}_{t|t}$, $P_{t|t}$ and predictions $\hat{x}_{t+1|t}$, $P_{t+1|t}$ from Kalman filtering
2: for t = T-1, T-2, …, 0 do
3:     compute smoother gain: $J_t = P_{t|t} A^\top P_{t+1|t}^{-1}$
4:     compute mean: $\hat{x}_{t|T} = \hat{x}_{t|t} + J_t(\hat{x}_{t+1|T} - \hat{x}_{t+1|t})$
5:     compute covariance: $P_{t|T} = P_{t|t} + J_t(P_{t+1|T} - P_{t+1|t})J_t^\top$
6:     compute joint covariance: $P_{t+1,t|T} = P_{t+1|T} J_t^\top$
7: end for
Algorithm 3 Kalman Smoothing
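The backward pass of Algorithm 3 can be sketched as the standard Rauch–Tung–Striebel smoother; the hand-made filtered moments below are illustrative inputs, not outputs of the paper's model.

```python
import numpy as np

def rts_smoother(filt_means, filt_covs, A, Q):
    """Algorithm 3 (Rauch-Tung-Striebel): backward pass over the filtered moments."""
    T = len(filt_means)
    sm_means, sm_covs = [None] * T, [None] * T
    sm_means[-1], sm_covs[-1] = filt_means[-1], filt_covs[-1]
    for t in range(T - 2, -1, -1):
        P_pred = A @ filt_covs[t] @ A.T + Q              # one-step-ahead covariance
        J = filt_covs[t] @ A.T @ np.linalg.inv(P_pred)   # smoother gain J_t
        sm_means[t] = filt_means[t] + J @ (sm_means[t + 1] - A @ filt_means[t])
        sm_covs[t] = filt_covs[t] + J @ (sm_covs[t + 1] - P_pred) @ J.T
    return sm_means, sm_covs

# Toy usage on hand-made filtered moments (scalar state).
filt_means = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
filt_covs = [0.5 * np.eye(1)] * 3
sm_means, sm_covs = rts_smoother(filt_means, filt_covs, np.eye(1), 0.1 * np.eye(1))
```

Because smoothing conditions on future observations as well, the smoothed covariances are no larger than the filtered ones, which is why calibration uses the smoothed moments.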

After performing Algorithms 1, 2 and 3, we estimate the parameters by the EM method, as shown in Algorithm 4. The EM algorithm is a common way to estimate the parameters of a Kalman Filter problem. It extends maximum likelihood estimation to cases where hidden states are involved (emalg). The EM iteration alternates between an E-step (i.e., Expectation step), which constructs a lower bound of the expected log-likelihood using the current estimate of the parameters, and an M-step (i.e., Maximization step), which computes the parameters that maximize the lower bound found in the E-step. Two advantages of the EM algorithm are fast convergence and the existence of closed-form solutions. The derivations of the Kalman Filter and the EM algorithm are beyond the scope of this paper; we refer interested readers to kforiginal's work for more details.

1: Initial parameters $\{A, C, Q, R, \pi_1, \Sigma_1\}$, results from Kalman filtering and Kalman smoothing
2: while not converged do
3:     for t = T-1, T-2, …, 0 do
4:         E-step: accumulate the smoothed statistics $\hat{x}_{t|T}$, $P_{t|T}$ and $P_{t+1,t|T}$
5:     end for
6:     M-step: update the parameters in closed form from the accumulated statistics
7: end while
Algorithm 4 EM algorithm
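As one concrete piece of the M-step, the transition matrix has the well-known closed-form update below, built from the smoothed moments of Algorithm 3. This is the standard linear-dynamical-system EM update, not code from the paper; the toy moments are fabricated to be consistent with $x_{t+1} = 2 x_t$.

```python
import numpy as np

def m_step_transition(sm_means, sm_covs, pair_covs):
    """Closed-form M-step for A:
    A = (sum_t E[x_{t+1} x_t']) (sum_t E[x_t x_t'])^{-1},
    with E[x_{t+1} x_t'] = pair_covs[t] + outer(sm_means[t+1], sm_means[t])
    and  E[x_t x_t']     = sm_covs[t]   + outer(sm_means[t],   sm_means[t]),
    all taken from the Kalman smoother (Algorithm 3)."""
    num = sum(pair_covs[t] + np.outer(sm_means[t + 1], sm_means[t])
              for t in range(len(sm_means) - 1))
    den = sum(sm_covs[t] + np.outer(sm_means[t], sm_means[t])
              for t in range(len(sm_means) - 1))
    return num @ np.linalg.inv(den)

# Toy check: smoothed moments consistent with x_{t+1} = 2 x_t recover A = 2.
sm_means = [np.array([1.0]), np.array([2.0]), np.array([4.0]), np.array([8.0])]
sm_covs = [0.1 * np.eye(1)] * 4
pair_covs = [0.2 * np.eye(1)] * 3
A_new = m_step_transition(sm_means, sm_covs, pair_covs)
```

The updates for $C$, $Q$ and $R$ have the same flavor: ratios of accumulated second moments, which is what makes each EM iteration cheap.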

3 Experiment

3.1 Data Introduction

We collect and empirically analyze the intraday volume of 978 stocks traded on major U.S. markets. As an example, a glance at the data for the stock "AAPL" is summarized in Table 1. The data covers the period from January 3rd 2017 to May 29th 2020, excluding non-trading days. Each trading day consists of 78 5-minute bins. Volume and percentage are computed as in Equations 4 and 5, respectively. All historical data used in the experiment are obtained from Invesco Inc.

Date       Close    Open     High     Low      Volume    VWAP     Time (UTC)
1/3/2017   106.24   105.9    106.42   105.59   1139228   106.11   14:30
1/3/2017   105.28   106.24   106.34   105.19   1245847   105.77   14:35
1/3/2017   105.51   105.29   105.53   104.85   1289865   105.23   14:40
1/3/2017   106.23   106.04   106.23   106      1675070   106.09   20:55
5/29/2020  318.25   319.25   320      318.22   747433    319.21   13:30
5/29/2020  317.92   319.29   319.62   317.46   1969841   318.92   19:55
Table 1: Historical intraday trading volume of stock "AAPL"

3.2 Experiment Set-up

We choose the two-state model and the RM model mentioned before as baselines. Data from January 3rd 2017 to June 30th 2017 is used as the training set, while data from July 5th 2017 to January 2nd 2018 is treated as the test set. Both the training set and the test set contain 125 trading days. We initialize the transition and covariance matrices as identity matrices and the observation matrix as the smoothing matrix, then perform Algorithms 1 to 4 on the training set to obtain the model parameters, and finally make predictions on the next 125 days. We evaluate the performance of the three models by the mean absolute percentage error (MAPE):


$$\mathrm{MAPE}_s = \frac{100\%}{N} \sum_{t=1}^{N} \left| \frac{\hat{p}_{s,t} - p_{s,t}}{p_{s,t}} \right|,$$

where $\hat{p}_{s,t}$ and $p_{s,t}$ are the predicted and true percentages of stock $s$ in test bin $t$, and $N = 125 \times 78$ is the number of test samples per stock.
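The MAPE criterion amounts to a one-line computation. This sketch assumes the usual definition (average of |predicted − true| / true, in percent, over all test samples); the numbers are made up.

```python
import numpy as np

def mape(pred, true):
    """Mean absolute percentage error, in percent, over all (day, bin) samples."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return 100.0 * np.mean(np.abs(pred - true) / np.abs(true))

score = mape([110.0, 90.0], [100.0, 100.0])   # both predictions are off by 10%
```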

3.3 Results

In this section we compare our model with the two baseline models and analyze the results.

3.3.1 MAPE Distribution

We obtain the distributions of MAPE over the 978 stocks for the three models and show them in Figure 3. Our v-state model outperforms the baselines by achieving smaller MAPEs.

(a) RM
(b) Two-state
(c) our model: v-state
Figure 3: Comparison of MAPE
Figure 4: Comparison of predictions on stock "AAPL": (a) to (d): baseline models; (e) to (h): our v-state model

3.3.2 Predictions on Specific Days

To better visualize the comparisons, we pick ten stocks out of the dataset and show their predictions on days 150, 175, 200 and 225. Due to space limitations, we only show the stock "AAPL" here in Figure 4 and show the other nine stocks in the Appendix. We see that the two-state model's predictions almost overlap those of the RM model, while our v-state model provides a smoother prediction.

3.3.3 Analysis of v-state Model

Figure 5: Relationship between error and true percentage

We investigate the relationship between the absolute error and the true percentage for all stocks to further test the precision of our model. The absolute error of stock $s$ on the $t$-th bin is defined as:

$$e_{s,t} = \left| \hat{p}_{s,t} - p_{s,t} \right|.$$
As illustrated in Figure 5, we plot the error versus the true percentage for all $978 \times 125 \times 78$ samples. We see a nearly linear relationship between the absolute error and the true percentage: as the true percentage gets larger, the slope gets closer to 1. Moreover, we observe that for the 95% of samples that fall into the corner around the origin, we have:


And for the samples outside of the corner we plug in the linear equation with the fitted slope:


From Equation 10 and Equation 11, our model provides both a lower bound and an upper bound for the prediction precision. Due to the high noise level of the original data, it is still a nontrivial task to fully capture the movements of trading volume; this could be a potential direction for future work.

(a) AAPL
(b) JPM
(c) AAPL
(d) JPM
Figure 6: Correlation matrices of hidden states and eigenvalues of the transition covariance matrices, "AAPL" and "JPM"

We also show examples of the correlation matrices of hidden states and the eigenvalues of the transition covariance matrices in Figure 6 and the appendix. They suggest that most states do not correlate highly with each other, while a few states share linear relationships. We also find that half of the eigenvalues are very close to 0. Since the eigenvalues of a covariance matrix indicate how much the data spreads along the corresponding eigenvector directions, directions with near-zero eigenvalues carry little information. The correlation matrices and the eigenvalues of the covariance matrices could therefore help us further reduce the number of states, since more states in the model bring more computation. We also leave this as one of the future directions.

4 Conclusion

On the basis of the Kalman Filter, we develop a new method to determine the dimension of the hidden state for each stock. Our method provides smooth predictions of intraday traded volume and shows the potential of using gradient-based methods for further analysis. Through experiments we demonstrate that the v-state model attains better prediction precision than the two-state model and the RM model. Inspired by the model analysis, in the future we will conduct further research on reducing the number of DOFs and selecting states.