1. Introduction

Volatility prediction in financial markets is of great practical and theoretical interest. Volatility plays a crucial role in financial markets, for example in derivative pricing, portfolio risk management, and hedging strategies. There is therefore strong demand for methods that forecast volatility more accurately.
According to Herbert Simon, actors begin their decision-making process by attempting to gather information. Nowadays, information gathering often consists of searching online sources. Hence, search volumes of finance-related keywords may reveal market sentiment and the focus of investors. Similar to Google Trends, the Baidu Index, based on Baidu Search, shows aggregated information on the volume of queries for different search terms and how these volumes change over time. Unlike Google Trends, however, the Baidu Index's search-volume data cannot be downloaded; the 28 keywords we use in this article were all collected manually.
In this study, we investigate the intriguing possibility of analyzing search query data from the Baidu Index, modeled by a Long Short-Term Memory (LSTM) neural network, to show the feasibility of predicting stock market volatility from search volumes. To the best of our knowledge, there has been no previous attempt to deploy LSTM networks on a large and liquid set of Baidu Index search volumes to assess their performance in stock market volatility prediction tasks. In addition, we compare the results of the LSTM against GARCH.
The increasing volumes of 'big data' reflecting various aspects of our everyday activities represent a vital new opportunity for scientists to address fundamental questions about the complex world we inhabit [11, 12, 15]. Baidu Index and Google search volumes not only reflect aspects of the current state of the economy, but may also provide insight into future trends in the behavior of economic participants. Yu and Zhang (2012), taking Baidu search terms as a proxy for individual investor attention, found that Baidu search volume exerts positive price pressure on the market in the current period, and that this pressure quickly reverses in subsequent periods; they also found that investors' attention on non-trading days is significantly related to price jumps of stocks on the next trading day. Baidu search volume is likewise used as a proxy for individual investors' attention by Zhao, Lu and Wang (2013), who analyzed the relationship between Baidu search volumes and the stock returns of 1301 stocks on the Growth Enterprises Market Board and found a positive correlation between search volume and stock returns. Da et al. (2011) proposed a measure of investor attention using search frequency in Google (the Search Volume Index, SVI), provided evidence that SVI captures the attention of retail investors, and documented a relation between investor attention and asset prices. Using historical data from January 2004 to February 2011, Preis et al. (2013) found detectable increases in Google search volumes for keywords relating to financial markets before stock market falls.
Prediction tasks on financial time series are notoriously difficult, primarily due to the high degree of noise and the generally accepted semi-strong form of market efficiency (Fama, 1970). Meanwhile, there are plenty of well-known capital market anomalies that stand in stark contrast to the notion of market efficiency. In the past years, initial evidence has been established that machine learning techniques are capable of identifying (non-linear) structures in financial market data [10, 13, 2]. Fischer and Krauss (2017) applied LSTM networks to all S&P 500 constituents from 1992 until 2015.
2. Data description and preprocessing
In this work, we study the CSI 300 index based on publicly available daily data comprising the high, low, open, and close prices. Daily returns are evaluated as the log difference of the close price, while daily volatility is estimated from the high, low, open and close prices via the Garman-Klass estimator
$$\hat\sigma_t^2 = \frac{1}{2}\left[\ln\frac{H_t}{L_t}\right]^2 - (2\ln 2 - 1)\left[\ln\frac{C_t}{O_t}\right]^2,$$
where $H_t$, $L_t$, $O_t$ and $C_t$ denote the day's high, low, open and close prices. We remark that this definition is the best among all quadratic combinations under certain criteria (Garman & Klass, 1980).
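As an illustration (a minimal sketch, not the authors' code; the function names are ours), the estimator above can be computed for each trading day as follows:

```python
import math

def garman_klass_var(high, low, open_, close):
    """Garman-Klass variance estimate from one day's OHLC prices."""
    hl = math.log(high / low)       # log high/low range
    co = math.log(close / open_)    # log close-to-open return
    return 0.5 * hl ** 2 - (2.0 * math.log(2.0) - 1.0) * co ** 2

def garman_klass_vol(high, low, open_, close):
    """Daily volatility is the square root of the variance estimate."""
    return math.sqrt(garman_klass_var(high, low, open_, close))
```

For a flat day (all four prices equal) the estimate is zero, and it grows with the intraday high-low range.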
Starting from June 1st, 2006, Baidu has collected the daily volume of searches made from personal computers relating to various aspects of macroeconomics. This database is available to the public as the Baidu Index PC trend. Unfortunately, the Baidu Index cannot be downloaded directly. Previous studies have shown correlations between the Baidu Index and the equity market. In this work, we use this trend data as a representation of public interest in various macroeconomic factors.
For this study, we include 28 domestic trends which are listed in Table 1 with their abbreviations.
| Exact search terms | Abbreviation |
We use $X_t$ to denote the aggregated data, comprising the 28 normalized search-volume series together with the return and volatility series.
We split the whole data set into a training set (80%) and a test set (20%). The training set ranges from 1-June-2006 to 17-July-2015, while the test set ranges from 20-July-2015 to 27-Oct-2017. Additionally, it is worth noting that all 30 of these time series are stationary in the sense that their unit-root null hypotheses have p-values less than 0.05 in the Augmented Dickey-Fuller test.
Preprocessing the time series with different observation intervals and normalization windows may change the causal relation between the input and output. Let $d$ be the observation interval. In this study we aim to predict volatility, so we denote the next-period volatility by $\sigma_{t+1}$.
We use moving average values to normalize the observed data. With a look-back window $w$, a time series $x_t$ is normalized to $\tilde{x}_t$ using its trailing $w$-point moving average.
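The exact normalization formula is elided in the source; one plausible reading is division by the trailing moving average, which this sketch implements under that assumption (the helper name is ours):

```python
import numpy as np

def ma_normalize(x, w):
    """Normalize a series by its trailing w-point moving average.

    Division by the moving average is an assumed reading of the
    paper's normalization; entries before the window fills are NaN.
    """
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    for t in range(w - 1, len(x)):
        ma = x[t - w + 1 : t + 1].mean()  # trailing window including t
        out[t] = x[t] / ma
    return out

z = ma_normalize([1.0, 2.0, 3.0, 4.0], w=2)
# z[1] = 2 / mean(1, 2) = 4/3; z[0] is NaN (window not yet full)
```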
Each combination of $d$ and $w$ determines an observation-and-normalization scheme with its own predictive power. We denote these schemes by $S(d, w)$ and the resulting data by $X^{(d,w)}$. In principle, one may apply learning models to each scheme and evaluate prediction accuracy on a validation set so that the optimal scheme can be chosen. Alternatively, an information metric can be set up and the scheme maximizing it selected. In this work, we use the mutual information between $X^{(d,w)}$ and $\sigma_{t+1}$ for each pair $(d, w)$.
Let us make a brief introduction to mutual information. For a pair of discrete random variables $(X, Y)$, let $p(x, y)$ be the joint probability function of $(X, Y)$, and let $p(x)$ and $p(y)$ be the marginal probability functions of $X$ and $Y$, respectively. The mutual information between $X$ and $Y$ is defined as
$$I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}.$$
Assuming conditional independence between the input variables in $X$, the mutual information can be broken down into a sum over the individual components $X_i$ of $X$. Therefore we choose the pair $(d, w)$ to maximize
$$\sum_i I\big(X_i^{(d,w)};\, \sigma_{t+1}\big).$$
One may try to use the time series directly to compute the mutual information empirically according to (2.6). However, note that the recorded values are almost surely all distinct, owing to the precision of real data. More precisely, for a sample of size $n$, each observed pair occurs exactly once, so every empirical joint probability equals $1/n$ while each empirical marginal probability also equals $1/n$. A direct calculation then gives
$$\hat I = \sum_{t=1}^{n} \frac{1}{n} \log \frac{1/n}{(1/n)(1/n)} = \log n,$$
which depends only on the sample size. Thus applying mutual information in this way cannot reveal any relation between the input and the output. Considering this, we divide the data into small groups and regard the values in each group as one point. Precisely, let $x = (x_t)_{t=1}^{n}$ and $y = (y_t)_{t=1}^{n}$ be samples of size $n$. Take a positive integer $m$, interpreted as the group number. Divide the interval $[\min_t x_t,\, \max_t x_t]$ evenly into $m$ subintervals $I_1, \dots, I_m$. Similarly, divide the interval $[\min_t y_t,\, \max_t y_t]$ evenly into $m$ subintervals $J_1, \dots, J_m$. Then we define the marginal law functions as
$$p_i = \frac{\#\{t : x_t \in I_i\}}{n}, \qquad q_j = \frac{\#\{t : y_t \in J_j\}}{n},$$
and the joint law function as
$$p_{ij} = \frac{\#\{t : x_t \in I_i,\ y_t \in J_j\}}{n}.$$
Then we define the empirical mutual information between the series $x$ and $y$ as
$$\hat I_m(x; y) = \sum_{i, j \,:\, p_{ij} > 0} p_{ij} \log \frac{p_{ij}}{p_i\, q_j}.$$
As the group number grows, this estimate degenerates toward $\log n$ again, which indicates that we should not take $m$ too large. In our study, we take a moderate value of $m$.
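The grouped estimator above can be sketched as follows (equal-width bins and the helper name are our choices, not taken from the source):

```python
import numpy as np

def empirical_mi(x, y, m):
    """Empirical mutual information between series x and y,
    using m equal-width bins per variable."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    rx = np.ptp(x) or 1.0  # guard against constant series
    ry = np.ptp(y) or 1.0
    # Bin indices in {0, ..., m-1}; clip so the maximum lands in the last bin.
    bx = np.clip(((x - x.min()) / rx * m).astype(int), 0, m - 1)
    by = np.clip(((y - y.min()) / ry * m).astype(int), 0, m - 1)
    joint = np.zeros((m, m))
    for i, j in zip(bx, by):
        joint[i, j] += 1.0
    joint /= n                      # joint law p_ij
    px = joint.sum(axis=1)          # marginal law of x
    py = joint.sum(axis=0)          # marginal law of y
    mask = joint > 0
    outer = px[:, None] * py[None, :]
    return float(np.sum(joint[mask] * np.log(joint[mask] / outer[mask])))
```

For a series paired with itself the estimate equals the entropy of its binned values; for independent patterns it is zero.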
Figure 1 shows the mutual information for different combinations of $(d, w)$. Clearly, the mutual information is maximized when the normalization window $w$ is around 5. Another obvious phenomenon is that the longer the observation interval $d$, the larger the mutual information. However, due to the limited sample size, we cannot choose $d$ too large. To retain sufficient sample data, we select $d = 5$ and $w = 5$. Note that $d = 5$ corresponds to weekly return and volatility, which also motivates our choice.
3. The LSTM model

LSTM networks, introduced by Hochreiter and Schmidhuber (1997) and further developed in the following years by Gers et al. (2000) and Graves and Schmidhuber (2005), belong to the class of recurrent neural networks (RNNs). LSTM networks are designed to learn long-term dependencies and are able to overcome the vanishing and exploding gradient problems that plague standard RNNs.
LSTM networks contain an input layer, one or more hidden layers, and an output layer. The number of neurons in the input layer equals the number of explanatory variables (the feature space), while the number of neurons in the output layer determines the output space. The hidden layer(s) contain the memory cells, the unique ingredient of LSTM networks. Through three gates in each memory cell, the network maintains and adjusts its cell state: an input gate $i_t$, a forget gate $f_t$, and an output gate $o_t$. Figure 2 shows the structure of a memory cell.
In Figure 2, $x_t$ stands for the input vector at time step $t$; the cell state $C_{t-1}$ and the output $h_{t-1}$ (here, the volatility estimate) are carried over from the previous step; $W$ and $U$ are weight matrices, and $b$ are bias vectors; $i_t$, $f_t$ and $o_t$ represent the values of the three gates; $C_t$ and $\tilde C_t$ are the cell state and candidate values; and $h_t$ is the output vector at time $t$.
At step 1, the LSTM layer determines how much of the previous cell state $C_{t-1}$ should be forgotten. Given the input $x_t$, the previous output $h_{t-1}$ and the bias term $b_f$ of the forget gate, $f_t$ can be computed as
$$f_t = \sigma\big(W_f x_t + U_f h_{t-1} + b_f\big),$$
where the sigmoid function $\sigma$ is defined by
$$\sigma(z) = \frac{1}{1 + e^{-z}}.$$
At step 2, the LSTM layer determines which information should be added to the network's cell state. The input gate and the candidate values are computed as
$$i_t = \sigma\big(W_i x_t + U_i h_{t-1} + b_i\big), \qquad \tilde C_t = \tanh\big(W_C x_t + U_C h_{t-1} + b_C\big),$$
and the cell state is updated as
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde C_t,$$
where $\odot$ denotes element-wise multiplication.
In the last step, the output is computed through the following two equations:
$$o_t = \sigma\big(W_o x_t + U_o h_{t-1} + b_o\big), \qquad h_t = o_t \odot \tanh(C_t).$$
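The three steps above can be sketched as a single NumPy forward pass (an illustration only; the stacked parameter layout and names are our choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM forward step implementing the gate equations above.

    W, U, b stack the forget, input, output and candidate blocks:
    shapes (4h, d), (4h, h) and (4h,) for input dim d, hidden dim h.
    """
    h = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    f = sigmoid(z[:h])             # forget gate f_t
    i = sigmoid(z[h:2 * h])        # input gate i_t
    o = sigmoid(z[2 * h:3 * h])    # output gate o_t
    c_tilde = np.tanh(z[3 * h:])   # candidate values
    c_t = f * c_prev + i * c_tilde # cell state update
    h_t = o * np.tanh(c_t)         # output
    return h_t, c_t
```

With all-zero parameters and inputs, the gates sit at 0.5 and the output is zero, which is a quick sanity check on the equations.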
We apply the deep learning library Keras in Python 3 to estimate the coefficients by training. Specifically, the lag of the LSTM is set at 50, each batch contains 5 examples, and the time step is 5. The objective loss function we choose in the model is the mean absolute percent error (MAPE). With the number of epochs set at 200, the MAPE on the test set drops to its minimum. We use 20% of the observed data as the test set. Moreover, in order to evaluate the performance of the LSTM, we apply one autoregressive model (GARCH) as the benchmark model.
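A minimal sketch of this setup, assuming a TensorFlow/Keras backend; interpreting the "lag of 50" as 50 hidden units, the optimizer choice, and all variable names are our assumptions, not details given in the source:

```python
from tensorflow import keras

TIMESTEPS, FEATURES = 5, 30  # 5 time steps over the 30 series

model = keras.Sequential([
    keras.Input(shape=(TIMESTEPS, FEATURES)),
    keras.layers.LSTM(50),       # single LSTM hidden layer (assumed size)
    keras.layers.Dense(1),       # next-period volatility
])
# "mape" is Keras's alias for mean absolute percentage error.
model.compile(optimizer="adam", loss="mape")

# Training would then follow the settings reported in the text, e.g.:
# model.fit(X_train, y_train, batch_size=5, epochs=200)
```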
4. Results and discussion
In Figure 3, the observed volatility is plotted together with the predicted values. As the figure shows, the predicted values fit the actual volatility with decent accuracy, especially when the actual volatility is small.
As indicated in the last section, MAPE serves as the loss function; the results are shown in Table 2. In terms of mean square error (MSE), the LSTM also performs better than the benchmark model.
Our LSTM model avoids significant over-fitting: the MAPE evaluated on the test set is 17%, close to the MAPE on the training set (15.6%). The MSE of the LSTM model is 2%, far smaller than that of the benchmark model, as listed in Table 2.
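For reference, the two evaluation metrics can be computed as follows (a sketch with our helper names; array inputs assumed):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percent error, in percent."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))

def mse(y_true, y_pred):
    """Mean square error."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))
```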
We chose an intermediate timescale, with hand-collected Baidu Index search volumes, to investigate the performance of the LSTM model on stock market volatility prediction. Given high-frequency Baidu Index search volumes, one could also forecast high-frequency volatility, which would be even more interesting and practical. The input of the neural network could also be replaced by market micro-structure information. By maximizing the mutual information, suitable input feature sets, normalization windows and observation intervals can be found.
To expand the usage scenarios of the model, such as studying the volatility of individual stocks or of different financial markets, one could change the input or the structure of the LSTM layer. In this paper, we take the Baidu Index search volume as the hidden market state and discuss the feasibility of using it to estimate the volatility of the CSI 300. The LSTM model is far more accurate than the benchmark model. We have also investigated the autocorrelation and partial autocorrelation functions of the error series in the test set, which show that the prediction error has no memory of itself.
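Such a residual check can be sketched as follows (an illustration with our helper name, not the authors' code): compute the sample autocorrelation of the test-set errors and verify the values are near zero.

```python
import numpy as np

def sample_acf(e, max_lag):
    """Sample autocorrelation of a series at lags 1..max_lag."""
    e = np.asarray(e, float) - np.mean(e)
    denom = float(np.dot(e, e))  # lag-0 sum of squares
    return [float(np.dot(e[:-k], e[k:]) / denom)
            for k in range(1, max_lag + 1)]
```

For a memoryless error series the autocorrelations stay within sampling noise of zero, whereas a strongly alternating series shows a lag-1 value near -1.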
In this work, we have demonstrated that the LSTM network is able to effectively extract meaningful information from noisy financial time series. We consider the Baidu Index search volumes as proxy variables; together with (or as) the market information, they reflect changes in the CSI 300 daily volatility. We apply a one-layer LSTM neural network and select 80% of the whole data set as the training set. In terms of MAPE and MSE, the LSTM model outperforms the benchmark model. In addition, we collected the daily PC search volumes of 28 keywords. Considering the development of mobile technology and the increase in mobile search, it is not ideal to estimate the CSI 300 volatility using PC searches alone; this is the main limitation of our paper. In future research, we will collect search volumes from mobile devices to improve our forecasts.
Acknowledgments. Any reader who is interested in our study may contact the corresponding author to get the data we used.
-  Abergel, F., Bouchaud, J. P., Foucault, T., Lehalle, C. A., & Rosenbaum, M. (2013). Market Microstructure: Confronting Many Viewpoints.
-  Dixon, M., Klabjan, D., & Jin, H. B. (2015). Implementing deep neural networks for financial market prediction on the Intel Xeon Phi. In Proceedings of the 8th Workshop on High Performance Computational Finance (pp. 1-6). ACM.
-  Fama, E. F. (1970). Efficient capital markets: a review of theory and empirical work. Journal of Finance, 25(2), 383-417.
-  Fischer, T., & Krauss, C. (2017). Deep learning with long short-term memory networks for financial market predictions. Fau Discussion Papers in Economics.
-  Garman, M. B., & Klass, M. J. (1980). On the estimation of security price volatilities from historical data. Journal of Business, 53(1), 67-78.
-  Gers, F. A., Schmidhuber, J., & Cummins, F. (2000). Learning to forget: continual prediction with lstm. Neural Computation, 12(10), 2451-2471.
-  Graves, A., & Schmidhuber, J. (2005). Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6), 602-610.
-  Sak, H., Senior, A., & Beaufays, F. (2014). Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. In Proceedings of Interspeech (pp. 338-342).
-  Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
-  Huck, N. (2009). Pairs selection and outranking: an application to the S&P 100 index. European Journal of Operational Research, 196(2), 819-825.
-  King, G. (2011). Ensuring the data-rich future of the social sciences. Science, 331(6018), 719-21.
-  Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabasi, A. L., & Brewer, D., et al. (2009). Computational social science. Science, 323(5915), 721-723.
-  Moritz, B., & Zimmermann, T. (2016). Tree-based conditional portfolio sorts: the relation between past and future stock returns. Social Science Electronic Publishing.
-  Olah, C., 2015. Understanding LSTM Networks. URL http://colah.github.io/posts/2015-08-Understanding-LSTMS/.
-  Petersen, A. M., Tenenbaum, J. N., Havlin, S., Stanley, H. E., & Perc, M. (2012). Languages cool as they expand: allometric scaling and the decreasing need for new words. Scientific Reports, 2(12), 943.
-  Preis, T., Moat, H. S., & Stanley, H. E. (2013). Quantifying trading behavior in financial markets using Google Trends. Scientific Reports, 3, 1684.
-  Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99-118.
-  Yu, Q., & Zhang, B. (2012). Investors' limited concentration and equity return: an empirical study using Baidu index as indicator for concentration. Journal of Financial Research, (8), 152-165.
-  Zhao, L., Lu, Z., & Wang, Zhi. (2013). Equity selection in Baidu–an empirical study on the relation between equity return and Baidu search volume. Journal of Financial Research, (4), 183-195.
-  Da, Z., Engelberg, J., & Gao, P. (2011). In search of attention. Journal of Finance, 66(5), 1461-1499.