MOGPTK: The Multi-Output Gaussian Process Toolkit

02/09/2020
by Taco de Wolff, et al.
Universidad de Chile

We present MOGPTK, a Python package for multi-channel data modelling using Gaussian processes (GP). The aim of this toolkit is to make multi-output GP (MOGP) models accessible to researchers, data scientists, and practitioners alike. MOGPTK uses a Python front-end, relies on the GPflow suite and is built on a TensorFlow back-end, thus enabling GPU-accelerated training. The toolkit facilitates the entire pipeline of GP modelling, including data loading, parameter initialization, model learning, and parameter interpretation, up to data imputation and extrapolation. MOGPTK implements the main multi-output covariance kernels from the literature, as well as spectral-based parameter initialization strategies. The source code, tutorials and examples in the form of Jupyter notebooks, together with the API documentation, can be found at http://github.com/GAMES-UChile/mogptk.




1 Introduction

The Gaussian process (GP) is a Bayesian nonparametric model for time series that has had a significant impact on the machine learning community following the seminal publication of Rasmussen and Williams (2006). GPs are designed by parametrizing a covariance kernel, meaning that constructing expressive kernels allows for an improved representation of complex signals. Recent advances extend the GP concept to multiple series (or channels), where both auto-correlations and cross-correlations among channels are designed jointly; we refer to these models as multi-output GP (MOGP) models. A key attribute of MOGPs is that appropriate cross-correlations allow for improved data imputation and prediction when channels have missing data. Popular MOGP models include: i) the Linear Model of Coregionalization (LMC) (Goovaerts, 1997), ii) the Cross-Spectral Mixture (CSM) (Ulrich et al., 2015), iii) the Convolutional Model (CONV) (Alvarez and Lawrence, 2009), and iv) the Multi-Output Spectral Mixture (MOSM) (Parra and Tobar, 2017). Training MOGPs is challenging due to the large number of parameters required to model all the cross-correlations, and the fact that most MOGP models are parametrized in the spectral domain and are thus prone to local minima. Therefore, a unified framework that implements these MOGPs is required both by the GP research community and by those interested in practical applications for multi-channel data.
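For concreteness, the LMC, the simplest of these constructions, models the covariance between channels $i$ and $j$ as a weighted combination of $Q$ latent single-output kernels; we state the standard formulation from the literature for reference:

$$[\mathbf{K}(x, x')]_{ij} = \sum_{q=1}^{Q} [B_q]_{ij}\, k_q(x, x'),$$

where each $k_q$ is a single-output kernel and the positive semidefinite coregionalization matrices $B_q$ encode the inter-channel couplings. The spectral models (CSM, MOSM) instead parametrize the (cross-)spectral densities directly in the frequency domain.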

The multi-output Gaussian process toolkit (MOGPTK) aims to address the need for an MOGP computational toolkit in the form of a Python package that implements the aforementioned MOGP kernels and provides a natural way to train and use them. MOGPTK is built upon GPflow (Matthews and others, 2017), an extensive GP framework with a wide variety of implemented kernels, likelihoods and training strategies. GPflow is in turn built upon TensorFlow (Abadi and others, 2016), a framework for constructing computational graphs of tensors and operations that can be evaluated on either CPUs or GPUs. GPU training is particularly desirable because graphics cards perform the required linear-algebra operations in parallel.

2 Existing MOGP libraries and scope of MOGPTK

Previous toolkits for MOGPs include GPmat (Lawrence and others, 2015) from the University of Sheffield, a MATLAB library whose multigp module includes sparse approximations and implements multi-output support through convolution processes (Alvarez and Lawrence, 2009). Another library is GPy (GPy, since 2012), also from the University of Sheffield, a Python package that implements the Intrinsic Model of Coregionalization (IMC) and LMC kernels. More recently, GPyTorch (Gardner and others, 2018), from Cornell University, is a Python library for general GP modelling that uses PyTorch to facilitate faster training on GPUs. GPyTorch implements the LMC kernel and the multi-task kernel of Williams (2008). Lastly, GPflow, the framework upon which our work is based, also has multi-output support via the LMC kernel (Matthews and others, 2017).

None of the above libraries implements the (by now standard) CSM, CONV or MOSM models described in Section 1. Critically, not all libraries even allow for different numbers of data points per channel. Furthermore, existing libraries place little emphasis on improving training through parameter initialization and usually lack parameter interpretation. MOGPTK, conversely, facilitates the whole process of implementing an MOGP, from data loading and parameter initialization to model training and interpretation, and implements all the main MOGP models mentioned in Section 1.

3 Functionality

The main pillars of MOGPTK are the included MOGP models, data handling, parameter initialization and parameter interpretation, each discussed below.

3.1 Models

MOGPTK considers a base MOGP kernel from which specific kernels are derived. The base kernel provides the functionality to split the input data into multiple channels and process them with sub-kernels. While single-channel kernels implemented in GPflow take input data of shape $N \times M$ (with $N$ the total number of data points and $M$ the number of input dimensions), the MOGPTK base kernel takes input data of shape $N \times (M+1)$, where the first column contains integers denoting the channel index to which the remaining $M$ columns correspond. Using these channel indices, the base kernel splits the data into its different channels for the sub-kernels to operate on. This allows us to manage different numbers of data points per channel, so the total number of data points can be expressed as $N = \sum_{i=1}^{m} N_i$, with $m$ the number of channels and $N_i$ the number of data points in channel $i$.
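To illustrate this layout, the following is a minimal NumPy sketch of our own (not MOGPTK internals) that builds the augmented $N \times (M+1)$ input matrix for two channels of different lengths:

import numpy as np

# Two channels with different numbers of points (N_0 = 5, N_1 = 3)
x0 = np.linspace(0.0, 1.0, 5)
x1 = np.linspace(0.0, 1.0, 3)

# Stack into a single (N, M+1) matrix whose first column is the integer
# channel index; here M = 1 and N = N_0 + N_1 = 8
X = np.vstack([
    np.column_stack([np.zeros_like(x0), x0]),  # channel 0
    np.column_stack([np.ones_like(x1), x1]),   # channel 1
])

# A base kernel can recover the per-channel inputs from the first column
per_channel = {c: X[X[:, 0] == c, 1:] for c in (0, 1)}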

3.2 Data handling

MOGPTK features general-purpose classes to perform common data-analysis operations effortlessly. Data can be loaded from various sources (e.g., CSV files, Pandas DataFrames, or Python functions) and formatted if necessary. For instance, data containing date and/or time values can be automatically converted to numerical representations for compatibility with the rest of the toolkit. Data can also be pre-processed using included transformations, such as detrending or taking logarithms, which can be applied in a compositional manner so that the models can be trained effectively. After training, the transformations can be reverted to the original domain, in the same vein as Rios and Tobar (2019). Additionally, MOGPTK allows for removing data ranges to simulate missing data or sensor failure, and the data can be easily plotted in the time or spectral domain.
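As a brief illustration, the following sketch (restricted to the calls that also appear in Listing 1 below) loads two channels from a CSV file, composes two reversible transformations, and removes a range to simulate sensor failure:

import mogptk

# Load a CSV file: 'Time' is the input, each y column becomes one channel
data = mogptk.LoadCSV('data/air_quality.csv',
                      x_col='Time', y_col=['CO(GT)', 'NOx(GT)'])

# Compose transformations per channel: linear detrend, then whitening;
# they are applied in order and can be reverted after training
for channel in data:
    channel.transform(mogptk.TransformDetrend(degree=1))
    channel.transform(mogptk.TransformWhiten())

# Simulate sensor failure over 10% of the first channel's input range
data[0].remove_relative_range(0.4, 0.5)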

3.3 Parameter initialization

Training MOGPs can be challenging due to their large number of hyperparameters and highly complex objective function. For this reason, MOGPTK features two methods for setting appropriate initial values for the hyperparameters. The first method trains an independent spectral mixture kernel (Wilson and Adams, 2013) on each channel and uses the resulting spectral Gaussian means and variances to compute the initial parameter values for the multi-output kernels. The second method applies Bayesian Nonparametric Spectral Estimation (BNSE) (Tobar, 2018), or alternatively the Lomb-Scargle method, to identify hyperparameters from each channel's spectral content. Both methods are single-output and thus are trained independently on each channel, so that subsequent (maximum likelihood) model training can focus on learning the inter-channel cross-correlations.
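As a point of reference for the first strategy, the one-dimensional spectral mixture kernel of Wilson and Adams (2013) is

$$k_{\text{SM}}(\tau) = \sum_{q=1}^{Q} w_q \exp\!\left(-2\pi^2 \sigma_q^2 \tau^2\right) \cos\!\left(2\pi \mu_q \tau\right),$$

whose spectral density is a mixture of Gaussians with means $\mu_q$ and variances $\sigma_q^2$; it is precisely these per-channel means and variances that seed the parameters of the multi-output kernels.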

3.4 Parameter interpretation

Besides training and prediction, MOGPTK also provides interpretation of hyperparameter values via visualization techniques. MOGPTK shows the correlations between channels for different kernels; in the particular case of spectral kernels (e.g., SM, MOSM, CSM, SM-LMC), this reveals the cross-spectral coupling between channels. For instance, when comparing stock values in financial applications, one can usually observe correlated quarterly patterns due to the quarterly reports anticipated by stock investors, as shown in de Wolff et al. (2020). Likewise, retail markets may correlate at a monthly scale, since salaries paid at that frequency produce a spike in sales. The interpretation of such relationships is relevant for practical applications that require deeper understanding and analysis beyond mere imputation and extrapolation of data.

4 Example

We next provide a short example of how MOGPTK operates on an air-quality time series of four channels. The dataset (Vito et al., 2008) contains hourly-average measurements from five metal oxide chemical sensors embedded in an air-quality multisensor device. The data were collected in a polluted Italian city between March 2004 and February 2005. We considered the MOSM kernel, with three spectral components per channel, initialized using BNSE and optimized using L-BFGS-B.

Listing 1 shows the corresponding code. We first loaded the (pre-processed) dataset into MOGPTK, creating a four-channel model; then, for each channel, we removed a range of data to simulate sensor failure and additionally removed 30% of the data points at random. For improved training results, the internal data representation of each channel was linearly detrended and normalized to zero mean and unit variance using the transformations provided in the toolkit. Next, we set up the MOSM kernel and initialized its parameters using BNSE. Fig. 1 shows the result of initialization alone (no training), where only the main sinusoidal components can be identified. Lastly, the MOSM model was trained using the L-BFGS-B optimizer. With three components per channel, MOSM featured 65 hyperparameters trained on 439 data points. Training took less than two minutes on an average CPU and was faster on a GPU. Fig. 2 shows the results of the trained MOSM kernel, where the model was able to accurately interpolate and extrapolate beyond the observation range.

import mogptk

# Load the pre-processed dataset into MOGPTK
x_col = 'Time'
y_col = ['CO(GT)', 'NMHC(GT)', 'NOx(GT)', 'NO2(GT)']
data = mogptk.LoadCSV('data/air_quality.csv', x_col=x_col, y_col=y_col)

# Remove ranges to simulate sensor failure
data[0].remove_relative_range(0.2, 0.3)
data[1].remove_relative_range(0.8, 1.0)
data[2].remove_relative_range(0.8, 1.0)
data[3].remove_relative_range(0.0, 0.2)

# Randomly remove points, detrend and whiten (mean=0, var=1)
for channel in data:
    channel.remove_randomly(pct=0.3)
    channel.transform(mogptk.TransformDetrend(degree=1))
    channel.transform(mogptk.TransformWhiten())

# Initialize parameters using BNSE
mosm = mogptk.MOSM(data, Q=3)
mosm.init_parameters('BNSE')
mosm.predict()
mosm.plot_prediction()

# Optimize parameters using L-BFGS-B
mosm.train('L-BFGS-B')
mosm.predict()
mosm.plot_prediction()
Listing 1: MOGPTK implementation of a four-channel air-quality example: data loading, parameter initialization, model training and prediction.
Figure 1: MOSM kernel prediction using parameters initialized by BNSE, i.e., no likelihood optimization has been performed at this stage. Notice how (uncorrelated) fundamental frequency components can be identified in each channel.
Figure 2: MOSM kernel prediction. Using BNSE parameter initialization as initial condition, the MOSM kernel was trained via maximum likelihood using L-BFGS-B with 500 iterations.

5 Availability and documentation

MOGPTK is released under the MIT license and can thus be used in both open-source and commercial applications. The source code is publicly available on GitHub at https://github.com/GAMES-UChile/mogptk/, where contributions are encouraged and issues can be raised. The repository contains tutorials and examples with real-world data in the form of Jupyter notebooks. Additionally, the API documentation describing all methods can be accessed at https://games-uchile.github.io/mogptk. MOGPTK requires at least Python 3.6 and TensorFlow 2, and can be installed by executing pip install mogptk.

Acknowledgements

We are thankful to the Center for Mathematical Modeling (Conicyt AFB #170001), without whose invaluable support MOGPTK would only be an idea. We also acknowledge funding from Fondecyt-Iniciación #11171165. We would like to thank Cristóbal Silva, Gabriel Parra, Mario Garrido, and Victor Caro for their feedback on earlier versions of MOGPTK. This document has been formatted using Elsevier's elsarticle.cls LaTeX document class.

References

  • M. Abadi et al. (2016) TensorFlow: a system for large-scale machine learning. In Proc. of the 12th USENIX Symposium on Operating Systems Design and Implementation, pp. 265–283. Cited by: §1.
  • M. Alvarez and N.D. Lawrence (2009) Sparse convolved Gaussian processes for multi-output regression. In NeurIPS 21, pp. 57–64. Cited by: §1, §2.
  • T. de Wolff, A. Cuevas, and F. Tobar (2020) Gaussian process imputation of multiple financial time series. In IEEE ICASSP (to appear). Cited by: §3.4.
  • J. Gardner et al. (2018) GPyTorch: blackbox matrix-matrix Gaussian process inference with GPU acceleration. In NeurIPS 31, pp. 7576–7586. Cited by: §2.
  • P. Goovaerts (1997) Geostatistics for natural resources evaluation. Oxford University Press. Cited by: §1.
  • GPy (since 2012) GPy: a Gaussian process framework in Python. Note: http://github.com/SheffieldML/GPy Cited by: §2.
  • N. D. Lawrence et al. (2015) GPmat. Note: https://github.com/SheffieldML/GPmat Cited by: §2.
  • A.G. Matthews et al. (2017) GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research 18 (40), pp. 1–6. Cited by: §1, §2.
  • G. Parra and F. Tobar (2017) Spectral mixture kernels for multi-output Gaussian processes. In NeurIPS 30, pp. 6681–6690. Cited by: §1.
  • C.E. Rasmussen and C.K.I. Williams (2006) Gaussian Processes for Machine Learning. MIT Press. Cited by: §1.
  • G. Rios and F. Tobar (2019) Compositionally-warped Gaussian processes. Neural Networks 118, pp. 235–246. Cited by: §3.2.
  • F. Tobar (2018) Bayesian nonparametric spectral estimation. NeurIPS 31, pp. 10127–10137. Cited by: §3.3.
  • K. R. Ulrich, D. E. Carlson, K. Dzirasa, and L. Carin (2015) GP kernels for cross-spectrum analysis. In NeurIPS 28, pp. 1999–2007. Cited by: §1.
  • S. D. Vito, E. Massera, M. Piga, L. Martinotto, and G. D. Francia (2008) On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario. Sensors and Actuators B: Chemical 129 (2), pp. 750–757. External Links: ISSN 0925-4005 Cited by: §4.
  • C. Williams (2008) Multi-task Gaussian process prediction. In NeurIPS 20, pp. 153–160. Cited by: §2.
  • A. Wilson and R. Adams (2013) Gaussian process kernels for pattern discovery and extrapolation. In ICML, pp. 1067–1075. Cited by: §3.3.