The Challenge of Predicting Meal-to-meal Blood Glucose Concentrations for Patients with Type I Diabetes

03/29/2019 ∙ by Neil C. Borle, et al. ∙ University of Alberta


Abstract

Patients with Type I Diabetes (T1D) must take insulin injections to prevent the serious long term effects of hyperglycemia – high blood glucose (BG). These patients must also be careful not to inject too much insulin because this could induce hypoglycemia (low BG), which can potentially be fatal. Patients therefore follow a “regimen” that, based on various measures, determines how much insulin to inject at certain times. Current methods for managing this disease require adjusting the patient’s regimen over time based on the disease’s behavior (recorded in the patient’s diabetes diary).

If we can accurately predict a patient’s future BG values from his/her current features (e.g., predicting today’s lunch BG value given today’s diabetes diary entry for breakfast, including insulin injections, and perhaps earlier entries), then it is relatively easy to produce an effective regimen. This study explores the challenges of BG modeling by applying a number of machine learning algorithms, as well as various data preprocessing variations (corresponding to 312 [learner, preprocessed-dataset] combinations), to a new T1D dataset that contains 29 601 entries from 47 different patients. Our most accurate predictor is a weighted ensemble of two Gaussian Process Regression models, which achieved an errL1 loss of 2.70 mmol/L (48.65 mg/dl). This was an unexpectedly poor result given that one can obtain an errL1 of 2.91 mmol/L (52.43 mg/dl) using the naive approach of simply predicting the patient’s average BG. For each data-variant/model combination we report several evaluation metrics, including glucose-specific metrics, and find similarly disappointing results – again, the best model was only incrementally better than the simplest measure. These results suggest that the diabetes diary data that is typically collected may not be sufficient to produce accurate BG prediction models; additional data may be necessary to do so.

1 Introduction

Individuals suffering from Type I diabetes (T1D) are unable to produce insulin, meaning their bodies cannot properly regulate their blood glucose (BG) [1] – i.e., cannot maintain their BG between 4 – 8 mmol/L [2]. As a result, T1D is a serious long term condition that can lead to microvascular, macrovascular, neurological and metabolic complications [1, 2].

To manage their diabetes, patients give themselves periodic injections of insulin as directed by their health care team. Injecting too much insulin may induce hypoglycemia (BG < 4 mmol/L, in our study), which can be dangerous, possibly causing a coma. However, injecting too little insulin can result in hyperglycemia (BG > 8 mmol/L, in our study), which may lead to chronic complications such as blindness, kidney failure, nerve damage and circulatory problems [1, 2]. In general, a patient’s BG will depend on many factors, including past carbohydrate intake, the amount of bolus/basal insulin injected, exercise, and stress [2].

Diabetes patients try to maintain their BG in a normal range. This is challenging because tight glycemic control using bolus insulin injections (whether intermittent injections with insulin pens or boluses using insulin pumps) is associated with an increased risk of hypoglycemic events [1]. This challenge has led to attempts to create closed-loop systems, and to computational techniques that assist in controlling patients’ BG levels [3]. An extreme example is the effort to create an “artificial pancreas”, which explicitly integrates automatic monitoring with automatic administration of insulin [4].

Another perspective on fully automated diabetes management views the BG control problem as two sequential subproblems:

  1. “modeling”: learning an accurate BG prediction model that, for example, predicts the BG level at lunch given a description of the subject throughout breakfast (including perhaps her previous BG values, carbohydrate intake, etc., from earlier meals), as well as the amount of insulin injected at breakfast.

  2. “controlling”: given the current information (at breakfast), consider the effects of injecting various possible amounts of insulin – e.g., {1 unit, 1.5 units, 2 units, …}. For each, use the learned model to predict the BG value at lunch, then inject the amount that is predicted to lead to the best lunch-time BG value; a minimal sketch of this greedy step appears below. (Of course, this assumes that decisions made at breakfast only affect lunch, lunch decisions only affect dinner, etc. – which ignores the longer-range effects of actions; see Bastani [3].)
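To make the controlling step concrete, the following is a minimal sketch of such a one-step controller; the `model` object, the flat feature list, and the 6 mmol/L target are illustrative assumptions rather than the paper’s implementation.

```python
def choose_bolus(model, features, candidate_boluses, target_bg=6.0):
    """Greedy one-step controller: among candidate bolus amounts, pick the one
    whose predicted next-meal BG is closest to a target value (hypothetical)."""
    def predicted_bg(bolus):
        # The learned model maps [current features + proposed bolus] -> next-meal BG.
        return model.predict([features + [bolus]])[0]
    return min(candidate_boluses, key=lambda b: abs(predicted_bg(b) - target_bg))

# e.g., choose_bolus(learned_model, breakfast_features, [1.0, 1.5, 2.0, 2.5])
```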

This paper focuses on the first subtask: developing a BG prediction system. We use machine learning techniques to learn models that estimate an individual’s future BG using covariates that describe the current patient. In general, a model $M$ will predict the blood glucose at the next time point ($\Delta t$ minutes into the future), based on the information currently known about this patient, including the amount of insulin (bolus) the patient decided to inject (note that this abstracts some issues; see Appendix A.1 for details):

$$\mathrm{BG}_{t+\Delta t} \;\approx\; M(\,\mathrm{meal}_t,\ \mathrm{BG}_t,\ \mathrm{CHO}_t,\ \mathrm{bolus}_t,\ \ldots\,) \qquad (1)$$

As an example of this subtask, think of predicting an individual’s blood glucose at a 12:01pm lunch on Tuesday, given information collected up until the 8am breakfast on Tuesday. Note that this could include only the Tuesday breakfast information, or it could also include other, earlier information – e.g., the ellipses in Equation 1 might contain information about events from yesterday, or last week.

Explicitly, the goal of this work is to determine if it is possible to accurately predict a T1D patient’s recorded BG, from one meal to the next, based only on the information typically recorded by the patient. To do this, a model must be able to deal with the data provided in a patient’s diabetes diary, which has varied prediction horizons.


This work is an extensive effort to learn an accurate BG prediction model, exploring 312 different combinations of learner and preprocessing variant. To train and evaluate each of these variants, we used a dataset of 29 601 entries collected from 47 unique patients, where each entry included the information typically recorded, including: the time of day, the patient’s current BG, the carbohydrates about to be consumed, and the anticipated exercise.

1.1 Limitations of Prior Diabetes Modeling Studies

As the mechanisms of diabetes are not completely understood, we take the machine learning approach of learning a model from a large labeled dataset containing many records from many Type-1 diabetes patients. Of course, this requires access to such a dataset. Previous studies have been based on data from small numbers of subjects and/or data collected over a short time period. For example, several studies have been based on data from a single patient, where records were collected for fewer than 100 days [5, 2, 6, 7]. These studies are limited because different patients behave very differently. Other studies include more patients (12 – 15) but only have 3 – 22 days worth of data [8, 9]. Another study used three patients with two years of data [10]. Further, studies analyzing continuous glucose monitoring (CGM) data from larger patient sets (89 T1D patients) exist, but are again sampled over relatively short periods of time (1 week) [11]. In these last four cases, the datasets either had short histories for their patients or had a small number of patients in total. In contrast, our work uses a larger number of patients who had up to two years worth of data, with records collected multiple times each day.

There are large datasets of type-2 diabetes patients – e.g., Quinn et al. [12] measured glycated hemoglobin changes in data collected from 163 patients over the course of a year. However, studies that model type-2 diabetes [13, 14] should not be directly compared to those that model type-1 diabetes, because there are significant differences between these diabetes types. In particular, there is less variance in the blood glucose readings over time for type-2 patients than for type-1 patients, making type-2 patients easier to model.

While we focus on predicting BG values many hours later, some studies instead attempt to predict the occurrence of hypoglycemic events, and only within a short window (e.g., 30 to 120 minutes) [15, 16, 8, 17, 18]. Indeed, a literature review by Contreras et al. identified 49 publications using modeling techniques for blood glucose prediction (primarily with T1D patients), of which 38 used prediction horizons of 60 minutes or less [19]. Of these, one publication included prediction horizons of 180 minutes but only on simulated data, and another had 1440 minute prediction horizons but used a dataset of 8 patients collected over only 3 days [19]. While short-horizon hypoglycemia prediction might help protect patients from a very serious situation, it is lacking in several ways. First, such fine-grained measurements are often not practically obtainable outside of a study setting, without a CGM device that provides measurements every 5 minutes. Second, these short-term predictions are not adequate for spanning the time between meals. Third, the goal of building a diabetes control system is better served by a more expressive model, as opposed to one that can only provide binary classifications – hypoglycemic or not; note that such models provide no useful feedback in situations where patients are hyperglycemic.

In our work, we try to model blood glucose dynamics (including both hyperglycemia and hypoglycemia) using only the standard records collected at meal times. While this makes our task more challenging, it involves only the data that medical professionals most often encounter in practice.


1.2 Additional Background

This section describes several techniques within the diabetes modeling literature, including the approaches that we used.

Like Pappada et al. [20], we also considered neural network models. After training on 17 T1D patients, Pappada et al.’s [20] evaluation, on a single held-out patient, yielded “scores” (a version of the relative L1 measure defined in Equation 5) of 0.067 (resp., 0.089, 0.117, 0.145, 0.166, 0.189) when using predictive windows of 50 (resp., 75, 100, 120, 150 and 180) minutes into the future. While they were able to achieve a score of 0.189 using 180 minute predictive windows, note that their result is based on the test data of a single patient (other patients may be more difficult to predict) and that their study involved relatively short-range predictions (50 – 180 minutes), while our study involves predictions made, on average, 310.6 minutes into the future (averaging 593 minutes for overnight predictions and 236 minutes otherwise). Since Pappada et al. showed that larger predictive windows decrease the accuracy of their models, we expect our data to be more difficult to model well. Also, their study involved only 3 to 9 days of data with continuous glucose monitoring, whereas our data were collected over a period of months to years.

Our work resembles previous works [21, 22] that use Gaussian Process Regression (GPR) for modeling diabetes. In particular, Duke [22] used GPR to learn models of individual patients that could be used to aid in cross-patient prediction. We similarly explore some transfer learning techniques with GPR, along with ensembles of learners and various other machine learning algorithms.

Prior works have also addressed the blood glucose modeling problem that we explore here [2, 6, 7]. The latter two [6, 7] evaluate their results using normalized blood glucose values; since these are unitless, they cannot be directly interpreted in terms of mmol/L, which means that we cannot compare our results to theirs. However, we do evaluate the performance of a model that is similar to the Gaussian Wavelet Neural Network used by Zainuddin et al. [7].

One work of interest to our study is the previously mentioned 2018 publication by Gadaleta et al. [11], which analyzes CGM data. In comparison to our work, it considers shorter-term blood glucose predictions over a shorter period of time, but is similar in that it provides a fairly comprehensive analysis of the predictive performance of many different machine learning models. Gadaleta et al. focus on two different methods for training and evaluating models, “static” and “dynamic”. Similar to their “dynamic” training process, the majority of our study trains and tests on data from each patient separately. However, the stacking models used in our work effectively train a “static” model in a leave-one-patient-out fashion and combine it with a patient-specific model. Gadaleta et al. consider combining “static” and “dynamic” training processes as future work.

There are other works that define measures for evaluating the quality of glucose predictors in general. In particular, Del Favero et al. [23] describe several different measures for comparing a patient’s specific glucose reading with a predicted one, including both standard measures (like the L1, relative L1, and L2 losses – there called MAD, MARD and RMSE) and some “glucose-specific metrics”, such as gMAD and gRMSE [24]. While our paper focuses on the L1 and relative L1 errors, we also include the others mentioned there.

1.3 Main Contributions

Below we list the main contributions of this work:

  1. To our knowledge, this study examines the largest multi-year dataset of diabetes diary records, collected from Type 1 diabetes patients, used for modeling future BG.

  2. We provide a comprehensive study of this data, considering 312 combinations of learning algorithm and type of data, to determine if machine learning can be used to create an accurate blood glucose prediction model.

  3. Our results demonstrate that it is difficult for a machine learned model to perform better than a naive baseline (in this case, predicting a patient’s average BG), when considering both standard error measures, like L1 and L2 loss, and glucose-specific measures, such as gMAD.

The publicly available MSc thesis [25] that corresponds to this work provides additional information, including a breakdown of the individual patients in the study, more detailed results, and a comparison to a diabetologist’s performance on this prediction task.

2 Materials and Methods

Section 2.1 first summarizes how we obtained this (real world) data; Section 2.2 then describes the pre-processing steps required to make this data usable. We then consider two ways to modify this dataset. Section 2.3 considers modifying the set of records: one class of studies involves the complete set of entries, and another includes just the subset of “Expert Predictable” records (defined below). For each of these two sets of records, we consider various “feature sets” – the original set of features, and also 12 other variants, each of which includes various new features that are combinations of the original features; see Section 2.4. Section 2.5 then summarizes the 12 different learning algorithms we considered (based on 7 distinct base learners), and Section 2.6.1 describes a trivial baseline, to help us determine whether the results of any of the learned models are actually meaningful. This requires describing how we evaluate the quality of a learned model – see Section 2.6. (This segues naturally to Section 3, which provides those empirical results.)

2.1 Dataset

This study used 47 patient histories from Type I diabetes patients, which were collected using the “Intelligent Diabetes Management” (IDM) software (described in Ryan et al. [26]). Note that the associated website https://idm.ualberta.ca/ has since been decommissioned. This data included patients who participated in Ryan et al.’s study, as well as additional patients who began using the IDM software after the completion of the study (up until December 2016). The participants gave their informed written consent, and the Research Ethics Board of the University of Alberta approved the collection and analysis of the data. For further details regarding patient participation, see Ryan et al. [26]. Some of the participants only used the system a few times; as we wanted to focus on patients with sufficient information to find relevant patterns, we only included patients who made at least 100 diabetes diary entries with the system – i.e., produced at least 100 “sufficient” records. This led to a dataset of 16 pump users and 31 non-pump patients. Table 1 provides summary statistics for our data. The dataset used for this work differs from the one described in Borle [25] in that we limit the patients included to those with complete data.

Patient #16 is noteworthy for having by far the most records of any patient in our dataset; it is unusual for a patient to consistently produce diabetes entries over the course of many years. Because of the large number of records, we use part of this patient’s dataset as our hyper-parameter tuning (validation) dataset, as well as for visualization.

Distinct Patients | Age | Height | Weight | Sex | Pump Users
47 | | cm* | kg | 47 (9 / 38) | 16

* Height could not be obtained for 7 individuals, so this average value was calculated using only the remaining patients. See Borle [25] for more details about individual patients.

Table 1: Summary of Demographics

Each record corresponds to an entry in a patient’s “diabetes diary”, which includes the meal associated with the record (meal), a time stamp (date and time), the blood glucose value (BG), the grams of carbohydrates consumed (CHO), and the units of bolus (resp., basal) insulin injected (bolus, resp., basal). The patients also entered the anticipated level of exercise using the non-numeric values {“less than normal”, “normal”, “active”, “very active”}. We converted these into numeric values for use by standard learning algorithms.

As was mentioned, 16 of the patients in this study used insulin pumps, which directly infuse insulin from a reservoir, via a catheter, just under a patient’s skin at a basal rate. Pump patients also self-administer larger amounts of bolus insulin when they ingest carbohydrates (as a patient would with an insulin pen; see http://www.diabetes.org/living-with-diabetes/treatment-and-care/medication/insulin/how-do-insulin-pumps-work.html). Each record of each insulin pump patient includes the basal infusion rate value PV (in units/hour). The insulin pump settings work by partitioning the 24h clock into intervals and setting a particular insulin delivery rate for each interval; the PV value for any specific record was then set to the delivery rate for the interval containing the record’s time stamp. We also computed two other features: Δt, the elapsed time since the previous record (actually, Δt is based on previous bolus and CHO values; see Appendix A.1), and “Insulin on Board” (IOB), which captures the effect of any insulin remaining in a person’s system from previous injections [27]. The IOB estimate was based on the following pairs of elapsed time and percentage of post-injection insulin remaining [28]: (1.66 hours, 78%), (2.5 hours, 48%), (3.33 hours, 27%), (4.15 hours, 12%), (5 hours, 3%). We then used a simple spline to interpolate these values; see Fig 1. Table 2 formalizes all of these features and Table 3 provides example data.

Fig 1: Spline of Insulin on Board over Time.
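As an illustration of the IOB computation, the decay points above can be interpolated with an off-the-shelf spline. This sketch assumes SciPy, and adds an anchor of 100% remaining at time zero, which the text does not state explicitly.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# (hours since injection, fraction of insulin remaining), from [28];
# the (0 h, 100%) anchor point is our assumption.
hours    = np.array([0.0, 1.66, 2.5, 3.33, 4.15, 5.0])
fraction = np.array([1.0, 0.78, 0.48, 0.27, 0.12, 0.03])
iob_curve = CubicSpline(hours, fraction)

def insulin_on_board(bolus_units, elapsed_hours):
    """Estimate residual insulin (units) from a bolus injected elapsed_hours ago."""
    if elapsed_hours >= 5.0:   # treat the bolus as fully absorbed after 5 hours
        return 0.0
    return float(bolus_units * iob_curve(elapsed_hours))

# A 10.4 unit bolus, 103 minutes later, leaves roughly 7.9 units on board
# (compare the IOB column of Table 3):
print(insulin_on_board(10.4, 103 / 60))
```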

2.2 Data Preprocessing

The first step in analyzing the data was dealing with missing or erroneous (unreasonably low) values. We discarded any record that did not have an associated BG value (572 records); this was necessary because we cannot evaluate a model on records without a ground-truth BG. We also discarded any records that had missing dates (6 records), because these timestamps are integral to deriving features from the data. Next, we changed any BG value less than 1 mmol/L (8 records) to 1 mmol/L, as glucose meters are simply not accurate at these low values, other than to state that the values are very low. We then log-transformed the blood glucose values for all our predictors, anticipating that this log-linear model would have better performance. This means that after a model makes a prediction, we must use simple exponentiation to transform that prediction back into the original interpretable units.
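A minimal sketch of this log-transform pipeline, using a scikit-learn regressor on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train, bg_train = rng.random((100, 5)), rng.uniform(2, 20, 100)  # stand-in data
X_test = rng.random((10, 5))

model = Ridge(alpha=1.0).fit(X_train, np.log(bg_train))  # fit on log(BG)
bg_pred = np.exp(model.predict(X_test))                  # exponentiate back to mmol/L
```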

To address missing bolus insulin and CHO (carbohydrate) values, we imputed average values into the missing entries (variants of this step are described in Section 2.4). This was done on a per-subject, per-meal basis – that is, we imputed an individual’s average value for a particular meal. For example, say a specific patient injected, on average, 3 units of bolus insulin before breakfast; whenever she does not enter the before-breakfast insulin, we replace that missing value with “3 units”. When dealing with missing exercise values, we imputed the “normal” value. For missing basal insulin values, we always imputed a constant value of 0; this allows a learner to distinguish when basal insulin was recorded and when it was not. After this preprocessing, we computed the auxiliary features (Δt, IOB) from the improved data. We describe the complete set of features in Table 6, and we show example records as columns in Table 7; see also Appendix A.1.
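A pandas sketch of this imputation scheme; the column names (“patient”, “meal”, “bolus”, “CHO”, “EV”, “basal”) are assumed here, not taken from the IDM schema.

```python
import pandas as pd

def impute_diary(df: pd.DataFrame) -> pd.DataFrame:
    """Per-subject, per-meal mean imputation for bolus and CHO; 'normal' for
    missing exercise; 0 for missing basal (column names are assumed)."""
    for col in ["bolus", "CHO"]:
        df[col] = df.groupby(["patient", "meal"])[col].transform(
            lambda s: s.fillna(s.mean()))
    df["EV"] = df["EV"].fillna("normal")
    df["basal"] = df["basal"].fillna(0.0)
    return df
```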


Table 2: Description of Original Features, and some Computed Features, used in this Study
meal The time of day: {Before Breakfast, After Breakfast, Before Lunch, After Lunch, Before Supper, After Supper, Before Bed, During the Night}
date The date as year-month-day
time The time as hour:minute:second
BG The BG value at the current time (mmol/L)
CHO The amount of carbohydrates ingested (grams)
bolus The amount of insulin injected (units)
basal The units of background insulin injected
EV Numeric encoding of the anticipated exercise value
PV Pump Value: the rate at which the insulin pump is infusing (units/hour). This is always 0 if the patient does not have a pump.
Δt The elapsed time since the last record (min)
IOB Insulin on Board: estimated residual insulin from the previous injection (units)

See text for further description of these terms. Note this is a simplified set of features; see Table 6 in the Appendix for the complete set of feature descriptions.

index | 27 | 28 | 29 | 30 | 31 | 32
meal | Before Breakfast | After Breakfast | Before Lunch | After Lunch | Before Dinner | After Dinner
date | 2015-11-25 | 2015-11-25 | 2015-11-25 | 2015-11-25 | 2015-11-25 | 2015-11-25
time | 08:36:00 | 10:19:00 | 12:19:00 | 15:35:00 | 18:42:00 | 20:11:00
BG | 16.2 | 14.7 | 5.6 | 6.8 | 10.5 | 3.0
CHO | 30.0 | 0 | 30.0 | 0 | 15.0 | 0
bolus | 10.4 | 0 | 3.0 | 0 | 3.8 | 0
basal | 0 | 0 | 0 | 0 | 0 | 0
EV | 4 | 4 | 4 | 4 | 4 | 4
PV | 0.50 | 0.50 | 0.63 | 0.45 | 0.90 | 0.90
Δt | 540 | 103 | 120 | 196 | 187 | 89
IOB | 0.00 | 7.90 | 3.61 | 0.89 | 0.81 | 3.35

Table 3: Example of Data, over a single day, from Patient 16

Note this is a simplified version of the data; Table 7 in the Appendix provides the general, complete set of features.

2.3 Subset of only “Expert Predictable” Entries

As our data was collected voluntarily from patients at their own convenience, sampling intervals are not uniform, and the relevant data is not recorded for every meal. This is problematic for our predictive task, as blood glucose values become more difficult to predict as more time elapses between readings. To address this issue, our clinician co-author (E.A.R.) established the following criteria for when it is reasonable to predict the next glucose value; the BG is “expert predictable” (EP) at a given time if all of the following are true:

  1. The preceding record is not a hypoglycemic event (BG < 4 mmol/L; note that this is difficult to predict from, due to potential glucose counterregulation effects [29] and the uncertainty in BG that follows from a physiological response to hypoglycemia).

  2. The blood glucose reading is present for the preceding meal. For example, to make a prediction about a patient’s blood glucose value before lunch, a record detailing his/her previous breakfast must be available.

  3. Six of the last eight days prior to a prediction must have records for both the current meal time and the previous meal time. For example, to predict the blood glucose before lunch, six of the last eight days must have both “before lunch” and “after breakfast” entries, to help capture this “after breakfast to before lunch” transition pattern.

Fig 2: Records Meeting the EP Criteria From Section 2.3. Patients are sorted by descending total numbers of records. See Borle [25] for further details.

Fig 2 shows the number of records from each patient that qualify as EP – the number of records for which our expert would feel comfortable making predictions.
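These criteria can be checked mechanically; below is a sketch under an assumed record schema, using a hypoglycemia threshold of 4 mmol/L (consistent with the normal range in Section 1).

```python
from datetime import timedelta

def expert_predictable(records, i, hypo_threshold=4.0):
    """Can record i's BG be predicted from record i-1 under the EP criteria?
    `records` is time-ordered; each is a dict with 'time' (datetime),
    'meal' and 'BG' keys (assumed schema)."""
    prev, cur = records[i - 1], records[i]
    # Criteria 1 & 2: the preceding meal has a BG reading,
    # and it is not a hypoglycemic event.
    if prev["BG"] is None or prev["BG"] < hypo_threshold:
        return False
    # Criterion 3: six of the eight prior days contain both meal slots.
    days_with_both = 0
    for d in range(1, 9):
        day = cur["time"].date() - timedelta(days=d)
        meals = {r["meal"] for r in records if r["time"].date() == day}
        if cur["meal"] in meals and prev["meal"] in meals:
            days_with_both += 1
    return days_with_both >= 6
```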

Later, we trained and evaluated 13 models using the entire dataset (called {D13, D14, …, D25}), as well as 13 corresponding models that were trained and then evaluated using only the records that met the expert’s EP criteria (called {D0, D1, …, D12}). This naming reflects whether a dataset was derived from all the data points (D13 – D25) or from the expert’s subset (D0 – D12).

2.4 Feature Engineering

Table 2 shows the basic features used to describe each event. Additionally, we considered many other feature sets to see if any could lead to better performance. Some of the variants completed records that were missing entries for carbohydrates or bolus insulin, while others removed those deficient records. Some added the day of the week as an integer feature or as a one-hot encoded feature (http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html), while others removed the “basal insulin” feature. A few variants included non-temporal patient characteristics: age, gender, height and weight. Some replaced the set of features with just the first 4 principal components (obtained by principal component analysis, PCA).
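For instance, a day-of-week value can either be encoded as a single integer or widened into one indicator column per day with the scikit-learn encoder linked above:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

days = np.array([["Monday"], ["Tuesday"], ["Tuesday"], ["Sunday"]])
dow_int = np.array([[0], [1], [1], [6]])                    # integer variant
dow_onehot = OneHotEncoder().fit_transform(days).toarray()  # one column per observed day
```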

Our “Kok Features” variant uses computed features similar to Kok [2], and subsequently used by Baghdadi et al. [6] and Zainuddin et al. [7]. Unlike Kok’s data, however, we do not have stress level values in our data and were therefore unable to incorporate that feature.

For any given dataset variant, some models have components that train on different subsets of the data. In addition, we also use models that include components that are trained on the data from all patients other than the current test patient, as well as models that involve sub-models that are each trained only on data from one meal type (e.g., before breakfast).

Note we consider each of these variants applied to the original dataset, and also to the reduced EP-filtered datasets; see Section 2.3. This filtering reduces the predictions that our system attempts; for example, we do not attempt to predict a before-lunch BG value if there is no preceding after-breakfast reading. For each variant, we only considered the subset of records that belonged to that variant, both for producing the model and for estimating the quality of that model – in particular, the thirteen EP models were trained and tested on only the EP-qualifying records.

Table 8 (in Appendix A.1) describes all our dataset variants.

2.5 Machine Learning Algorithms


This work considers twelve different learners, based on seven base learning algorithms, each run on each of the various datasets (differing based on the feature preprocessing used; see Section 2.4): K-Nearest Neighbors (KNN), Support Vector Regression (SVR), Artificial Neural Networks (ANN), Wavelet Neural Networks (WNN), Ridge Regression (RR), Random Forest Regression (RFR) and Gaussian Process Regression (GPR). We used patient #16’s first 3260 diabetes diary entries from dataset D21 to tune the hyper-parameters for the different base learners – e.g., for the GPR model (nugget = 0.25), the KNN model (K = 10, weighting = uniform), the RF model (maximum depth = 4), and our neural network model (batch size = 20, epochs = 1000). These 3260 records were then excluded from our testing data to reduce the risk of overfitting. All other unspecified parameters were left at their defaults – e.g., the linear SVR model (C = 1), the SVR with RBF kernel (C = 1) and the Ridge regression model (alpha = 1) use the defaults provided by scikit-learn [30]. The ANN architecture included one output neuron with linear activation and two hidden layers with rectified linear activation; the WNN architecture included one output neuron with linear activation and one hidden layer of neurons with Gaussian wavelet activations. Most of these models were implemented with the help of scikit-learn [30], except for the ANN, which was implemented using Keras [31], and the WNN, which was implemented in part with scikit-neuralnetwork (http://scikit-neuralnetwork.readthedocs.io/en/latest/index.html). We assume the reader is familiar with these fairly standard learners.

We also combined base learners to develop more complex learners. The following section (Section 2.5.1) describes our GPR ensemble approach. We also considered another approach, which incorporates information from the other patients – e.g., using patient histories #1 to #46 to help train a model for patient #47. This “Stacking” approach first trains a model on the auxiliary patients, then runs this model on the test patient’s data to produce a new feature for each meal – i.e., a 14th feature, to augment the 13 features shown in Table 7. Appendix A.6 shows the entire stacking process.
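A sketch of that stacking step, assuming plain NumPy arrays (the full pipeline is diagrammed in Appendix A.6):

```python
import numpy as np

def add_stacked_feature(base_model, other_X, other_y, patient_X):
    """Fit base_model on every *other* patient's records, then append its
    prediction for each of the test patient's meals as a 14th feature."""
    base_model.fit(other_X, other_y)
    extra = base_model.predict(patient_X).reshape(-1, 1)
    return np.hstack([patient_X, extra])
```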

2.5.1 Modeling with a Confidence-Weighted GPR Ensemble

For each patient, our “GPR ensemble” model first creates two different GPR models, then combines them into a single ensemble. The first of these two models, $G_1$, learns from the entirety of a patient’s training data. The second model, $G_2$, is actually a collection of GPR models – one for each possible meal category (corresponding to meal in Table 2). Each of these models is trained using only the occurrences of that particular meal category in the patient’s training data (e.g., all occurrences of “Before Lunch”). Once we have obtained $G_1$ and the set of $G_2$ models, and wish to make a prediction for an instance $x$, we produce a weighted prediction of the form

$$\hat{y}(x) \;=\; w_1(x)\, G_1(x) \,+\, w_2(x)\, G_2(x) \qquad (2)$$

where $w_1(x) = \frac{\sigma_2(x)}{\sigma_1(x) + \sigma_2(x)}$ and $w_2(x) = \frac{\sigma_1(x)}{\sigma_1(x) + \sigma_2(x)}$, and where $\sigma_1(x)$ and $\sigma_2(x)$ are respectively the standard deviations of the posterior Gaussian distributions of $G_1$ and $G_2$ at the point $x$.
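A sketch of this ensemble with scikit-learn’s GaussianProcessRegressor (kernels and hyper-parameters omitted); note that normalized inverse-standard-deviation weights are algebraically the same as the weights in Equation 2.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class ConfidenceWeightedGPR:
    """G1 is trained on all of a patient's data; one GPR per meal category
    plays the role of G2. Predictions are blended by inverse posterior std."""
    def fit(self, X, y, meals):
        self.g1 = GaussianProcessRegressor().fit(X, y)
        self.g2 = {m: GaussianProcessRegressor().fit(X[meals == m], y[meals == m])
                   for m in np.unique(meals)}
        return self

    def predict(self, x, meal):
        x = np.atleast_2d(x)
        mu1, s1 = self.g1.predict(x, return_std=True)
        mu2, s2 = self.g2[meal].predict(x, return_std=True)
        w1, w2 = 1.0 / s1, 1.0 / s2    # confidence = inverse posterior std
        return (w1 * mu1 + w2 * mu2) / (w1 + w2)
```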

2.6 Model Evaluation

To assess the performance of our models, we require evaluation functions that measure the quality of model predictions with respect to the known true outcomes. In this paper we report our results in terms of the “L1-loss” (errL1), the “relative L1-loss” (errrL1) and the Root Mean Squared Error (RMSE).

Given a model $M$ and a dataset

$$D \;=\; \{\, (x_i,\ \mathrm{BG}_i) \,\}_{i=1}^{N} \qquad (3)$$

where each $x_i$ provides the “temporal information” shown in Equation 1, the errL1, errrL1 and RMSE losses are defined as

$$\mathrm{err}_{L1}(M, D) \;=\; \frac{1}{N} \sum_{i=1}^{N} \bigl|\, M(x_i) - \mathrm{BG}_i \,\bigr| \qquad (4)$$
$$\mathrm{err}_{rL1}(M, D) \;=\; \frac{1}{N} \sum_{i=1}^{N} \frac{\bigl|\, M(x_i) - \mathrm{BG}_i \,\bigr|}{\mathrm{BG}_i} \qquad (5)$$
$$\mathrm{RMSE}(M, D) \;=\; \sqrt{\frac{1}{N} \sum_{i=1}^{N} \bigl(\, M(x_i) - \mathrm{BG}_i \,\bigr)^2} \qquad (6)$$

where BG_i is the blood glucose associated with the next time point, occurring Δt minutes later (note that Del Favero et al. [23] refer to errL1 and errrL1 as MAD and MARD, respectively).
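Equations 4 – 6 translate directly into code; a sketch:

```python
import numpy as np

def err_l1(pred, true):    # Equation 4 (MAD), in mmol/L
    return np.mean(np.abs(pred - true))

def err_rl1(pred, true):   # Equation 5 (MARD), unitless
    return np.mean(np.abs(pred - true) / true)

def rmse(pred, true):      # Equation 6
    return np.sqrt(np.mean((pred - true) ** 2))
```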

For each of these three metrics we also consider “glucose-specific” variants, which use a Clarke Error Grid inspired penalty function to re-weight the relative costs of different mispredictions (see Del Favero et al. [23] for descriptions and definitions). In total, therefore, we consider the performance of our models across 6 different evaluation functions (see Appendix A.3 for why we include both errL1 and errrL1).


2.6.1 Baseline: Using a Naive Predictor

To establish a baseline for evaluating these learned models, we created a naive model that, for each patient, simply predicted that patient’s average BG value (over all meals/records) based on his/her diabetes history – that is, the naive model predicted the same average value (for that patient), independent of any other information about that patient. More concretely, given a patient’s data D, partitioned into a training set D_train and a test set D_test, the model calculates the average blood glucose over the entire training set,

$$\overline{\mathrm{BG}} \;=\; \frac{1}{|D_{\mathrm{train}}|} \sum_{(x_i,\, \mathrm{BG}_i)\, \in\, D_{\mathrm{train}}} \mathrm{BG}_i \qquad (7)$$

including readings for all meals and all days. Then, for every instance in the associated test set D_test, this model predicts $\overline{\mathrm{BG}}$. So, for a patient whose average BG over her first training set was 8.4, this trivial model predicts that her blood glucose value will be 8.4 for each meal in the associated test set. We can then evaluate this trivial model using Equations 4, 5 and 6; we clearly hope that the less-trivial models will do significantly better.
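The baseline itself is a one-liner; a sketch:

```python
import numpy as np

def naive_predict(train_bg, n_test):
    """Predict the patient's training-set mean BG for every test meal."""
    return np.full(n_test, np.mean(train_bg))
```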

2.6.2 10-Fold Cross Validation

Fig 3: Illustration of 5-Fold CV with Contiguous Segments. In each CV iteration training is done on the blue segments and testing on the green segment.

Each of our learners takes the entire dataset and produces a model; the next challenge is evaluating that learned model. To evaluate the predictive quality of each learner, we use 10-fold cross validation (CV) with respect to each patient. We first partition the time series history of a patient into ten contiguous segments S1, …, S10. We then use nine of the ten segments for training in each CV round and the remaining segment for testing – so the first split would train on S2, …, S10 and test on S1. While the testing partition always consists of contiguous data, the training partition will not always be completely contiguous. Fig 3 provides a visualization of what it means to partition time series data into contiguous segments for the purposes of cross validation – for simplicity, it shows “5-fold CV” rather than 10.
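A sketch of this contiguous-segment splitting (scikit-learn’s KFold with shuffle=False behaves the same way; we spell it out here):

```python
import numpy as np

def contiguous_folds(n_records, k=10):
    """Yield (train_idx, test_idx) pairs where each test fold is one
    contiguous segment of the patient's time-ordered records."""
    segments = np.array_split(np.arange(n_records), k)
    for i, test_seg in enumerate(segments):
        train_idx = np.concatenate([s for j, s in enumerate(segments) if j != i])
        yield train_idx, test_seg
```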

3 Results

3.1 Cross Validation Results

In this section we discuss the results of our models when evaluated with cross validation on the dataset variants created. Again, 12 different learners were evaluated (derived from 7 base learners) and 26 different dataset variants (D0 through D25) were created for the purpose of this analysis. For each of the 47 patient histories (excluding the records used for hyperparameter selection, described in Section 2.1), we perform 10-fold CV using the different learner/dataset-variant combinations to determine their effectiveness and how well they compare to the naive baseline model from Section 2.6.1. Appendix A.2 provides details about the models, and Appendix A.4 provides heat-maps that show both the performance of models on different datasets (in terms of errL1 and errrL1), as well as the improvement relative to the naive baseline. Appendix A.5 also shows the performance of these learned models in terms of the other measures.

Fig 4: Our GPR ensemble’s predictions on data from patient 16.

For each learner and dataset variant pair, we compute the error as a micro-average over all the records of each patient. These results allow us to determine the pair with the lowest average errL1 over all 47 patient histories: our confidence-weighted GPR ensemble had the lowest errL1 on average across all of the 26 different preprocessing variants of the data. On dataset D0 (the preprocessing variant with the lowest average errL1 across all models), the naive model’s average errL1 was 2.91 mmol/L, while the GPR ensemble’s average errL1 was 2.70 mmol/L – i.e., our best model saw an improvement of only 7.1% relative to the baseline!

To help understand why the improvement is not greater, Fig 4 shows the predictions of the GPR ensemble for the processed entries from patient #16 that were used for selecting hyperparameters. Here, we can see that the model is unable to account for the high amount of variance present in the BG records for this patient.

Fig 5: The GPR ensemble’s average errL1 as a function of BG variance, over all patient histories.

Fig 5 plots the variance in each patient’s BG history against the errL1 loss (Equation 4) that the GPR ensemble was able to achieve on that patient. This figure shows that the variance of a patient’s blood glucose was highly correlated with the test loss (as measured by the Pearson correlation).

Fig 6: The GPR ensemble’s average errL1 as a function of the number of diabetes diary entries for a patient, over all patient histories.

Fig 6 is a scatter plot of the errL1 loss as a function of the number of data points that were available for each patient in the dataset. This figure suggests that there is no relationship between how well the model performs (in terms of test errL1 loss) on any particular patient and how many data points were collected from that patient (the Pearson correlation is slightly negative).

3.2 Other Evaluation Measures

We then considered the errrL1 loss, and found that the GPR ensemble’s errrL1 on D0 was 0.361. The GPR ensemble also achieved the best errrL1 overall, although on a different dataset variant: the best [model, dataset-variant] pair was the ensemble on dataset D9, which achieved an average errrL1 of 0.348 – an improvement of 19.0% relative to the naive baseline’s 0.430. Note that the ensemble achieved an errL1 of 2.78 mmol/L on dataset D9.

In addition to the errL1 and errrL1 metrics, we also report RMSE and the glucose-specific versions of these metrics in Tables 4 and 5. Interestingly, the best model across all metrics was the GPR ensemble, and the best dataset preprocessing variant was either D0 or D9 (in both their EP and non-EP versions).

Metric | Naive Model Error | Best Model Error | Percent Improvement | Best Model | Best Dataset Variant
RMSE | 3.58 | 3.47 | 2.98% | GPR ensemble | D0
errL1 (MAD) | 2.91 | 2.70 | 7.12% | GPR ensemble | D0
errrL1 (MARD) | 0.430 | 0.348 | 18.97% | GPR ensemble | D9
gRMSE | 5.01 | 4.86 | 2.95% | GPR ensemble | D0
gMAD | 5.31 | 4.98 | 6.28% | GPR ensemble | D0
gMARD | 0.783 | 0.648 | 17.28% | GPR ensemble | D9

Table 4: Performance of the best models across all metrics on the EP datasets.

Note that these error values are micro-averages. These are selected entries from the tables in Supporting Information Section A.5.

Metric | Naive Model Error | Best Model Error | Percent Improvement | Best Model | Best Dataset Variant
RMSE | 3.67 | 3.58 | 2.49% | GPR ensemble | D13
errL1 (MAD) | 2.96 | 2.78 | 6.14% | GPR ensemble | D13
errrL1 (MARD) | 0.422 | 0.355 | 15.88% | GPR ensemble | D22
gRMSE | 5.16 | 5.02 | 2.70% | GPR ensemble | D13
gMAD | 5.45 | 5.14 | 5.70% | GPR ensemble | D13
gMARD | 0.772 | 0.660 | 14.50% | GPR ensemble | D22

Table 5: Performance of the best models across all metrics on the full (non-EP) datasets.

Note that these error values are micro-averages. These are selected entries from the tables in Supporting Information Section A.5.

4 Discussion

Our results show that our best learning algorithm is more accurate than a naive baseline – but only slightly – and that it can only achieve an average errL1 loss of approximately 2.70 mmol/L. This loss means that, on average, if the patient’s blood glucose was normal (e.g., 6 mmol/L), the learned model may incorrectly identify the patient as either hypoglycemic (as 6 − 2.70 = 3.3 mmol/L) or hyperglycemic (6 + 2.70 = 8.7 mmol/L). Together with the strong relationship between glucose variance and prediction error, this highlights how challenging it is to create models that produce fine-grained blood glucose predictions using only diabetes diary entries – i.e., using only the information that is commonly available to medical practitioners.

Having tried 312 different combinations of learners and dataset variants, and having observed minimal differences in their performance, it seems unlikely that another [learner, dataset-variant] combination would do much better. Note that these limited results include models that use data from multiple patients, which suggests that simply including more patients in the study is not likely to improve model performance. Moreover, since model accuracy did not seem to improve as the number of recorded entries increased, we suspect that simply collecting more of these entries for each individual will not improve model performance either.

There are many possible reasons why modeling T1D glucose levels based on this standard type of diabetes diary data is so challenging. It is possibly just an artifact of our study: perhaps many of the patients who volunteered did so because their diabetes was difficult to manage. Another reason could be that inaccuracies and omissions of variables in the data prevent the model from producing accurate predictions. These omissions could include: not knowing the site where the bolus insulin was injected, how much scar tissue was present at the injection site, skin temperature, how accurately the carbohydrate value was recorded, the accuracy of the recorded insulin dose, the levels of different hormones, whether the patient was menstruating, stress levels, the accuracy of the recorded exertion, insulin age or storage conditions, the amount of blood flow at the injection site, and likely other factors. Given our belief that training more accurate models will require additional relevant variables, future research might incorporate more confounding variables, such as injection location [32], glucagon levels [33] and/or meal protein/fat content [34]. However, it is not clear which, if any, of these variables are sufficient to explain the response, nor whether they can be practically captured in a clinical setting.

5 Conclusion

This work explored the challenge of accurately predicting future blood glucose values in Type I diabetes patients, based on models learned using several machine learning algorithms. Our extensive explorations – involving 12 different learning algorithms and 26 different encodings of the data (312 combinations) – found that, on average, the model with the lowest expected errL1 was a confidence-weighted Gaussian process regression ensemble. Using 10-fold cross validation on 29 601 blood glucose records from 47 different patients, this model performed only 7.1% better than the naive “mean predicting” model. Anecdotally, a diabetologist also attempted this task – predicting the BG for the next meal. We found that our model’s predictions (insignificantly) outperformed the diabetologist’s in terms of a simple unbiased loss function, but that the diabetologist performed (insignificantly) better when the evaluation was biased toward predicting hypoglycemic events; see Borle [25].

These results showed that our model could achieve an expected absolute error of 2.70 mmol/L (48.65 mg/dl), which is disconcertingly large given that this is based on the type of data that is frequently collected and used in clinical practice (records collected at meal times by the patients themselves). These results strongly suggest that the standard data collected by T1D patients, while apparently appropriate for clinical treatment of T1D, is not sufficient for accurately predicting blood glucose levels. We conjecture that patient data that is sampled more frequently and that includes additional features would improve the ability of both professionals and machine learning models to more accurately predict patients’ blood glucose levels, but there is a practical trade-off between patient convenience and highly detailed record keeping.

Appendix A Supporting Information

A.1 Dataset Preprocessing Variants and Example

As mentioned earlier, the features that we described in Section 2.2 are simplified so that they can be easily understood. This appendix describes the complete set of features that we used.

The previously described Δt and BG features implicitly assume that the previous event involved both injecting a quantity of insulin and consuming some carbohydrates. While this is typically the case, there are exceptions. To accommodate situations where an event involves only one of these, we decompose each of these terms into two different features: Δt is separated into Δt_CHO and Δt_bolus, and the previous BG is separated into BG_CHO and BG_bolus. These correspond to the time since the most recent previous bolus injection (when bolus units were injected and the blood glucose was BG_bolus), and the time since the most recent previous carbohydrate consumption (when the subject consumed CHO grams and his/her blood glucose was BG_CHO).

Table 6 provides the complete set of features that we used and Table 7 provides the correct version of Patient#16’s data from Table 3 with these features.

meal The time of day: {Before Breakfast, After Breakfast, Before Lunch, After Lunch, Before Supper, After Supper, Before Bed, During the Night}
DOW The day of the week
EV Numeric encoding of the anticipated exercise value
PV Pump Value: the rate at which the insulin pump is infusing (units/hour). This is always 0 if the patient does not have a pump
basal The units of background insulin injected
BG The BG value at the current time (mmol/L)
IOB Insulin on Board: estimated residual insulin from the previous injection (units)
CHO The most recent previous amount of carbohydrates ingested (grams)
bolus The most recent previous amount of insulin injected (units)
BG_CHO The BG value at the time that CHO was ingested (mmol/L)
BG_bolus The BG value at the time that bolus was injected (mmol/L)
Δt_CHO The time between CHO and the current BG (min)
Δt_bolus The time between bolus and the current BG (min)

Table 6: Description of Original and Processed Features used in this Study

index | 27 | 28 | 29 | 30 | 31 | 32
meal | Before Breakfast | After Breakfast | Before Lunch | After Lunch | Before Dinner | After Dinner
DOW | Tuesday | Tuesday | Tuesday | Tuesday | Tuesday | Tuesday
EV | 4 | 4 | 4 | 4 | 4 | 4
PV | 0.50 | 0.50 | 0.63 | 0.45 | 0.90 | 0.90
basal | 0 | 0 | 0 | 0 | 0 | 0
BG | 16.2 | 14.7 | 5.6 | 6.8 | 10.5 | 3.0
IOB | 0.00 | 7.90 | 3.61 | 0.89 | 0.81 | 3.35
CHO | 17.5 | 30.0 | 20.5 | 30.0 | 18.5 | 15.0
bolus | 1.93 | 10.40 | 2.44 | 3.00 | 2.54 | 3.80
BG_CHO | 10.3 | 16.2 | 14.7 | 5.6 | 6.8 | 10.5
BG_bolus | 10.3 | 16.2 | 14.7 | 5.6 | 6.8 | 10.5
Δt_CHO | 540 | 103 | 120 | 196 | 187 | 89
Δt_bolus | 540 | 103 | 120 | 196 | 187 | 89

Table 7: Example of Processed Data, over a single day, from Patient 16 (Variant D1)

Note that Δt_CHO will differ from Δt_bolus, and BG_CHO will differ from BG_bolus, whenever carbohydrates and insulin were not taken at the same time.

Table 8 lists all of the dataset variants that are generated by the various feature-creation/selection steps. Note the first 13 datasets (D0 – D12) contain only the subset of records that satisfy the EP criteria – see Section 2.3. The second 13 rows (D13 – D25) deal with, essentially, the entire set of records. The “Missing Carbs” and “Missing Bolus” columns address variants in handling missing data, “PCA Transform” indicates if the features were transformed using PCA, the “Kok Features” column indicates datasets that use the features described by Peter Kok, and the remaining columns indicate whether specific other features are included.

Dataset | # of Subjects | Records Predicted | EP Rules | DOW Features | Basal Feature | Patient Specific Features | Kok Features | PCA Transform | Missing Carbs | Missing Bolus
1 | 42 | 7378 | 1 | 1 | 1 | 0 | 0 | 0 | Throwout | Impute Mean
2 | 39 | 5978 | 1 | 1 | 1 | 0 | 0 | 0 | Throwout | Throwout
3 | 47 | 16167 | 1 | 1 | 1 | 0 | 0 | 0 | Impute Mean | Impute 0
4 | 47 | 16167 | 1 | 1 | 1 | 0 | 0 | 0 | Impute 0 | Impute Mean
5 | 47 | 16167 | 1 | 7 | 1 | 1 | 0 | 0 | Impute Mean | Impute Mean
6 | 47 | 16167 | 1 | 1 | 1 | 0 | 0 | 0 | Impute Mean | Impute Mean
7 | 47 | 16167 | 1 | 7 | 1 | 0 | 0 | 0 | Impute Mean | Impute Mean
8 | 47 | 16167 | 1 | 0 | 0 | 0 | 0 | 0 | Impute Mean | Impute Mean
9 | 37 | 7349 | 1 | 0 | 0 | 0 | 1 | 0 | N/A | N/A
10 | 47 | 16167 | 1 | 1 | 1 | 0 | 0 | 0 | Impute 0 | Impute 0
11 | 45 | 10623 | 1 | 1 | 1 | 0 | 0 | 0 | Impute Mean | Throwout
12 | 47 | 16167 | 1 | 0 | 0 | 0 | 0 | 1 | Impute Mean | Impute Mean
13 | 37 | 7349 | 1 | 0 | 0 | 0 | 1 | 1 | N/A | N/A
1 | 42 | 15961 | 0 | 1 | 1 | 0 | 0 | 0 | Throwout | Impute Mean
2 | 39 | 13832 | 0 | 1 | 1 | 0 | 0 | 0 | Throwout | Throwout
3 | 47 | 24896 | 0 | 1 | 1 | 0 | 0 | 0 | Impute Mean | Impute 0
4 | 47 | 24896 | 0 | 1 | 1 | 0 | 0 | 0 | Impute 0 | Impute Mean
5 | 47 | 24896 | 0 | 7 | 1 | 1 | 0 | 0 | Impute Mean | Impute Mean
6 | 47 | 24896 | 0 | 1 | 1 | 0 | 0 | 0 | Impute Mean | Impute Mean
7 | 47 | 24896 | 0 | 7 | 1 | 0 | 0 | 0 | Impute Mean | Impute Mean
8 | 47 | 24896 | 0 | 0 | 0 | 0 | 0 | 0 | Impute Mean | Impute Mean
9 | 37 | 11888 | 0 | 0 | 0 | 0 | 1 | 0 | N/A | N/A
10 | 47 | 24896 | 0 | 1 | 1 | 0 | 0 | 0 | Impute 0 | Impute 0
11 | 45 | 19122 | 0 | 1 | 1 | 0 | 0 | 0 | Impute Mean | Throwout
12 | 47 | 24896 | 0 | 0 | 0 | 0 | 0 | 1 | Impute Mean | Impute Mean
13 | 37 | 11888 | 0 | 0 | 0 | 0 | 1 | 1 | N/A | N/A

Table 8: Datasets Generated from Different Feature Sets.

Here, “Basal Feature” and “Patient Specific Features” are features that were included (1) or excluded (0) from datasets. “DOW Features” indicates if the day of the week was not included (0), included as an integer (1), or included as a one-hot encoded feature (7). “PCA Transform” indicates whether the data was reduced to 4 principal components. “Kok Features” means that the data was preprocessed to replicate (as best as possible) the features used in Kok’s MSc thesis [2]. In the final two columns, the value “Throwout” means that those records were removed from the dataset. The “# of Subjects” column shows that some datasets did not include all 47 patients; in these cases, patients were excluded because they had too few records (under 100) after the preprocessing steps were applied to their data. The table is partitioned so that datasets with the EP rules (D0 – D12) precede their corresponding datasets without EP rules (D13 – D25).

A.2 Model Variants

name | Algorithm | Confidence Weighting | Stacking
gpr_be | GPR | 1 | 0
gpr_be_AllPat_AllMeals | GPR | 1 | 1
gpr_IndPat_AllMeals | GPR | 0 | 0
gpr_AllPat_AllMeals | GPR | 0 | 1
svr1 | SVR (RBF Kernel) | 0 | 0
svr1_lin | SVR (Linear Kernel) | 0 | 0
svr1_allpats | SVR (RBF Kernel) | 0 | 1
rf4 | Random Forest | 0 | 0
KNN10U | KNN | 0 | 0
ridge | Ridge Regression | 0 | 0
wnn | Wavelet Neural Network | 0 | 0
NN | Feed-Forward NN | 0 | 0
naive | BG History Average | 0 | 0

Table 9: Descriptions of the Different Learners Used.

Confidence Weighting is explained in Section 2.5.1 and Stacking is explained in Appendix A.6.

A.3 Inclusion of errrL1 in addition to errL1


While errL1 is the standard loss function, it can be problematic: note that the [predicted, true] pairs [5, 3] and [10, 12] each have an errL1 of 2 – i.e., |5 − 3| = |10 − 12| = 2 – but predicting 5 mmol/L instead of 3 mmol/L is potentially much more dangerous, in terms of patient health, than predicting 10 mmol/L instead of 12 mmol/L. Here, the errrL1 function would correctly impose a larger penalty on the first pair (2/3) than on the second (2/12). See Fig 7 for a visualization of the target loss function, showing that this error is especially large in the dangerous situation where the true BG value is small but the predicted value is large.

Fig 7: Target Loss Function. This function emphasizes loss when the model fails to predict hypoglycemic events. Here, “True BG” refers to the recorded BG and “Predicted BG” refers to the model’s prediction. Note that the error is large when the true BG is very low but the predicted BG is relatively high.

A.4 Experimental Results: Heat-Maps

This sub-appendix provides a set of heat-maps (Figs 8, 9, 10 and 11). The left half of each heat-map contains the datasets that adhere to our EP rules, while the right half contains those that do not. Models (on the y axis) are sorted by their average error (over all 26 datasets), so that the model with the best average error across all datasets appears at the top of these figures. Further, datasets D0 to D12 and D13 to D25 (in each half of these heat maps) are sorted horizontally, in increasing order of average error across all models, with the left-most dataset in each half having the smallest error. In Figure 8, the best model/dataset combination is found in the top left corner. More detailed results for these heat-maps are given in Borle [25].

Fig 8: Average L1 Loss: Datasets vs. Models. Datasets using the EP criteria are on the left half of the bisecting white line. Each square represents the cross-validation L1 error, micro-averaged over patients.
Fig 9: Percent Improvement in Average L1 Loss for Models vs. Baseline. Datasets using the EP criteria are on the left half of the bisecting white line. Each square corresponds to the percent change between the corresponding result in Figure 8 and the performance of the naive baseline.
Fig 10: Average Relative L1 Loss: Datasets vs. Models. Datasets using the EP criteria are on the left half of the bisecting white line. Each square represents the cross-validation relative L1 error, micro-averaged over patients.
Fig 11: Percent Improvement in Average Relative L1 Loss for Models vs. Baseline. Datasets using the EP criteria are on the left half of the bisecting white line. Each square corresponds to the percent change between the corresponding result in Figure 10 and the performance of the naive baseline.

A.5 Experimental Results: Tables

EP datasets (one row per dataset; the 12 columns are the losses of the 12 models):
1 | 2.7 2.71 2.74 2.75 2.74 2.8 2.81 2.8 2.86 2.87 2.88 3.29
2 | 2.79 2.79 2.82 2.83 2.83 2.88 2.9 2.88 2.91 2.93 2.95 3.8
3 | 2.78 2.79 2.81 2.82 2.84 2.86 2.89 2.89 2.96 3.0 3.01 3.22
4 | 2.79 2.79 2.82 2.82 2.84 2.86 2.9 2.89 2.97 3.0 3.01 3.37
5 | 2.81 2.82 2.85 2.86 2.84 2.88 2.9 2.9 2.96 2.99 3.0 3.14
6 | 2.78 2.78 2.81 2.81 2.83 2.84 2.89 2.88 2.96 2.98 3.01 3.2
7 | 2.81 2.82 2.85 2.86 2.84 2.9 2.9 2.9 2.96 2.99 2.99 3.16
8 | 2.79 2.79 2.82 2.82 2.84 2.84 2.9 2.9 2.97 3.0 3.0 3.44
9 | 2.82 2.82 2.84 2.84 2.84 2.94 2.9 2.99 2.94 2.93 2.95 3.9
10 | 2.82 2.83 2.86 2.86 2.87 2.91 2.92 2.92 2.98 3.0 3.02 3.31
11 | 2.78 2.79 2.84 2.85 2.85 2.89 2.93 2.92 2.99 3.05 3.05 3.61
12 | 2.84 2.84 2.87 2.88 2.88 2.87 2.93 2.93 2.99 3.03 3.0 3.64
13 | 2.82 2.82 2.83 2.84 2.88 2.9 2.89 2.98 2.95 2.94 2.96 29.04

All-records datasets:
1 | 2.78 2.78 2.83 2.83 2.82 2.9 2.9 2.89 2.94 2.97 2.99 3.58
2 | 2.86 2.86 2.9 2.9 2.91 2.97 2.97 2.96 3.0 3.03 3.05 3.83
3 | 2.89 2.9 2.92 2.93 2.94 2.97 3.0 3.01 3.07 3.12 3.12 3.47
4 | 2.9 2.9 2.93 2.93 2.93 2.98 3.01 3.02 3.07 3.11 3.13 3.48
5 | 2.92 2.93 2.96 2.97 2.93 3.01 3.03 3.02 3.07 3.1 3.12 3.35
6 | 2.89 2.89 2.92 2.92 2.93 2.96 3.01 3.01 3.07 3.1 3.12 3.61
7 | 2.92 2.93 2.96 2.97 2.93 3.02 3.03 3.02 3.07 3.1 3.11 3.38
8 | 2.89 2.9 2.92 2.93 2.93 2.95 3.02 3.02 3.07 3.11 3.12 3.77
9 | 2.87 2.87 2.89 2.89 2.91 3.0 2.97 3.09 3.0 3.0 3.0 4.14
10 | 2.93 2.93 2.96 2.96 2.97 3.02 3.03 3.04 3.08 3.12 3.15 3.53
11 | 2.9 2.9 2.95 2.96 2.96 3.0 3.02 3.1 3.12 3.16 3.17 4.22
12 | 2.94 2.94 2.97 2.97 2.96 2.98 3.03 3.05 3.09 3.13 3.12 4.7
13 | 2.87 2.87 2.89 2.89 2.94 2.96 2.95 3.06 3.01 3.0 3.01 6.1

Table 10: Losses Corresponding to Fig 8.

See Table 8 for descriptions of the different datasets and Table 9 for descriptions of the models.

EP datasets (one row per dataset; the 12 columns are the percentage improvements of the 12 models):
1 | 7.12 7.02 5.75 5.54 5.95 3.73 3.58 3.77 1.8 1.45 1.11 -13.15
2 | 6.19 6.14 5.02 4.8 4.62 2.93 2.28 3.18 1.97 1.48 0.85 -27.86
3 | 7.98 7.85 6.96 6.83 6.03 5.51 4.43 4.6 1.98 0.88 0.56 -6.35
4 | 7.79 7.74 6.82 6.75 6.07 5.47 4.26 4.44 1.66 0.87 0.39 -11.52
5 | 7.01 6.82 5.86 5.59 6.13 4.61 4.02 4.27 2.19 1.05 0.69 -3.69
6 | 8.18 8.02 7.23 6.98 6.34 6.09 4.38 4.62 2.24 1.3 0.48 -5.91
7 | 7.01 6.76 5.86 5.57 6.2 4.25 4.02 4.27 2.19 1.19 0.98 -4.47
8 | 7.93 7.8 6.91 6.73 6.04 6.27 4.06 3.99 1.9 0.73 0.94 -13.7
9 | 4.2 4.15 3.5 3.49 3.39 0.11 1.42 -1.55 0.01 0.16 -0.23 -32.75
10 | 6.65 6.48 5.57 5.38 5.06 3.62 3.33 3.51 1.33 0.75 0.09 -9.39
11 | 9.94 9.72 8.04 7.61 7.61 6.47 5.15 5.33 3.11 1.09 1.23 -16.83
12 | 6.27 6.03 5.0 4.71 4.96 4.99 3.17 2.85 1.29 -0.13 0.72 -20.35
13 | 4.1 3.99 3.7 3.53 2.15 1.19 1.8 -1.24 -0.4 -0.18 -0.75 -887.75

All-records datasets:
1 | 6.14 6.08 4.63 4.49 4.79 2.14 2.13 2.37 0.9 -0.18 -0.73 -20.68
2 | 4.97 4.9 3.69 3.53 3.31 1.27 1.31 1.59 0.5 -0.53 -1.49 -27.2
3 | 7.02 6.97 6.1 6.0 5.62 4.57 3.54 3.27 1.42 -0.06 -0.26 -11.37
4 | 6.9 6.86 5.91 5.83 5.78 4.42 3.34 2.97 1.35 -0.01 -0.52 -11.91
5 | 6.07 5.93 4.84 4.61 6.04 3.33 2.83 2.84 1.49 0.28 -0.28 -7.63
6 | 7.23 7.07 6.34 6.1 5.96 4.96 3.43 3.27 1.51 0.38 -0.12 -15.97
7 | 6.07 5.9 4.84 4.59 6.05 2.89 2.83 2.84 1.49 0.35 0.06 -8.41
8 | 7.13 6.93 6.22 5.97 5.84 5.4 3.13 2.99 1.33 -0.01 -0.03 -20.94
9 | 4.21 4.2 3.29 3.25 2.8 -0.19 0.56 -3.19 -0.22 -0.15 -0.16 -38.48
10 | 5.95 5.85 4.92 4.79 4.73 3.05 2.72 2.46 0.93 -0.17 -1.19 -13.24
11 | 7.91 7.81 6.26 5.99 6.17 4.73 4.22 1.68 1.1 -0.39 -0.74 -34.05
12 | 5.64 5.53 4.66 4.49 4.8 4.47 2.55 2.15 0.75 -0.59 -0.27 -51.06
13 | 4.11 4.0 3.51 3.36 1.62 1.1 1.47 -2.33 -0.69 -0.46 -0.61 -103.85

Table 11: Percentage Improvements Corresponding to Fig 9.

See Table 8 for descriptions of the different datasets and Table 9 for descriptions of the models.

EP datasets (one row per dataset; the 12 columns are the losses of the 12 models):
1 | 3.61 3.61 3.65 3.64 3.66 3.76 3.81 3.81 3.9 3.95 3.88 4.62
2 | 3.62 3.62 3.68 3.65 3.69 3.77 3.78 3.82 3.89 3.97 3.9 4.6
3 | 3.67 3.67 3.71 3.75 3.72 3.83 3.91 3.88 3.99 4.0 3.99 4.28
4 | 3.67 3.68 3.72 3.75 3.72 3.84 3.92 3.88 3.99 4.02 3.99 4.3
5 | 3.71 3.72 3.77 3.74 3.78 3.85 3.89 3.93 3.98 4.01 4.0 4.24
6 | 3.66 3.67 3.7 3.74 3.72 3.83 3.89 3.85 3.96 4.03 3.99 4.27
7 | 3.71 3.73 3.77 3.73 3.78 3.85 3.89 3.93 3.97 4.0 4.0 4.26
8 | 3.67 3.68 3.71 3.75 3.72 3.86 3.91 3.85 3.99 4.02 4.02 4.37
9 | 3.58 3.58 3.6 3.6 3.6 3.77 3.74 3.81 3.71 3.74 3.79 4.79
10 | 3.73 3.74 3.78 3.8 3.79 3.87 3.94 3.98 3.99 4.03 4.02 4.41
11 | 3.48 3.5 3.57 3.57 3.59 3.69 3.74 3.72 3.86 3.85 3.84 4.68
12 | 3.75 3.76 3.8 3.8 3.81 3.89 3.95 3.93 4.03 4.0 4.05 4.73
13 | 3.58 3.58 3.59 3.64 3.6 3.77 3.77 3.77 3.73 3.76 3.77 6.02

All-records datasets:
1 | 3.64 3.64 3.71 3.68 3.72 3.81 3.89 3.9 3.91 3.93 3.95 4.73
2 | 3.62 3.62 3.67 3.66 3.67 3.75 3.8 3.86 3.81 3.85 3.88 4.73
3 | 3.75 3.75 3.78 3.79 3.79 3.9 3.98 3.97 4.03 4.06 4.05 4.66
4 | 3.75 3.75 3.79 3.79 3.79 3.91 3.98 3.98 4.03 4.08 4.06 4.82
5 | 3.79 3.79 3.84 3.79 3.85 3.92 3.96 4.02 4.02 4.07 4.08 4.45
6 | 3.74 3.74 3.77 3.78 3.78 3.9 3.96 3.95 4.01 4.06 4.05 4.63
7 | 3.79 3.8 3.84 3.78 3.85 3.92 3.96 4.03 4.02 4.04 4.08 4.48
8 | 3.75 3.75 3.78 3.8 3.79 3.93 3.98 3.94 4.03 4.09 4.09 4.78
9 | 3.63 3.63 3.66 3.69 3.66 3.88 3.81 3.87 3.78 3.84 3.88 5.35
10 | 3.8 3.8 3.84 3.82 3.85 3.93 4.0 4.05 4.04 4.05 4.09 4.68
11 | 3.55 3.56 3.61 3.6 3.62 3.73 3.78 3.78 3.83 3.88 3.84 4.73
12 | 3.82 3.82 3.86 3.85 3.87 3.97 4.02 4.01 4.06 4.07 4.13 11.06
13 | 3.63 3.64 3.65 3.72 3.66 3.86 3.84 3.82 3.8 3.85 3.85 7.52

Table 12: Losses Corresponding to Fig 10.

See Table 8 for descriptions of the different datasets and Table 9 for descriptions of the models. Here all values are multiplied by 10 so that they are the same order of magnitude as in Table 10.

EP datasets (one row per dataset; the 12 columns are the percentage improvements of the 12 models):
1 | 17.53 17.45 16.52 16.76 16.33 14.19 12.9 12.99 10.96 11.37 9.68 -5.64
2 | 16.69 16.63 15.34 16.03 15.13 13.16 12.99 12.06 10.56 10.16 8.59 -5.95
3 | 17.97 17.79 16.9 16.16 16.71 14.2 12.47 13.16 10.83 10.76 10.4 4.17
4 | 17.81 17.73 16.79 16.15 16.72 14.04 12.3 13.13 10.73 10.78 10.07 3.82
5 | 16.91 16.73 15.64 16.42 15.39 13.89 13.0 12.14 10.87 10.58 10.32 5.19
6 | 18.16 17.95 17.16 16.34 16.86 14.21 13.01 13.76 11.31 10.79 9.93 4.5
7 | 16.91 16.64 15.64 16.43 15.32 13.89 13.0 11.96 11.1 10.58 10.39 4.67
8 | 17.85 17.7 16.92 16.07 16.73 13.59 12.45 13.77 10.69 10.16 10.12 2.34
9 | 12.68 12.61 12.07 12.09 12.06 7.84 8.65 7.01 9.38 7.43 8.72 -16.97
10 | 16.57 16.38 15.46 14.89 15.26 13.36 11.91 11.01 10.62 10.03 9.82 1.38
11 | 18.97 18.7 16.87 16.89 16.42 14.1 12.95 13.47 10.26 10.72 10.5 -8.95
12 | 16.13 15.86 15.06 14.9 14.73 12.82 11.7 12.11 9.8 9.42 10.51 -5.72
13 | 12.65 12.49 12.27 11.01 12.06 7.96 7.93 7.99 9.01 7.83 8.31 -46.92

All-records datasets:
1 | 15.85 15.83 14.3 14.9 14.17 12.11 10.15 9.93 9.79 8.86 9.22 -9.33
2 | 13.7 13.63 12.54 12.67 12.41 10.56 9.29 7.96 9.05 7.39 8.14 -12.83
3 | 16.61 16.54 15.77 15.59 15.66 13.19 11.36 11.66 10.18 9.73 9.56 -3.7
4 | 16.55 16.5 15.63 15.71 15.56 13.02 11.39 11.41 10.22 9.71 9.15 -7.34
5 | 15.68 15.54 14.52 15.67 14.31 12.76 11.85 10.45 10.48 9.26 9.36 0.96
6 | 16.84 16.65 16.03 15.85 15.76 13.17 11.84 12.02 10.71 9.75 9.58 -3.01
7 | 15.68 15.51 14.52 15.91 14.28 12.76 11.85 10.18 10.62 9.26 10.0 0.23
8 | 16.64 16.43 15.83 15.36 15.59 12.6 11.44 12.2 10.29 9.07 9.0 -6.37
9 | 12.88 12.89 12.11 11.57 12.1 6.83 8.53 7.19 9.21 6.86 7.8 -28.32
10 | 15.47 15.35 14.51 14.86 14.38 12.45 11.04 9.85 10.0 9.02 9.74 -4.16
11 | 15.88 15.74 14.61 14.71 14.33 11.55 10.49 10.47 9.25 9.15 8.15 -11.98
12 | 15.02 14.9 14.15 14.34 13.95 11.59 10.63 10.81 9.73 7.97 9.49 -146.23
13 | 12.89 12.76 12.47 10.71 12.29 7.49 7.98 8.28 8.85 7.64 7.68 -80.39

Table 13: Percentage Improvements Corresponding to Fig 11.

See Table 8 for descriptions of the different datasets and Table 9 for descriptions of the models.

EP datasets (one row per dataset; the 12 columns are the losses of the 12 models):
1 | 5.03 4.98 4.99 5.05 5.08 5.09 5.12 5.19 5.24 5.37 5.38 5.75
2 | 5.13 5.09 5.09 5.18 5.17 5.17 5.21 5.26 5.28 5.4 5.42 6.43
3 | 5.16 5.19 5.2 5.31 5.25 5.26 5.32 5.41 5.48 5.69 5.68 5.68
4 | 5.16 5.2 5.21 5.31 5.26 5.26 5.33 5.42 5.49 5.7 5.7 6.1
5 | 5.21 5.25 5.26 5.3 5.32 5.33 5.33 5.41 5.47 5.6 5.7 5.58
6 | 5.13 5.17 5.18 5.29 5.23 5.25 5.32 5.41 5.47 5.65 5.68 5.67
7 | 5.22 5.25 5.27 5.3 5.32 5.33 5.33 5.41 5.47 5.62 5.66 5.63
8 | 5.14 5.2 5.21 5.31 5.27 5.28 5.36 5.47 5.5 5.67 5.66 5.97
9 | 5.2 5.18 5.18 5.17 5.22 5.22 5.15 5.32 5.34 5.49 5.46 6.4
10 | 5.27 5.27 5.28 5.37 5.34 5.35 5.4 5.48 5.51 5.7 5.71 5.82
11 | 5.1 5.08 5.09 5.2 5.18 5.2 5.23 5.33 5.41 5.63 5.59 6.04
12 | 5.25 5.33 5.34 5.39 5.42 5.43 5.46 5.56 5.54 5.69 5.68 6.37
13 | 5.15 5.18 5.18 5.23 5.21 5.22 5.23 5.4 5.35 5.49 5.49 31.86

All-records datasets:
1 | 5.2 5.14 5.14 5.19 5.25 5.25 5.29 5.36 5.4 5.6 5.61 6.17
2 | 5.28 5.23 5.23 5.3 5.32 5.33 5.33 5.42 5.44 5.63 5.65 6.44
3 | 5.36 5.4 5.41 5.48 5.47 5.47 5.52 5.64 5.7 5.94 5.94 6.09
4 | 5.37 5.41 5.41 5.47 5.48 5.49 5.53 5.65 5.69 5.95 5.92 6.11
5 | 5.43 5.46 5.47 5.46 5.55 5.57 5.55 5.65 5.69 5.86 5.93 5.91
6 | 5.34 5.38 5.4 5.46 5.45 5.47 5.52 5.64 5.69 5.91 5.89 6.2
7 | 5.45 5.46 5.48 5.45 5.55 5.57 5.55 5.65 5.69 5.88 5.91 5.93
8 | 5.33 5.4 5.41 5.47 5.47 5.48 5.57 5.68 5.7 5.92 5.92 6.42
9 | 5.33 5.28 5.28 5.33 5.34 5.34 5.29 5.49 5.47 5.62 5.57 6.81
10 | 5.46 5.47 5.48 5.53 5.55 5.56 5.58 5.69 5.72 5.95 5.99 6.18
11 | 5.3 5.29 5.3 5.38 5.41 5.42 5.38 5.59 5.67 5.88 5.84 6.79
12 | 5.42 5.52 5.53 5.55 5.6 5.61 5.64 5.76 5.74 5.93 5.93 7.73