Machine learning for the diagnosis of early stage diabetes using temporal glucose profiles

05/18/2020
by   Woo Seok Lee, et al.
POSTECH

Machine learning shows remarkable success in recognizing patterns in data. Here we apply machine learning (ML) to the diagnosis of early stage diabetes, which is known to be a challenging task in medicine. Blood glucose levels are tightly regulated by two counter-regulatory hormones, insulin and glucagon, and the failure of glucose homeostasis leads to the common metabolic disease diabetes mellitus. It is a chronic disease with a long latent period that complicates detection of the disease at an early stage. The vast majority of diabetes cases result from diminished effectiveness of insulin action, and this insulin resistance must modify the temporal profile of blood glucose. Thus we propose to use ML to detect the subtle change in the temporal pattern of glucose concentration. Time series data of blood glucose with sufficient resolution are currently unavailable, so we confirm the proposal using synthetic data of glucose profiles produced by a biophysical model that considers glucose regulation and hormone action. Multi-layered perceptrons, convolutional neural networks, and recurrent neural networks all identified the degree of insulin resistance with high accuracy above 85%.


I Introduction

Glucose homeostasis is essential to stably supply fuel to the brain GH ; GH2 . Blood glucose levels (BGLs) are tightly regulated by hormones from the endocrine pancreas (Fig. 1 (a)). The normal fasting glucose concentration is about 4 mM BGL . The American Diabetes Association Guideline defines hyperglycemia as a blood glucose concentration above 7.8 mM. Severe hyperglycemia (above 11.1 mM on average at 2 h) is defined as diabetes mellitus (DM) ada_diagnosing . This chronic disease is associated with long-term damage, dysfunction, and failure of diverse organs, resulting in complications.

Figure 1: (a) Schematic diagram of glucose homeostasis. Blood glucose levels (BGLs) are regulated by insulin and glucagon secreted from endocrine systems. In the pancreas (blue boxed area), each endocrine system consists of three cell types that interact with each other. The blue graded arrow represents the diminished action of insulin, that is, insulin resistance. (b) Two-hour averaged blood glucose concentration depending on the degree of insulin resistance. The red dotted line represents the standard hyperglycemic threshold of 7.8 mM = 1.24 G_0, where G_0 = 6.3 mM is the normal glucose concentration. Insulin resistance leads to hyperglycemia.

DM is grouped into three categories based on the origin of the metabolic disorder ada_classification . Type-1 diabetes mellitus (T1DM) results from insufficient production of insulin due to the destruction of insulin-producing endocrine cells by autoimmunity. An artificial pancreas can help these patients. Type-2 diabetes mellitus (T2DM) is the result of diminished effectiveness of insulin action even though insulin is produced as normal. T2DM comprises 90% of all DM patients. Gestational diabetes is a temporary condition in women who develop hyperglycemia during pregnancy.

Insulin resistance has been a key component of health monitoring for a few decades. The incidence of T2DM is closely related to obesity, although a fraction of people with a high body mass index (BMI) avoid DM seidell2000 ; seidell1998 . Insulin resistance is of utmost importance in the pathogenesis of T2DM, hypertension, and coronary heart disease, including syndrome X reaven1995 .

Here we evaluate machine learning (ML) as a method to predict the development of insulin resistance from the time series of BGLs. ML has been used to diagnose DM by considering various features of individuals such as age, gender, BMI, waist circumference, smoking, job, hypertension, residential region (rural/urban), physical activity, and family history of diabetes ml2dm1 . Standard ML algorithms (linear regression, random forest, k-nearest neighbors, and support vector machines) applied to those risk factors can predict whether or not subjects are diabetic. To date, most ML applications to DM have focused on finding biomarkers ml2dm ; ml2dm1 . In this paper, we provide a novel insight for detecting DM development by extracting the increment of insulin resistance, a critical factor in T2DM, from the temporal pattern of the BGL.

This idea has not been explored yet, because time series data of BGLs with sufficient temporal resolution are currently not available. BGLs are regulated by two counter-regulatory hormones, insulin and glucagon, which are secreted in a pulsatile manner with a period of about 5 min. The fluctuating BGL signal can be regarded as an outcome of the balanced response to insulin and glucagon. Successfully probing this signal requires a temporal resolution fine enough to capture the response to the pulsatile hormones, i.e., a sampling interval shorter than the hormone pulses. The time resolution of current state-of-the-art continuous glucose monitoring sensors reaches about 5 min, which is only comparable to the period of the hormone pulses. Therefore, to test our proposal, we use synthetic data of glucose profiles produced by a biophysical model jo2019 ; jo2017scirep that considers both glucose regulation and hormone action.

This paper is organized as follows. In Section II, we briefly introduce the biophysical model that produces time series data of BGLs. In Section III, we explain machine learning methods that we use in this study. In Section IV, we provide results and discussion.

II Data preparation of glucose time traces

To produce data of glucose profiles that depend on insulin resistance, we adopt a biophysical model that describes glucose regulation by the endocrine systems jo2019 ; jo2017scirep . Because of the importance of the metabolic disease diabetes, many biophysical models exist in this field. Some models describe how glucose stimulates insulin secretion at the cellular or organ level gsis0 ; gsis ; gsis1 , while other models describe how glucose and insulin regulate each other gis2 ; gis3 ; gis4 ; jo2017plosone2 . Unlike such one-way response models or hormone-level descriptions, the biophysical model we adopt formulates the closed loop between glucose regulation and the endocrine systems.

The human pancreas has a few million islets, small endocrine systems, and each islet consists of α, β, and δ cells. Insulin secreted by β cells decreases BGLs, whereas glucagon secreted by α cells increases BGLs. Somatostatin secreted by δ cells does not directly regulate BGLs, but the three endocrine cell types interact with each other. The signs of the interactions between α, β, and δ cells are very special [Fig. 1(a)]. Depending on the glucose concentration, the endocrine cells show biological rhythms with active/silent phases that lead to corresponding hormone secretion. The biophysical model describes the rhythmic cellular activities responding to glucose stimuli as phase oscillators modulated by the environment jo2019 ; jo2020 . The model also considers interactions among endocrine cells within islets; these interactions correspond to the coupling in the oscillator model. The model was used to explain the entrainment of insulin secretion by alternating glucose stimuli in experiments jo2017plosone1 ; jo2017plosone2 .

In this study, we slightly modified the closed-loop model to consider insulin resistance. We use μ ∈ {α, β, δ} to represent the three types of endocrine cells and i to indicate the islet index. The activity (or hormone secretion) of μ cells in the i-th islet is described by an amplitude R_μ^i and a phase θ_μ^i. The dynamics of the interacting phase oscillators depends on the glucose level G:

dR_μ^i/dt = [A_μ(G) − (R_μ^i)^2] R_μ^i,   (1)
dθ_μ^i/dt = ω_μ^i + B_μ(G) + Σ_ν K_μν R_ν^i sin(θ_ν^i − θ_μ^i).   (2)

Here the model describes glucose-dependent amplitude modulations with sigmoidal functions A_μ(G) and phase modulations with linear functions B_μ(G) (see APfunctions for their specific functional forms). The spontaneous oscillations of the cellular activities have angular frequencies ω_μ^i, which follow a normal distribution with a standard deviation of 0.1 around the mean. The signs of the couplings K_μν between α, β, and δ cells follow experimental evidence. Note that cells interact only within their own islet; they do not interact with cells located in different islets.

The total amount of hormone secretion from the whole set of islets is then H_μ = Σ_i R_μ^i (1 + cos θ_μ^i)/2 for glucagon (μ = α) and insulin (μ = β); the phase θ_μ^i = 0 corresponds to maximal secretion, whereas θ_μ^i = π corresponds to minimal secretion. Given that insulin decreases the glucose concentration G, whereas glucagon increases G, the oscillator model of islets can close the loop for glucose regulation:

dG/dt = G_0 H_α − G H_β.   (3)

Glucose clearance by insulin is proportional to the present glucose concentration G, unlike glucose production by glucagon. The constant G_0 multiplying the glucose production part is included so that glucagon and insulin actions balance at the normal glucose concentration (at G = G_0 with equal hormone outputs H_α = H_β, production and clearance cancel). In this study, we set G_0 = 6.3 mM. Equations (1)-(3) complete the closed-loop model for glucose regulation in the absence of external glucose stimuli jo2019 . To include the effect of insulin resistance, we introduce an auxiliary parameter in the glucose clearance part that quantifies the reduction in the effectiveness of insulin action.

As the insulin-resistance parameter increases, BGLs increase [Fig. 1(b)]. In particular, beyond a threshold value, the 2-hour averaged BGL exceeds 7.8 mM, so hyperglycemia is severe. Therefore, to reproduce early-stage diabetic conditions, we use five groups with different degrees of insulin resistance. For each group, we numerically solved the coupled differential Eqs. (1)-(3) for the whole population of islets. We then took 500 time steps (corresponding to 25 min) with a step size of 0.05 min as one sample of a glucose time trace. For each group, we prepared 2000 samples of the BGL time series for training and 200 samples for testing. Each sample carries a one-hot group label (10000, 01000, 00100, 00010, or 00001 for the five degrees of insulin resistance, respectively).
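
For concreteness, the sketch below shows one way such synthetic traces could be generated by Euler integration of a closed-loop oscillator model. It is a minimal illustration only: the modulation functions, frequencies, glucose rate constant, islet number, and the insulin-resistance factor eta are hypothetical placeholders, not the parameters of the biophysical model jo2019 ; jo2017scirep ; only the step size (0.05 min) and the sample length (500 steps) follow the text.

```python
import numpy as np

# Toy closed-loop islet-oscillator simulation (illustrative forms and parameters only).
N_ISLETS = 20        # number of islets in the toy model (placeholder)
DT = 0.05            # integration step in minutes, as in the text
STEPS = 500          # 500 steps = 25 min per sample, as in the text
G0 = 6.3             # normal glucose concentration (mM), as in Fig. 1

def simulate_trace(eta, steps=STEPS, burn_in=2000, seed=0):
    """Euler-integrate a toy glucose/oscillator loop; eta mimics insulin resistance."""
    rng = np.random.default_rng(seed)
    # amplitudes R and phases theta for alpha (glucagon) and beta (insulin) cells
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(2, N_ISLETS))
    R = np.ones((2, N_ISLETS))
    omega = rng.normal(1.0, 0.1, size=(2, N_ISLETS))   # rad/min, sd 0.1 around the mean
    G, trace = G0, []
    for t in range(burn_in + steps):
        # glucose-dependent modulations: sigmoidal amplitudes, linear phase shifts (toy forms)
        A = np.array([1.0 / (1.0 + np.exp(G - G0)),    # alpha cells: active at low glucose
                      1.0 / (1.0 + np.exp(G0 - G))])   # beta cells: active at high glucose
        B = 0.1 * (G - G0) * np.array([-1.0, 1.0])
        # hormone output per cell type: maximal at phase 0, minimal at phase pi
        H = (R * (1.0 + np.cos(theta)) / 2.0).sum(axis=1)
        # oscillator updates (cell-cell couplings omitted in this toy sketch)
        R = R + DT * (A[:, None] - R**2) * R
        theta = theta + DT * (omega + B[:, None])
        # closed-loop glucose dynamics; (1 - eta) weakens insulin-driven clearance
        G = G + DT * 0.01 * (G0 * H[0] - (1.0 - eta) * G * H[1])
        if t >= burn_in:
            trace.append(G)
    return np.array(trace)

# five hypothetical degrees of insulin resistance
etas = [0.0, 0.05, 0.10, 0.15, 0.20]
sample = simulate_trace(etas[2], seed=42)
print(sample.shape, sample.mean())
```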

Different degrees of insulin resistance have a clear signature in the time-averaged BGL over the full time trace. However, given real glucose time traces, one cannot judge whether a different average results from a different degree of insulin resistance or simply from individual variation. Therefore, to avoid this confusion, we focus on the temporal pattern itself rather than on the shifted average level. We produced different samples by solving the model with different initial conditions or by randomly selecting different time windows from the full time traces (Fig. 2). The temporal patterns for different degrees of insulin resistance are not apparent to the eye.
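
A sketch of how the samples and labels described above could be assembled into arrays, with the average level removed so that only the temporal pattern remains. Mean subtraction is one possible choice of normalization, and simulate_trace is the hypothetical helper from the previous sketch.

```python
import numpy as np

def build_dataset(etas, n_per_group, simulate_trace):
    """Stack mean-removed glucose traces with one-hot insulin-resistance labels."""
    X, Y = [], []
    for label, eta in enumerate(etas):
        for k in range(n_per_group):
            g = simulate_trace(eta, seed=1000 * label + k)   # one 500-step trace
            X.append(g - g.mean())                           # discard the shifted average level
            onehot = np.zeros(len(etas))
            onehot[label] = 1.0
            Y.append(onehot)
    return np.array(X), np.array(Y)

# As in the text: 2000 training and 200 test samples per group, e.g.
# X_train, Y_train = build_dataset(etas, 2000, simulate_trace)
# X_test,  Y_test  = build_dataset(etas, 200,  simulate_trace)
```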

Figure 2: Temporal glucose profiles under insulin resistance. Twenty-one samples randomly selected (a) from the training data and (b) from the test data. Among the gray time-series samples, one is highlighted in color. The five rows correspond to different degrees of insulin resistance, from top to bottom. For easy comparison, the colored samples are put together in (c) and (d) for the training and test data, respectively.

III Pattern recognition by machine learning

Temporal pattern classification is one of the most challenging problems in ML Esling2012 , with a wide range of applications in human activity recognition Bevilacqua2018 , electroencephalogram (EEG) classification Craik2019 , and speech recognition Hinton2012 ; Mohamed2012 . Here, we consider four different neural networks for their ability to classify insulin resistance from the synthesized time traces of BGLs. First, we consider a shallow neural network (ShallowNet) as a control for comparison with more sophisticated network models. Second, we use a fully connected deep neural network called a multilayer perceptron (MLP); it is the most basic structure for deep learning. Third, we use a convolutional neural network (CNN), because it has been very successful in recognizing spatial and temporal patterns Krizhevsky2012 ; Fawaz2019 . Finally, we also use a recurrent neural network (RNN) Sherstinsky2018 ; RNNs were originally specialized for temporal data by considering recurrent flows in the network.

Our task is supervised learning with inputs of temporal glucose traces and outputs of five labels for insulin resistance. Thus we assigned 500 nodes to the input layer, corresponding to the 500 time steps of each trace, and 5 nodes to the output layer. We adopt the ReLU (Rectified Linear Unit) as the basic activation function, except for the output layer LSTM_activation . For the output layer, we use a softmax function to obtain probabilistic predictions over the five groups; for example, if the first output probability is the largest, we conclude that the corresponding time trace belongs to the first group. We use the Adaptive Moment Estimation (Adam) algorithm for the optimization of learning Kingma2014 . We now specify the network structures used in this study.
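
In Keras terms, this shared setup might look as follows; the categorical cross-entropy loss is an assumption, since the text specifies only softmax outputs and the Adam optimizer.

```python
import numpy as np

N_STEPS, N_CLASSES = 500, 5   # 500-step glucose trace in, five insulin-resistance groups out

def compile_model(model):
    """Shared training configuration: Adam optimizer on a softmax classification head."""
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",    # assumed loss for the one-hot labels
                  metrics=["accuracy"])
    return model

def predict_group(model, trace):
    """Convert the softmax probabilities of one glucose trace into a predicted group index."""
    p = model.predict(trace[np.newaxis, :], verbose=0)[0]
    return int(np.argmax(p)), p
```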

ShallowNet. The ShallowNet consists of two hidden layers with 1024 and 256 nodes, respectively.

MLP. The MLP has eight hidden layers of (256, 256, 512, 512, 512, 256, 128, 64) nodes, and every node in a layer is fully connected to every node in the adjacent layers.
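
A minimal Keras sketch of these two fully connected models; the layer widths come from the text, while the 500-dimensional input and the loss are assumptions carried over from the setup above.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_STEPS, N_CLASSES = 500, 5

def build_dense(hidden_sizes, name):
    """Fully connected classifier: ReLU hidden layers and a softmax output layer."""
    model = keras.Sequential(name=name)
    model.add(keras.Input(shape=(N_STEPS,)))
    for n in hidden_sizes:
        model.add(layers.Dense(n, activation="relu"))
    model.add(layers.Dense(N_CLASSES, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

shallow_net = build_dense((1024, 256), "ShallowNet")
mlp = build_dense((256, 256, 512, 512, 512, 256, 128, 64), "MLP")
```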

CNN. The CNN for classification is usually composed of two parts. The first part consists of convolution operations that extract features from the input data. The second part takes the features extracted by the convolution layers and feeds them into an MLP for classification. A convolution layer consists of a set of trainable filters. Each filter convolves across the width and height of the input data and generates convolution outputs, which can be interpreted as filtered input data. Therefore, optimizing the filters to extract relevant features from the data is the crucial step for a CNN. If one uses many filters, one can extract multiple features. These convolution processes generate a multi-dimensional feature map, which becomes the input for the MLP that combines all the processed features and finally predicts the class of the input data.

We prepared two different types of data encoding: 1-dimensional (1D) and 2-dimensional (2D) inputs. For the 1D input, the feature map is generated by convolution only along the temporal axis. The CNN for 1D input has five convolutional layers, with 100, 100, 200, 200, and 100 filters, respectively, and filter sizes of 10, 10, 10, 3, and 3, respectively. A CNN is specialized for 2D image recognition, so we also reshaped the 1D temporal data into 2D arrays of "images" whose rows are consecutive segments of the original sequence. This 2D reshaping may be able to capture internal structures such as periodicity in the original time traces. For the 2D input, the CNN also has five convolutional layers with 100, 100, 200, 200, and 100 filters, respectively, all of size (3, 3). Both the 1D and 2D CNNs have the same fully connected MLP part, with four hidden layers of (200, 100, 50, 50) nodes.
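
A Keras sketch of the two CNN variants; the filter counts and kernel sizes follow the text, whereas the padding, the absence of pooling, and the specific 25-by-20 reshape of the 500-step trace are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_STEPS, N_CLASSES = 500, 5

def mlp_head(x):
    """Shared fully connected part: hidden layers of (200, 100, 50, 50) nodes."""
    for n in (200, 100, 50, 50):
        x = layers.Dense(n, activation="relu")(x)
    return layers.Dense(N_CLASSES, activation="softmax")(x)

def build_cnn_1d():
    inp = keras.Input(shape=(N_STEPS, 1))              # glucose trace as a 1-channel sequence
    x = inp
    for filters, size in zip((100, 100, 200, 200, 100), (10, 10, 10, 3, 3)):
        x = layers.Conv1D(filters, size, padding="same", activation="relu")(x)
    return keras.Model(inp, mlp_head(layers.Flatten()(x)), name="CNN_1D")

def build_cnn_2d(rows=25, cols=20):                    # hypothetical reshape: 500 = 25 x 20
    inp = keras.Input(shape=(rows, cols, 1))
    x = inp
    for filters in (100, 100, 200, 200, 100):
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    return keras.Model(inp, mlp_head(layers.Flatten()(x)), name="CNN_2D")

cnn_1d = build_cnn_1d()
cnn_1d.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# 2D input would be prepared as x2d = x1d.reshape(-1, 25, 20, 1)
```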

RNN. The RNN is designed to process sequential data. Arrays of input data are fed into the RNN step by step, and the activation of each node is transferred to directly connected nodes through recurrent flows. Thus the RNN can naturally consider the order of time traces. Here we use three types of RNNs: (i) the vanilla RNN Sherstinsky2018 , (ii) the long short-term memory (LSTM) network Sherstinsky2018 ; Gers1999 , and (iii) the gated recurrent unit (GRU) Cho2014 . The vanilla RNN is the simplest RNN, whereas the LSTM and GRU additionally consider the memory effect of temporal data through gated memory cells (the LSTM unit and the GRU, respectively). The vanilla RNN has three layers with 250 input, 200 hidden, and 50 output nodes for the recurrent flows. For the vanilla RNN, we did not use the 1D input, because it would correspond to a ShallowNet with a single hidden layer of 200 nodes. As with the 2D CNN, we considered various input shapes to test the memory effects in the LSTM and GRU networks.
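
A Keras sketch of the recurrent models. Only the 250-input/200-hidden/50-output description of the vanilla RNN comes from the text; reading it as two stacked recurrent layers on a (2, 250) reshape of the 500-step trace, and reusing the same shape for the LSTM and GRU, are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_CLASSES = 5
TIMESTEPS, FEATURES = 2, 250     # one possible reshape of a 500-step trace (assumption)

def build_rnn(cell, name):
    """Recurrent classifier: two recurrent layers (200 and 50 units) and a softmax head."""
    inp = keras.Input(shape=(TIMESTEPS, FEATURES))
    x = cell(200, return_sequences=True)(inp)
    x = cell(50)(x)
    out = layers.Dense(N_CLASSES, activation="softmax")(x)
    model = keras.Model(inp, out, name=name)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

vanilla_rnn = build_rnn(layers.SimpleRNN, "VanillaRNN")
lstm_net = build_rnn(layers.LSTM, "LSTM")
gru_net = build_rnn(layers.GRU, "GRU")
```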

IV Results and discussion

For the learning, we used Keras in Python with the TensorFlow backend keras ; tensorflow . The learning for this simple task did not take much computation time, usually less than a few tens of minutes. We examined the diagnosis accuracy (Table 1), which was measured as the fraction of correct predictions of the degree of insulin resistance among the tested glucose profiles. The overall accuracy ranged from 70% to 90%; the MLP showed the best accuracy. The accuracy depended on the number of network parameters, which were approximately 1.99 million (MLP), 0.99 million (2D CNN), 0.87 million (LSTM), 0.66 million (GRU), and 0.26 million (vanilla RNN).
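
Training and evaluation with Keras might then look like the following; the epoch and batch-size values are placeholders, and model.count_params() returns the parameter counts compared above.

```python
# Assumes X_train, Y_train, X_test, Y_test from the data-preparation sketch
# and any compiled model (e.g. `mlp`) from the architecture sketches above.
history = mlp.fit(X_train, Y_train,
                  epochs=50, batch_size=64,          # placeholder training settings
                  validation_split=0.1, verbose=0)
loss, accuracy = mlp.evaluate(X_test, Y_test, verbose=0)
print(f"parameters: {mlp.count_params():,}  test accuracy: {accuracy:.3f}")
```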

In the analysis of the 2D reshaped data encoding, the 2D CNN showed results that were invariant under different reshapings, whereas the GRU and LSTM showed diminished accuracy as the segment length was decreased (Table 2). This trend may be a result of the finite filter size of the CNN and the memory effect of the RNNs. We also examined a different segmentation rule for the 2D reshaping, but it achieved only a negligible increase (1-5%) in accuracy.
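
For reference, such 2D encodings can be produced with plain reshapes; the 25-by-20 shape and the interleaved variant below are hypothetical examples, not necessarily the segmentations used in Table 2.

```python
import numpy as np

x = np.arange(500, dtype=float)          # stand-in for one 500-step glucose trace

# consecutive segments as rows: row k holds steps 20*k ... 20*k+19
img_consecutive = x.reshape(25, 20)

# interleaved segmentation: row k holds steps k, k+25, k+50, ... (a different rule)
img_interleaved = x.reshape(20, 25).T

print(img_consecutive.shape, img_interleaved.shape)   # (25, 20) (25, 20)
```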

Real glucose profiles include intrinsic and measurement noise. Thus, to examine the noise effect, we added white noise to our synthesized glucose data and confirmed that our diagnosis was robust up to a certain level of fluctuation in the BGL.
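
A sketch of this robustness check, adding zero-mean Gaussian white noise of a chosen relative amplitude to the test traces; the 2% level is a placeholder, not the robustness threshold found in this study.

```python
import numpy as np

def add_white_noise(X, relative_level=0.02, seed=1):
    """Add zero-mean Gaussian white noise scaled to each trace's standard deviation."""
    rng = np.random.default_rng(seed)
    sigma = relative_level * X.std(axis=1, keepdims=True)
    return X + rng.normal(size=X.shape) * sigma

# X_noisy = add_white_noise(X_test, relative_level=0.02)
# loss, accuracy = mlp.evaluate(X_noisy, Y_test, verbose=0)
```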

In this study, we checked whether machine learning could detect the pattern of BGLs under insulin resistance. The temporal change of the BGL results from the balanced response to the counter-regulatory hormones insulin and glucagon. Thus the ineffective action of insulin, called insulin resistance, should affect the BGL profile. Therefore, we simulated glucose profiles under insulin resistance by using a biophysical model of glucose regulation, and confirmed that the subtle change of glucose profiles under insulin resistance could be recognized by various machine-learning methods. This demonstrates the great potential of the machine-learning approach for the diagnosis of early stage diabetes.

A continuous-glucose-monitoring (CGM) system has been widely used, mainly for T1DM, as part of a closed-loop artificial pancreas with insulin pumps CGM . The recent development of low-cost CGM has revolutionized CGM usage towards wearable, minimally invasive sensors CGM_DM ; CGM_AF . Such efforts, in conjunction with our proposal, point to a future direction for diabetes management. In addition to the difficulty of obtaining high-resolution glucose profiles, high-accuracy labels are another prerequisite for successful supervised learning.

Acknowledgements.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, NRF-2019R1F1A1052916 (J.J.), and funded by the Ministry of Science, ICT and Future Planning through NRF-2017R1D1A1B03034600 (T.S.).

References

  • (1) P. V. Röder, B. Wu, Y. Liu, and W. Han, Exp Mol Med. 48(3): e219 (2016).
  • (2) L. von Bertalanffy, Science 111, 23–29 (1950).
  • (3) D. A. Lang, D. R. Matthews, J. Peto, and R. C. Turner, New Engl. J. Med. 301, 1023–1027 (1979).
  • (4) American Diabetes Association, Diabetes Care 41(Suppl. 1), S13 (2018).
  • (5) American Diabetes Association, Diabetes Care 42(Suppl. 1), S13–S28 (2019).
  • (6) J. C. Seidell, Br. J. Nutr., 83, Suppl. 1, S5-S8 (2000).
  • (7) J. C. Seidell, Horm. Metab. Res., 21, 155-158 (1998).
  • (8) G. M. Reaven, Physiol. Rev. 75(3):473-86 (1995).
  • (9) I. Kavakiotis, O. Tsave, A. Salifoglou, N. Maglaveras, I. Vlahavas, and I. Chouvarda, Comput. Struct. Biotechnol. J., 15, 104-116 (2017)
  • (10) S. Larabi-Marie-Sainte, L. Aburahmah, R. Almohaini, and T. Saba, Appl. Sci. 9(21), 4604 (2019).
  • (11) D. Park, T. Song, D. Hoang, et al. Sci. Rep. 7, 1602 (2017).
  • (12) T. Song and J. Jo, Phys. Biol. 16, 051001 (2019).
  • (13) G. M. Grodsky, J. Clin. Invest. 51, 2047 (1972).
  • (14) M. Komatsu, M. Takei, H. Ishii, and Y. Sato, J. Diabetes Invest. 4, 511 (2013).
  • (15) I. J. Stamper and X. Wang, J. Theor. Biol. 318, 210 (2013).
  • (16) P. Palumbo, S. Ditlevsen, A. Bertuzzi, and A. D. Gaetano, Math. Biosci. 244, 69 (2013).
  • (17) I. M. Tolic, E. Mosekilde, and J. Sturis, J. Theor. Biol. 207, 361 (2000).
  • (18) J. Li, Y. Kuang, and C. C. Mason, J. Theor. Biol. 242, 722 (2006).
  • (19) T. Song, H. Kim. S.-W. Son, and J. Jo, Phys. Rev. E 101, 022613 (2020).
  • (20) B. Lee, T. Song, K. Lee, J. Kim, P.-O. Berggren, S. H. Ryu, et al., PLoS ONE 12(8): e0183569 (2017).
  • (21) B. Lee, T. Song, K. Lee, J. Kim, S. Han, P.-O. Berggren, et al., PLoS ONE 12 (2): e0172901, (2017).
  • (22) We set the amplitude modulation functions A_μ(G) to be sigmoidal functions of the glucose level and the phase modulation functions B_μ(G) to be linear functions of the glucose level.
  • (23) P. Esling, and C. Agon, ACM Comput. Surv., 45 (1) 1-34 (2012).
  • (24) A. Bevilacqua, K. MacDonald, A. Rangarej, V. Widjaya, B. Caulfield, and T. Kechadi, Human Activity Recognition with Convolutional Neural Networks. In: Brefeld U. et al. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018. Lecture Notes in Computer Science, vol 11053. Springer, Cham (2019).
  • (25) A. Craik, Y. He, and J. L. Contreras-Vidal, J. Neural Eng., 16(3), 031001 (2019).
  • (26) G. Hinton et al., IEEE Signal Process. Mag. 29(6), 82-97 (2012).
  • (27) A. Mohamed, G. Hinton and G. Penn, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, pp. 4273-4276 (2012).
  • (28) A. Krizhevsky, I. Sutskever, and G. Hinton, Advances in Neural Information Processing Systems, 1097-1105 (NIPS 2012).
  • (29) H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, Data. Min. Knowl. Disc., 33 917-963 (2019).
  • (30) A. Sherstinsky, arXiv:1808.03314 (2018).
  • (31) We used a hyperbolic tangent activation function for the LSTM for input shapes with a long segment dimension, since the long segments caused weight explosion when ReLU was used.
  • (32) D. P. Kingma and J. Ba, arXiv:1412.6980 (2014).
  • (33) K. Cho et al. arXiv:1406.1078 (2014).
  • (34) F. A. Gers, J. Schmidhuber, and F. Cummins., Ninth International Conference on Artificial Neural Networks ICANN 99. (Conf. Publ. No. 470) (1999).
  • (35) F. Chollet et al., Keras (2015). Software available from https://keras.io.
  • (36) M. Abadi et al., TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from https://www.tensorflow.org/.
  • (37) Z. Mian, K. L. Hermayer, and A. Jenkins, Am. J. Med. Sci., 358, 332-339 (2019).
  • (38) M. Vettoretti, G. Cappon, G. Acciaroli, A. Facchinetti, and G. Sparacino, J. Diabetes Sci. Technol. 12, 1064–1071 (2018).
  • (39) G. Cappon, M. Vettoretti, G. Sparacino, A. Facchinetti, Diabetes Metab. J., 43, 383-397 (2019).