Compositional Data Regression in Insurance with Exponential Family PCA

12/29/2021
by   Guojun Gan, et al.
University of Connecticut

Compositional data are multivariate observations that carry only relative information between components. Applying standard multivariate statistical methodology directly to compositional data can lead to paradoxes and misinterpretations. Compositional data frequently appear in insurance, especially with telematics information, yet such data rarely receive special treatment in the existing actuarial literature. In this paper, we explore the use of exponential family principal component analysis (EPCA) to analyze compositional data in insurance. The method is applied to a dataset obtained from the U.S. Mine Safety and Health Administration. The numerical results show that EPCA produces principal components that are significant predictors and that improve the prediction accuracy of the regression model. The EPCA method is a promising tool for actuaries analyzing compositional data.

1 Introduction

Compositional data refer to data that describe parts of some whole and are commonly presented as vectors of proportions, percentages, concentrations, or frequencies. The sums of these vectors are constrained to be some fixed constant, e.g., 100%. Due to such constraints, compositional data possess the following properties (Pawlowsky-Glahn et al., 2015):

Scale invariance. They carry relative rather than absolute information. In other words, the information does not depend on the particular units used to express the compositional data.

Permutation invariance. Permutation of components of a compositional vector does not change the information contained in the compositional vector.

Subcomposition coherence. Information conveyed by a compositional vector should not contradict the information coming from a subcomposition formed from a subset of its components.

These properties explain why applying conventional statistical methods directly to compositional data can be inappropriate and lead to invalid conclusions.

Compositional data are multivariate observations that carry only relative information between components. Applying standard statistical methodology designed for unconstrained data directly to such compositional features can distort inference, leading to paradoxes and misinterpretations. Table 1 shows an example of Simpson’s paradox (Pawlowsky-Glahn et al., 2015). Table 1(a) shows the number of students in two classrooms who arrive on time and who are late, classified by gender. Table 1(b) shows the corresponding proportions of students arriving on time or being late. Table 1(c) shows the aggregated number of students from the two classrooms who arrive on time and who are late. The corresponding proportions are shown in Table 1(d). From Table 1(b), we see that the proportion of female students arriving on time is greater than that of male students in both classrooms. However, Table 1(d) shows the opposite result: the proportion of male students arriving on time is greater than that of female students. Simpson’s paradox tells us that we need to be careful when drawing inferences from compositional data. Conclusions made from a population may not be true for subpopulations, and vice versa.

Classroom Gender On Time Late
1 M 53 9
1 F 20 2
2 M 12 6
2 F 50 18
(a) Number of students arriving on time and being late, classified by classroom and gender.
Classroom Gender On Time Late
1 M 0.855 0.145
1 F 0.909 0.091
2 M 0.667 0.333
2 F 0.735 0.265
(b) Proportion of students arriving on time or being late, classified by classroom and gender.
Gender On Time Late
M 65 15
F 70 20
(c) Number of students arriving on time or being late, classified by gender.
Gender On Time Late
M 0.813 0.188
F 0.778 0.222
(d) Proportion of students arriving on time or being late, classified by gender.
Table 1: An example of Simpson’s paradox.
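
The reversal in Table 1 can be verified directly from the counts. The following R snippet is a minimal sketch that reproduces Tables 1(b) and 1(d) from the raw data in Table 1(a); the data frame and variable names are ours, introduced only for illustration.

    # Reproduce Table 1: Simpson's paradox in on-time proportions.
    students <- data.frame(
      classroom = c(1, 1, 2, 2),
      gender    = c("M", "F", "M", "F"),
      on_time   = c(53, 20, 12, 50),
      late      = c(9, 2, 6, 18)
    )

    # Within-classroom proportions (Table 1(b)): females lead in both classrooms.
    students$prop_on_time <- students$on_time / (students$on_time + students$late)
    print(students)

    # Aggregated over classrooms (Tables 1(c) and 1(d)): the ordering reverses.
    agg <- aggregate(cbind(on_time, late) ~ gender, data = students, FUN = sum)
    agg$prop_on_time <- agg$on_time / (agg$on_time + agg$late)
    print(agg)  # M: 65/80 = 0.813 > F: 70/90 = 0.778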

Our motivation for studying compositional data regression in insurance comes from two facts: the recent explosion of telematics data analysis in insurance (Verbelen et al., 2018; Guillen et al., 2019; Pesantez-Narvaez et al., 2019; Denuit et al., 2019; Guillen et al., 2020; So et al., 2021), and the little attention compositional data analysis has received in the actuarial literature. In telematics data, we usually see the total mileage traveled by an insured during the year together with the mileage traveled in different conditions, such as at night, over the speed limit, or in urban areas. The distances traveled in different conditions are examples of compositional data. Compositional data analysis has also been used to address the “coherence problem for subpopulations” in forecasting mortality (Bergeron-Boucher et al., 2017).

The classical method for dealing with compositional data is the logratio transformation method, which preserves the aforementioned properties. As pointed out by Aitchison (2003), however, logratio transformations cannot be applied to compositional data with zeros. In insurance data (e.g., telematics data), zero components in compositional vectors are quite common. This sparsity imposes additional challenges for analyzing compositional data in insurance.

In this paper, we investigate regression modeling of compositional variables (i.e., covariates) by using exponential family principal component analysis (EPCA) to learn low-dimensional representations of compositional data. Such representations of the compositional predictors extend the classical PCA as well as its probabilistic extension, probabilistic principal component analysis (PPCA); both are special cases of EPCA. See Tipping and Bishop (1999).

The remainder of the paper is organized as follows. In Section 2, we review some literature related to compositional data analysis. In Section 3, we describe the mine dataset used in our study. In Section 4, we present three negative binomial regression models for fitting the data. In Section 5, we provide numerical results of the proposed regression models. In Section 6, we conclude the paper with some remarks.

2 Literature review

Compositional data analysis dates back to the end of the 19th century when Karl Pearson published a paper about spurious correlations (Pearson, 1897). Since then, the development of compositional data analysis has gone through four phases (Aitchison and Egozcue, 2005; Pawlowsky-Glahn et al., 2015).

The first phase started in 1897 when Karl Pearson introduced the concept of spurious correlation in his paper (Pearson, 1897) and ended around 1960. During this phase, standard multivariate statistical analysis was developed to analyze compositional data despite the fact that a compositional vector is subject to a constant-sum constraint. In particular, correlation analysis was used to analyze compositional vectors even though Pearson (1897) pointed out the pitfalls of interpreting correlations on proportions.

The second phase started around 1960 when the geologist Felix Chayes criticized the application of standard multivariate statistical analysis to compositional data (Chayes, 1960) and ended around 1980. Chayes (1960) criticized the interpretation of correlation between components of a geochemical composition. During this phase, the main goal of compositional data analysis was to distort standard multivariate statistical techniques to analyze compositional data (Aitchison and Egozcue, 2005).

The third phase started around 1980 and ended around 2000. In the early 1980s, the statistician John Aitchison realized that compositions provide only relative, not absolute, information about the values of components and that every statement about a composition can be stated in terms of ratios of components (Aitchison, 1981, 1982, 1983, 1984). During this phase, transformation techniques were developed so that standard multivariate statistical analysis could be applied to the transformed data. In particular, a variety of logratio transformations were developed for compositional data analysis. The logratio transformation has the advantage that it provides a one-to-one mapping between compositional vectors, which live in a constrained simplex space, and the associated logratio vectors, which live in an unconstrained real space. In addition, any statement about compositions can be reformulated in terms of logratios, and vice versa.

The fourth phase started around 2000, when researchers realized that the internal simplicial operation of perturbation, the external operation of powering, and the simplicial metric define a metric vector space, and that compositional data can be analyzed within this space with its specific algebraic-geometric structure. Research during this phase is characterized by the development of staying-in-the-simplex approaches to solve compositional problems. In a staying-in-the-simplex approach, compositions are represented by orthonormal coordinates that live in a real Euclidean space. The book by Pawlowsky-Glahn et al. (2015) summarizes such approaches for compositional data analysis.

Researchers in the actuarial community have used compositional data in their studies, especially those related to telematics. In most existing actuarial literature, however, compositional data do not receive special treatment. For example, Guillen et al. (2019) and Denuit et al. (2019) selected some percentages of distances traveled in different conditions and treated them as ordinary explanatory variables. Pesantez-Narvaez et al. (2019) treated compositional predictors as ordinary explanatory variables in their study of using XGBoost and logistic regression to predict motor insurance claims with telematics data. Guillen et al. (2020) also treated compositional predictors as ordinary explanatory variables in their negative binomial regression models.

Verbelen et al. (2018) used compositional data in generalized additive models and proposed a conditioning and a projection approach to handle structural zeros in components of compositional predictors. This is one of the few studies in insurance that consider handling compositional data. However, they did not consider dimension reduction techniques for compositional data in their regression models.

3 Description of the data

We use a dataset from the U.S. Mine Safety and Health Administration (MSHA) covering 2013 to 2016. The dataset was used in the Predictive Analytics exam administered by the Society of Actuaries in December 2018 and is available at https://www.soa.org/globalassets/assets/files/edu/2018/2018-12-exam-pa-data-file.zip. It contains 53,746 observations described by 20 variables, including compositional variables. Table 2 shows the 20 variables and their descriptions. Among the 20 variables, 10 (the proportions of employee hours in different categories) are compositional components.

Variable Description
YEAR Calendar year of experience
US_STATE US state where mine is located
COMMODITY Class of commodity mined
PRIMARY Primary commodity mined
SEAM_HEIGHT Coal seam height in inches (coal mines only)
TYPE_OF_MINE Type of mine
MINE_STATUS Status of operation of mine
AVG_EMP_TOTAL Average number of employees
EMP_HRS_TOTAL Total number of employee hours
PCT_HRS_UNDERGROUND Proportion of employee hours in underground operations
PCT_HRS_SURFACE Proportion of employee hours at surface operations of underground mine
PCT_HRS_STRIP Proportion of employee hours at strip mine
PCT_HRS_AUGER Proportion of employee hours in auger mining
PCT_HRS_CULM_BANK Proportion of employee hours in culm bank operations
PCT_HRS_DREDGE Proportion of employee hours in dredge operations
PCT_HRS_OTHER_SURFACE Proportion of employee hours in other surface mining operations
PCT_HRS_SHOP_YARD Proportion of employee hours in independent shops and yards
PCT_HRS_MILL_PREP Proportion of employee hours in mills or prep plants
PCT_HRS_OFFICE Proportion of employee hours in offices
NUM_INJURIES Total number of accidents reported
Table 2: Description of the 20 variables of the mine dataset.

In our study, we ignore the following variables: YEAR, US_STATE, COMMODITY, PRIMARY, SEAM_HEIGHT, MINE_STATUS, and EMP_HRS_TOTAL. We do not use YEAR in our regression models because we do not study the temporal effect of the data; however, we use YEAR to split the data into a training set and a test set. The variables COMMODITY, PRIMARY, and SEAM_HEIGHT are related to the variable TYPE_OF_MINE, which we will use in our models. The variable EMP_HRS_TOTAL is highly correlated with the variable AVG_EMP_TOTAL, with a correlation coefficient of 0.9942, so we use only AVG_EMP_TOTAL in our models.

YEAR Number of observations TYPE_OF_MINE Number of observations
2013 13759 Mill 2578
2014 13604 Sand & gravel 25414
2015 13294 Surface 23091
2016 13089 Underground 2663
Table 3: Summary of categorical variables.

Variable Min 1st Q Median Mean 3rd Q Max
AVG_EMP_TOTAL 1 3 5 17.8419 12 3115
PCT_HRS_UNDERGROUND 0 0 0 0.0345 0 1
PCT_HRS_SURFACE 0 0 0 0.0087 0 1
PCT_HRS_STRIP 0 0.3299 0.8887 0.6801 1 1
PCT_HRS_AUGER 0 0 0 0.0046 0 1
PCT_HRS_CULM_BANK 0 0 0 0.0046 0 1
PCT_HRS_DREDGE 0 0 0 0.045 0 1
PCT_HRS_OTHER_SURFACE 0 0 0 0.0007 0 1
PCT_HRS_SHOP_YARD 0 0 0 0.0036 0 1
PCT_HRS_MILL_PREP 0 0 0 0.106 0 1
PCT_HRS_OFFICE 0 0 0.0312 0.1122 0.1685 1
NUM_INJURIES 0 0 0 0.4705 0 86
Table 4: Summary of numerical variables.

Table 3 shows summary statistics of the categorical variable TYPE_OF_MINE, along with the number of observations in each year. From the table, we see that there are approximately the same number of observations in each year and that most mines are sand & gravel and surface mines.

Variable Percentage of zeros
PCT_HRS_UNDERGROUND 95.34%
PCT_HRS_SURFACE 95.99%
PCT_HRS_STRIP 18.42%
PCT_HRS_AUGER 99.49%
PCT_HRS_CULM_BANK 99.46%
PCT_HRS_DREDGE 94.61%
PCT_HRS_OTHER_SURFACE 99.91%
PCT_HRS_SHOP_YARD 99.59%
PCT_HRS_MILL_PREP 81.24%
PCT_HRS_OFFICE 36.93%
Table 5: Percentage of zeros of the compositional variables.

Table 4 shows summary statistics of the numerical variables. From the table, we see that the average number of employees ranges from 1 to 3115. All compositional components contain zeros. In addition, most compositional components are highly skewed, as their 3rd quartiles are zero. Table 5 shows the percentage of zeros in each compositional variable. From the table, we see that most values of many compositional variables are zero.

Figure 1: Compositional variables of the mine dataset. Panel (a) shows barplots of the compositional components of 10 observations; panel (b) shows boxplots of the compositional covariates.

Figure 1(a) shows the proportions of employee hours in different categories for the first ten observations. Figure 1(b) shows the boxplots of the compositional variables. From the figures, we again see that the compositional variables have many zero values. Zeros in compositional data are irregular values. Pawlowsky-Glahn et al. (2015) discussed several strategies to handle such irregular values. One strategy is to replace zeros by small positive values. In the mine dataset, we can treat zeros as rounded zeros below a detection limit. In this paper, we replace zero components with $10^{-6}$, which corresponds to a detection limit of one hour among one million employee hours.
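
As an illustration, the replacement can be done in a few lines of R. This is a sketch only: we assume the data are in a data frame called mine with the PCT_HRS_* columns of Table 2, and we re-close each row after the replacement so the components again sum to one.

    # Replace rounded zeros with the detection limit 1e-6
    # (one hour among one million employee hours).
    pct_cols <- grep("^PCT_HRS_", names(mine), value = TRUE)
    X <- as.matrix(mine[, pct_cols])
    X[X == 0] <- 1e-6
    X <- X / rowSums(X)   # re-close the compositions to sum to 1
    mine[, pct_cols] <- X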

Figure 2: A frequency distribution of the response variable.

Figure 3: A scatter plot between the average number of employees and the total number of injuries reported.

The response variable is the total number of injuries reported. Figure 2 shows a frequency distribution of the response variable. The summary statistics given in Table 4 and Figure 2 show that the response variable is also highly skewed with many zeros. Figure 3 shows a scatter plot between the variable AVG_EMP_TOTAL and the response variable NUM_INJURIES. From the figure, we see that the two variables have a positive relationship. In our models, we will use the variable AVG_EMP_TOTAL as the exposure.

4 Models

In this section, we present the models for the compositional data described in the previous section. In particular, we first present the logratio transformation in detail. Then we present a dimension reduction technique for compositional data. Finally, we present regression models with compositional covariates.

4.1 Transformation

The classical method for dealing with compositional data is the logratio transformation method, which preserves the aforementioned properties. Let $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ be a compositional dataset with $n$ compositional vectors. These vectors are assumed to be contained in the simplex:

$$S^D = \left\{ (x_1, x_2, \ldots, x_D)^{T} : x_j > 0 \text{ for } j = 1, 2, \ldots, D,\ \sum_{j=1}^{D} x_j = \kappa \right\},$$

where $\kappa$ is a constant, usually 1. The superscript $D$ in $S^D$ does not denote the dimensionality of the simplex, as the dimensionality of the simplex is $D - 1$.

The logratio transformation method transforms the data from the simplex to the real Euclidean space (Aitchison, 1994). There are three types of logratio transformations: the centered logratio transformation (CLR), the additive logratio transformation (ALR), and the isometric logratio transformation (ILR). The CLR transformation takes the logarithm of each component scaled by the geometric mean of the compositional vector; the ALR transformation applies the logratio between each component and a reference component; and the ILR transformation uses an orthonormal coordinate system in the simplex.

Let $\mathbf{x} = (x_1, x_2, \ldots, x_D)^{T}$ be a column vector (i.e., a $D \times 1$ vector). The CLR transformation is defined as:

$$\operatorname{clr}(\mathbf{x}) = \left( \ln\frac{x_1}{g(\mathbf{x})}, \ln\frac{x_2}{g(\mathbf{x})}, \ldots, \ln\frac{x_D}{g(\mathbf{x})} \right)^{T} = H_D \ln\mathbf{x}, \qquad (1)$$

where $g(\mathbf{x}) = (x_1 x_2 \cdots x_D)^{1/D}$ is the geometric mean of $\mathbf{x}$ and $H_D$ is a $D \times D$ matrix. Here $I_D$ is a $D \times D$ identity matrix and $\mathbf{1}_D$ is a $D \times 1$ vector of ones, i.e.,

$$H_D = I_D - \frac{1}{D}\,\mathbf{1}_D \mathbf{1}_D^{T}.$$

The ALR transformation is defined similarly. Suppose that the $D$th component of the compositional vectors is selected as the reference component. Then the ALR transformation is defined as:

$$\operatorname{alr}(\mathbf{x}) = \left( \ln\frac{x_1}{x_D}, \ln\frac{x_2}{x_D}, \ldots, \ln\frac{x_{D-1}}{x_D} \right)^{T} = F_D \ln\mathbf{x}, \qquad (2)$$

where $x_D$ is the $D$th component of $\mathbf{x}$ and $F_D$ is a $(D-1) \times D$ matrix given by

$$F_D = \begin{pmatrix} I_{D-1} & -\mathbf{1}_{D-1} \end{pmatrix}.$$

The ILR transformation involves an orthonormal coordinate system in the simplex and is defined as:

$$\operatorname{ilr}(\mathbf{x}) = V^{T} \operatorname{clr}(\mathbf{x}) = V^{T} H_D \ln\mathbf{x}, \qquad (3)$$

where $V$ is a $D \times (D-1)$ matrix that satisfies the following conditions:

$$V^{T} V = I_{D-1}, \qquad V V^{T} = H_D.$$

The matrix $V$ is called the contrast matrix (Pawlowsky-Glahn et al., 2015). The CLR and ILR transformations have the following relationship:

$$\operatorname{clr}(\mathbf{x}) = V \operatorname{ilr}(\mathbf{x}).$$

Further, we have

$$\operatorname{ilr}(\mathbf{x}) = V^{T} \operatorname{clr}(\mathbf{x}).$$

All three logratio transformations are equivalent up to linear transformations. The ALR transformation does not preserve distances. The CLR transformation preserves distances but can lead to singular covariance matrices. The ILR transformation avoids these drawbacks; however, the resulting coordinates can be challenging to interpret.

These logratio transformations provide one-to-one mappings between the simplex and the real Euclidean space and allow transforming back to the simplex from the Euclidean space without loss of information. The reverse operation of the CLR transformation, for example, is given by:

$$\mathbf{x} = \mathcal{C}\left( e^{u_1}, e^{u_2}, \ldots, e^{u_D} \right)^{T} = \kappa \left( \frac{e^{u_1}}{\sum_{j=1}^{D} e^{u_j}}, \ldots, \frac{e^{u_D}}{\sum_{j=1}^{D} e^{u_j}} \right)^{T}, \qquad (4)$$

where $\mathbf{u} = \operatorname{clr}(\mathbf{x})$ is the transformation of $\mathbf{x}$ under the CLR method and $\mathcal{C}(\cdot)$ denotes the closure operation. The reverse operation of the ILR transformation is given by:

$$\mathbf{x} = \mathcal{C}\left( \exp\left( V \operatorname{ilr}(\mathbf{x}) \right) \right), \qquad (5)$$

where $\mathcal{C}$ is the closure operation defined in Equation (4).
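
To make the transformations concrete, the following R sketch computes the three logratio transformations of a toy composition and inverts them, using the R package compositions (the same package we use later for the mine data).

    library(compositions)

    x <- acomp(c(0.2, 0.3, 0.5))  # a composition in the simplex (kappa = 1)

    u_clr <- clr(x)  # centered logratio: components sum to zero
    u_alr <- alr(x)  # additive logratio with the last part as reference
    u_ilr <- ilr(x)  # isometric logratio: D - 1 orthonormal coordinates

    # The mappings are one-to-one: transforming back recovers x.
    clrInv(u_clr)
    ilrInv(u_ilr)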

4.2 PCA for compositional data

In this subsection, we introduce dimension reduction methods for compositional data. In particular, we introduce the traditional principal component analysis (PCA) and the exponential family PCA (EPCA) for compositional data in detail. The EPCA methodology has recently been shown to uncover the true low-dimensional structure of compositional data (Avalos et al., 2018).

The EPCA method is a generalization of PCA to distributions from the exponential family; it was proposed by Collins et al. (2002) based on the same ideas that underlie generalized linear models. To describe the traditional PCA and the EPCA method, let us first introduce the $\Phi$-PCA method, because both the traditional PCA and the EPCA methods can be derived from it.

We consider a dataset $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ consisting of $n$ numerical vectors, each of which has $d$ components. For notational purposes, we assume all vectors are column vectors and treat $X$ as an $n \times d$ matrix whose $i$th row is $\mathbf{x}_i^{T}$.

Let $\phi$ be a convex and differentiable function. Then the $\Phi$-PCA aims to find matrices $A \in \mathbb{R}^{\ell \times n}$ and $V \in \mathbb{R}^{\ell \times d}$ with $\ell < d$ that minimize the following loss function:

$$L(A, V) = \sum_{i=1}^{n} B_\phi\left( \mathbf{x}_i \,\big\|\, V^{T}\mathbf{a}_i \right), \qquad (6)$$

where $\mathbf{a}_i$ is the $i$th column of $A$ and $B_\phi$ denotes the Bregman divergence:

$$B_\phi(\mathbf{x} \,\|\, \mathbf{y}) = \phi(\mathbf{x}) - \phi(\mathbf{y}) - \nabla\phi(\mathbf{y})^{T} (\mathbf{x} - \mathbf{y}). \qquad (7)$$

Here $\nabla$ denotes the vector differential operator. A Bregman divergence can be thought of as a generalized distance. Table 6 gives two examples of convex functions and the corresponding Bregman divergences.

Convex function Bregman divergence
$\phi(\mathbf{x}) = \frac{1}{2}\|\mathbf{x}\|^2$ $B_\phi(\mathbf{x} \,\|\, \mathbf{y}) = \frac{1}{2}\|\mathbf{x} - \mathbf{y}\|^2$
$\phi(\mathbf{x}) = \sum_j e^{x_j}$ $B_\phi(\mathbf{x} \,\|\, \mathbf{y}) = \sum_j \left( e^{x_j} - e^{y_j} - e^{y_j}(x_j - y_j) \right)$
Table 6: Examples of some convex functions and the corresponding Bregman divergences.
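
The definition in Equation (7) is straightforward to compute. The R sketch below implements a generic Bregman divergence and checks the two generators listed in Table 6; the function names are ours.

    # B_phi(x || y) = phi(x) - phi(y) - <grad phi(y), x - y>
    bregman <- function(x, y, phi, grad_phi) {
      phi(x) - phi(y) - sum(grad_phi(y) * (x - y))
    }

    x <- c(1.0, 2.0); y <- c(1.5, 1.0)

    # phi(x) = ||x||^2 / 2 gives half the squared Euclidean distance.
    bregman(x, y, function(v) sum(v^2) / 2, function(v) v)
    sum((x - y)^2) / 2  # same value

    # phi(x) = sum(exp(x)) gives the divergence used in Equation (11).
    bregman(x, y, function(v) sum(exp(v)), function(v) exp(v))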

For a distribution in the exponential family, the conditional probability of a value $x$ given a parameter value $\theta$ has the following form:

$$P(x \mid \theta) = P_0(x) \exp\left( x\theta - G(\theta) \right), \qquad (8)$$

where $P_0(x)$ is a function of $x$ only, $\theta$ is the natural parameter of the distribution, and $G(\theta)$ is a function of $\theta$ that ensures that $P(x \mid \theta)$ is a density function. The negative log-likelihood function of the exponential family can be expressed through a Bregman divergence. To see this, let $F$ be a function defined by

$$F(g(\theta)) = g(\theta)\,\theta - G(\theta),$$

where $g$ is the derivative of $G$. From the above definition, we can show that $F' = g^{-1}$. Hence

$$-\log P(x \mid \theta) = -\log P_0(x) - x\theta + G(\theta) = B_F\left( x \,\|\, g(\theta) \right) - F(x) - \log P_0(x),$$

which shows that the negative log-likelihood can be written as a Bregman divergence plus two terms that are constant with respect to $\theta$. As a result, minimizing the negative log-likelihood function with respect to $\theta$ is equivalent to minimizing the Bregman divergence.

The EPCA method aims to find matrices $A$ and $V$ with $\ell < d$ that minimize the following loss function:

$$L(A, V) = \sum_{i=1}^{n} B_F\left( \mathbf{x}_i \,\big\|\, g(\boldsymbol{\theta}_i) \right), \qquad (9)$$

where

$$\boldsymbol{\theta}_i = V^{T} \mathbf{a}_i,$$

or, in matrix form,

$$\Theta = A^{T} V.$$

In other words, the EPCA method tries to find a lower-dimensional subspace of the parameter space to approximate the original data. The matrix $V$ contains basis vectors of the subspace, and each $\boldsymbol{\theta}_i$ is represented as a linear combination of the basis vectors. The traditional PCA is a special case of EPCA when the normal distribution is appropriate for the data. In this case, $B_F(\mathbf{x} \,\|\, \mathbf{y}) = \frac{1}{2}\|\mathbf{x} - \mathbf{y}\|^2$.

To apply the traditional PCA to compositional data, we first need to transform them from the simplex into the real Euclidean space by using a logratio transformation (e.g., the CLR method). When the compositional data are transformed by the CLR method, the traditional PCA has the following loss function:

$$L(A, V) = \sum_{i=1}^{n} \left\| \mathbf{u}_i - V^{T}\mathbf{a}_i \right\|^2, \qquad (10)$$

where $\mathbf{u}_i = \operatorname{clr}(\mathbf{x}_i)$ is the CLR transformed data.

Avalos et al. (2018) proposed the following EPCA method for compositional data:

$$L(A, V) = \sum_{i=1}^{n} B_{\exp}\left( \mathbf{u}_i \,\big\|\, V^{T}\mathbf{a}_i \right), \qquad (11)$$

where $\mathbf{u}_i = \operatorname{clr}(\mathbf{x}_i)$ and $B_{\exp}$ is the Bregman divergence corresponding to the function $\phi(\mathbf{x}) = \sum_j e^{x_j}$ (see Table 6). Avalos et al. (2018) also proposed an optimization procedure to find $A$ and $V$ such that the loss function is minimized.
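
To illustrate what minimizing the loss in Equation (11) involves, the R sketch below fits a rank-$\ell$ reconstruction of the CLR data by numerically minimizing the exp-Bregman loss with optim() (here the score matrix A is stored as $n \times \ell$, so $\Theta = AV$). This is only a minimal illustration under our assumptions about the loss; it is not the optimization procedure of Avalos et al. (2018), whose implementation at https://github.com/sistm/CoDa-PCA is what we use in Section 5.

    # Minimal EPCA sketch: minimize sum_i B_exp(u_i || theta_i), Theta = A V.
    epca_sketch <- function(U, l, iters = 500) {
      n <- nrow(U); d <- ncol(U)
      unpack <- function(p) list(A = matrix(p[1:(n * l)], n, l),
                                 V = matrix(p[-(1:(n * l))], l, d))
      loss <- function(p) {
        m <- unpack(p)
        Theta <- m$A %*% m$V
        sum(exp(U) - exp(Theta) - exp(Theta) * (U - Theta))  # B_exp(U || Theta)
      }
      p0 <- rnorm(n * l + l * d, sd = 0.1)
      fit <- optim(p0, loss, method = "BFGS", control = list(maxit = iters))
      unpack(fit$par)
    }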

4.3 Regression with compositional covariates

Compositional data in regression models refer to a set of predictor variables that describe parts of some whole and are commonly presented as vectors of proportions, percentages, concentrations, or frequencies.

Pawlowsky-Glahn et al. (2015) presented a linear regression model with compositional covariates. The linear regression model is formulated as follows:

$$y = \beta_0 + \langle \boldsymbol{\beta}, \mathbf{x} \rangle_A + \epsilon, \qquad (12)$$

where $\beta_0$ is a real intercept and $\langle \cdot, \cdot \rangle_A$ is the Aitchison inner product, defined by

$$\langle \boldsymbol{\beta}, \mathbf{x} \rangle_A = \sum_{j=1}^{D} \ln\frac{\beta_j}{g(\boldsymbol{\beta})} \ln\frac{x_j}{g(\mathbf{x})}.$$

Here $g(\cdot)$ denotes the geometric mean. The sum of squared errors is given by:

$$\operatorname{SSE} = \sum_{i=1}^{n} \left( y_i - \beta_0 - \langle \boldsymbol{\beta}, \mathbf{x}_i \rangle_A \right)^2 = \sum_{i=1}^{n} \left( y_i - \beta_0 - \operatorname{ilr}(\boldsymbol{\beta})^{T} \operatorname{ilr}(\mathbf{x}_i) \right)^2, \qquad (13)$$

which suggests that the actual fitting can be done using the ILR transformed coordinates; that is, a linear regression can simply be fitted to the response as a linear function of $\operatorname{ilr}(\mathbf{x})$. The CLR transformation should not be used for this purpose because it leads to singular matrices that would require generalized inversion.
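
Equation (13) relies on the fact that the Aitchison inner product of two compositions equals the ordinary inner product of their ILR coordinates. A quick numerical check in R, again with the compositions package:

    library(compositions)

    x <- acomp(c(0.2, 0.3, 0.5))
    z <- acomp(c(0.1, 0.6, 0.3))

    # Directly from the definition: sum_j log(x_j/g(x)) * log(z_j/g(z))
    sum(as.numeric(clr(x)) * as.numeric(clr(z)))

    # Via ILR coordinates, which is how the regression is actually fitted.
    sum(as.numeric(ilr(x)) * as.numeric(ilr(z)))  # same value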

Since the response variable of the mine dataset is a count variable, linear regression models are not suitable. Instead, we use generalized linear models to model the count variable. Common generalized linear models for count data include Poisson regression models and negative binomial models (Frees, 2009). Poisson models are simple but restrictive, as they assume that the mean and the variance of the response are equal. For the mine dataset, the mean and the variance of the response variable NUM_INJURIES are 0.4705 and 5.4636, respectively. In this study, we therefore use negative binomial models, as they provide more flexibility than Poisson models.

Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ denote the predictors from $n$ observations. The predictors can be selected in different ways. For example, the predictors can be the ILR transformed data or the principal components produced by a dimension reduction method such as the traditional PCA and EPCA methods. For $i = 1, 2, \ldots, n$, let $y_i$ be the response corresponding to $\mathbf{x}_i$. In a negative binomial regression model, we assume that the response variable follows a negative binomial distribution:

$$P(Y_i = y \mid r, \beta_i) = \frac{\Gamma(y + r)}{\Gamma(r)\, y!} \left( \frac{1}{1 + \beta_i} \right)^{r} \left( \frac{\beta_i}{1 + \beta_i} \right)^{y}, \quad y = 0, 1, 2, \ldots, \qquad (14)$$

where $r$ and $\beta_i$ are parameters. The mean and the variance of a negative binomial variable $Y$ with parameters $r$ and $\beta$ are given by

$$E[Y] = r\beta, \qquad \operatorname{Var}(Y) = r\beta(1 + \beta).$$

To incorporate covariates in a negative binomial regression model, we let the parameter $\beta$ vary by observation. In particular, we incorporate the covariates as follows:

$$\beta_i = \frac{w_i}{r} \exp\left( \mathbf{x}_i^{T} \boldsymbol{\gamma} \right), \qquad (15)$$

where $r$ and $\boldsymbol{\gamma}$ are parameters to be estimated and $w_i$ is the exposure, so that $E[Y_i] = r\beta_i = w_i \exp(\mathbf{x}_i^{T}\boldsymbol{\gamma})$. For the mine dataset, we use the variable AVG_EMP_TOTAL as the exposure. This is similar to modeling the number of claims in a property and casualty dataset, where the time to maturity of an auto insurance policy is used as the exposure (Frees, 2009; Gan and Valdez, 2018). The method of maximum likelihood can be used to estimate the parameters.
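
In R, a model of this form can be fitted with glm.nb from the MASS package, with the exposure entering through an offset. The sketch below assumes a training data frame train containing TYPE_OF_MINE, the ILR coordinates V1 to V9, AVG_EMP_TOTAL, and the response NUM_INJURIES; the variable names other than those in Table 2 are ours.

    library(MASS)

    nb_fit <- glm.nb(
      NUM_INJURIES ~ TYPE_OF_MINE + V1 + V2 + V3 + V4 + V5 +
        V6 + V7 + V8 + V9 + offset(log(AVG_EMP_TOTAL)),
      data = train
    )
    summary(nb_fit)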

Model Covariates
NB TYPE_OF_MINE and the ILR transformed compositional components
NBPCA TYPE_OF_MINE and principal components from the traditional PCA method
NBEPCA TYPE_OF_MINE and principal components from the EPCA method
Table 7: Description of the negative binomial regression models for the mine dataset.

In this study, we compare three negative binomial models. Table 7 describes the covariates of the three models. In the first model, we use the compositional components transformed by the ILR method defined in Equation (3). In the second model, we use the first few principal components obtained from the traditional PCA method defined in Equation (10). In the third model, we use the first few principal components obtained from the EPCA method defined in Equation (11).

5 Results

In this section, we present the results of fitting the three negative binomial regression models (see Table 7) to the mine dataset. We use the following procedure to fit and test the models:

  1. Transform the compositional variables for the whole mine dataset. For the model NB, we transform the compositional variables with the ILR method. For the model NBPCA, we first transform the compositional variables with the CLR method and then apply the standard PCA to the transformed data. Both the ILR method and the CLR method are available in the R package compositions. For the model NBEPCA, we transform the compositional variables with the EPCA method implemented by Avalos et al. (2018); the R code is available at https://github.com/sistm/CoDa-PCA.

  2. Split the transformed data into a training set and a test set. In particular, we use data from years 2013 to 2015 as the training data and use data from 2016 as the test data.

  3. Estimate parameters based on the training set.

  4. Make predictions for the test set and calculate the out-of-sample validation measures.

5.1 Validation measure

To measure the accuracy of regression models for count variables, it is common to use the Pearson chi-square statistic, which is defined as:

$$\chi^2 = \sum_{k=0}^{m} \frac{(O_k - E_k)^2}{E_k}, \qquad (16)$$

where $O_k$ is the observed number of mines that have $k$ injuries, $E_k$ is the predicted number of mines that have $k$ injuries, and $m$ is the maximum number of injuries considered over all mines. Between two models, the model with the lower chi-square statistic is better.

The observed number of mines that have $k$ injuries is obtained by

$$O_k = \sum_{i=1}^{n} I\left( y_i = k \right),$$

where $I(\cdot)$ is an indicator function. The predicted number of mines that have $k$ injuries is calculated as follows:

$$E_k = \sum_{i=1}^{n} P\left( Y_i = k \mid \hat{r}, \hat{\beta}_i \right),$$

where $\hat{r}$ and $\hat{\beta}_i$ are the estimated values of the parameters in Equations (14) and (15).

From Figure 2, we see that the maximum observed number of injuries is 86. To calculate the validation measure, we use $m = 99$; that is, we use the first 100 terms of the $O_k$s and $E_k$s. The remaining terms are too close to zero and are ignored.
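
The statistic is easy to compute from a fitted model. The R sketch below assumes a MASS::glm.nb fit nb_fit (as in Section 4.3) and a data frame test with the response NUM_INJURIES; dnbinom evaluates the negative binomial probabilities with the estimated dispersion parameter.

    chi_square_stat <- function(fit, data, m = 99) {
      y  <- data$NUM_INJURIES
      mu <- predict(fit, newdata = data, type = "response")  # includes the offset
      ks <- 0:m
      O  <- sapply(ks, function(k) sum(y == k))                              # O_k
      E  <- sapply(ks, function(k) sum(dnbinom(k, size = fit$theta, mu = mu)))  # E_k
      sum((O - E)^2 / E)
    }

    chi_square_stat(nb_fit, test)  # out-of-sample measure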

5.2 Results

We fitted the three models described in Table 7 to the mine dataset according to the aforementioned procedure. Table 8 shows the in-sample and out-of-sample chi-square statistics produced by the three models. From the table, we see that the negative binomial model with EPCA transformed data performs the best among the three models. The model NBEPCA produced the lowest chi-square statistics for both the training data and the test data.

Model In-sample Out-of-sample
NB 334.9093 113.1498
NBPCA 338.4336 114.2693
NBEPCA 303.9178 111.8998
Table 8: In-sample and out-of-sample validation measures produced by the three models.
Figure 4: Scatter plots of the observed number of mines with different numbers of injuries against the numbers predicted by the models, on log scales. Panels: (a) NB in-sample, (b) NB out-of-sample, (c) NBPCA in-sample, (d) NBPCA out-of-sample, (e) NBEPCA in-sample, (f) NBEPCA out-of-sample.

Figure 4 shows the scatter plots of the observed number of mines with different numbers of injuries against the numbers predicted by the models, on log scales. That is, the scatter plots show the relationship between $O_k$ and $E_k$ for $k = 0, 1, \ldots, 99$. We use log scales here because the $O_k$s and $E_k$s differ by orders of magnitude across $k$. From the scatter plots, we see that all the models predict the number of mines with different numbers of injuries quite well; the scatter plots alone do not distinguish the three models. The chi-square statistics shown in Table 8, however, indicate that the NBEPCA model is the best.

Figure 5: Proportion of variation explained by different numbers of principal components.

For the NBPCA model, we selected the first five principal components produced by applying the standard PCA to the CLR transformed data. Figure 5 shows the proportion of variation explained by different numbers of principal components; the first five principal components explain more than 95% of the variation in the original data. For the NBEPCA model, we also selected the first five principal components. The EPCA method used to transform the compositional variables is slow: it took an ordinary desktop about 20 minutes to finish the computation. The reason is that the EPCA implementation we used is written in R, a scripting language that is not efficient for iterative optimization.
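
For reference, the proportion of variation in Figure 5 can be computed directly from the standard PCA of the CLR data; a minimal sketch, assuming U holds the CLR-transformed compositional matrix:

    pca <- prcomp(U, center = TRUE)
    prop_var <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
    round(prop_var, 4)  # the first five components exceed 0.95 (Figure 5)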

Predictor Coefficient Std. Error z value P value
(Intercept) -3.7525 0.0505 -74.3196 0
Sand & gravel -0.2356 0.0795 -2.9651 0.003
Surface 0.0444 0.0689 0.6446 0.5192
Underground -0.7147 0.1614 -4.4288 0
V1 -0.0536 0.0111 -4.8275 0
V2 -0.053 0.0061 -8.6443 0
V3 -0.0248 0.0127 -1.9471 0.0515
V4 -0.0674 0.0168 -4.0039 0.0001
V5 0.0003 0.007 0.0437 0.9652
V6 -0.044 0.0224 -1.9631 0.0496
V7 -0.018 0.013 -1.3886 0.1649
V8 0.0143 0.0025 5.6681 0
V9 -0.0057 0.0028 -2.0133 0.0441
(a)
Predictor Coefficient
PCT_HRS_UNDERGROUND 0.1093
PCT_HRS_SURFACE 0.1013
PCT_HRS_STRIP 0.0986
PCT_HRS_AUGER 0.1001
PCT_HRS_CULM_BANK 0.0949
PCT_HRS_DREDGE 0.1008
PCT_HRS_OTHER_SURFACE 0.0961
PCT_HRS_SHOP_YARD 0.0982
PCT_HRS_MILL_PREP 0.1013
PCT_HRS_OFFICE 0.0994
(b)
Table 9: Estimated regression coefficients of the model NB. Table (a) shows the coefficients of ILR transformed data. Table (b) shows the coefficients transformed back to the CLR proportions.
Predictor Coefficient Std. Error z value P value
(Intercept) -3.8635 0.0655 -58.9829 0
Sand & gravel -0.2261 0.0789 -2.8656 0.0042
Surface 0.0588 0.0681 0.8631 0.3881
Underground -0.3017 0.1319 -2.2876 0.0222
PC1 0.0274 0.0033 8.3922 0
PC2 0.0245 0.0035 7.0887 0
PC3 -0.0007 0.0031 -0.2257 0.8214
PC4 -0.0356 0.0071 -4.9989 0
PC5 -0.0514 0.0096 -5.3658 0
(a) NBPCA
Predictor Coefficient Std. Error z value P value
(Intercept) -3.2533 0.1209 -26.9052 0
Sand & gravel -0.348 0.0719 -4.8394 0
Surface -0.1203 0.0618 -1.9444 0.0518
Underground -0.1894 0.1153 -1.6428 0.1004
PC1 0.156 0.0184 8.479 0
PC2 0.0943 0.0073 12.8948 0
PC3 -0.0414 0.0124 -3.329 0.0009
PC4 0.0314 0.0152 2.0676 0.0387
PC5 -0.2678 0.027 -9.9187 0
(b) NBEPCA
Table 10: Estimated regression coefficients of the models NBPCA and NBEPCA.

Table 9 shows the estimated regression coefficients of the model NB. As shown in Table 9(a), the ten compositional variables are transformed to nine variables by the ILR method. Table 9(b) shows the regression coefficients transformed back to the CLR proportions by the inverse operation of the ILR method. From Table 9(b), we see that the coefficients of the ten compositional variables are similar to each other and quite different from the coefficients of the ILR variables V1 to V9, which vary considerably and are challenging to interpret. From the $p$ values shown in Table 9(a), we see that some of the ILR transformed variables are not significant. For example, the variable V5 has a $p$ value of 0.9652 and the variable V7 has a $p$ value of 0.1649; both are greater than 0.1. This suggests that it is appropriate to reduce the dimensionality of the compositional variables.

Table 10 shows the estimated regression coefficients of the models NBPCA and NBEPCA. We used five principal components in both models. From Table 10(b), we see that all principal components produced by the EPCA method are significant. However, Table 10(a) shows that the third principal component PC3 produced by the traditional PCA method is not significant, as it has a $p$ value of 0.8214. This again suggests that the EPCA method is better than the traditional PCA method for the mine dataset.

In summary, our numerical results show that the EPCA method is able to produce better principal components than the traditional PCA method and that using the principal components produced by the EPCA method can improve the prediction accuracy of the negative binomial models with compositional covariates.

6 Conclusions

Compositional data are commonly presented as vectors of proportions, percentages, concentrations, or frequencies. A peculiarity of these vectors is that their sum is constrained to be some fixed constant, e.g. 100%. Due to such constraints, compositional data have the following properties: they carry only relative information (scale invariance); equivalent results should be yielded when the ordering of the parts in the composition is changed (permutation invariance); and results should not change if a noninformative part is removed (subcomposition coherence).

In this paper, we investigated regression models with compositional covariates by using a mine dataset. We built negative binomial regression models to predict the number of injuries from different mines. In particular, we built three negative binomial regression models: a model with ILR transformed compositional variables, a model with principal components produced by the traditional PCA, and a model with principal components produced by the exponential family PCA. Our numerical results show that the exponential family PCA is able to produce principal components that are significant predictors and improve the prediction accuracy of the regression model.

Acknowledgments

Guojun Gan and Emiliano A. Valdez would like to acknowledge the financial support provided by the Committee on Knowledge Extension Research of the Casualty Actuarial Society (CAS).

References

  • J. Aitchison and J. J. Egozcue (2005) Compositional data analysis: where are we and where should we be heading? Mathematical Geology 37 (7), pp. 829–850.
  • J. Aitchison (1981) A new approach to null correlations of proportions. Journal of the International Association for Mathematical Geology 13 (2), pp. 175–189.
  • J. Aitchison (1982) The statistical analysis of compositional data. Journal of the Royal Statistical Society: Series B (Methodological) 44 (2), pp. 139–160.
  • J. Aitchison (1983) Principal component analysis of compositional data. Biometrika 70 (1), pp. 57–65.
  • J. Aitchison (1984) The statistical analysis of geochemical compositions. Journal of the International Association for Mathematical Geology 16 (6), pp. 531–564.
  • J. Aitchison (1994) Principles of compositional data analysis. Multivariate Analysis and Its Applications 24, pp. 73–81.
  • J. Aitchison (2003) The statistical analysis of compositional data. Blackburn Press, Caldwell, NJ.
  • M. Avalos, R. Nock, C. S. Ong, J. Rouar, and K. Sun (2018) Representation learning of compositional data. In Advances in Neural Information Processing Systems, Vol. 31, pp. 6679–6689.
  • M. Bergeron-Boucher, V. Canudas-Romo, J. Oeppen, and J. W. Vaupel (2017) Coherent forecasts of mortality with compositional data analysis. Demographic Research 37, pp. 527–566.
  • F. Chayes (1960) On correlation between variables of constant sum. Journal of Geophysical Research 65 (12), pp. 4185–4193.
  • M. Collins, S. Dasgupta, and R. E. Schapire (2002) A generalization of principal components analysis to the exponential family. In Advances in Neural Information Processing Systems, Vol. 14.
  • M. Denuit, M. Guillen, and J. Trufin (2019) Multivariate credibility modelling for usage-based motor insurance pricing with behavioural data. Annals of Actuarial Science 13 (2), pp. 378–399.
  • E. W. Frees (2009) Regression modeling with actuarial and financial applications. Cambridge University Press, Cambridge, UK.
  • G. Gan and E. Valdez (2018) Actuarial statistics with R: theory and case studies. ACTEX Learning.
  • M. Guillen, J. P. Nielsen, M. Ayuso, and A. M. Pérez-Marín (2019) The use of telematics devices to improve automobile insurance rates. Risk Analysis 39 (3), pp. 662–672.
  • M. Guillen, J. P. Nielsen, A. M. Pérez-Marín, and V. Elpidorou (2020) Can automobile insurance telematics predict the risk of near-miss events? North American Actuarial Journal 24 (1), pp. 141–152.
  • V. Pawlowsky-Glahn, J. J. Egozcue, and R. Tolosana-Delgado (2015) Modelling and analysis of compositional data. John Wiley & Sons, Hoboken, NJ.
  • K. Pearson (1897) Mathematical contributions to the theory of evolution. On a form of spurious correlation which may arise when indices are used in the measurement of organs. Proceedings of the Royal Society of London 60 (359-367), pp. 489–498.
  • J. Pesantez-Narvaez, M. Guillen, and M. Alcañiz (2019) Predicting motor insurance claims using telematics data: XGBoost versus logistic regression. Risks 7 (2), pp. 70.
  • B. So, J. Boucher, and E. A. Valdez (2021) Cost-sensitive multi-class AdaBoost for understanding driving behavior based on telematics. ASTIN Bulletin: The Journal of the International Actuarial Association 51 (3), pp. 719–751.
  • M. E. Tipping and C. M. Bishop (1999) Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61 (3), pp. 611–622.
  • R. Verbelen, K. Antonio, and G. Claeskens (2018) Unravelling the predictive power of telematics data in car insurance pricing. Journal of the Royal Statistical Society: Series C (Applied Statistics) 67 (5), pp. 1275–1304.