1 Introduction
Compositional data describe parts of some whole and are commonly presented as vectors of proportions, percentages, concentrations, or frequencies. The components of each vector are constrained to sum to a fixed constant, e.g., 100%. Due to this constraint, compositional data possess the following properties (Pawlowsky-Glahn et al., 2015):
Scale invariance. They carry relative rather than absolute information. In other words, the information does not depend on the particular units used to express the compositional data.

Permutation invariance. Permutation of components of a compositional vector does not change the information contained in the compositional vector.

Subcomposition coherence. Information conveyed by a compositional vector should not contradict that coming from a subcomposition containing some of the original components.
These properties help explain why the direct application of conventional statistical methods can be restrictive, or even invalid, for compositional data.
Consider the case where such data are multivariate observations that carry relative information between components. Applying standard statistical methodology designed for unconstrained data directly to such compositional features can distort inference, leading to paradoxes and misinterpretations. Table 1 shows an example of Simpson's paradox (Pawlowsky-Glahn et al., 2015). Table 1(a) shows the number of students in two classrooms who arrive on time and who are late, classified by gender. Table 1(b) shows the corresponding proportions of students arriving on time or being late. Table 1(c) shows the aggregated number of students from the two classrooms who arrive on time and who are late, with the corresponding proportions shown in Table 1(d). From Table 1(b), we see that the proportion of female students arriving on time is greater than that of male students in both classrooms. However, Table 1(d) shows the opposite result: the proportion of male students arriving on time is greater than that of female students. Simpson's paradox tells us that we need to be careful when drawing inferences from compositional data: conclusions drawn from a population may not hold for its subpopulations, and vice versa.
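The aggregation reversal described above is easy to reproduce numerically. The following sketch uses hypothetical counts (not the paper's Table 1, which is not reproduced here), chosen so that the within-classroom comparison flips after pooling:

```python
# Hypothetical counts: (on_time, total) by gender in two classrooms,
# chosen so the within-classroom comparison reverses after aggregation
# (Simpson's paradox).
counts = {
    "room1": {"female": (9, 10), "male": (80, 100)},
    "room2": {"female": (25, 50), "male": (2, 5)},
}

def on_time_rate(on_time, total):
    return on_time / total

# Within each classroom, female students have the higher on-time proportion.
for room, g in counts.items():
    assert on_time_rate(*g["female"]) > on_time_rate(*g["male"]), room

# Aggregating the two classrooms reverses the comparison.
f_on = sum(g["female"][0] for g in counts.values())
f_tot = sum(g["female"][1] for g in counts.values())
m_on = sum(g["male"][0] for g in counts.values())
m_tot = sum(g["male"][1] for g in counts.values())
print(on_time_rate(f_on, f_tot) < on_time_rate(m_on, m_tot))  # True
```

Here 34/60 ≈ 0.57 for the pooled female group falls below 82/105 ≈ 0.78 for the pooled male group, even though the female proportion is higher in each classroom separately.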




Our motivation for studying compositional data regression in insurance comes from the recent explosion of telematics data analysis in insurance (Verbelen et al., 2018; Guillen et al., 2019; Pesantez-Narvaez et al., 2019; Denuit et al., 2019; Guillen et al., 2020; So et al., 2021) and the fact that compositional data analysis has received little attention in the actuarial literature. In telematics data, we usually see the total mileage traveled by an insured during the year as well as the mileage traveled in different conditions, such as at night, over the speed limit, or in urban areas. The distances traveled in different conditions are an example of compositional data. Compositional data analysis has also been used to address the "coherence problem for subpopulations" in forecasting mortality (Bergeron-Boucher et al., 2017).
The classical method for dealing with compositional data is the log-ratio transformation method, which preserves the aforementioned properties. As pointed out by Aitchison (2003), log-ratio transformations cannot be applied to compositional data with zeros. In insurance data (e.g., telematics data), zero components in compositional vectors are quite common. This sparsity poses additional challenges for analyzing compositional data in insurance.
In this paper, we investigate regression modeling of compositional variables (i.e., covariates) by using exponential family principal component analysis (EPCA) to learn low-dimensional representations of compositional data. Such representations of the compositional predictors extend the classical PCA as well as its probabilistic extension via probabilistic principal component analysis (PPCA), both of which are special cases of EPCA; see Tipping and Bishop (1999).
The remainder of the paper is organized as follows. In Section 2, we review some literature related to compositional data analysis. In Section 3, we describe the mine dataset used in our study. In Section 4, we present three negative binomial regression models for fitting the data. In Section 5, we provide numerical results of the proposed regression models. In Section 6, we conclude the paper with some remarks.
2 Literature review
Compositional data analysis dates back to the end of the 19th century when Karl Pearson published a paper about spurious correlations (Pearson, 1897). Since then, the development of compositional data analysis has gone through four phases (Aitchison and Egozcue, 2005; Pawlowsky-Glahn et al., 2015).
The first phase started in 1897 when Karl Pearson introduced the concept of spurious correlation in his paper (Pearson, 1897) and ended around 1960. During this phase, standard multivariate statistical analysis was developed to analyze compositional data despite the fact that a compositional vector is subject to a constant-sum constraint. In particular, correlation analysis was used to analyze compositional vectors even though Pearson (1897) pointed out the pitfalls of interpreting correlations on proportions.
The second phase started around 1960 when the geologist Felix Chayes criticized the application of standard multivariate statistical analysis to compositional data (Chayes, 1960) and ended around 1980. Chayes (1960) criticized the interpretation of correlation between components of a geochemical composition. During this phase, the main goal of compositional data analysis was to distort standard multivariate statistical techniques to analyze compositional data (Aitchison and Egozcue, 2005).
The third phase started around 1980 and ended around 2000. In the early 1980s, the statistician John Aitchison realized that compositions provide only relative, not absolute, information about the values of components and that every statement about a composition can be stated in terms of ratios of components (Aitchison, 1981, 1982, 1983, 1984). During this phase, transformation techniques were developed to transform compositional data so that standard multivariate statistical analysis can be applied to the transformed data. In particular, a variety of log-ratio transformations were developed for compositional data analysis. The log-ratio transformation has the advantage that it provides a one-to-one mapping between compositional vectors, which stay in a constrained simplex space, and the associated log-ratio vectors, which stay in an unconstrained real space. In addition, any statement about compositions can be reformulated in terms of log-ratios, and vice versa.
The fourth phase started around 2000 when researchers realized that the internal simplicial operation of perturbation, the external operation of powering, and the simplicial metric define a metric vector space and that compositional data can be analyzed within this space with its specific algebraic-geometric structure. Research during this phase is characterized by developing staying-in-the-simplex approaches to solve compositional problems. In a staying-in-the-simplex approach, compositions are represented by orthonormal coordinates that live in a real Euclidean space. The book by Pawlowsky-Glahn et al. (2015) summarizes such approaches for compositional data analysis.
Researchers in the actuarial community have used compositional data in their studies, especially those related to telematics. In most existing actuarial literature, however, compositional data do not receive special treatment. For example, Guillen et al. (2019) and Denuit et al. (2019) selected some percentages of distances traveled in different conditions and treated them as normal explanatory variables. Pesantez-Narvaez et al. (2019)
treated compositional predictors as normal explanatory variables in their study of using XGBoost and logistic regression to predict motor insurance claims with telematics data.
Guillen et al. (2020) also treated compositional predictors as normal explanatory variables in their negative binomial regression models. Verbelen et al. (2018) used compositional data in generalized additive models and proposed a conditioning approach and a projection approach to handle structural zeros in components of compositional predictors. This is one of the few studies in insurance that explicitly handle compositional data. However, they did not consider dimension reduction techniques for compositional data in their regression models.
3 Description of the data
We use a dataset from the U.S. Mine Safety and Health Administration (MSHA) covering the years 2013 to 2016. The dataset was used in the Predictive Analytics exam administered by the Society of Actuaries in December 2018; it is available at https://www.soa.org/globalassets/assets/files/edu/2018/201812exampadatafile.zip. This dataset contains 53,746 observations described by 20 variables, including compositional variables. Table 2 shows the 20 variables and their descriptions. Among the 20 variables, 10 (the proportions of employee hours in different categories) are compositional components.
Variable  Description 

YEAR  Calendar year of experience 
US_STATE  US state where mine is located 
COMMODITY  Class of commodity mined 
PRIMARY  Primary commodity mined 
SEAM_HEIGHT  Coal seam height in inches (coal mines only) 
TYPE_OF_MINE  Type of mine 
MINE_STATUS  Status of operation of mine 
AVG_EMP_TOTAL  Average number of employees 
EMP_HRS_TOTAL  Total number of employee hours 
PCT_HRS_UNDERGROUND  Proportion of employee hours in underground operations 
PCT_HRS_SURFACE  Proportion of employee hours at surface operations of underground mine 
PCT_HRS_STRIP  Proportion of employee hours at strip mine 
PCT_HRS_AUGER  Proportion of employee hours in auger mining 
PCT_HRS_CULM_BANK  Proportion of employee hours in culm bank operations 
PCT_HRS_DREDGE  Proportion of employee hours in dredge operations 
PCT_HRS_OTHER_SURFACE  Proportion of employee hours in other surface mining operations 
PCT_HRS_SHOP_YARD  Proportion of employee hours in independent shops and yards 
PCT_HRS_MILL_PREP  Proportion of employee hours in mills or prep plants 
PCT_HRS_OFFICE  Proportion of employee hours in offices 
NUM_INJURIES  Total number of accidents reported 
In our study, we ignore the following variables: YEAR, US_STATE, COMMODITY, PRIMARY, SEAM_HEIGHT, MINE_STATUS, and EMP_HRS_TOTAL. We do not use YEAR in our regression models because we do not study the temporal effect of the data; however, we use YEAR to split the data into a training set and a test set. The variables COMMODITY, PRIMARY, and SEAM_HEIGHT are related to the variable TYPE_OF_MINE, which we will use in our models. The variable EMP_HRS_TOTAL is highly correlated with the variable AVG_EMP_TOTAL, with a correlation coefficient of 0.9942, so we use only one of them in our models.
YEAR  Number of observations  TYPE_OF_MINE  Number of observations 

2013  13759  Mill  2578 
2014  13604  Sand & gravel  25414 
2015  13294  Surface  23091 
2016  13089  Underground  2663 
Summary of categorical variables.
Variable  Min  1st Q  Median  Mean  3rd Q  Max 

AVG_EMP_TOTAL  1  3  5  17.8419  12  3115 
PCT_HRS_UNDERGROUND  0  0  0  0.0345  0  1 
PCT_HRS_SURFACE  0  0  0  0.0087  0  1 
PCT_HRS_STRIP  0  0.3299  0.8887  0.6801  1  1 
PCT_HRS_AUGER  0  0  0  0.0046  0  1 
PCT_HRS_CULM_BANK  0  0  0  0.0046  0  1 
PCT_HRS_DREDGE  0  0  0  0.045  0  1 
PCT_HRS_OTHER_SURFACE  0  0  0  0.0007  0  1 
PCT_HRS_SHOP_YARD  0  0  0  0.0036  0  1 
PCT_HRS_MILL_PREP  0  0  0  0.106  0  1 
PCT_HRS_OFFICE  0  0  0.0312  0.1122  0.1685  1 
NUM_INJURIES  0  0  0  0.4705  0  86 
Table 3 shows summary statistics of the categorical variable TYPE_OF_MINE, along with the number of observations in each year. From the table, we see that there are approximately the same number of observations in each year and that most mines are sand & gravel and surface mines.
Variable  Percentage of zeros 

PCT_HRS_UNDERGROUND  95.34% 
PCT_HRS_SURFACE  95.99% 
PCT_HRS_STRIP  18.42% 
PCT_HRS_AUGER  99.49% 
PCT_HRS_CULM_BANK  99.46% 
PCT_HRS_DREDGE  94.61% 
PCT_HRS_OTHER_SURFACE  99.91% 
PCT_HRS_SHOP_YARD  99.59% 
PCT_HRS_MILL_PREP  81.24% 
PCT_HRS_OFFICE  36.93% 
Table 4 shows summary statistics of the numerical variables. From the table, we see that the average number of employees ranges from 1 to 3115. All compositional components contain zeros. In addition, most compositional components are heavily skewed, as their third quartiles are zero. Table 5 shows the percentages of zeros of the compositional variables. From the table, we see that for many compositional variables, most values are zero. Figure 1(a) shows the proportions of employee hours in different categories for the first ten observations. Figure 1(b) shows boxplots of the compositional variables. From the figures, we again see that the compositional variables have many zero values. Zeros in compositional data are irregular values. Pawlowsky-Glahn et al. (2015) discussed several strategies to handle such irregular values. One strategy is to replace zeros by small positive values. In the mine dataset, we can treat zeros as rounded zeros below a detection limit. In this paper, we replace zero components with $10^{-6}$, which corresponds to a detection limit of one hour among one million employee hours.
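The rounded-zero replacement described above can be sketched as follows. The paper only states the replacement value; the proportional rescaling of the nonzero parts (simple multiplicative replacement) is an assumption made here so that the adjusted vector still sums to one:

```python
import numpy as np

def replace_rounded_zeros(x, delta=1e-6):
    """Replace zero parts of a composition with a small detection limit
    delta and shrink the nonzero parts proportionally so the vector
    still sums to 1 (simple multiplicative replacement; the rescaling
    step is an assumption, not stated in the paper)."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    out = x.copy()
    out[zeros] = delta
    out[~zeros] *= 1.0 - delta * zeros.sum()
    return out

comp = np.array([0.7, 0.3, 0.0, 0.0])
adj = replace_rounded_zeros(comp)
print(np.isclose(adj.sum(), 1.0), bool((adj > 0).all()))  # True True
```

After the replacement, all parts are strictly positive, so the log-ratio transformations of Section 4.1 become applicable.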
The response variable is the total number of injuries reported. Figure 2 shows a frequency distribution of the response variable. The summary statistics given in Table 4 and Figure 2 show that the response variable is also highly skewed with many zeros. Figure 3 shows a scatter plot between the variable AVG_EMP_TOTAL and the response variable NUM_INJURIES. From the figure, we see that the two variables have a positive relationship. In our models, we will use the variable AVG_EMP_TOTAL as the exposure.
4 Models
In this section, we present the models for the compositional data described in the previous section. In particular, we first present the log-ratio transformations in detail. Then we present a dimension reduction technique for compositional data. Finally, we present regression models with compositional covariates.
4.1 Transformation
The classical method for dealing with compositional data is the log-ratio transformation method, which preserves the aforementioned properties. Let $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ be a compositional dataset with $n$ compositional vectors. These vectors are assumed to be contained in the simplex:
$$S^D = \left\{ (x_1, x_2, \ldots, x_D)' : x_j > 0,\ j = 1, 2, \ldots, D;\ \sum_{j=1}^{D} x_j = \kappa \right\},$$
where $\kappa$ is a constant, usually 1. The superscript $D$ in $S^D$ does not denote the dimensionality of the simplex, as the dimensionality of the simplex is $D - 1$.
The log-ratio transformation method transforms the data from the simplex to the real Euclidean space (Aitchison, 1994). There are three types of log-ratio transformations: the centered log-ratio transformation (CLR), the additive log-ratio transformation (ALR), and the isometric log-ratio transformation (ILR). The CLR transformation scales each compositional vector by its geometric mean; the ALR transformation applies log-ratios between components and a reference component; and the ILR transformation uses an orthonormal coordinate system in the simplex.
Let $\mathbf{x} = (x_1, x_2, \ldots, x_D)'$ be a column vector (i.e., a $D \times 1$ vector). The CLR transformation is defined as:
$$\operatorname{clr}(\mathbf{x}) = \left( \ln\frac{x_1}{g(\mathbf{x})}, \ldots, \ln\frac{x_D}{g(\mathbf{x})} \right)' = \mathbf{F} \ln \mathbf{x},  (1)$$
where $g(\mathbf{x}) = (x_1 x_2 \cdots x_D)^{1/D}$ is the geometric mean of $\mathbf{x}$ and $\mathbf{F}$ is a $D \times D$ matrix. Here $\mathbf{I}_D$ is a $D \times D$ identity matrix and $\mathbf{1}_D$ is a $D$-vector of ones, i.e.,
$$\mathbf{F} = \mathbf{I}_D - \frac{1}{D} \mathbf{1}_D \mathbf{1}_D'.$$
The ALR transformation is defined similarly. Suppose that the $D$th component of the compositional vectors is selected as the reference component. Then the ALR transformation is defined as:
$$\operatorname{alr}(\mathbf{x}) = \left( \ln\frac{x_1}{x_D}, \ldots, \ln\frac{x_{D-1}}{x_D} \right)' = \mathbf{F}_A \ln \mathbf{x},  (2)$$
where $x_D$ is the $D$th component of $\mathbf{x}$ and $\mathbf{F}_A$ is a $(D-1) \times D$ matrix given by
$$\mathbf{F}_A = \left[\, \mathbf{I}_{D-1}, \; -\mathbf{1}_{D-1} \,\right].$$
The ILR transformation involves an orthonormal coordinate system in the simplex and is defined as:
$$\operatorname{ilr}(\mathbf{x}) = \mathbf{V}' \ln \mathbf{x},  (3)$$
where $\mathbf{V}$ is a $D \times (D-1)$ matrix that satisfies the following conditions:
$$\mathbf{V}' \mathbf{V} = \mathbf{I}_{D-1}, \qquad \mathbf{V} \mathbf{V}' = \mathbf{F}.$$
The matrix $\mathbf{V}$ is called the contrast matrix (Pawlowsky-Glahn et al., 2015). The CLR and ILR transformations have the following relationship:
$$\operatorname{ilr}(\mathbf{x}) = \mathbf{V}' \operatorname{clr}(\mathbf{x}).$$
Further, we have
$$\operatorname{clr}(\mathbf{x}) = \mathbf{V} \operatorname{ilr}(\mathbf{x}).$$
All three log-ratio transformation methods are equivalent up to linear transformations. The ALR transformation does not preserve distances. The CLR transformation preserves distances but can lead to singular covariance matrices. The ILR transformation avoids these drawbacks; however, the resulting coordinates can be challenging to interpret.
These log-ratio transformation methods provide one-to-one mappings between the real Euclidean space and the simplex and allow transforming back to the simplex from the Euclidean space without loss of information. The inverse of the CLR transformation, for example, is given by:
$$\mathbf{x} = \mathcal{C}\left(\exp(\boldsymbol{\xi})\right) = \frac{\kappa \exp(\boldsymbol{\xi})}{\sum_{j=1}^{D} \exp(\xi_j)},  (4)$$
where $\boldsymbol{\xi} = \operatorname{clr}(\mathbf{x})$ is the transformation of $\mathbf{x}$ under the CLR method and $\mathcal{C}(\cdot)$ denotes the closure operation. The inverse of the ILR transformation is given by:
$$\mathbf{x} = \mathcal{C}\left(\exp(\mathbf{V} \boldsymbol{\xi}^{*})\right),  (5)$$
where $\boldsymbol{\xi}^{*} = \operatorname{ilr}(\mathbf{x})$ and $\mathcal{C}$ is defined in Equation (4).
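Assuming the standard definitions above, the CLR and ILR maps and their inverses can be sketched in a few lines of numpy. The paper itself uses the R package compositions; the Helmert-style construction of the contrast matrix below is one valid choice of $\mathbf{V}$, not necessarily the one used in the paper:

```python
import numpy as np

def clr(x):
    """Centered log-ratio: log parts relative to their geometric mean."""
    lx = np.log(x)
    return lx - lx.mean()

def clr_inv(xi, kappa=1.0):
    """Inverse CLR: exponentiate and re-close to sum kappa."""
    z = np.exp(xi)
    return kappa * z / z.sum()

def contrast_matrix(D):
    """A D x (D-1) matrix V with V'V = I and zero-sum columns, built from
    normalized Helmert-style contrasts (one standard choice of basis)."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] /= np.linalg.norm(V[:, j - 1])
    return V

def ilr(x, V):
    return V.T @ clr(x)

def ilr_inv(xi_star, V, kappa=1.0):
    return clr_inv(V @ xi_star, kappa)

x = np.array([0.2, 0.3, 0.5])
V = contrast_matrix(3)
# round-trip: ILR followed by its inverse recovers the composition
assert np.allclose(ilr_inv(ilr(x, V), V), x)
# CLR coordinates sum to zero, reflecting the singular covariance issue
assert np.isclose(clr(x).sum(), 0.0)
```

The zero-sum property of the CLR coordinates is exactly why CLR covariance matrices are singular, while the $D-1$ ILR coordinates are unconstrained.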
4.2 PCA for compositional data
In this subsection, we introduce dimension reduction methods for compositional data. In particular, we describe the traditional principal component analysis (PCA) and the exponential family PCA (EPCA) for compositional data in detail. The EPCA methodology has recently been shown to uncover the underlying structure of compositional data (Avalos et al., 2018).
The EPCA method is a generalization of PCA to distributions from the exponential family, proposed by Collins et al. (2002) based on the ideas underlying generalized linear models. To describe the traditional PCA and the EPCA methods, let us first introduce a generalized PCA formulation based on Bregman divergences, from which both methods can be derived as special cases.
We consider a dataset $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ consisting of $n$ numerical vectors, each of which consists of $d$ components. For notational purposes, we assume all vectors are column vectors and write $X$ as an $n \times d$ matrix whose $i$th row is $\mathbf{x}_i'$.
Let $F$ be a convex and differentiable function. Then the generalized PCA aims to find matrices $A \in \mathbb{R}^{n \times l}$ and $B \in \mathbb{R}^{l \times d}$ with $l < d$ that minimize the following loss function:
$$L(A, B) = \sum_{i=1}^{n} B_F\left(\mathbf{x}_i \,\|\, \boldsymbol{\theta}_i\right),  (6)$$
where $\boldsymbol{\theta}_i$ is the $i$th column of $(AB)'$ and $B_F$ denotes the Bregman divergence:
$$B_F(\mathbf{x} \,\|\, \mathbf{y}) = F(\mathbf{x}) - F(\mathbf{y}) - (\mathbf{x} - \mathbf{y})' \nabla F(\mathbf{y}).  (7)$$
Here $\nabla$ denotes the vector differential operator. A Bregman divergence can be thought of as a generalized distance. Table 6 gives two examples of convex functions and the corresponding Bregman divergences.
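A generic Bregman divergence, together with two classic convex functions of the kind Table 6 refers to (the squared norm, which yields the squared Euclidean distance, and the entropy-type function, which yields the generalized KL divergence), can be sketched as:

```python
import numpy as np

def bregman(F, gradF, x, y):
    """Bregman divergence B_F(x || y) = F(x) - F(y) - <x - y, grad F(y)>."""
    return F(x) - F(y) - np.dot(x - y, gradF(y))

# F(x) = 0.5 ||x||^2  ->  B_F is the squared Euclidean distance
F_sq = lambda v: 0.5 * np.dot(v, v)
g_sq = lambda v: v

# F(x) = sum_j (x_j log x_j - x_j)  ->  B_F is the generalized KL divergence
F_kl = lambda v: np.sum(v * np.log(v) - v)
g_kl = lambda v: np.log(v)

x = np.array([1.0, 2.0])
y = np.array([2.0, 1.0])
print(bregman(F_sq, g_sq, x, y))  # 1.0, equals 0.5 * ||x - y||^2
print(np.isclose(bregman(F_kl, g_kl, x, y), np.log(2)))  # True
```

The second value matches the closed form $\sum_j x_j \ln(x_j / y_j) - x_j + y_j = \ln 2$ for these vectors, illustrating that different choices of $F$ recover familiar distances.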
For a distribution in the exponential family, the conditional probability of a value $x$ given a parameter value $\theta$ has the following form:
$$P(x \mid \theta) = P_0(x) \exp\left( x\theta - G(\theta) \right),  (8)$$
where $P_0(x)$ is a function of $x$ only, $\theta$ is the natural parameter of the distribution, and $G(\theta)$ is a function of $\theta$ that ensures that $P(x \mid \theta)$ is a density function. The negative log-likelihood function of the exponential family can be expressed through a Bregman divergence. To see this, let $F$ be the function defined by
$$F(g(\theta)) = \theta g(\theta) - G(\theta),$$
where $g$ is the derivative of $G$. From the above definition, we can show that $F' = g^{-1}$. Hence
$$-\log P(x \mid \theta) = B_F\left( x \,\|\, g(\theta) \right) - F(x) - \log P_0(x),$$
which shows that the negative log-likelihood can be written as a Bregman divergence plus two terms that are constant with respect to $\theta$. As a result, minimizing the negative log-likelihood with respect to $\theta$ is equivalent to minimizing the Bregman divergence.
The EPCA method aims to find matrices $A$ and $B$ with $l < d$ that minimize the following loss function:
$$L(A, B) = \sum_{i=1}^{n} B_F\left( \mathbf{x}_i \,\|\, g(\boldsymbol{\theta}_i) \right),  (9)$$
where $\boldsymbol{\theta}_i$ is the $i$th column of $(AB)'$, or, equivalently, $\boldsymbol{\theta}_i = B' \mathbf{a}_i$ with $\mathbf{a}_i'$ the $i$th row of $A$, and $g$ is applied componentwise.
In other words, the EPCA method tries to find a lower-dimensional subspace of the parameter space to approximate the original data. The matrix $B$ contains basis vectors of this subspace, and each $\boldsymbol{\theta}_i$ is represented as a linear combination of the basis vectors. The traditional PCA is a special case of EPCA when the normal distribution is appropriate for the data. In this case, $F(\mathbf{x}) = \frac{1}{2}\|\mathbf{x}\|^2$ and the Bregman divergence reduces to the squared Euclidean distance.
To apply the traditional PCA to compositional data, we first need to transform them from the simplex into the real Euclidean space by using a log-ratio transformation method (e.g., the CLR method). When the compositional data are transformed by the CLR method, the traditional PCA has the following loss function:
$$L(A, B) = \sum_{i=1}^{n} \left\| \operatorname{clr}(\mathbf{x}_i) - \boldsymbol{\theta}_i \right\|^2,  (10)$$
where $\operatorname{clr}(\mathbf{x}_i)$ is the CLR-transformed data and $\boldsymbol{\theta}_i$ is the $i$th column of $(AB)'$.
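The CLR-then-PCA route in Equation (10) amounts to ordinary PCA on the CLR-transformed matrix, which can be computed with an SVD. A minimal numpy sketch, using toy Dirichlet-distributed compositions rather than the mine dataset:

```python
import numpy as np

def clr_matrix(X):
    """Row-wise CLR transform of an n x D matrix of positive compositions."""
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

def pca_scores(Y, l):
    """First l principal-component scores of Y via SVD of the
    column-centered data matrix."""
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:l].T   # n x l score matrix

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=20)   # 20 toy compositions with 5 parts
scores = pca_scores(clr_matrix(X), 2)
print(scores.shape)  # (20, 2)
```

In the paper's setting, such scores (with five components rather than two) serve as the covariates of the NBPCA model.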
4.3 Regression with compositional covariates
Compositional data in regression models refer to a set of predictor variables that describe parts of some whole and are commonly presented as vectors of proportions, percentages, concentrations, or frequencies.
Pawlowsky-Glahn et al. (2015) presented a linear regression model with compositional covariates. The linear regression model is formulated as follows:
$$y = \beta_0 + \langle \boldsymbol{\beta}, \mathbf{x} \rangle_a + \epsilon,  (12)$$
where $\beta_0$ is a real intercept and $\langle \cdot, \cdot \rangle_a$ is the Aitchison inner product defined by
$$\langle \mathbf{x}, \mathbf{z} \rangle_a = \sum_{j=1}^{D} \ln\frac{x_j}{g(\mathbf{x})} \ln\frac{z_j}{g(\mathbf{z})}.$$
Here $g(\cdot)$ denotes the geometric mean. The sum of squared errors is given by:
$$SSE = \sum_{i=1}^{n} \left( y_i - \beta_0 - \langle \boldsymbol{\beta}, \mathbf{x}_i \rangle_a \right)^2,  (13)$$
which suggests that the actual fitting can be done using the ILR-transformed coordinates; that is, a linear regression can simply be fitted to the response as a linear function of $\operatorname{ilr}(\mathbf{x})$. The CLR transformation should not be used because it requires the generalized inversion of singular matrices.
Since the response variable of the mine dataset is a count variable, linear regression models are not suitable. Instead, we use generalized linear models to model the count variable. Common generalized linear models for count data include Poisson regression models and negative binomial models (Frees, 2009). Poisson models are simple but restrictive, as they assume that the mean and the variance of the response are equal. For the mine dataset, the mean and the variance of the response variable NUM_INJURIES are 0.4705 and 5.4636, respectively. In this study, we use negative binomial models as they provide more flexibility than Poisson models.
Let $\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_n$ denote the predictors from the $n$ observations. The predictors can be selected in different ways. For example, the predictors can be the ILR-transformed data or the principal components produced by a dimension reduction method such as the traditional PCA and the EPCA methods. For $i = 1, 2, \ldots, n$, let $y_i$ be the response corresponding to $\mathbf{z}_i$. In a negative binomial regression model, we assume that the response variable follows a negative binomial distribution:
$$\Pr(Y = y) = \frac{\Gamma(y + r)}{y!\, \Gamma(r)} (1 - p)^r p^y, \quad y = 0, 1, 2, \ldots,  (14)$$
where $r > 0$ and $p \in (0, 1)$ are parameters. The mean and the variance of such a variable $Y$ are given by
$$E[Y] = \frac{rp}{1 - p}, \qquad \mathrm{Var}[Y] = \frac{rp}{(1 - p)^2}.$$
To incorporate covariates in a negative binomial regression model, we let the parameter $p$ vary by observation. In particular, we incorporate the covariates as follows:
$$p_i = \frac{E_i \exp(\beta_0 + \mathbf{z}_i' \boldsymbol{\beta})}{r + E_i \exp(\beta_0 + \mathbf{z}_i' \boldsymbol{\beta})},  (15)$$
where $\beta_0$ and $\boldsymbol{\beta}$ are parameters to be estimated, and $E_i$ is the exposure. Under this specification, the mean response is $E[Y_i] = E_i \exp(\beta_0 + \mathbf{z}_i' \boldsymbol{\beta})$. For the mine dataset, we use the variable AVG_EMP_TOTAL as the exposure. This is similar to modeling the number of claims in a property and casualty dataset, where the time to maturity of an auto insurance policy is used as the exposure (Frees, 2009; Gan and Valdez, 2018). The method of maximum likelihood can be used to estimate the parameters.
Model  Covariates 

NB  TYPE_OF_MINE and the ILR-transformed compositional components 
NBPCA  TYPE_OF_MINE and principal components from the traditional PCA method 
NBEPCA  TYPE_OF_MINE and principal components from the EPCA method 
In this study, we compare three negative binomial models. Table 7 describes the covariates of the three models. In the first model, we use the compositional components transformed by the ILR method defined in Equation (3). In the second model, we use the first few principal components obtained from the traditional PCA method defined in Equation (10). In the third model, we use the first few principal components obtained from the EPCA method described in Section 4.2.
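As a sanity check on the negative binomial specification, the following sketch evaluates the pmf in a mean-dispersion parameterization and verifies the mean-variance relationship numerically. This parameterization is a common choice assumed here for illustration, not taken verbatim from the paper:

```python
import numpy as np
from math import lgamma, exp, log

def nb_pmf(y, mu, r):
    """Negative binomial pmf parameterized by its mean mu and dispersion r,
    so that Var[Y] = mu + mu**2 / r (equivalent to an (r, p) form with
    p = mu / (r + mu)); one common parameterization, assumed here."""
    return exp(lgamma(y + r) - lgamma(r) - lgamma(y + 1)
               + r * log(r / (r + mu)) + y * log(mu / (r + mu)))

# With covariates and an exposure E_i (AVG_EMP_TOTAL in the mine data),
# the mean for observation i would be mu_i = E_i * exp(beta0 + z_i . beta).
mu, r = 2.0, 1.5
ys = np.arange(200)
p = np.array([nb_pmf(int(y), mu, r) for y in ys])
var = ((ys - mu) ** 2 * p).sum()
print(round(p.sum(), 6), round((ys * p).sum(), 6), round(var, 4))
# -> 1.0 2.0 4.6667  (variance mu + mu^2/r exceeds the mean: overdispersion)
```

The variance exceeding the mean (4.67 versus 2.0) is exactly the overdispersion that makes the negative binomial preferable to the Poisson for NUM_INJURIES.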
5 Results
In this section, we present the results of fitting the three negative binomial regression models (see Table 7) to the mine dataset. We use the following procedure to fit and test the models:

Transform the compositional variables for the whole mine dataset. For the model NB, we transform the compositional variables with the ILR method. For the model NBPCA, we first transform the compositional variables with the CLR method and then apply the standard PCA to the transformed data. Both the ILR method and the CLR method are available in the R package compositions. For the model NBEPCA, we transform the compositional variables with the EPCA method implemented by Avalos et al. (2018); the R code of the EPCA method is available at https://github.com/sistm/CoDaPCA.

Split the transformed data into a training set and a test set. In particular, we use data from years 2013 to 2015 as the training data and use data from 2016 as the test data.

Estimate parameters based on the training set.

Make predictions for the test set and calculate the outofsample validation measures.
5.1 Validation measure
To measure the accuracy of regression models for count variables, it is common to use Pearson's chi-square statistic, which is defined as:
$$\chi^2 = \sum_{k=0}^{m} \frac{(O_k - E_k)^2}{E_k},  (16)$$
where $O_k$ is the observed number of mines that have $k$ injuries, $E_k$ is the predicted number of mines that have $k$ injuries, and $m$ is the maximum number of injuries considered over all mines. Between two models, the model with the lower chi-square statistic is better.
The observed number of mines that have $k$ injuries is obtained by
$$O_k = \sum_{i=1}^{n} I(y_i = k),$$
where $I(\cdot)$ is an indicator function. The predicted number of mines that have $k$ injuries is calculated as follows:
$$E_k = \sum_{i=1}^{n} \Pr\left(Y_i = k \mid \hat{\boldsymbol{\beta}}, \hat{r}\right),$$
where $\hat{\boldsymbol{\beta}}$ and $\hat{r}$ are estimated values of the parameters.
From Figure 2, we see that the maximum observed number of injuries is 86. To calculate the validation measure, we use $m = 99$; that is, we use the first 100 terms of the $O_k$s and $E_k$s. The remaining terms are too close to zero and are ignored.
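The validation measure can be sketched as follows. The Poisson model and simulated data in the toy check are hypothetical stand-ins for the paper's fitted negative binomial models:

```python
import numpy as np
from math import exp, factorial

def chi_square_counts(y_obs, prob_matrix):
    """Pearson chi-square comparing the observed number of units with k
    events (O_k) against the model-predicted number (E_k), for
    k = 0..m where prob_matrix[i, k] = fitted Pr(Y_i = k) and
    m + 1 = prob_matrix.shape[1]."""
    m1 = prob_matrix.shape[1]
    O = np.array([(y_obs == k).sum() for k in range(m1)])
    E = prob_matrix.sum(axis=0)
    return float(((O - E) ** 2 / E).sum())

# Toy check with a hypothetical Poisson model: data simulated from the
# same model should give a non-negative, modest statistic.
rng = np.random.default_rng(1)
lam = rng.uniform(0.1, 1.0, size=500)          # hypothetical fitted means
y = rng.poisson(lam)
P = np.array([[exp(-l) * l ** k / factorial(k) for k in range(20)]
              for l in lam])
print(chi_square_counts(y, P) >= 0.0)  # True
```

In the paper's setting, `prob_matrix` would hold the fitted negative binomial probabilities for each mine, with 100 columns ($m = 99$).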
5.2 Results
We fitted the three models described in Table 7 to the mine dataset according to the aforementioned procedure. Table 8 shows the in-sample and out-of-sample chi-square statistics produced by the three models. From the table, we see that the negative binomial model with EPCA-transformed data performs the best among the three models: the model NBEPCA produced the lowest chi-square statistics for both the training data and the test data.
Model  In-sample  Out-of-sample 

NB  334.9093  113.1498 
NBPCA  338.4336  114.2693 
NBEPCA  303.9178  111.8998 
Figure 4 shows scatter plots, at log scale, of the observed number of mines with different numbers of injuries against the numbers predicted by the models. That is, the scatter plots show the relationship between $O_k$ and $E_k$ for $k = 0, 1, \ldots, 99$. We use a log scale here because the $O_k$s and $E_k$s differ by orders of magnitude across values of $k$. From the scatter plots, we see that all the models predict the number of mines with different numbers of injuries quite well; the differences among the three models are not visible in the scatter plots. However, the chi-square statistics shown in Table 8 indicate that the NBEPCA model is the best.
For the NBPCA model, we selected the first five principal components produced by applying the standard PCA to the CLR-transformed data. Figure 5 shows the proportion of variation explained by different numbers of principal components; the first five principal components explain more than 95% of the variation of the original data. For the NBEPCA model, we also selected the first five principal components. The EPCA method used to transform the compositional variables is slow and took an ordinary desktop about 20 minutes to finish the computation. The reason is that the EPCA implementation we used is written in R, a scripting language that is not efficient for iterative optimization.




Table 9 shows the estimated regression coefficients of the model NB. As shown in Table 9(a), the ten compositional variables are transformed to nine variables by the ILR method. Table 9(b) shows the regression coefficients transformed back to the CLR proportions by the inverse operation of the ILR method. From Table 9(b), we see that the coefficients of the ten compositional variables are similar to each other and quite different from the coefficients of the ILR-transformed variables.
The regression coefficients of the ILR-transformed variables shown in Table 9(a) vary a lot, and it is challenging to interpret them. From the p-values shown in Table 9(a), we also see that some of the ILR-transformed variables are not significant. For example, the variable V5 has a p-value of 0.9652 and the variable V7 has a p-value of 0.1649; both are greater than 0.1. This suggests that it is appropriate to reduce the dimensionality of the compositional variables.
Table 10 shows the estimated regression coefficients of the models NBPCA and NBEPCA. We used five principal components in both models. From Table 10(b), we see that all principal components produced by the EPCA method are significant. However, Table 10(a) shows that the third principal component PC3 produced by the traditional PCA method is not significant, as it has a p-value of 0.8214. This again suggests that the EPCA method is better than the traditional PCA method for the mine dataset.
In summary, our numerical results show that the EPCA method is able to produce better principal components than the traditional PCA method and that using the principal components produced by the EPCA method can improve the prediction accuracy of the negative binomial models with compositional covariates.
6 Conclusions
Compositional data are commonly presented as vectors of proportions, percentages, concentrations, or frequencies. A peculiarity of these vectors is that their components sum to a fixed constant, e.g., 100%. Due to this constraint, compositional data have the following properties: they carry only relative information (scale invariance); results are unchanged when the ordering of the parts in the composition is changed (permutation invariance); and results should not change if a non-informative part is removed (subcomposition coherence).
In this paper, we investigated regression models with compositional covariates by using a mine dataset. We built negative binomial regression models to predict the number of injuries from different mines. In particular, we built three negative binomial regression models: a model with ILR transformed compositional variables, a model with principal components produced by the traditional PCA, and a model with principal components produced by the exponential family PCA. Our numerical results show that the exponential family PCA is able to produce principal components that are significant predictors and improve the prediction accuracy of the regression model.
Acknowledgments
Guojun Gan and Emiliano A. Valdez would like to acknowledge the financial support provided by the Committee on Knowledge Extension Research of the Casualty Actuarial Society (CAS).
References
Aitchison, J. (1981). A new approach to null correlations of proportions. Journal of the International Association for Mathematical Geology 13(2), 175–189.
Aitchison, J. (1982). The statistical analysis of compositional data. Journal of the Royal Statistical Society: Series B (Methodological) 44(2), 139–160.
Aitchison, J. (1983). Principal component analysis of compositional data. Biometrika 70(1), 57–65.
Aitchison, J. (1984). The statistical analysis of geochemical compositions. Journal of the International Association for Mathematical Geology 16(6), 531–564.
Aitchison, J. (1994). Principles of compositional data analysis. Multivariate Analysis and Its Applications 24, 73–81.
Aitchison, J. (2003). The statistical analysis of compositional data. Blackburn Press, Caldwell, NJ.
Aitchison, J. and Egozcue, J. J. (2005). Compositional data analysis: where are we and where should we be heading? Mathematical Geology 37(7), 829–850.
Avalos, M. et al. (2018). Representation learning of compositional data. In Advances in Neural Information Processing Systems, Vol. 31, 6679–6689.
Bergeron-Boucher, M.-P. et al. (2017). Coherent forecasts of mortality with compositional data analysis. Demographic Research 37, 527–566.
Chayes, F. (1960). On correlation between variables of constant sum. Journal of Geophysical Research 65(12), 4185–4193.
Collins, M., Dasgupta, S., and Schapire, R. E. (2002). A generalization of principal components analysis to the exponential family. In Advances in Neural Information Processing Systems, Vol. 14.
Denuit, M. et al. (2019). Multivariate credibility modelling for usage-based motor insurance pricing with behavioural data. Annals of Actuarial Science 13(2), 378–399.
Frees, E. W. (2009). Regression modeling with actuarial and financial applications. Cambridge University Press, Cambridge, UK.
Gan, G. and Valdez, E. A. (2018). Actuarial statistics with R: theory and case studies. ACTEX Learning.
Guillen, M. et al. (2019). The use of telematics devices to improve automobile insurance rates. Risk Analysis 39(3), 662–672.
Guillen, M. et al. (2020). Can automobile insurance telematics predict the risk of near-miss events? North American Actuarial Journal 24(1), 141–152.
Pawlowsky-Glahn, V., Egozcue, J. J., and Tolosana-Delgado, R. (2015). Modelling and analysis of compositional data. John Wiley & Sons, Hoboken, NJ.
Pearson, K. (1897). Mathematical contributions to the theory of evolution. On a form of spurious correlation which may arise when indices are used in the measurement of organs. Proceedings of the Royal Society of London 60, 489–498.
Pesantez-Narvaez, J. et al. (2019). Predicting motor insurance claims using telematics data—XGBoost versus logistic regression. Risks 7(2), 70.
So, B. et al. (2021). Cost-sensitive multi-class AdaBoost for understanding driving behavior based on telematics. ASTIN Bulletin: The Journal of the International Actuarial Association 51(3), 719–751.
Tipping, M. E. and Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61(3), 611–622.
Verbelen, R., Antonio, K., and Claeskens, G. (2018). Unravelling the predictive power of telematics data in car insurance pricing. Journal of the Royal Statistical Society: Series C (Applied Statistics) 67(5), 1275–1304.