I Introduction
While in standard regression we want to estimate the conditional expected value, in some situations we need to predict the entire probability distribution. In ARMA/ARCH modelling [1], or in data compression like JPEG LS [2], this is resolved by predicting the most likely value, usually using a linear combination of neighboring values, then assuming some parametric distribution of the error from this prediction: for example Gaussian in ARMA-like models, or Laplace distribution in data compression. Widths of such distributions are often chosen based on the context (e.g. in ARCH, JPEG LS). However, in situations like credibility evaluation of tax declarations, the conditional distribution of the main variable to verify (declared income) is quite complex, as we will see in the presented analysis, and we need to model dependencies with multiple categorical variables: their properties might affect not only the expected value, but also higher moments. Dependence between expected values of two variables is described by the correlation coefficient, and between variances e.g. in the ARCH model; the HCR approach used here allows one to systematically exploit such dependencies of arbitrary moments between two or more variables.
To evaluate the credibility of a given (exogenous) variable, we would like to predict its conditional probability density based on the remaining (endogenous) variables; intuitively, the higher such density is, the higher the credibility of this value. However, this intuition requires some normalization, as for example tails of distributions have lower density, which should not be interpreted as lower credibility: these values are just spread over a wider range.
A natural normalization, used for example in copula theory [3], is to nearly uniform marginal distributions on $[0,1]$. We can use the empirical distribution function (EDF, obtained by sorting the values) to transform the original variable $x$ into $\tilde{x} \in [0,1]$ of nearly uniform distribution. Using this normalization, the modelled density of $\tilde{x}$ can be seen as an evaluation of credibility; its examples are presented in Fig. 1.
To model this density we will use the hierarchical correlation reconstruction (HCR) approach ([4, 5, 6, 7]): model the density of the joint distribution of all (normalized) variables $x = (x_1, \ldots, x_d) \in [0,1]^d$ as a linear combination of orthonormal polynomials:

$$\rho(x) = \sum_{j \in B} a_j f_j(x), \qquad f_j(x) = \prod_{i=1}^d f_{j_i}(x_i) \qquad (1)$$

where the $(f_k)$ satisfy $\int_0^1 f_k(x)\, f_l(x)\, dx = \delta_{kl}$.
Such coefficients have cumulant-like interpretation. For example $a_{10\ldots0}$ has similar behavior as the expected value of the first variable, $a_{020\ldots0}$ as the variance of the second variable. We can also use mixed coefficients, for example $a_{110\ldots0}$ describes dependence between expected values of the first two variables; it has similar interpretation as the correlation coefficient. We can model a complex joint density with such polynomial approximation, especially as mean-square estimation of these coefficients turns out to be very inexpensive and they can be calculated independently [4]: using an orthonormal basis, the coefficient of a given function turns out to be just the average of this function over the sample: $a_j = \frac{1}{n}\sum_{k=1}^n f_j(x^k)$ for data points $x^1, \ldots, x^n$.
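As a concrete illustration of this estimation, the following sketch (assuming numpy; the basis is built from numpy's Legendre polynomials rescaled to be orthonormal on $[0,1]$) estimates a single-variable coefficient and a mixed, correlation-like coefficient for a synthetic dependent pair:

```python
import numpy as np
from numpy.polynomial import legendre

def f(j, x):
    """j-th orthonormal (rescaled Legendre) polynomial on [0,1]."""
    c = np.zeros(j + 1); c[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * x - 1, c)

rng = np.random.default_rng(0)
n = 20000
x1 = rng.uniform(0, 1, n)
# x2 copies x1 half the time, otherwise is an independent uniform draw:
# both marginals stay uniform, but the pair is strongly dependent.
x2 = np.where(rng.uniform(0, 1, n) < 0.5, x1, rng.uniform(0, 1, n))

# Coefficients are just averages of basis functions over the sample:
a1  = np.mean(f(1, x1))             # expected-value-like coefficient, ~0 here
a11 = np.mean(f(1, x1) * f(1, x2))  # mixed, correlation-like coefficient, ~0.5
```

The mixed coefficient comes out near 0.5 because half of the sample satisfies $x_2 = x_1$ exactly, while an independent pair would give a value near 0.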
For credibility evaluation, after normalizing all marginal distributions to nearly uniform on $[0,1]$, we would like to estimate their joint density as such a linear combination, using a chosen basis, for example exploiting only pairwise dependencies (up to two nonzero indexes). Then, substituting the coordinates of the endogenous variables into such joint distribution, and normalizing it to integrate to 1, we get the conditional probability density of the exogenous variable, describing the credibility of its value.
However, the above HCR approach is appropriate for continuous variables, while here we have many discrete ones; an adaptation for this situation will be suggested. Finally, for simplicity and interpretability, plain least-squares regression will be used to optimize linear dependencies between properties of the endogenous variables and cumulant-like coefficients of the predicted variable:

$$\rho(x) = \sum_{j=0}^m a_j f_j(x), \qquad a_j = \beta_j \cdot v$$

with the $\beta_j$ coefficients chosen by least-squares optimization over the dataset, where $v$ is the vector of properties of the endogenous variables.
For evaluation/calibration, a plot of sorted predicted densities for all points from the dataset will be used, presented in Fig. 1. Its horizontal axis can be seen as a quantile regarding modelled credibility. For example, choosing that we want to manually verify 1% of the least credible data points, we can read the corresponding density threshold from this plot and verify only those below this threshold.

II Dataset
The dataset is composed of observations on 37215 households, collected annually by the Central Statistical Office of Poland (GUS). The following variables will be used (all but the first are endogenous: they are used to predict the first variable, which is exogenous; variable names are provided in brackets):

Continuous: equivalent monthly income¹ (inceq) in PLN (the exogenous variable), remaining equivalent cash in PLN at the end of the month (casheq), shares of expenditures on luxury goods and food (luxury and food, respectively),

¹ Calculated as the household disposable income divided by a respective equivalence scale, i.e. an indicator supposed to measure the impact of demographic variables on the cost of living. In the present study the simple OECD 70/50 formula is employed: scale = 1 + 0.7 × (number of adults minus one) + 0.5 × (number of children).

Discrete ordinal (the number of distinct values is provided in brackets): age of the household head in years (age, 87 values), his/her completed education level (edu, 11), number of persons (pers, 14), number of younger (child1, 8) and older (child2, 7) children in the household, urbanisation level (urb, 3), type of residence (loc, 6), number of cars (cars, 7), month of the query (month, 12).

Discrete categorical: main income source (source1, 12) and additional source (source2, 13), voivodship (voi, 16), building type (build, 4), its ownership type (own, 6), age of the newest car (carage, 5), subjective evaluations of: change in the material position (chg, 5), income sufficiency (sf, 5), level of satisfaction of needs for food (sff, 5), clothing (sfc, 5), health care (sfh, 6), housing fees (sfs, 6), housing equipment (sfq, 6), culture (sfl, 7), education (sfe, 6), tourism and recreation (sft, 6).

Binary: sex of the household head, subjective evaluation whether the dwelling is too small or too large, presence in the household of persons: with tertiary and secondary education, unemployed, handicapped.
The model will use linear combinations of their properties. Age will be treated as a continuous variable; pairwise dependencies of the continuous variables can be seen in Fig. 2, and their cumulant-like properties are used in the linear combination for prediction. To avoid an arbitrary choice of weights, all the remaining variables are treated as binary: split into 0/1 variables, as many as the number of distinct values, being 1 if the category agrees and 0 otherwise.
III Normalization and orthogonal polynomial basis
For many reasons it is convenient to normalize variables to have nearly uniform marginal distributions: to see them as quantiles, interpreting ranges as population percentages; to directly interpret predicted density as credibility here; and finally to use polynomials for density estimation with normalized coefficients. In previous applications of HCR ([6, 7]), the CDF (cumulative distribution function) of an approximated parametric distribution (Laplace) was used for this normalization: approximating general behavior, then modelling with polynomial corrections from this idealization, which can evolve in time for non-stationary time series. Here the probability distribution is stationary and too complex for parametric distributions, hence, like in copula theory [3], we will directly use EDF for this normalization.

III-A Normalization
Having a sample $(x_k)_{k=1}^n$, we can normalize it with EDF by sorting the values: finding an order (bijection) $\sigma$ such that:

$$x_{\sigma(1)} \leq x_{\sigma(2)} \leq \ldots \leq x_{\sigma(n)} \qquad (2)$$

Hence, $x_{\sigma(i)}$ is in the $i$-th position of this order; wanting the normalized values to have nearly uniform distribution on $[0,1]$, a natural choice is $\tilde{x}_{\sigma(i)} = (i - 0.5)/n$.
However, especially for discrete variables, (2) has many equalities, which need special treatment: there is no basis to choose an order among equal values, so all of them should be transformed into the same value, naturally chosen as the center of their rank range. We can see one such horizontal line for casheq = 0 in Fig. 2, and the age sample consisting only of horizontal lines due to rounding to complete years.
Finally, the used generalized formula (working for both continuous and discrete variables) is:

$$\tilde{x} = \frac{|\{k : x_k < x\}| + |\{k : x_k \leq x\}|}{2n} \qquad (3)$$
We can use this formula to normalize each variable of $x = (x_1, \ldots, x_d)$ separately (for each lower index), getting $\tilde{x} \in [0,1]^d$, which for continuous variables has nearly uniform marginal distributions on $[0,1]$. However, for a discrete variable the EDF is a step function, especially for binary ones, needing a completely different treatment.
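Formula (3) can be implemented directly, for example as below (a sketch assuming numpy; `edf_normalize` is a hypothetical helper name):

```python
import numpy as np

def edf_normalize(x):
    """EDF normalization (3): maps a sample to nearly uniform [0,1];
    all equal values (ties) get the center of their rank range."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.sort(x)
    below    = np.searchsorted(s, x, side='left')   # |{k : x_k <  x}|
    below_eq = np.searchsorted(s, x, side='right')  # |{k : x_k <= x}|
    return (below + below_eq) / (2 * n)

u = edf_normalize([3, 1, 4, 1, 5])
# the two tied values 1 both map to the same point 0.2
```

For a value with a unique rank $i$ this reduces to $(i - 0.5)/n$, while tied values share the center of their common range.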
There are ways to transform discrete variables into a smaller number of nearly independent continuous variables, for example by choosing a set of orthonormal projections. This choice is sometimes done randomly, but it can often be optimized by dimensionality reduction techniques like PCA (principal component analysis) or its discrete analogues like MCA (multiple correspondence analysis) [8]. Such projections often have nearly Gaussian distributions, but we would lose interpretability this way. This article presents analysis directly using discrete variables to get better interpretation of the obtained coefficients (Fig. 4); however, it might be worth exploring also methods like MCA to work directly on continuous variables.

III-B Orthogonal polynomial basis and HCR
Assuming a (normalized) variable $x$ is from nearly uniform distribution on $[0,1]$, it is very convenient to represent its density with a polynomial: $\rho(x) = \sum_j a_j f_j(x)$. Using an orthonormal basis, $\int_0^1 f_j(x)\, f_k(x)\, dx = \delta_{jk}$, mean-square optimization leads to [4] inexpensive estimation by just averaging: $a_j = \frac{1}{n} \sum_{k=1}^n f_j(x_k)$.
The first five of these polynomials (rescaled Legendre) are $f_0 = 1$, $f_1(x) = \sqrt{3}(2x-1)$, $f_2(x) = \sqrt{5}(6x^2-6x+1)$, $f_3(x) = \sqrt{7}(20x^3-30x^2+12x-1)$, $f_4(x) = 3(70x^4-140x^3+90x^2-20x+1)$, and correspondingly:
For normalization we need $a_0 = 1$. The $a_1 f_1$ term shifts the expected value toward left or right. A positive $a_2 f_2$ term increases the probability of extreme values, reducing it for central values; $a_2$ has analogous behavior as variance. And so on: the $a_j$ coefficient has similar interpretation as the $j$-th cumulant. Using a degree $m$ polynomial $\rho(x) = \sum_{j=0}^m a_j f_j(x)$ corresponds to modelling the distribution using the first $m$ moments, additionally directly getting a density estimation from them.
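The first five polynomials and their orthonormality can be checked numerically (a sketch assuming numpy; these are the shifted Legendre polynomials rescaled to unit norm on $[0,1]$, and the integral is approximated with the midpoint rule):

```python
import numpy as np

# The first five orthonormal (rescaled Legendre) polynomials on [0,1]:
f = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3) * (2*x - 1),
    lambda x: np.sqrt(5) * (6*x**2 - 6*x + 1),
    lambda x: np.sqrt(7) * (20*x**3 - 30*x**2 + 12*x - 1),
    lambda x: 3.0 * (70*x**4 - 140*x**3 + 90*x**2 - 20*x + 1),
]

# Gram matrix of pairwise integrals over [0,1] (midpoint rule):
# orthonormality means it should be close to the identity matrix.
x = (np.arange(100000) + 0.5) / 100000
G = np.array([[np.mean(f[j](x) * f[k](x)) for k in range(5)]
              for j in range(5)])
```

The off-diagonal entries of `G` vanish (orthogonality) and the diagonal entries equal 1 (unit norm), up to the quadrature error.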
We can also exploit statistical dependencies between two or more variables this way, by analogously modelling the joint distribution on $[0,1]^d$ using the product basis: $f_j(x) = \prod_{i=1}^d f_{j_i}(x_i)$. This way $a_j$ represents mixed cumulants: dependencies between multiple variables. For a large number of variables, most of the coordinates of the used $j$ should be 0: coefficients with a single nonzero coordinate describe the probability distribution of the corresponding variable, those with two nonzero coordinates describe pairwise dependencies, and so on, getting hierarchical correlation reconstruction (HCR) of a given distribution.
We could directly use such a modelled joint density for credibility evaluation by just substituting the values of the endogenous variables and normalizing the obtained polynomial of the exogenous variable to integrate to 1. This way the conditional density is expressed as nearly a linear combination of various products of the $f_{j_i}(x_i)$. However, this approach has difficulties with discrete values; hence, linear regression is finally used to directly optimize the coefficients of such linear combinations.
The orthogonal polynomial basis also allows for a different perspective: for a given value $x$, we can take its coordinates in this basis: $(f_0(x), f_1(x), f_2(x), \ldots)$. As the estimated $a_j$ is just the average over such corresponding coordinates, we can imagine $f_1(x)$ e.g. as the contribution of this point to the expected value, $f_2(x)$ to the variance, etc. Hence, we can use $(f_j(x))_j$ as a list of properties of a given data point to infer its other properties, exploiting nonlinear dependencies this way (for $j \geq 2$); we will use such properties in linear regression for prediction here.
There is generally a large freedom of choice for properties used in inference. For example, for the age variable here, a standard approach is dividing it into age ranges, which can be seen as using a family of functions being 1 on a given age range and 0 otherwise. It leaves the question of the sizes of these age ranges: long ranges reduce data precision, short ranges mean lower statistics and not exploiting local behavior (of neighboring ranges). Normalizing the variable and using $f_j(\tilde{x})$ with an orthonormal family of polynomials is an attempt to automatically exploit local dependencies, making each coefficient affected by all data points. Numerical tests for age gave slightly more accurate predictions using orthonormal polynomials than using the same number of fixed-length age ranges.
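The two families of properties can be contrasted as follows (a sketch assuming numpy; `range_features` and `poly_features` are hypothetical helper names, with `u` standing for the normalized age):

```python
import numpy as np
from numpy.polynomial import legendre

def range_features(u, k=9):
    """Indicators of k equal-length ranges: each data point
    affects only the single coefficient of its own range."""
    idx = np.minimum((np.asarray(u) * k).astype(int), k - 1)
    return np.eye(k)[idx]

def poly_features(u, k=9):
    """f_1..f_k of the orthonormal polynomial basis: dense features,
    so every coefficient is affected by all data points."""
    u = np.asarray(u)
    cols = []
    for j in range(1, k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        cols.append(np.sqrt(2 * j + 1) * legendre.legval(2 * u - 1, c))
    return np.stack(cols, axis=1)

u = np.array([0.05, 0.5, 0.95])
R = range_features(u)  # one-hot rows: only one nonzero feature per point
P = poly_features(u)   # dense rows: all k features nonzero in general
```

Both produce the same number of features per data point; the difference is that the polynomial features are smooth in the normalized age, so neighboring ages share information.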
III-C Handling discrete values
Analogously to continuous variables with scalar product $\langle f, g \rangle = \int_0^1 f(x)\, g(x)\, dx$, we could directly work on discrete variables with the product $\langle f, g \rangle = \sum_x f(x)\, g(x)\, w(x)$ using some weight $w$. Using a basis of orthonormal vectors this time, we get analogous mean-square estimation by averaging over the sample. However, while $w$ should be chosen as the marginal distribution in analogy to the continuous case, choosing the weights is a nontrivial problem, planned to be explored in the future. For simplicity, and to present the intuition of treating the basis values as useful properties for inference, which can be imagined as a point's contribution to the $j$-th moment, in the presented analysis linear regression will be used to directly optimize the coefficients.
IV Used algorithm
The currently used algorithm optimizes the $\beta$ coefficients with least-squares regression:

All variables treated as continuous, including casheq (having a large percentage of exactly 0 values) and age (taking 87 distinct discrete values), are normalized to nearly uniform marginal distributions using formula (3).

All the remaining variables are treated as categorical and transformed into binary; thanks to this, weights of individual categories are optimized in the later regression. For example, edu(cation) takes 11 different values, hence it is transformed into 11 binary variables: each being 1 if the category agrees, 0 otherwise.
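This categorical-to-binary split can be sketched as follows (assuming numpy; `one_hot` is a hypothetical helper name, and the edu codes are made up for illustration):

```python
import numpy as np

def one_hot(column, categories):
    """Split a categorical column into 0/1 indicator variables,
    one per distinct category (1 if the category agrees, 0 otherwise)."""
    column = np.asarray(column)
    return np.stack([(column == c).astype(float) for c in categories], axis=1)

edu = np.array([2, 0, 1, 2])           # hypothetical coded education levels
B = one_hot(edu, categories=[0, 1, 2])
# each row contains exactly one 1, in the column of its category
```

The later regression then assigns a separate weight to each of these indicator columns, so no arbitrary ordering or scaling of categories is imposed.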

Denote by $v$ the vector built of all properties of the endogenous variables (normalized or not) directly used for prediction as a linear combination; here it has 223 coordinates, visualized in Fig. 4. Its zeroth coordinate is fixed as 1 to get a constant term in the later regression. Then, for variables treated as continuous, it contains $f_j(\tilde{x})$ for $j$ up to a chosen degree, which is 9 here, for all 4 variables treated this way (casheq, luxury, food, age), getting $4 \cdot 9 = 36$ coordinates of $v$. It further contains all the remaining variables: categorical transformed into binary, and the original binary variables; both use only 0 or 1 values.

Build the matrix $M$ with rows being the $v$ vectors of all data points.

For each property $j = 1, \ldots, m$ we would like to use least-squares linear regression to infer $a_j = \beta_j \cdot v$, to predict density as a degree $m$ polynomial. For this purpose, build vectors $b_j = (f_j(\tilde{x}_k))_{k=1}^n$, then find vectors $\beta_j$ minimizing $\|M \beta_j - b_j\|^2$. This can be realized with the pseudoinverse, and is implemented in many numerical libraries, e.g. as "LeastSquares[M, b]" in Wolfram Mathematica. Values of these final used coefficients are visualized in Fig. 4.
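This regression step can be sketched on synthetic data as follows (assuming numpy; the shapes and variable names are illustrative, with the same orthonormal basis as before):

```python
import numpy as np
from numpy.polynomial import legendre

def f(j, x):
    """j-th orthonormal (rescaled Legendre) polynomial on [0,1]."""
    c = np.zeros(j + 1); c[j] = 1.0
    return np.sqrt(2 * j + 1) * legendre.legval(2 * x - 1, c)

rng = np.random.default_rng(1)
n, p, m = 1000, 10, 4              # data points, properties, polynomial degree

M = rng.uniform(0, 1, (n, p))      # rows: property vectors v of data points
M[:, 0] = 1.0                      # zeroth coordinate fixed as 1 (constant term)
x = rng.uniform(0, 1, n)           # normalized exogenous variable

# For each degree j, regress b_j = (f_j(x_k))_k against the property vectors:
betas = [np.linalg.lstsq(M, f(j, x), rcond=None)[0] for j in range(1, m + 1)]

# Predicted cumulant-like coefficients for one data point: a_j = beta_j . v
a = [beta @ M[0] for beta in betas]
```

`np.linalg.lstsq` solves each minimization via the pseudoinverse, exactly as "LeastSquares[M, b]" would in Mathematica.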

Now the predicted density is $\rho(x \mid v) = 1 + \sum_{j=1}^m (\beta_j \cdot v)\, f_j(x)$.
This predicted density is for the exogenous variable normalized to nearly uniform marginal distribution on $[0,1]$, which allows one to use it directly for credibility evaluation.
Sorting such predicted densities of the actual values, we get the evaluation/calibration plot presented in Fig. 1 for various used degrees $m$. An inconvenience of parameterizing density as a polynomial is sometimes obtaining negative values, which need proper interpretation. This happens frequently in the predictions in Fig. 3, but only in a small percentage of cases in this plot of sorted predictions, confirming the predictions and allowing one to exclude such negative-density regions with near certainty. A predicted negative density suggests disagreement of this point with the sample statistics: low credibility of this data point.
To apply this evaluation/calibration plot, we can rescale its horizontal axis to the $[0,1]$ range, allowing one to interpret it as a quantile in credibility evaluation. Then, for example, choosing a percentage of least credible data points for manual verification, we can read the density threshold for this percentage, then mark for further verification all data points below this threshold.
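Reading the threshold off the sorted-density plot corresponds to a simple quantile computation (a sketch assuming numpy; `credibility_threshold` and the density values are hypothetical):

```python
import numpy as np

def credibility_threshold(densities, fraction):
    """Density value such that the given fraction of least credible
    data points (lowest predicted densities) falls at or below it."""
    s = np.sort(np.asarray(densities, dtype=float))
    k = int(np.ceil(fraction * len(s)))
    return s[k - 1]

rho = np.array([0.8, 1.2, -0.1, 0.9, 1.5, 0.3, 1.1, 0.95, 1.05, 2.0])
t = credibility_threshold(rho, fraction=0.2)  # verify the 20% least credible
flagged = rho <= t                            # points marked for verification
```

Note that points with negative predicted density are always among the flagged ones, consistent with their interpretation as least credible.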
V Conclusions and further work
A simple general approach for applying the HCR technique with discrete variables was presented, on the example of credibility evaluation for income data, allowing one to model the conditional probability distribution of the (normalized) predicted variable.
It tests agreement with the statistics of the provided data sample, so it will not detect systematic improper behavior already existing in this sample. Handling such situations would rather require some supervised learning, like manually marking some suspicious data points and then evaluating the probability of being in this marked set.
The main purpose of this article was methodological: presenting a simple interpretable analysis, but leaving many possibilities for improvements.
For example, it has exploited only pairwise dependencies: between the exogenous variable and (separately) each endogenous variable. We can analogously include higher-order dependencies by using products of considered variables in the vector $v$ of the above algorithm, e.g. including products of properties of two endogenous variables in the least-squares regression for 3-point correlations, and analogously for higher-order dependencies. As their number grows exponentially with the dependency order, this choice requires some selectivity; for example, it can be done in a hierarchical way, e.g. searching for non-negligible 3-point correlations by expanding essential 2-point correlations. In the presented analysis the number of parameters is negligibly small compared to the sample size, so there is no danger of overfitting. Otherwise, the sample should be split at least into a training and an evaluation set [7].
Also, the main purpose of using least-squares regression was simplicity and interpretability; it probably can be improved, which is planned to be explored in the future.
As mentioned, another direction worth exploring is using dimensionality reduction techniques like PCA or its discrete analogue: MCA, to reduce the number of variables and make them continuous.
References
 [1] T. C. Mills, Time series techniques for economists. Cambridge University Press, 1991.
 [2] M. J. Weinberger, G. Seroussi, and G. Sapiro, "The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS," IEEE Transactions on Image Processing, vol. 9, no. 8, pp. 1309–1324, 2000.
 [3] F. Durante and C. Sempi, “Copula theory: an introduction,” in Copula theory and its applications. Springer, 2010, pp. 3–31.
 [4] J. Duda, “Rapid parametric density estimation,” arXiv preprint arXiv:1702.02144, 2017.
 [5] ——, “Hierarchical correlation reconstruction with missing data,” arXiv preprint arXiv:1804.06218, 2018.
 [6] ——, “Exploiting statistical dependencies of time series with hierarchical correlation reconstruction,” arXiv preprint arXiv:1807.04119, 2018.
 [7] J. Duda and M. Snarska, “Modeling joint probability distribution of yield curve parameters,” arXiv preprint arXiv:1807.11743, 2018.
 [8] M. Greenacre and J. Blasius, Multiple correspondence analysis and related methods. Chapman and Hall/CRC, 2006.