# Relaxing monotonicity in endogenous selection models and application to surveys

This paper considers endogenous selection models, in particular nonparametric ones. Estimating the law of unselected (or censored or unobserved) outcomes, or the unconditional one, is feasible when one uses instrumental variables. Using a selection equation which is additively separable in a one-dimensional unobservable has the sometimes undesirable property of instrument monotonicity. We present models and nonparametric identification results allowing for non-instrument monotonicity, based on nonparametric random coefficients indices. We apply these results to inference on nonlinear statistics, such as the Gini index, in surveys where the nonresponse is not missing at random.


## 1. Introduction

Empirical researchers often face a missing data problem, also called selection or censoring. Due to missing data, the observed data on an outcome variable correspond to draws from the law of the outcome conditional on nonmissingness. Most of the time, the law of interest is the unconditional one. But the researcher can also be interested in the law of the outcome variable for the population that does not reveal the value of the outcome (the censored one). For example, surveys rely on a sample drawn at random, and the usual estimators require the observation of all sampled units. In practice, there is missing data and those estimators cannot be computed. A common practice is to rely on imputations: the missing observations are replaced by artificial ones so that the estimator can be computed. In the presence of endogenous censoring, the law conditional on censoring is the relevant one for imputation.

It is usual to assume that the data is Missing at Random (henceforth MAR, see [12]), in which case there are variables which are never missing such that the law of the outcome conditional on them and nonmissingness is the same as the law of the outcome conditional on them and missingness. Under such an assumption, the estimable conditional law is the same as the one which is unconditional on missingness. As a consequence, the researcher does not need a model for the joint law of the outcome and selection, and the selection can be ignored. In survey sampling, the sampling frame can be based on variables available for the whole population, for example if it involves stratification. In this case, those variables are natural candidates for conditioning variables for MAR to hold. In practice, there is noncompliance: the researcher often does not have observations for all sampled units. Though the original sampling law is known, this additional layer of missing data can itself be viewed as a second selection mechanism, conditional on the first one, whose law is unknown to the statistician. Oftentimes it can be suspected that units reveal the value of a variable partly depending on the value of that variable, so that the MAR assumption does not hold. This is a type of endogeneity issue commonly studied in econometrics. For example, wages are only observed for those who work. Firms only carry out investment decisions if the net discounted value is nonnegative. An individual might be less willing to answer a question on their salary because it is not a typical one (either low or high). We expect strong heterogeneity in the mechanism that drives individuals to not reveal the value of a variable.

When the MAR assumption no longer holds, the selection mechanism cannot be ignored. Identification of the law of the outcome, or of the law conditional on missingness, usually relies on the specification of a joint model for the outcome and the selection. The alternative approach is to follow the partial identification route and recognize that the parameters of interest, which are functionals of these laws, lie in sets. The Tobit and generalized Tobit models (also called Heckman selection model, see [11]) are classical parametric selection models to handle endogenous censoring. The generalized Tobit model involves a system of two equations: one for the outcome and one for the selection. Each of these equations involves an error term, and these errors are dependent, hence the endogeneity. Identification in such systems relies on some variables which appear in the selection equation, are not measurable with respect to the sigma-field generated by the variables in the outcome equation, and do not have an effect on the errors. So these variables have an effect on the selection but not on the outcome. They are called instrumental variables or simply instruments.

This paper presents nonparametric models in Section 3. We explain in Section 4 that having a one-dimensional error term appearing in an additively separable form in the selection equation implies the so-called instrument monotonicity. Instrument monotonicity has been introduced in [2]. It has a strong identification power but at the same time leads to unrealistic selection equations, as we detail in Section 4. To overcome this issue, we present in Section 5 selection equations where the error in the selection equation is multidimensional and appears in a non additively separable fashion. The baseline specification is a model where the selection equation involves an index with random coefficients. We show that we can rely on a nonparametric model for these random coefficients. Finally, Section 6 presents a method to obtain a confidence interval around a nonlinear statistic like the Gini index with survey data in the presence of non MAR¹ missing data, when we suspect that some instruments are nonmonotonic. These confidence intervals account for both the uncertainty due to survey sampling and the one due to missing data.

¹ The terminology nonignorable (see [12]) is also used but, strictly speaking, it is defined for parametric models and requires parameter spaces to be rectangles. This is why we do not use this terminology in this paper.

## 2. Preliminaries

Bold letters are used for vectors and matrices and capital letters for random elements. $\mathbb{1}\{\cdot\}$ denotes the indicator function, $\partial_x$ the derivative with respect to the variable $x$, $\langle\cdot,\cdot\rangle$ the inner product in the Euclidean space, $|\cdot|$ the Euclidean norm, and $\sigma$ the spherical measure on the unit sphere $\mathbb{S}$ in the Euclidean space. We write $\mathbb{S}^{d-1}$ when we want to make clear that the Euclidean space is $\mathbb{R}^d$. We write a.e. for almost everywhere.

All random elements are defined on the same probability space with probability $\mathbb{P}$, and $\mathbb{E}$ is the expectation. The support of a function or random vector is denoted by $\mathrm{supp}$. We denote by $\mathrm{supp}(Y|X)$ the support of the conditional law of $Y$ given $X$ when it makes sense. For a random vector $X$, $f_X$ is its density with respect to a measure which will be clear in the text and $d_X$ is its dimension. We use the notation $f_{Y|X}$ for a conditional density and $\mathbb{E}[Y|X=x]$ for the conditional expectation function evaluated at $x$. Equalities between random variables are understood almost surely. Random vectors appearing in models and whose realisations are not in the observed data are called unobservable.

## 3. Models with One Unobservable for Endogenous Censoring

In this paper, the researcher is interested in features of the law of a variable $Y$ given $X$. She has censored observations of $Y$, uncensored observations of a vector $W$ of which $X$ is a subvector, and $R$ is a binary variable equal to 1 when $Y$ is not censored and to 0 else. Inference on the conditional law of $Y$ given $W$ is possible if $Y$ and $R$ are independent given $W$, namely if, for all bounded continuous functions $\phi$,

$$\mathbb{E}[\phi(Y)R\mid W]=\mathbb{E}[\phi(Y)\mid W]\,\mathbb{E}[R\mid W],\tag{1}$$

in which case

$$\mathbb{E}[\phi(Y)\mid W]=\mathbb{E}[\phi(Y)\mid W,R=1]\tag{2}$$

and we conclude by the law of iterated expectations. Condition (1) is called Missing at Random. When it holds without the conditioning on $W$, it is called Missing Completely at Random (MCAR, see [12]).
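As a quick numerical illustration of (1)-(2), here is a minimal sketch (the data-generating process, with a binary $W$ and illustrative response probabilities, is entirely made up for the example): under MAR the mean of the observed outcomes conditional on $W$ matches the mean of all outcomes conditional on $W$, while a response probability depending on $Y$ itself breaks the equality.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Binary conditioning variable W and an outcome Y that depends on W.
W = rng.integers(0, 2, size=n)
Y = 1.0 + 2.0 * W + rng.normal(size=n)

# MAR: the response indicator R depends on W only, not on Y.
R = rng.random(n) < np.where(W == 1, 0.8, 0.4)

# Under (1)-(2): E[Y | W = w, R = 1] = E[Y | W = w], up to sampling noise.
gaps = [abs(Y[(W == w) & R].mean() - Y[W == w].mean()) for w in (0, 1)]

# NMAR: response probability increasing in Y itself; (2) fails.
R_nmar = rng.random(n) < 1 / (1 + np.exp(-(Y - 2)))
gap_nmar = abs(Y[R_nmar].mean() - Y.mean())
```

The first two gaps are statistical noise only; the NMAR gap is a genuine bias that no amount of data removes.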

We consider cases where the researcher does not know of a specific uncensored vector $W$ such that (1) holds. Then $R$ is partly based on $Y$, even conditionally. This situation is called Not Missing at Random (NMAR, see [12]). In the language of econometrics, this is called endogenous censoring or selection.

Important parametric models rely on $Y=X^\top\beta+E_Y$ as a model equation for the variable of interest, where $\beta$ and $\gamma$ below are unknown parameters and $E_R$ is a standard normal random variable. In the Tobit model, $R=\mathbb{1}\{Y>c\}$ for a given threshold $c$. In the Heckman selection model (see [11]),

$$R=\mathbb{1}\{Z^\top\gamma-E_R>0\},\tag{3}$$
$$Z \text{ is a subvector of } W,\tag{4}$$
$$(E_Y,E_R) \text{ and } (X^\top,Z^\top)^\top \text{ are independent},\tag{5}$$
$$(E_Y,E_R)^\top \text{ is a mean zero Gaussian vector with covariance matrix } \begin{pmatrix}1&\rho\\ \rho&1\end{pmatrix}.\tag{6}$$

(3) is called the selection equation. The law of the observables, hence of $Y$ given $X$, is identified and the model parameters can be estimated by maximum likelihood. Some functionals of the conditional law of $Y$ given $X$ can be estimated in some semi-parametric extensions. For example, the conditional mean function can be obtained by estimating a regression model, on the selected sample, with an additional regressor which is a function of $Z^\top\gamma$. This leads to the interpretation that the endogeneity can be understood as a missing regressor problem.
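The missing-regressor interpretation can be sketched numerically. Below is a minimal two-step (Heckman-style) estimation on simulated data under (3)-(6): a probit first stage for $\gamma$, then an outcome regression on the selected sample augmented with the inverse Mills ratio $\varphi(Z^\top\hat\gamma)/\Phi(Z^\top\hat\gamma)$. All parameter values and variable names are illustrative, and the two-step route is one standard alternative to full maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20_000

# Regressors: x enters the outcome, z is the excluded instrument.
x = rng.normal(size=n)
z = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])        # outcome design
Zmat = np.column_stack([np.ones(n), x, z])  # selection design

beta = np.array([1.0, 2.0])                 # outcome coefficients
gamma = np.array([0.3, 0.5, 1.0])           # selection coefficients
rho = 0.6                                   # corr(E_Y, E_R)

# (E_Y, E_R): mean-zero Gaussian, unit variances, correlation rho.
e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
y_star = X @ beta + e[:, 0]
R = (Zmat @ gamma - e[:, 1] > 0)            # selection equation (3)

# Step 1: probit MLE for gamma.
def nll(g):
    p = np.clip(norm.cdf(Zmat @ g), 1e-10, 1 - 1e-10)
    return -np.sum(R * np.log(p) + (~R) * np.log(1 - p))
gamma_hat = minimize(nll, np.zeros(3), method="BFGS").x

# Step 2: OLS of y on (X, inverse Mills ratio) over the selected sample.
idx = Zmat[R] @ gamma_hat
mills = norm.pdf(idx) / norm.cdf(idx)
design = np.column_stack([X[R], mills])
coef, *_ = np.linalg.lstsq(design, y_star[R], rcond=None)
beta_hat = coef[:2]
rho_hat = -coef[2]  # since E[E_Y | R = 1, Z] = -rho * mills(Z'gamma)
```

The coefficient on the Mills ratio recovers $-\rho$, which is exactly the "missing regressor" whose omission biases a naive regression on the selected sample.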

A more general model is

$$R=\mathbb{1}\{\pi(Z)>H\},\tag{7}$$
$$Z \text{ is independent of } (H,Y) \text{ given } X,\tag{8}$$
$$\forall x\in\mathrm{supp}(X),\ \text{the law of } H \text{ given } X=x \text{ is uniform on } (0,1),\tag{9}$$
$$\forall x\in\mathrm{supp}(X),\ \mathrm{supp}(\pi(Z)\mid X=x)=[0,1].\tag{10}$$

Equation (7) is the selection equation or missing mechanism. This model is quite general and clearly $\pi(Z)=\mathbb{P}(R=1\mid X,Z)$. By applying the nondecreasing CDF of $E_R$ on both sides of the inequality, the model

$$R=\mathbb{1}\{g(Z)>E_R\},\quad Z \text{ is independent of } (E_R,Y) \text{ given } X,$$

where $g$ and the law of $E_R$ are unknown, yields the same conditional law of $(Y,R)$ given $(X,Z)$.

###### Remark 1.

If we replace (8) by: $H$ and $Y$ are independent given $(X,Z)$, then assumption MAR holds by taking a vector $W$ whose components are those of $X$ and $Z$.

Condition (8) allows for dependence between $H$ and $Y$, hence $R$ can be partly based on $Y$, even conditionally. It provides an alternative identification strategy. Indeed, we can check that, for all bounded continuous functions $\phi$,

$$\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=h]=\int_0^h\mathbb{E}[\phi(Y)\mid X,H=u]\,du.\tag{11}$$

This is a key element to obtain the law of $Y$ given $X$ because

$$\mathbb{E}[\phi(Y)\mid X]=\int_0^1\partial_h\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=h]\,dh=\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=1]-\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=0]\tag{12}$$
$$=\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=1].\tag{13}$$

But (11) also allows to obtain the law of $Y$ given $X$ and censorship ($R=0$):

$$\mathbb{E}[\phi(Y)\mid X,R=0]=\int_{\pi(Z)}^1\partial_h\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=h]\,dh.\tag{14}$$
###### Remark 2.

Similar computations are given for a binary treatment effect model in [1], for effects that depend on an average rather than on the whole law as above. There, the integrand $\partial_h\mathbb{E}[\phi(Y)R\mid X,\pi(Z)=h]$ is called the local instrumental variable.

The vector $Z$ is called a vector of instrumental variables. By (8), $Z$ has a direct effect on $R$ via $\pi(Z)$, which is nontrivial, but it does not have an effect on $Y$ given $X$.

Condition (10) is strong. First, the support of $\pi(Z)$ should be infinite, so in practice we think of at least one variable in $Z$ as continuous. Second, the variation of $\pi(Z)$ should be large enough to move the selection probability from 0 to 1. For all $\varepsilon>0$, there should exist a fraction of the population (based on the value of their $Z$) who reveal their $Y$ with probability larger than $1-\varepsilon$ and a fraction of the population who do not reveal their $Y$ with probability larger than $1-\varepsilon$. This is a "large support" assumption. Using (13) for identification is called identification at infinity. It does not deliver an efficient method for estimation because it would make use only of the subsample for which $\pi(Z)$ is close to 1. In contrast, (12) can be used to form estimators which use all the data. Conditions (9) and (10) were not required in the parametric Tobit and Heckman selection models. The task of finding $Z$ which satisfies (8) was already difficult, but working with the nonparametric model requires those additional stringent assumptions.
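A small simulation illustrates identification at infinity in model (7)-(10). The data-generating process is invented for the sketch: $H$ uniform, $Y=H+\text{noise}$ (so the true mean is $1/2$), and $\pi(Z)=Z$ uniform on $[0,1]$, so the large-support condition (10) holds. The naive mean of observed outcomes is biased, while the corner $\pi(Z)\approx 1$ recovers the truth, at the cost of using only a small subsample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000

# Endogenous censoring: Y depends on the same H that drives selection.
H = rng.random(n)                      # H uniform on (0, 1), condition (9)
Y = H + rng.normal(scale=0.1, size=n)  # E[Y] = 0.5
Z = rng.random(n)                      # instrument, independent of (H, Y)
pi = Z                                 # pi(Z) = Z has full support [0, 1]
R = pi > H                             # selection equation (7)

# Naive estimate: mean of observed outcomes (here E[Y | R = 1] = 1/3).
naive = Y[R].mean()

# Identification at infinity, (13): E[Y R | pi(Z) = 1], via pi(Z) near 1.
corner = (Y * R)[pi > 0.98].mean()
```

Only about 2% of the sample enters `corner`, which is why (12), using all the data, is preferable for estimation.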

## 4. Monotonicity

In this section, we show that the above nonparametric specification is not as general as one would think. From a modelling perspective, it is equivalent (see [14]) to the so-called instrument monotonicity introduced in [2].

For the sake of exposition, assume that $Z$ is discrete. For $z\in\mathrm{supp}(Z)$ and individuals, indexed by $i\in I(z)$, such that $Z_i=z$, we have $R_i=\mathbb{1}\{\pi(z)>H_i\}$. Suppose now that we could change exogenously (by experimental assignment) $z$ to $z'$ in $\mathrm{supp}(Z)$, leaving unchanged the unobserved characteristics $H_i$ for $i\in I(z)$. The corresponding $R_i$ of those individuals are shifted monotonically. Indeed, we have either (1) $\pi(z)\le\pi(z')$ or (2) $\pi(z)\ge\pi(z')$. In case (1),

$$\forall i\in I(z),\quad \mathbb{1}\{\pi(z)>H_i\}\le\mathbb{1}\{\pi(z')>H_i\},$$

while in case (2),

$$\forall i\in I(z),\quad \mathbb{1}\{\pi(z)>H_i\}\ge\mathbb{1}\{\pi(z')>H_i\}.$$

This instrument monotonicity condition has been formalized in [2].

Consider a missing data problem in a survey where $Z\in\{0,1\}$ is the identity of a pollster, and $R=1$ when the surveyed individual replies and $R=0$ else. The identity of the pollster could be Mr A ($z=0$) or Mrs B ($z=1$). This qualifies as an instrument because, usually, the identity of the pollster can have an effect on the response but not on the value of the surveyed variable. If the missing data model is any of those from Section 3 and pollster B has a higher response rate than pollster A, then, in the hypothetical situation where all individuals surveyed by Mr A had been surveyed by Mrs B, those who responded to Mr A would respond to Mrs B and some who did not respond to Mr A would respond to Mrs B, but no one who responded to Mr A would fail to respond to Mrs B. This last type of individual corresponds to the so-called defiers in the terminology of [2]: those for which $R=1$ when $Z=0$ and $R=0$ when $Z=1$. There, instrument monotonicity means that there are no defiers.

###### Remark 3.

The terminology also calls compliers those who did not respond to Mr A but who would respond to Mrs B, never takers those who would respond to neither, and always takers those who would respond to both.

The absence of defiers can be unrealistic. For example, some surveyed individuals can answer a pollster because they feel confident with him or her. They can share traits which the statistician does not observe. For example, in the conversation they could realize that they share the same interests or went to the same school.

## 5. A Random Coefficients Model for the Selection Equation

[14] showed that monotonicity is equivalent to modelling the selection equation as an additively separable latent index model with a single unobservable. In (7) the index is $\pi(Z)$ and $H$ is the unobservable. A nonadditively separable model replaces the index minus the unobservable by a general function of the instruments and of possibly multiple unobservables. [1] calls a benchmark nonadditively separable model with multiple unobservables a selection model where the selection equation is a random coefficients binary choice model. A random coefficients latent index model takes the form $R=\mathbb{1}\{A+B^\top Z>0\}$, where $(A,B^\top)^\top$ and $Z$ are independent. The multiple unobservables are the coefficients $(A,B^\top)^\top$ and play the role of $H$ above. The model is nonadditively separable due to the products. The random intercept $A$ absorbs the usual mean zero error and deterministic intercept. The random slopes $B$ can be interpreted as the tastes for the characteristics $Z$. The components of $(A,B^\top)^\top$ can be dependent.

To gain intuition, assume that $Z$ is discrete. For $z\in\mathrm{supp}(Z)$ and individuals $i\in I(z)$ such that $Z_i=z$, we have

$$R_i=\mathbb{1}\{A_i+B_i^\top z>0\}.$$

Suppose that the first component of $B$ takes positive and negative values with positive probability, that we change exogenously $z$ to $z'$ in $\mathrm{supp}(Z)$ by only increasing the first component, and that we leave unchanged the unobserved characteristics $(A_i,B_i^\top)$ for $i\in I(z)$. This model allows for populations of compliers (those for which the first component of $B_i$ is positive) and defiers (those for which the first component of $B_i$ is negative).
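A short simulation, with an invented scalar instrument and a sign-mixed random slope (the mixture weights are arbitrary), shows that the random coefficients selection equation generates both compliers and defiers when the instrument is increased:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Random intercept A; random slope B takes both signs: +1 or -1.
A = rng.normal(size=n)
B = np.where(rng.random(n) < 0.6, 1.0, -1.0)

# Exogenously move the scalar instrument from z = 0 to z' = 1.
R0 = A + B * 0.0 > 0
R1 = A + B * 1.0 > 0

compliers = np.mean(~R0 & R1)  # switched into responding (B = +1)
defiers = np.mean(R0 & ~R1)    # switched out of responding (B = -1)
```

Both fractions are bounded away from zero, which is exactly what a one-dimensional additively separable selection equation rules out.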

A parametric model for the selection equation specifies a parametric law for $(A,B^\top)^\top$. A parametric selection model specifies a joint law of $(Y,A,B^\top)^\top$ given $X$. The model parameters can be estimated by maximum likelihood. The components of $(A,B^\top)^\top$ given $X$ could be modelled as dependent. $(A,B^\top)^\top$ is a vector of latent variables and the likelihood involves integrals over their values. As for the usual Logit or Probit models, a scale normalization is usually introduced for identification. Indeed, $\mathbb{1}\{A+B^\top Z>0\}=\mathbb{1}\{\lambda(A+B^\top Z)>0\}$ for all $\lambda>0$. A nonparametric model allows the law of $(A,B^\top)^\top$ given $X$ to belong to a nonparametric class. Parametric and nonparametric models are particularly interesting when they allow for discrete mixtures, so that there can be different groups of individuals such as the compliers, defiers, always takers, and never takers. But estimating a parametric model with latent variables which are drawn from multivariate mixtures can be a difficult exercise. In contrast, nonparametric estimators can be easy to compute.

### 5.1. Scaling to Handle Genuine Non Instrument Monotonicity

In this section, we rely on the approach used in the first version of [5] in the context of treatment effects models. This is based on the normalisation in [9, 10]. The vector of instrumental variables $Z$ is of dimension $d-1$. For scale normalization, we define

$$\Gamma=\frac{(A,B^\top)^\top}{\left|(A,B^\top)^\top\right|},\qquad S=\frac{(1,Z^\top)^\top}{\left|(1,Z^\top)^\top\right|}.$$

We introduce some additional notations. When $f$ is an integrable function on $\mathbb{S}^{d-1}$, we denote by $\check{f}$ the function $f(-\cdot)$, and the hemispherical transform (see [13]) of $f$ is defined as

$$\forall s\in\mathbb{S}^{d-1},\quad H[f](s)=\int_{\theta\in\mathbb{S}^{d-1}:\ \langle s,\theta\rangle\ge 0}f(\theta)\,d\sigma(\theta).$$

This is a circular convolution in dimension $d=2$:

$$\forall \varphi\in[0,2\pi),\quad H[f](\varphi)=\int_{\theta\in[0,2\pi):\ \cos(\varphi-\theta)\ge 0}f(\theta)\,d\theta.$$
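The circle case can be checked numerically with a simple Riemann-sum discretization (a sketch; the grid size and test functions are arbitrary choices). It illustrates two facts about $H$: even zero-mean functions are annihilated, while odd functions are not, e.g. $H[\sin](\varphi)=2\sin\varphi$.

```python
import numpy as np

N = 2000
theta = 2 * np.pi * (np.arange(N) + 0.5) / N  # midpoint grid on [0, 2*pi)
dtheta = 2 * np.pi / N

def hemispherical(f_vals, phi):
    """H[f](phi) = integral of f over {theta : cos(phi - theta) >= 0}."""
    mask = np.cos(phi - theta) >= 0
    return np.sum(f_vals[mask]) * dtheta

phis = np.linspace(0, 2 * np.pi, 25, endpoint=False)

# Uniform density 1/(2*pi): every half-circle carries mass 1/2.
H_unif = np.array([hemispherical(np.full(N, 1 / (2 * np.pi)), p) for p in phis])

# Even, zero-mean function cos(2*theta): in the null space, H[f] ~ 0.
H_even = np.array([hemispherical(np.cos(2 * theta), p) for p in phis])

# Odd function sin(theta): H[sin](phi) = 2*sin(phi).
H_odd = np.array([hemispherical(np.sin(theta), p) for p in phis])
err_odd = np.max(np.abs(H_odd - 2 * np.sin(phis)))
```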

The null space of $H$ consists of the integrable functions which are even (by a density argument) and integrate to 0 on $\mathbb{S}^{d-1}$. $H$ is injective when acting on the cone of functions $f$ which are nonnegative almost everywhere and such that $f\check{f}=0$ a.e. (see [9, 10]). This means that $f$ cannot be nonzero at two antipodal points of $\mathbb{S}^{d-1}$. We denote by $H^{-1}$ the unbounded inverse operator. We now present a formula for the inverse. For an integrable function $f$, we denote by $f^-$ its odd part $(f-\check{f})/2$. If $f$ is continuous and $f\check{f}=0$, then

$$f=2f^-\,\mathbb{1}\{f^->0\}\tag{15}$$

and, if $g=H[f]$, then

$$f^-(\theta)=\sum_{p\in\mathbb{N}_0}\frac{1}{\lambda_{2p+1,d}}\int_{\mathbb{S}^{d-1}}q_{2p+1,d}(\theta^\top s)\,g(s)\,d\sigma(s),\tag{16}$$

where

$$\lambda_{1,d}=\frac{|\mathbb{S}^{d-2}|}{d-1},\qquad \forall p\in\mathbb{N},\ \lambda_{2p+1,d}=(-1)^p\,|\mathbb{S}^{d-2}|\,\frac{1\cdot3\cdots(2p-1)}{(d-1)(d+1)\cdots(d+2p-1)},$$
$$L(k,d)=\frac{(2k+d-2)\,(k+d-2)!}{k!\,(d-2)!\,(k+d-2)},\qquad q_{k,d}(t):=\frac{L(k,d)\,C_k^{(d-2)/2}(t)}{|\mathbb{S}^{d-1}|\,C_k^{(d-2)/2}(1)},$$

and, for all $\mu>-1/2$ and $k\in\mathbb{N}_0$, the $C_k^{\mu}$ are orthogonal polynomials on $[-1,1]$ for the weight $(1-t^2)^{\mu-1/2}$. The Gegenbauer polynomials can be obtained by the recursion $C_0^{\mu}(t)=1$, $C_1^{\mu}(t)=2\mu t$ for $\mu\ne0$, while $C_1^{0}(t)=t$, and

$$(k+2)\,C_{k+2}^{\mu}(t)=2(\mu+k+1)\,t\,C_{k+1}^{\mu}(t)-(2\mu+k)\,C_k^{\mu}(t).$$
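The three-term recursion above can be checked directly against a reference implementation, here `scipy.special.eval_gegenbauer` (the value $\mu=3/2$, corresponding to $d=5$, and the grid are arbitrary choices for the check):

```python
import numpy as np
from scipy.special import eval_gegenbauer

mu = 1.5  # (d - 2)/2 for d = 5
t = np.linspace(-1, 1, 201)

# Recursion: C_0 = 1, C_1 = 2*mu*t,
# (k+2) C_{k+2} = 2(mu+k+1) t C_{k+1} - (2*mu+k) C_k.
C = [np.ones_like(t), 2 * mu * t]
for k in range(10):
    C.append((2 * (mu + k + 1) * t * C[-1] - (2 * mu + k) * C[-2]) / (k + 2))

max_err = max(np.max(np.abs(C[k] - eval_gegenbauer(k, mu, t)))
              for k in range(12))
```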
###### Remark 4.

Other inversion formulas, when $H$ is restricted to odd functions or measures rather than the above cone, are given in [13].

We assume

$$\mathbb{P}\big((A,B^\top)^\top=0\mid X\big)=0,\tag{17}$$
$$R=\mathbb{1}\{\Gamma^\top S>0\},\tag{18}$$
$$S \text{ is independent of } (\Gamma^\top,Y) \text{ given } X,\tag{19}$$
$$\forall x\in\mathrm{supp}(X),\quad \mathrm{supp}(S\mid X=x)=\{s\in\mathbb{S}^{d-1}:\ s_1\ge0\},\tag{20}$$
$$\text{the conditional law of } \Gamma \text{ given } X \text{ is absolutely continuous with respect to } \sigma \text{ and its density belongs to } L^1(\mathbb{S}^{d-1})\cap L^\infty(\mathbb{S}^{d-1}),\tag{21}$$
$$\text{for a.e. } \gamma\in\mathbb{S}^{d-1} \text{ and } x\in\mathrm{supp}(X),\quad f_{\Gamma|X=x}(\gamma)\,\check{f}_{\Gamma|X=x}(\gamma)=0.\tag{22}$$

This specification allows for non instrument monotonicity for all instruments. Condition (20) is very demanding because it means that $\mathrm{supp}(Z\mid X=x)$ is the whole space $\mathbb{R}^{d-1}$ for all $x$. For further reference, we use the notation $H_+=\{s\in\mathbb{S}^{d-1}:\ s_1\ge0\}$. This can be relaxed as in [6] by working in specific nonparametric classes yielding quasi-analyticity.

###### Remark 5.

Proceeding as in [6, 7], we could allow for an index involving functions of the instrumental variables, where the unobservable is multidimensional of arbitrary dimension but has a sparse random series expansion on some classes of functions. Also, the conditional law of the random coefficients, given $X=x$, for all $x\in\mathrm{supp}(X)$, can have a support which is a subspace of the whole space. This means that a nonparametric random coefficients linear index already captures a large class of nonadditively separable models with multiple unobservables.

We can show, using (19), (20), and (21), that for a.e. $s\in H_+$ and $x\in\mathrm{supp}(X)$,

$$\mathbb{E}[\phi(Y)R\mid X=x,S=s]=H\big[\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x}\big](s).\tag{23}$$

By (23), backing out $\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x}$ is an inverse problem. However, there is a particular difficulty: the left-hand side is only defined (and estimable) on $H_+$. We obtain the following theorem, which states that $\mathbb{E}[\phi(Y)\mid X=x]$ can be identified at infinity.

###### Theorem 1.

Maintain (17)-(22). For all $\tilde{s}$ on the boundary of $H_+$,

$$\mathbb{E}[\phi(Y)\mid X=x]=\lim_{s\to\tilde{s},\ s\in H_+}\mathbb{E}[\phi(Y)R\mid X=x,S=s]+\lim_{s\to-\tilde{s},\ s\in H_+}\mathbb{E}[\phi(Y)R\mid X=x,S=s].$$
###### Proof.

By (23), we have

$$\mathbb{E}[\phi(Y)R\mid X=x,S=s]=\frac12\,\mathbb{E}[\phi(Y)\mid X=x]+H\big[\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x}\big]^-(s).\tag{24}$$

Now $H\big[(\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x})^-\big]$ is a continuous and odd function and, by (24),

$$H\big[(\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x})^-\big](s)=H\big[\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x}\big]^-(s)=\mathbb{E}[\phi(Y)R\mid X=x,S=s]-\frac12\,\mathbb{E}[\phi(Y)\mid X=x],$$

hence the conclusion. ∎

Denote by $g_\phi$ the continuous and odd function defined, for all $s$ in the interior of $H_+$, by

$$g_\phi(s)=\mathbb{E}[\phi(Y)R\mid X=x,S=s]-\frac12\,\mathbb{E}[\phi(Y)\mid X=x],$$

by the corresponding limits for all $s$ on the boundary of $H_+$, and by $g_\phi(s)=-g_\phi(-s)$ for all $s$ such that $-s$ is in the interior of $H_+$. This function is nonparametrically identified by Theorem 1.

By (24), for all $s\in\mathbb{S}^{d-1}$,

$$g_\phi(s)=H\big[(\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\cdot]\,f_{\Gamma|X=x})^-\big](s).\tag{25}$$

This is now a bona fide ill-posed inverse problem and the inversion can be obtained by (15)-(16).

###### Theorem 2.

Maintain assumptions (17)-(22). For a.e. $\gamma\in\mathbb{S}^{d-1}$, $\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\gamma]\,f_{\Gamma|X=x}(\gamma)$ is given by applying (15)-(16) with, for all $\theta\in\mathbb{S}^{d-1}$,

$$\int_{\mathbb{S}^{d-1}}q_{2p+1,d}(\theta^\top s)\,g_\phi(s)\,d\sigma(s)=\mathbb{E}\left[\frac{q_{2p+1,d}(\theta^\top S)}{f_{S|X=x}(S)}\,\phi(Y)R\ \Big|\ X=x\right].$$
###### Proof.

This is because

$$\int_{\mathbb{S}^{d-1}}q_{2p+1,d}(\theta^\top s)\,\mathbb{E}[\phi(Y)R\mid X=x,S=s]\,d\sigma(s)=\mathbb{E}\left[\frac{q_{2p+1,d}(\theta^\top S)}{f_{S|X=x}(S)}\,\phi(Y)R\ \Big|\ X=x\right]$$

and, for all $\theta\in\mathbb{S}^{d-1}$,

$$\int_{\mathbb{S}^{d-1}}q_{2p+1,d}(\theta^\top s)\,d\sigma(s)=0.$$

As a result, the parameter $\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\gamma]\,f_{\Gamma|X=x}(\gamma)$ in Theorem 2 is nonparametrically identified, and the argument does not involve identification at infinity. This gives, by integration over $\gamma$, another expression for $\mathbb{E}[\phi(Y)\mid X=x]$ than that of Theorem 1, which does not rely on identification at infinity. By taking $\phi$ to be the function identically equal to 1, we obtain $f_{\Gamma|X=x}$ a.e.

From this expression, one can obtain an estimator by plug-in and smoothing. One possible smoothing technique is to replace the sum over $p$ in (16) by a sum up to a truncation parameter. In the approach in [9], there is an additional damping of the high frequencies by an infinitely differentiable filter with compact support. The needlet estimator in [10] also builds on this idea. In the case of the estimation of $f_{\Gamma|X=x}$, [10] provides the minimax lower bounds for more general losses and an adaptive estimator based on thresholding the coefficients of a needlet expansion with a data-driven level of hard thresholding.

The root nonparametrically identified in Theorem 2 allows to obtain the law of $Y$ given $X$ and censorship ($R=0$):

$$\mathbb{E}[\phi(Y)\mid X=x,R=0]=\int_{\gamma,s\in\mathbb{S}^{d-1}:\ \gamma^\top s\le0}\mathbb{E}[\phi(Y)\mid X=x,\Gamma=\gamma]\,f_{\Gamma|X=x}(\gamma)\,f_{S|X=x}(s)\,d\sigma(\gamma)\,d\sigma(s),\tag{26}$$

where $f_{S|X=x}$ is nonparametrically identified. Estimation can be carried out by the plug-in principle.

### 5.2. Alternative Scaling Under a Weak Version of Monotonicity

In this section, we denote by $GL(d-1)$ the set of invertible $(d-1)\times(d-1)$ real matrices and assume

$$\text{For a.e. } x\in\mathrm{supp}(X),\ \exists P_x\in GL(d-1):\ (P_x^\top B)_1>0\ \text{a.s.}\tag{27}$$

We denote by $\tilde{B}=P_x^\top B$, by $V=(P_x^{-1}Z)_1$, by $\overline{Z}$ the vector of the remaining components of $P_x^{-1}Z$, by $\Theta=-A/\tilde{B}_1$, and by $\overline{\Gamma}=-(\tilde{B}_2,\dots,\tilde{B}_{d-1})^\top/\tilde{B}_1$. This yields

$$A+B^\top Z>0\ \Longleftrightarrow\ V-\Theta-\overline{\Gamma}^\top\overline{Z}>0.$$

Assume also (19) and

$$\text{For a.e. } (x^\top,\overline{z}^\top)\in\mathrm{supp}\big((X^\top,\overline{Z}^\top)\big),\ f_{\Theta+\overline{\Gamma}^\top\overline{z}\,|X=x} \text{ exists},\tag{28}$$
$$\mathrm{supp}\big(V\mid X=x,\overline{Z}=\overline{z}\big) \text{ has a nonempty interior},\tag{29}$$
$$\forall t\in\mathbb{R},\ \text{for a.e. } (x^\top,\overline{z}^\top)\in\mathrm{supp}\big((X^\top,\overline{Z}^\top)\big),\ u\mapsto g(x,\overline{z},u)=\mathbb{E}\big[e^{itY}\,\big|\,X=x,\Theta+\overline{\Gamma}^\top\overline{z}=u\big]\,f_{\Theta+\overline{\Gamma}^\top\overline{z}\,|X=x}(u),\tag{30}$$
$$\tag{31}$$
$$\text{For a.e. } x\in\mathrm{supp}(X),\ \text{the interior of } \mathrm{supp}\big(\overline{Z}\mid X=x\big) \text{ is nonempty}.\tag{32}$$

Condition (31) is slightly stronger than necessary. Conditions implying that certain functions are quasi-analytic, hence allowing for some heavy tails, are sufficient (see [6]).

By (19),

$$v\mapsto\mathbb{P}\big(R=1\mid X=x,Z=P_x(v,\overline{z}^\top)^\top\big)=\mathbb{P}\big(\Theta+\overline{\Gamma}^\top\overline{z}<v\mid X=x\big)$$

is the cumulative distribution function of a linear functional of a random vector and, for all $v$ in the interior of $\mathrm{supp}(V\mid X=x,\overline{Z}=\overline{z})$,

$$\partial_v\,\mathbb{P}\big(R=1\mid X=x,Z=P_x(v,\overline{z}^\top)^\top\big)=f_{\Theta+\overline{\Gamma}^\top\overline{z}\,|X=x}(v).$$

So such invertible matrices $P_x$ are identified.

The vector of random coefficients $(1,-\Theta,-\overline{\Gamma}^\top)^\top$ in the linear index structure clearly satisfies (22). For this reason, we consider the specification of the previous section more general. There is instrument monotonicity in $V$, though not for $\overline{Z}$. This is a weak type of monotonicity because it is possible that there is instrument monotonicity for none of the instrumental variables in the original scale. This is the approach presented in the other versions of [5]. It is shown in [5] that a selection equation in which the index and the joint law of its unobservables are unknown functions can also be transformed by reparametrization into

$$R=\mathbb{1}\{V-\Theta-\overline{\Gamma}^\top\overline{Z}>0\}\tag{33}$$

and that the unknown functions are identified by similar arguments as for the additive model for a regression function.

###### Theorem 3.

Maintain (19) and (27)-(32). For a.e. $x\in\mathrm{supp}(X)$, the law of $(Y,\Theta,\overline{\Gamma}^\top)^\top$ conditional on $X=x$ is identified.

###### Proof.

Let $t\in\mathbb{R}$. We have, by (19), for a.e. $(x^\top,\overline{z}^\top)\in\mathrm{supp}\big((X^\top,\overline{Z}^\top)\big)$ and $v$ in the interior of $\mathrm{supp}(V\mid X=x,\overline{Z}=\overline{z})$,

$$\mathbb{E}\big[e^{itY}R\,\big|\,X=x,V=v,\overline{Z}=\overline{z}\big]=\int_{-\infty}^{v}\mathbb{E}\big[e^{itY}\,\big|\,X=x,\Theta+\overline{\Gamma}^\top\overline{z}=u\big]\,f_{\Theta+\overline{\Gamma}^\top\overline{z}\,|X=x}(u)\,du.$$

Hence, by (30), $g(x,\overline{z},\cdot)$ is nonparametrically identified. Moreover, for all $s\in\mathbb{R}$ and a.e. $(x^\top,\overline{z}^\top)\in\mathrm{supp}\big((X^\top,\overline{Z}^\top)\big)$,

$$\int_{\mathbb{R}}e^{isv}\,\partial_v g(x,\overline{z},v)\,dv=\mathbb{E}\Big[e^{itY+is\Theta+i\overline{\Gamma}^\top(s\overline{z})}\,\Big|\,X=x\Big],\tag{34}$$

the left-hand side is nonparametrically identified and the right-hand side is the Fourier transform of the law of $(Y,\Theta,\overline{\Gamma}^\top)^\top$ conditional on $X=x$ at $(t,s,s\overline{z}^\top)^\top$. We conclude by (31) and (32). ∎

It is possible to turn the identification argument using (34) into an estimation procedure as in [7].

###### Remark 6.

Proceeding as in [6, 7] allows to work with an index in which the unobservable is multidimensional of arbitrary dimension and has a sparse random series expansion on some classes of functions, and in which the conditional laws of the random coefficients, given $X=x$, for all $x\in\mathrm{supp}(X)$, can have a support which is a subspace of the whole space.

###### Remark 7.

The techniques in [8], which are used in [7], also allow to estimate the density in Theorem 3 by a simple series estimator, under proper integrability, for almost every point, even if we observe $V$ only when it falls in an interval which is a proper subset of $\mathbb{R}$ (a type of censoring), if the underlying function is analytic.

###### Remark 8.

In a binary treatment effect model, the outcome can be written as $Y=RY_1+(1-R)Y_0$. $Y_0$ and $Y_1$ are the potential outcomes without and with treatment. They are unobservable. A selection model can be viewed as a degenerate case where $Y_0=0$ a.s. Quantities similar to the root in Theorem 2 have been introduced in [5]. They are $\mathbb{E}[\phi(Y_j)\mid X=x,\Gamma=\gamma]\,f_{\Gamma|X=x}(\gamma)$ for the marginals of the potential outcomes, for $j=0,1$. An extension of the Marginal Treatment Effect in [1] to multiple unobservables and for laws is the Conditional on Unobservables Distribution of Treatment Effects.

## 6. Application to Missing Data in Surveys

When making inference with survey data, the researcher has available data on a vector of characteristics for units belonging to a random subset $S$ of a larger finite population $U$. The law used to draw $S$ can depend on variables available for the whole population, for example from a census. We assume that the researcher is interested in a parameter $g$ which could be computed if we had the values of a variable $y_i$ for all units of index $i\in U$. This can be an inequality index, for example the Gini index, with $y_i$ the wealth of household $i$. In the absence of missing data, the statistician can produce a confidence interval for $g$, making use of the data for the units $i\in S$ and her available knowledge of the sampling law. We assume that the cardinality of $S$ is fixed and equal to $n$. When the parameter is a total, it is usual to rely on an unbiased estimator, an estimator of its variance, and a Gaussian approximation. For more complex parameters, linearization is often used to approximate moments. The estimators usually rely on the survey weights $\pi_i$. For example, an estimator of the Gini index is

$$\hat{g}\big((y_i)_{i\in S}\big)=\frac{\sum_{i=1}^n\big(2\hat{r}(i)-1\big)\pi_i y_i}{\sum_{i=1}^n\pi_i\,\sum_{i=1}^n\pi_i y_i}-1,\tag{35}$$

where $\hat{r}(i)$ denotes the rank of $y_i$. The estimators of the variance of such estimators are more complex to obtain, and we assume that there is a numerical procedure to obtain them. Inference is based on the approximation

$$\hat{g}\big((y_i)_{i\in S}\big)\approx g+\sqrt{\widehat{\mathrm{var}}(\hat{g})\big((y_i)_{i\in S}\big)}\,\epsilon,\tag{36}$$

where $\epsilon$ is a standard normal random variable and $\widehat{\mathrm{var}}(\hat{g})$ is an estimator of the variance of $\hat{g}$.
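For concreteness, here is a sketch of a weighted Gini estimator in the spirit of (35). It is not necessarily the paper's exact convention, since the definition of the rank $\hat{r}(i)$ matters; below, a midpoint-type weighted rank is used, which reduces to the classical formula $\sum_i(2i-n-1)y_{(i)}/(n\sum_i y_{(i)})$ under equal weights.

```python
import numpy as np

def weighted_gini(y, w=None):
    """Weighted Gini index with midpoint-type weighted ranks.

    One common convention (an illustrative choice, not necessarily
    identical to (35)): G = sum_i w_i (2 F_i - 1) y_i / sum_i w_i y_i,
    with F_i = (cumw_i - w_i / 2) / W the weighted mid-rank CDF.
    """
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    F = (np.cumsum(w) - w / 2) / w.sum()
    return np.sum(w * (2 * F - 1) * y) / np.sum(w * y)

g_equal = weighted_gini([1, 2, 3, 4])     # classical Gini: 0.25
g_repl = weighted_gini([1, 2], w=[2, 1])  # = Gini of [1, 1, 2]
```

A useful sanity check of this convention is replication invariance: a unit with weight 2 behaves exactly like two identical units with weight 1.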

In practice, this is not possible when some of the $y_i$'s are missing. There is a distinction between total nonresponse, where the researcher discards the data for some units or it is not available, and partial nonresponse. Let us ignore total nonresponse, which is usually dealt with using reweighting and calibration, and focus on partial nonresponse. We consider a case where $y_i$ can be missing for some units $i\in S$, while all other variables are available for all units $i\in S$. We rely on a classical formalism where the vector of surveyed variables and of those used to draw $S$, for each unit $i$, are random draws from a superpopulation. In this formalism, the parameter depends on the values $Y_i$ for all indices $i$ of households in the population, and these are random, so we shall now use capital letters for them. Let $R_i$ be random variables, where $R_i=1$ if unit $i$ reveals the value of $Y_i$ and $R_i=0$ if not, and let $X_i$ and $Z_i$ be random vectors which will play different roles.

It is classical to rely on imputations to handle the missing data. This means that we replace missing data by artificial values, obtained from a model forming predictions or simulating from a probability law, and inject them in a formula like (35). In [3] we discuss the use of the Heckman selection model when we suspect that the data is not missing at random. This relies on a parametric model for the partially missing outcome, which is prone to criticism. Also, as this paper has shown, such a model relies on instrument monotonicity, an assumption which can be too strong to be realistic.

It is difficult to analyze theoretically the effect of such imputations. For example, when the statistic is nonlinear in the $y_i$'s (e.g., (35)), using predictions can lead to distorted statistics. It is also tricky to make proper inference when one relies on imputations. One way to proceed is to rely on a hierarchical model as in [4]. There, the imputation model is parametric and we adopted the Bayesian paradigm for two reasons. The first is to account for parameter uncertainty and the second is to replace maximum likelihood with high dimensional integrals by a Markov Chain Monte Carlo algorithm (a Gibbs sampler). The hierarchical approach also allows additional layers, for example to model model uncertainty. The Markov chain produces sequences of values for each missing $Y_i$, for $i\in S$, drawn from the posterior distribution given the observed data, the choice of which is discussed afterwards. Subsequently, we get a path of

$$\tilde{G}=\hat{G}\big((Y_i)_{i\in S}\big)+\sqrt{\hat{V}(\hat{G})\big((Y_i)_{i\in S}\big)}\,\epsilon,\tag{37}$$

where $\epsilon$ is a standard normal random variable independent from the rest given the data. (37) is derived from (36). The conditioning variables are those making the missing mechanism corresponding to the $R_i$ relative to MAR.² The last values of the sample path for $\tilde{G}$ allow to form credible sets by adjusting the set so that the frequency with which $\tilde{G}$ falls in it matches a given confidence level; the discarded first values of the chain are the so-called burn-in. These confidence sets account for the error due to survey sampling, parameter uncertainty, and nonresponse. They can be chosen from the quantiles of the distribution, to minimize the volume of the set, etc.

² They can be those used by the survey statistician to draw $S$, if any (and usually made available), to handle a total nonresponse which is MAR via imputations.

We now consider our nonparametric model of endogenous selection, which allows for nonmonotonicity of the instrumental variables, to handle a missing mechanism corresponding to the $R_i$ which is NMAR. For simplicity, we assume away parameter uncertainty and total nonresponse. The variables $X_i$ of Section 5 can be variables that are good predictors of $Y_i$. They are not needed to obtain valid inference but can be useful to make confidence intervals smaller. However, the selection corresponding to the binary variables $R_i$, relative to the outcomes $Y_i$ given $X_i$, follows a NMAR mechanism. The (multiple) imputation approach becomes: for $t=1,\dots,T$,

1. Draw an i.i.d. sample of $Y_i^t$, for the units $i\in S$ such that $R_i=0$, from the law of $Y$ given $X=X_i$, $Z=Z_i$, and $R=0$, draw an independent standard normal $\epsilon_t$, and set $Y_i^t=Y_i$ for the units $i\in S$ such that $R_i=1$ (the uncensored observations),

2. Compute

$$\tilde{G}_t=\hat{G}\big((Y_i^t)_{i\in S}\big)+\sqrt{\hat{V}(\hat{G})\big((Y_i^t)_{i\in S}\big)}\,\epsilon_t.\tag{38}$$

The confidence interval is formed from the sample $(\tilde{G}_t)_{t=1}^T$ for a given confidence level.
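The two steps above can be sketched as follows, with a toy statistic (the mean) standing in for the Gini index and, purely for illustration, an imputation law for $Y$ given $X$, $R=0$ that is assumed known; in practice it would come from the nonparametric identification results of Section 5. The data-generating process and all tuning values are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 5_000, 200

# Toy superpopulation: Y = X + noise; the imputation law of Y given
# (X, R = 0) is taken to be N(X, 1) here, an assumption for the sketch.
X = rng.normal(size=n)
Y = X + rng.normal(size=n)
R = rng.random(n) < 0.7  # about 30% partial nonresponse

def g_hat(y):   # stand-in for the survey statistic G-hat
    return y.mean()

def v_hat(y):   # stand-in variance estimator V-hat(G-hat)
    return y.var(ddof=1) / len(y)

G_tilde = np.empty(T)
for t in range(T):
    y_t = Y.copy()
    y_t[~R] = X[~R] + rng.normal(size=(~R).sum())          # step 1
    G_tilde[t] = g_hat(y_t) + np.sqrt(v_hat(y_t)) * rng.normal()  # step 2, (38)

ci_low, ci_high = np.quantile(G_tilde, [0.025, 0.975])
point = np.median(G_tilde)
```

The spread of the $T$ values of $\tilde{G}_t$ combines the sampling noise (through the variance term) with the imputation noise, which is the point of forming the interval from the whole sample path rather than from a single completed data set.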