Humans are continuously exposed to a complex mixture of environmental chemicals. Starting at conception, these exposures can perturb the developmental process and alter children’s health outcomes. Recent research has focused on identifying critical windows of susceptibility, which are times during the developmental process when an exposure can alter a future phenotype (Wright, 2017). Identifying critical windows requires exposure data that are observed repeatedly during the developmental period of interest, with many studies focusing on gestation. Numerous studies have estimated critical windows and the exposure-time-response relationship for exposure to a single environmental chemical during pregnancy (Lee and others, 2018; Bose and others, 2017; Chiu and others, 2017). However, there remains a gap with respect to statistical methods that can identify critical windows for exposure to a mixture of multiple environmental pollutants.
A popular approach to estimate the association between maternal exposure during pregnancy and a birth outcome is the distributed lag model (DLM) (Schwartz, 2000; Zanobetti and others, 2000; Darrow and others, 2010). In a DLM, an outcome is regressed on repeated measures of exposures over a preceding time period. Compared to using exposures averaged over a pre-specified time window, such as trimester average exposures, a DLM has been shown to reduce bias and improve estimation of critical windows (Wilson and others, 2017b). The recent statistical literature has extended DLM methods in a number of directions, such as estimation of nonlinear exposure-time-response functions (Gasparrini, 2014; Gasparrini and others, 2017; Mork and Wilson, 2021b), spatially-varying distributed lag functions (Warren and others, 2012, 2020b), subgroup specific distributed lag functions (Wilson and others, 2017a), and incorporation of variable selection procedures that select time periods in a critical window (Warren and others, 2020a).
In this manuscript we focus on methods for the analysis of the health effects of a mixture of multiple environmental chemicals that are measured longitudinally. We have three primary objectives. The first is to select the mixture components and interactions that are associated with the outcome. When selecting mixture components and interactions, we are particularly focused on maximizing the power to detect components associated with the outcome while minimizing the false discovery rate. The second is to estimate the exposure-time-response relationship which includes both main effects and pairwise interactions of the mixture. Interactions between repeated measures of exposure may include interactions between two exposures at the same time point or time-lagged interactions. The third objective is to identify critical windows for each of the exposures and their interactions.
There is a large body of research on statistical methods for mixtures observed at a single time point; for reviews see Taylor and others (2016), Davalos and others (2017), and Gibson and others (2019). Fewer approaches have examined repeated measures of exposure to a mixture. Bivariate distributed lag surfaces were first introduced by Muggeo (2007), and Chen and others (2019) explored different approaches to modeling them. However, these approaches only consider an interaction between two components. For mixtures of more than two exposures, Liu and others (2017) proposed a lagged Bayesian kernel machine regression model for a mixture observed at a small number of time points, such as trimesters. Wilson and others (2020) proposed Bayesian kernel machine regression distributed lag models for repeated measures of a mixture observed at high temporal resolution; however, this approach is computationally demanding and can currently only be fit to small or moderately sized datasets. Bello and others (2017) proposed an additive DLM using the weighted quantile sum (WQS) framework (Carrico and others, 2015) that only incorporates main effects. Mork and Wilson (2021a) proposed a flexible regression tree approach for mixtures of time-varying exposures. Warren and others (2021) proposed an extension of critical window variable selection (Warren and others, 2020a) for mixtures observed at repeated measures. Because that approach focuses on selecting critical windows, it allows for main effects and interactions among exposures at a single time point but does not allow for time-lagged interactions or exposure selection.
In this paper, we adopt spike-and-slab prior distributions (Mitchell and Beauchamp, 1988) within the distributed lag framework to identify the exposures in the mixture, and the interactions among exposures, that are associated with the outcome. This extends the literature on distributed lag modeling for environmental mixtures in a number of ways. Our approach allows for pairwise interactions among a large number of environmental exposures, while also allowing for time-lagged interactions instead of assuming they occur at a single point in time. Further, we use a novel approach to hyperparameter selection that increases the power to detect important mixture effects without greatly increasing the number of false discoveries. This is crucially important because the effects of environmental mixtures are frequently weak to moderate, and existing approaches may be underpowered to detect these signals.
2 Modeling framework
Throughout, we assume the observed data are independent and identically distributed replicates of the outcome, exposures, and covariates for subjects i = 1, ..., n. The scalar y_i represents the outcome; X_im is a T-dimensional vector representing individual i's exposure to pollutant m across T time points, for m = 1, ..., M; and Z_i is a vector of covariates that could confound the association between the exposures and outcome. We restrict attention to continuous outcomes, as extensions to binary outcomes are straightforward. To simplify notation, we drop the subscript i for the rest of the manuscript. The main goal of our methodology is to estimate the relationship between the exposures and the outcome conditional on the covariates. We adopt the following distributed lag model framework for the mean of the outcome:
This leads to a highly parameterized model, as there are T parameters for each of the M main effect distributed lag functions and T² parameters for each of the M(M-1)/2 pairwise interaction surfaces. The dimension of the parameter space can be reduced in a number of ways. The first is to impose smoothness conditions on the distributed lag curves across time within a particular exposure. To achieve this, we assume a functional form for each of the M main effect distributed lag curves instead of allowing T distinct parameters for each. We also assume the interaction surface parameters vary smoothly with respect to the two time indices:
where the main effect curves and interaction surfaces are expressed in terms of one- and two-dimensional basis functions of time, respectively, which must be chosen a priori. This yields the following representation of the model:
Notice that for each exposure we have reduced the number of parameters of the main effect function from T to the number of retained basis functions. For the interaction between two exposures, we have reduced the number of parameters from T² to the product of the numbers of basis functions retained for the two exposures. In the following sections, we discuss different approaches to choosing the basis functions and the number of basis functions to include, as well as additional dimension reduction approaches for the interactions.
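To make this dimension reduction concrete, the following sketch fits a distributed lag effect through a small basis. It is illustrative only: it uses a hypothetical cosine basis rather than the FPCA basis described later, ordinary least squares rather than the full Bayesian model, and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 2000, 37
t = np.linspace(0, 1, T)
# Hypothetical smooth basis: a constant plus three cosines (assumption; the
# paper instead derives basis functions from an FPCA of the exposures)
B = np.column_stack([np.ones(T), np.cos(np.pi * t),
                     np.cos(2 * np.pi * t), np.cos(3 * np.pi * t)])
theta_true = B @ np.array([0.1, -0.4, 0.9, -0.6])  # smooth T-point lag curve
X = rng.normal(size=(n, T))                         # weekly exposures
y = X @ theta_true + rng.normal(scale=0.5, size=n)
# Fit the reduced model: 4 basis coefficients instead of 37 lag coefficients
W = X @ B                                           # n x 4 reduced design
beta_hat = np.linalg.lstsq(W, y, rcond=None)[0]
theta_hat = B @ beta_hat                            # map back to the lag curve
```

The estimated curve `theta_hat` recovers the full T-point distributed lag function from only four fitted coefficients.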
In fitting the model we also perform exposure selection, which is of scientific interest in environmental epidemiology. We achieve this goal using the model specification:
The prior distribution for the regression coefficients is a mixture between a point mass at zero and a normal slab, with binary indicator variables denoting whether a particular exposure or exposure pair has an important main effect or interaction surface, respectively. These indicators can be interpreted as flagging whether a particular distributed lag function is important, since setting an indicator to zero removes the entire main effect for that exposure from the model. Analogous interpretations hold for the interaction indicators. We assign a diffuse normal prior to the covariate coefficients and an inverse gamma prior distribution to the residual variance. The remaining parameters, namely the prior inclusion probabilities and the slab variance parameters for the main effects and interactions, are particularly influential for the performance of the approach, and we discuss prior elicitation strategies for them in greater detail in the following sections.
2.1 Selection of basis functions
The model parameterizes the distributed lag parameters as weighted sums of basis functions for the main effect and interaction surfaces, respectively. Our goal is to keep the dimension of the parameter space relatively small, while retaining as much flexibility in the resulting effect estimates as possible. Further, we prefer fitting algorithms that are data-driven and do not force the user to select tuning parameters, such as the number of basis functions or the location of knots, both of which can impact the results. One approach that obviates the need for an analyst to choose the number of basis functions is based on principal components. Following the ideas proposed in Wilson and others (2017a), we let X_m represent the n by T matrix of values for exposure m measured over time and compute its sample covariance matrix. The eigenvectors of this matrix can then be used as basis functions to parameterize the distributed lag curves. The choice of the number of basis functions can be automated by retaining only the eigenvectors that explain a pre-specified proportion of the variability in the data, such as 95%. We use the smoothed covariance matrix proposed in Xiao and others (2016) and implemented in the R package refund (Goldsmith and others, 2018) to obtain smooth basis functions, which we refer to as FPCA basis functions. Additionally, we use a constant as one of the basis functions to allow the mean of the distributed lag curve to differ from zero.
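A minimal sketch of this basis construction, substituting the raw sample covariance for the smoothed covariance from refund; the function name and the 95% default are illustrative:

```python
import numpy as np

def fpca_basis(X, pve=0.95):
    """Data-driven basis for a distributed lag curve: eigenvectors of the
    T x T sample covariance of the exposure series, keeping enough leading
    components to explain a proportion `pve` of the variance, plus a
    constant column so the curve's mean can differ from zero.
    (Sketch only: the paper uses the smoothed covariance of Xiao and
    others (2016) rather than this raw sample covariance.)"""
    Xc = X - X.mean(axis=0)              # center each time point
    S = Xc.T @ Xc / (X.shape[0] - 1)     # T x T sample covariance
    evals, evecs = np.linalg.eigh(S)
    order = np.argsort(evals)[::-1]      # eigenvalues in decreasing order
    evals, evecs = evals[order], evecs[:, order]
    k = np.searchsorted(np.cumsum(evals) / evals.sum(), pve) + 1
    T = X.shape[1]
    const = np.full((T, 1), 1 / np.sqrt(T))   # unit-norm constant basis
    return np.hstack([const, evecs[:, :k]])
```

For strongly autocorrelated exposures, a few eigenvectors typically explain 95% of the variability, so the returned basis is much narrower than T columns.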
For specification of the two-dimensional functions that model the distributed lag interaction surfaces, power becomes a pressing issue because interactions in environmental studies typically have small to moderate signals. Therefore, it is important to also reduce the number of parameters for these surfaces in order to increase the power to detect an effect. Toward this goal, we use tensor products of the one-dimensional basis functions from above, defining each two-dimensional basis function as the product of two one-dimensional basis functions evaluated at the two time points. By assuming a smooth surface for the distributed lag interaction surfaces with this tensor product structure, we reduce the dimension of the parameter space for each interaction surface from T² parameters to the product of the numbers of basis functions retained for the two exposures. Depending on the number of basis functions retained for each exposure, this can still lead to a large number of parameters representing the distributed lag interaction surfaces. While this allows for flexibility in the modeling of the surfaces, the power to detect important exposure effects can still be low. To reduce the parameter space further, we first write the component of the model stemming from the interaction between two exposures as:
We then collect the tensor-product design columns into a matrix and let V_k denote the matrix containing the first k eigenvectors of their covariance. We model the interaction surfaces using this reduced basis, so the corresponding parameter vector contains only k parameters. If k is large enough, we are able to adequately estimate the interaction, but with far fewer parameters and therefore increased power to detect an effect. The tensor-product columns tend to be highly correlated, so substantial dimension reduction can be achieved with little loss of information. If k is chosen to be too small, however, too much information is lost and the power to detect important signals can decrease. We choose k such that a large proportion of the variability in the original tensor-product basis is maintained, such as 99%. This leads to improved power and more efficient estimation without too much loss of information in the interaction surfaces.
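The tensor-product construction and eigenvector-based reduction can be sketched as follows. This is a simplified illustration: variable names are assumptions, a plain SVD of the centered columns stands in for the eigendecomposition of their covariance, and the paper applies the reduction within the full Bayesian model rather than as a standalone step.

```python
import numpy as np

def reduced_interaction_basis(B1, B2, X1, X2, pve=0.999):
    """Dimension reduction for one interaction surface. Each column of the
    tensor-product design corresponds to a pair of main-effect basis
    functions; these columns are highly correlated, so we project onto the
    leading eigenvectors of their covariance, keeping a proportion `pve`
    of the variability. B1, B2: T x d basis matrices; X1, X2: n x T
    exposure matrices. Returns the reduced n x k interaction design."""
    W1, W2 = X1 @ B1, X2 @ B2                         # n x d1, n x d2
    D = np.einsum('ij,ik->ijk', W1, W2).reshape(len(W1), -1)  # n x (d1*d2)
    Dc = D - D.mean(axis=0)
    _, s, Vt = np.linalg.svd(Dc, full_matrices=False)  # eigvecs of covariance
    var = s ** 2
    k = np.searchsorted(np.cumsum(var) / var.sum(), pve) + 1
    return D @ Vt[:k].T                                # project onto first k
```

In practice k is often much smaller than d1 × d2, which is the source of the power gain discussed above.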
2.2 Relationship between prior parameters and power
There is a strong interplay between the parameters of the spike-and-slab prior and the power to detect effects. As is always the case when performing exposure selection, our goal is to maximize power while restricting the number of false discoveries. For ease of discussion, we focus on the main effect parameters here; the same results hold for the interaction distributed lag surfaces, and we implement the approach for both main effects and interactions in the simulation and data analysis. The two key prior parameters that govern power and false discoveries are the prior inclusion probability π and the slab variance τ². To gain insight into posterior inclusion probabilities, we examine the full conditional inclusion probability for the main effect of an exposure, given by expression (8), in which we condition on all parameters in the model except that exposure's inclusion indicator and distributed lag coefficients. The expression is a function of two multivariate normal densities evaluated at the zero vector: the numerator density is the prior density of the coefficient vector at zero given inclusion, while the denominator is the density at the zero vector under the full conditional (given data) distribution of the coefficients given inclusion. Clearly, if the prior inclusion probability π increases, then the expression in (8) increases and the probability of inclusion for the main effect increases. If the data generating mechanism truly contains a main effect, then the zero vector should be unlikely given the data, so the denominator becomes small, leading to increased posterior inclusion probabilities. The second parameter, τ², enters into both the numerator and denominator of this expression, so it is less clear how it relates to power; this can be explored empirically. For a given data generating mechanism, we can calculate the average value of expression (8) for various values of π, τ², and the signal-to-noise ratio, where the signal-to-noise ratio is determined by the covariance matrix of the exposure over time and the T-dimensional vector representing the distributed lag curve. Assuming an AR(1) process for the covariance structure with correlation 0.9 between adjacent time points, Figure 1 shows the posterior inclusion probabilities under settings with signal-to-noise ratios of 0 and 0.05. Two things are clear from this illustration: 1) when the signal-to-noise ratio is zero, one can obtain false discoveries when π is too large and/or τ² is too small, and 2) when the signal-to-noise ratio is nonzero, larger values of π and smaller values of τ² lead to increased power to detect important distributed lags. Further, there is an interplay between these two prior parameters and power, which we exploit in the following section to maximize power while restricting false discoveries.
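This behavior can be checked numerically. The sketch below evaluates the inclusion probability in the Savage–Dickey form implied by the description of expression (8) — prior density at zero over full conditional density at zero — for a single exposure in a conjugate normal model; all variable names and settings are illustrative.

```python
import numpy as np

def mvn_logpdf_at_zero(mean, cov):
    """Log-density of a multivariate normal evaluated at the zero vector."""
    d = len(mean)
    _, logdet = np.linalg.slogdet(cov)
    quad = mean @ np.linalg.solve(cov, mean)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

def inclusion_probability(y, W, pi, tau2, sigma2):
    """Posterior inclusion probability for one exposure's basis-expanded
    distributed lag coefficients: prior odds pi/(1-pi) times the ratio of
    the prior density at zero to the full conditional density at zero."""
    d = W.shape[1]
    Sigma_star = np.linalg.inv(W.T @ W / sigma2 + np.eye(d) / tau2)
    mu_star = Sigma_star @ W.T @ y / sigma2
    log_odds = (np.log(pi / (1 - pi))
                + mvn_logpdf_at_zero(np.zeros(d), tau2 * np.eye(d))
                - mvn_logpdf_at_zero(mu_star, Sigma_star))
    return 1 / (1 + np.exp(-np.clip(log_odds, -700, 700)))

rng = np.random.default_rng(0)
n, d = 200, 4
W = rng.normal(size=(n, d))                        # basis-expanded exposure
y_null = rng.normal(size=n)                        # no association
y_sig = W @ np.full(d, 0.5) + rng.normal(size=n)   # true main effect
p_null = inclusion_probability(y_null, W, pi=0.1, tau2=1.0, sigma2=1.0)
p_sig = inclusion_probability(y_sig, W, pi=0.1, tau2=1.0, sigma2=1.0)
```

Under a true effect the full conditional density at zero collapses and the inclusion probability approaches one, while under the null it stays small; increasing π raises the probability in both cases.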
2.3 Prior elicitation strategy
The model above has a number of hyperparameters. We are most interested in estimation of the main effect and interaction distributed lag functions, and we therefore focus attention on the prior inclusion probabilities for the main effects and interactions, as the previous section highlighted their importance to power and false discovery control. An important point is that the prior inclusion probabilities are strictly related to the probability of inclusion, while the slab variance parameters serve the equally important purpose of shrinking the coefficients that are nonzero in the model. Because of this, we place standard inverse gamma prior distributions on the variance parameters and estimate their posterior distributions from the data. To specify the prior inclusion probabilities, we take advantage of equation (8) in order to maximize power while limiting false discoveries. Equation (8) depends on the mean and variance of the full conditional distribution of the distributed lag coefficients, which take the following values as described in Supplementary Materials Section 1:
where these quantities are functions of the basis-expanded exposure design matrix, which is fixed, and of the residual variance, slab variance, and remaining model parameters, which are unknown and vary as we iterate through the Markov chain Monte Carlo (MCMC) algorithm. Our goal is to keep the marginal inclusion probability low when an exposure has no association with the outcome. If an exposure has no association with the outcome, then the full conditional mean of its distributed lag coefficients should be centered around zero. Of course, there is sampling variability around zero, and we must understand this variability in order to control the inclusion probability at a particular level. Suppose our goal is to ensure that the expected posterior inclusion probability for an exposure that is not associated with the outcome is at most a pre-specified threshold, where the expectation is taken with respect to the distribution of the data. Given the current values of the other parameters and the distribution of the full conditional mean, there is a particular value of the prior inclusion probability that achieves this bound. Finding this value analytically is difficult; however, it is straightforward to find using a Monte Carlo approximation to the expectation. Since the full conditional mean is approximately normally distributed around zero when the exposure is not associated with the outcome, we can simulate its behavior for a given set of values of the remaining parameters: we draw vectors from a mean-zero normal distribution with the appropriate covariance, evaluate the inclusion probability in (8) for each draw, and track the average. We then find the largest value of the prior inclusion probability such that this average does not exceed the threshold. We repeat this for each exposure at each iteration of the MCMC, effectively setting the prior inclusion probability to a deterministic function of the current parameter values that gives the desired posterior inclusion probability. An analogous procedure yields the prior inclusion probability for each possible interaction, controlling the posterior inclusion probability for interactions. We demonstrate empirically in Section 3 that this controls the marginal posterior inclusion probability at the desired level.
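The Monte Carlo calibration can be sketched as follows, under the simplifying (and here assumed) approximation that the full conditional mean of a null exposure's coefficients is mean-zero normal with the full conditional covariance; the candidate grid, the default target, and the number of draws are all illustrative choices.

```python
import numpy as np

def calibrate_pi(W, tau2, sigma2, target=0.1, B=500, rng=None):
    """Find the largest prior inclusion probability pi such that the average
    posterior inclusion probability over simulated null draws is at most
    `target`. W: fixed basis-expanded design for one exposure; tau2, sigma2:
    current slab and residual variances from the MCMC iteration."""
    rng = np.random.default_rng() if rng is None else rng
    d = W.shape[1]
    Sigma_star = np.linalg.inv(W.T @ W / sigma2 + np.eye(d) / tau2)
    _, logdet_star = np.linalg.slogdet(Sigma_star)
    # Simulated null full-conditional means, assumed ~ N(0, Sigma_star)
    mus = rng.multivariate_normal(np.zeros(d), Sigma_star, size=B)
    quads = np.einsum('bi,ij,bj->b', mus, np.linalg.inv(Sigma_star), mus)
    # log prior density at zero minus log full-conditional density at zero
    log_ratio = -0.5 * d * np.log(tau2) + 0.5 * (logdet_star + quads)
    best = 1e-5
    for pi in np.logspace(-5, np.log10(0.5), 40):   # candidate pi values
        log_odds = np.log(pi / (1 - pi)) + log_ratio
        p = 1 / (1 + np.exp(-np.clip(log_odds, -700, 700)))
        if p.mean() <= target:
            best = pi          # largest pi meeting the target so far
    return best
```

Because the average inclusion probability is increasing in pi, the candidates that satisfy the bound form a prefix of the grid, and the returned value is the largest admissible one.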
The main reason for setting the prior inclusion probabilities in this way is to increase the posterior inclusion probabilities of important exposures and interactions. While this leads to improved power to detect important exposures, it is conceptually different from a traditional Bayesian analysis. These parameters represent prior inclusion probabilities for the main effects and interactions, respectively, and typically encode prior beliefs about the probability of important associations. They are then either specified a priori or assigned hyperprior distributions that acknowledge the uncertainty in their values. We take a different approach and update these parameters throughout the MCMC as a deterministic function of the other parameters in the model. We demonstrate in Section 3 that this approach leads to better performance in terms of exposure selection and estimation of distributed lag surfaces.
2.4 Identification of critical windows
Our approach has focused on the identification of exposures and interactions that are associated with the outcome, but a secondary goal that is important in environmental epidemiology is the identification of critical windows of susceptibility. If an exposure or interaction has a critical window, it is more likely to be included in the model due to the hyperparameter selection described in Section 2.3, but we still need an approach to determine which time periods are most associated with the outcome. We use the Bayes-p measure to assess times at which there is a critical window. Formally, the Bayes-p for an exposure at time t is twice the smaller of the posterior probabilities that the distributed lag coefficient at time t is positive or negative. A Bayes-p near 0 indicates evidence of a critical window at that time point. We use a threshold of Bayes-p < 0.05 to identify critical windows, which corresponds to the 95% posterior interval not containing zero.
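Given posterior samples of a distributed lag curve, this measure can be computed directly. The sketch below assumes the two-sided form of the Bayes-p consistent with the 95% interval correspondence noted above; names are illustrative.

```python
import numpy as np

def critical_windows(theta_draws, threshold=0.05):
    """Pointwise Bayes-p from posterior draws of a distributed lag curve.
    theta_draws: S x T array of posterior samples. The Bayes-p at time t is
    2 * min(P(theta_t > 0), P(theta_t < 0)); values below `threshold`
    correspond to equal-tailed 95% posterior intervals excluding zero."""
    p_pos = (theta_draws > 0).mean(axis=0)       # P(theta_t > 0) per time
    bayes_p = 2 * np.minimum(p_pos, 1 - p_pos)
    return bayes_p, bayes_p < threshold          # flag critical windows
```

Consecutive flagged time points form the estimated critical window for that exposure.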
3 Simulation study
Here we present the results of a simulation study to assess the performance of the proposed multiple-exposure distributed lag model. The goals of the simulation study are to assess 1) the performance of our empirical approach to selecting basis functions on a variety of distributed lag surfaces, 2) the ability of our procedure for updating the prior inclusion probabilities to maximize power while controlling posterior inclusion probabilities, and 3) whether our dimension reduction strategies for interaction surfaces increase power to detect weak to moderate interaction signals. To assess the first of these goals, we explore distributed lag effects that take varying functional forms and evaluate whether our basis functions are able to capture the true relationships between exposures and the outcome. To evaluate the second and third goals, we vary the signal-to-noise ratio by increasing the residual variance of the model and assess the power to detect signals as a function of this residual variance. We explore three different variations of our model to estimate the effects of the exposures on the outcome:
Full Bayes (FB): Our proposed model using the FPCA basis functions that retain 95% of the variability in the exposures. This model places hyperprior distributions on all parameters, including the prior inclusion probabilities and the spike-and-slab variance parameters.
FB reduced: The FB model, but with dimension reduction performed on the interaction surfaces by retaining 99.9% of the variability in the interaction basis functions.
False discovery control (FD Tau): The FB reduced model, but with the prior inclusion probabilities chosen via the approach in Section 2.3. We target posterior inclusion probabilities of 0.1 and 0.05 for null main effects and interactions, respectively.
We evaluate the performance of these approaches across a range of metrics including power, false discovery inclusion probability, 95% point-wise credible interval coverage, and mean squared error (MSE). Power is defined to be the average posterior inclusion probability across all simulations, averaged over all important effects in the model. We define the false discovery inclusion probability to be the average posterior inclusion probability across all simulations, averaged over all null effects in the model. We define 95% point-wise interval coverage to be the proportion of simulations in which the 95% credible intervals of the distributed lag curves and interaction surfaces contain the corresponding true parameters, averaged over all important parameters in the model. Lastly, mean squared error is the squared difference between the posterior mean and the true value of each parameter, averaged over all exposures and time points.
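These metrics can be computed from simulation output as follows; the array shapes and names are illustrative assumptions about how the posterior summaries are stored.

```python
import numpy as np

def evaluation_metrics(pips, lower, upper, post_mean, theta_true, important):
    """Simulation metrics described above. pips: (n_sims, n_effects)
    posterior inclusion probabilities; lower/upper/post_mean:
    (n_sims, n_effects, T) pointwise 95% credible limits and posterior
    means; theta_true: (n_effects, T) true distributed lag values;
    important: boolean mask of the truly nonzero effects."""
    power = pips[:, important].mean()          # avg PIP over true effects
    fdp = pips[:, ~important].mean()           # avg PIP over null effects
    covered = (lower <= theta_true) & (theta_true <= upper)
    coverage = covered[:, important, :].mean() # pointwise 95% coverage
    mse = ((post_mean - theta_true) ** 2).mean()
    return {"power": power, "fdp": fdp, "coverage": coverage, "mse": mse}
```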
We explore one simulation scenario here and include results from additional data generating mechanisms in Section 2 of the Supplementary Materials; results are fairly similar across the different data generating mechanisms. Throughout, the sample size, the number of exposures in the model, and the number of time points are held fixed. We generate the exposures from a vector autoregressive model that allows correlation both across exposures and within each exposure across time. For each subject in the study, we draw exposure values over time according to the following:
where x_t now denotes the vector of exposures at time t. We let the autoregressive coefficient matrix be diagonal, which induces temporal correlation within each exposure over time. We explore two values for the correlation among exposures within the mixture at a given time: independent exposures, with zero correlation between all pairs, and correlated exposures, with a common positive pairwise correlation. Exposures 1, 7, and 8 have nonzero main effects and exposure pairs (1,2) and (3,4) have nonzero interaction effects. Figure 2 shows the functional form of the nonzero main and interaction effects.
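A sketch of this exposure-generating mechanism, with the autoregressive coefficient and cross-exposure correlation left as arguments since the paper's exact values are not reproduced here:

```python
import numpy as np

def simulate_exposures(n, M, T, phi=0.9, rho=0.0, rng=None):
    """Vector autoregressive exposure simulation: x_t = phi * x_{t-1} + e_t,
    with a diagonal autoregressive matrix (temporal correlation within each
    exposure) and innovations e_t ~ N(0, Omega) whose off-diagonal
    correlation rho induces cross-exposure correlation at each time.
    Returns an n x M x T array of exposures."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = (1 - rho) * np.eye(M) + rho * np.ones((M, M))
    X = np.empty((n, M, T))
    X[:, :, 0] = rng.multivariate_normal(np.zeros(M), Omega, size=n)
    for t in range(1, T):
        eps = rng.multivariate_normal(np.zeros(M), Omega, size=n)
        X[:, :, t] = phi * X[:, :, t - 1] + eps
    return X
```

With phi near one, adjacent weeks of the same exposure are strongly correlated, mimicking the temporal smoothness of real pollution series.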
3.1 Variable selection performance
Figure 3 summarizes the simulation results. When the residual variance is 0.5, nearly all of the proposed approaches perform similarly well in terms of power, as seen in the first row of the figure. This is the setting with the greatest signal-to-noise ratio, and the power to detect important main and interaction effects is near 1 for all approaches. As the residual variance increases, the power of the FB and FB reduced approaches decays quickly, while the FD Tau approach maintains comparatively high power. The second row highlights the posterior inclusion probabilities for exposures and interactions that do not affect the outcome. These false discovery inclusion probabilities are higher for the FD Tau approach but remain below the desired levels of 0.1 for main effects and 0.05 for interactions. The performance of the approaches in the independent and correlated exposure settings is similar.
This increased power also leads to increased coverage rates. Posterior inclusion probabilities are low for the true effects under the FB and FB reduced models, which worsens the coverage of the main effects: the resulting intervals are less likely to cover the true parameter when it is not included in the model. The coverage is much closer to the nominal 95% level for the main effects under the FD Tau approach. At low signal-to-noise ratios (higher residual variances), the coverage drops slightly below 0.95, due to the decreasing posterior inclusion probabilities as the signal-to-noise ratio decreases. The MSE is lowest for the FD Tau approach in all scenarios, particularly for the interactions, as detailed in the following section. Overall, this simulation shows that the proposed approach approximates the distributed lag surfaces well without a priori knowledge of their functional form.
3.2 Estimation of interaction surfaces
All approaches have high power to detect the true interaction surfaces when the residual variance is small (high signal-to-noise ratio), and the power is substantially better for the FD Tau approach at larger residual variances. Unlike the main effects, this high power does not translate into nominal 95% credible interval coverage, due to the additional dimension reduction implemented by all approaches except FB (see Section 2.1). This dimension reduction shrinks the parameter space of the interaction surfaces and, as a result, increases power and efficiency. We see in the top right panel of Figure 3 that FB reduced has greater power than FB. Figure 4 shows the average of the posterior means of the interaction surface for exposures 3 and 4, for both the FB and FB reduced approaches, at a fixed residual variance. The FB model, without any dimension reduction, approximates the true interaction surface well, while the dimension reduction approach incurs some bias in estimating this surface. This reflects a bias-variance trade-off: a small amount of bias and reduced 95% credible interval coverage in exchange for improved power and efficiency. This efficiency gain can be seen in the bottom right panel of Figure 3, which shows that the MSE of the FB approach is larger than that of the FB reduced and FD Tau approaches that employ the dimension reduction.
3.3 Estimation of the cumulative exposure effects
A common summary of distributed lag models is the cumulative effect: the expected difference in the outcome associated with a simultaneous one unit increase in exposure at each time point. Because our model also includes interactions, this marginal representation of the cumulative effect requires that we fix the values of the other exposures; throughout the rest of this work we fix them at their means. Since all exposures are zero-centered in this simulation study, the cumulative effect of an exposure is the sum of its distributed lag coefficients across time. A cumulative effect for an interaction pair can be defined similarly by fixing the remaining exposures. Throughout, we use the correlated exposures simulation design from above. For comparison, we consider a model that uses the average exposure over all weeks, a natural alternative to distributed lag models that is often used in practice. The average exposure model is nested in the distributed lag model: the two are equivalent under the constraint that the distributed lag coefficients are constant across all time points. The average exposure model is an interesting comparison because it is popular in environmental epidemiology and because it represents an extreme version of our proposed model, with the maximum amount of dimension reduction possible. Power is now defined as the probability of significance, where an exposure is deemed significant in our model if the posterior inclusion probability is above 0.5, and in the averaged exposure model if its corresponding p-value is below 0.05.
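Given posterior draws of a distributed lag curve (with the other exposures fixed at their means), the cumulative effect and its credible interval reduce to a sum over time within each draw; a minimal sketch:

```python
import numpy as np

def cumulative_effect(theta_draws):
    """Posterior summary of the cumulative effect: the expected change in
    the outcome for a one-unit increase at every time point, computed by
    summing each posterior draw of the lag curve over time.
    theta_draws: S x T array of posterior samples."""
    totals = theta_draws.sum(axis=1)            # cumulative effect per draw
    est = totals.mean()
    lo, hi = np.quantile(totals, [0.025, 0.975])
    return est, (lo, hi)
```

Summing within each draw, rather than summing pointwise estimates, propagates the joint posterior uncertainty across time points into the interval.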
Figure 5 presents results for the cumulative main effect estimates. The FD Tau approach greatly outperforms the standard approach that assigns equal weight to each time point, as well as the FB and FB reduced approaches. The estimates in the top panel of Figure 5 are the estimated cumulative effects minus their true values. The estimators from the FD Tau approach are all nearly unbiased, while the estimates from the averaged exposure model show substantial bias for all exposures with the exception of exposure 8. Exposure 8 was assigned a flat distributed lag curve, so an approach that assigns equal weight to each time point performs well for it. As the residual variance increases, our approach suffers from increasing variability more so than the average exposure model, but remains unbiased. The power to detect each exposure can be seen in the lower panel of Figure 5. Our approach has substantially higher power at each value of the residual variance. The only exposure for which the averaged exposure model has comparable power is exposure 8, which has a constant effect over time; even for this exposure, the power of the FD Tau approach is similar to that of the averaged exposure model despite estimating time-varying weights.
Figure 6, which shows analogous results for the cumulative interaction effects, depicts a similar story. The FD Tau approach provides estimates that are closer to their corresponding true values than those from the average exposure model, which are severely biased downwards. In addition, the power to detect interactions is highest when using the FD Tau approach in all scenarios.
4 Analysis of Colorado data
We apply our proposed approach in an analysis of birth weight for gestational age z-score (BWGAZ) and exposure to a mixture of four air pollutants and temperature. We use birth vital statistics records from Colorado, USA, with estimated conception dates between 2007 and 2015, inclusive. The five exposures of interest are particulate matter smaller than 2.5 microns in diameter (PM2.5), nitrogen dioxide (NO2), sulfur dioxide (SO2), carbon monoxide (CO), and temperature. For each birth and exposure we construct average values of estimated exposure for each week of gestation at the mother’s residence. We limit the analysis to the Denver metropolitan area, for which we have more accurate exposure data. We further restrict our analysis to singleton, full-term births (at least 37 weeks of gestation) and observations with complete covariate and exposure data, resulting in 195,701 births. In this analysis we control for the following individual covariates: mother’s age, weight, height, income, education, marital status, prenatal care habits, smoking habits, as well as race and Hispanic designations. In addition, we include categorical variables for year and month of conception, census tract elevation, and a county-specific intercept. This study was approved by the Institutional Review Board of Colorado State University.
We fit the proposed model using eight MCMC chains with 10000 iterations each, discarding the first 5000 iterations and thinning every 5th sample. We set the threshold, which is the level at which we aim to control expected posterior inclusion probabilities for exposures that are not associated with the outcome, to be 0.1 for main effects and 0.05 for interactions. As a sensitivity analysis, we consider a range of threshold values for the main effects and interactions. The ten candidate values are equally spaced on the log base ten scale from 0.00001 to 0.25. In the sensitivity analysis, we use the same threshold value for the main effects as the interactions. We also estimate the FB and FB reduced models that do not attempt to maximize power or control the false discovery rate, as described in Section 3.
4.1 Posterior inclusion probabilities
For our primary analysis, Table 1 presents the PIPs. All of the main effect PIPs were well above the threshold, and all were above 0.5, which indicates an association between each exposure and birth weight. The PIPs are 0.99 for temperature, 0.99 for SO2, 0.98 for NO2, 0.98 for CO, and 0.80 for PM2.5. Four interaction surfaces had posterior inclusion probabilities above 0.5. The highest PIP values for interactions were NO2-CO with a PIP of 0.93, CO-temperature with a PIP of 0.89, NO2-temperature with a PIP of 0.66, and NO2-SO2 with a PIP of 0.60. This provides weak to moderate evidence of an interaction for these pairs of exposures.
| Interaction | FB | FB reduced | FD Tau |
|---|---|---|---|
| PM2.5 - NO2 | 0.00 | 0.00 | 0.00 |
| PM2.5 - SO2 | 0.00 | 0.00 | 0.00 |
| NO2 - SO2 | 0.00 | 0.00 | 0.60 |
| PM2.5 - CO | 0.00 | 0.00 | 0.00 |
| NO2 - CO | 0.00 | 0.00 | 0.93 |
| SO2 - CO | 0.00 | 0.00 | 0.63 |
| PM2.5 - Temperature | 0.00 | 0.00 | 0.00 |
| NO2 - Temperature | 0.00 | 0.00 | 0.66 |
| SO2 - Temperature | 0.00 | 0.00 | 0.50 |
| CO - Temperature | 0.00 | 0.00 | 0.89 |
Table 1 also shows the PIPs from FB and FB reduced. No main effects or interactions were selected by the FB or FB reduced models, which demonstrates the importance of setting hyperparameters to maximize power while controlling the false discovery rate, as done by the FD Tau approach. Figure 7 shows the PIPs for the main effects and interactions in the sensitivity analysis with varying thresholds. As expected, as the threshold increases, the PIPs for the component main effects and interaction surfaces increase as well.
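A posterior inclusion probability is simply the posterior mean of the binary spike-and-slab inclusion indicator across retained MCMC samples. A minimal sketch (variable names are illustrative, not those of the package):

```python
import numpy as np

def posterior_inclusion_probability(indicator_samples):
    """Fraction of retained MCMC samples in which the spike-and-slab
    indicator equals 1, i.e., the exposure or interaction is included
    in the model."""
    return float(np.mean(indicator_samples))

# Toy example: an effect included in 930 of 1,000 retained samples
# has a PIP of 0.93, comparable to the NO2-CO interaction above.
samples = np.array([1] * 930 + [0] * 70)
pip = posterior_inclusion_probability(samples)
```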
4.2 Estimated distributed lag and cumulative effects
Figure (a) shows the posterior mean and 95% credible interval for the main effect distributed lag functions. Figure (b) shows the Bayes-p for the association at each time point (Balocchi and others, 2021). The most pronounced association was a negative association between temperature and BWGAZ. The critical window for temperature spans all 37 weeks. The estimated cumulative effect of temperature when all other exposures are fixed at their mean values is -0.19 (95% credible interval: -0.28 to -0.10). The combination of high PIP, clear critical windows, and a large cumulative effect that clearly departs from zero provides strong evidence of an association between temperature and BWGAZ. There was also weak evidence of an association between BWGAZ and three exposures: PM2.5, SO2, and NO2. For each of these pollutants the point estimate of the distributed lag effects was consistently negative and inferences on the cumulative effects were suggestive of an association. The estimated cumulative effect for PM2.5 was -0.05 (credible interval: -0.11 to 0.00), for NO2 was -0.04 (credible interval: -0.08 to 0.00), and for SO2 was -0.04 (credible interval: -0.07 to -0.01). We did not identify a critical window for any of these components. The combination of high PIPs and suggestive or small significant cumulative effects, but no critical windows, provides moderately strong evidence of an association. There were two critical windows for the association between CO and BWGAZ: a first-trimester window with a negative association and a window late in pregnancy with a positive association. However, the cumulative effect estimate was 0.01 (credible interval: -0.02 to 0.04), which suggests weak evidence of an association, particularly since the range of exposure for CO is small and the exposure is correlated with the other exposures in this data set.
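For reference, the cumulative effect of an exposure is the sum of its distributed lag coefficients over the 37 gestational weeks, with the credible interval taken from the posterior distribution of that sum. A minimal sketch, assuming the posterior draws of the lag coefficients are stored as a samples-by-weeks array:

```python
import numpy as np

def cumulative_effect(lag_samples, level=0.95):
    """lag_samples: (n_samples, n_weeks) array of posterior draws of
    the distributed lag coefficients for one exposure. Returns the
    posterior mean and equal-tailed credible interval of the
    cumulative (summed over weeks) effect."""
    cumulative = lag_samples.sum(axis=1)          # sum over the 37 weeks
    alpha = (1.0 - level) / 2.0
    lower, upper = np.quantile(cumulative, [alpha, 1.0 - alpha])
    return cumulative.mean(), (lower, upper)

# Toy check: 37 weekly coefficients, each centered at -0.005,
# give a cumulative effect near 37 * (-0.005) = -0.185.
rng = np.random.default_rng(0)
draws = rng.normal(-0.005, 0.001, size=(8000, 37))
mean, (lo, hi) = cumulative_effect(draws)
```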
There was little evidence of interaction effects based on the estimated interaction surfaces: the posterior means of the interaction surfaces are concentrated around zero, with credible intervals containing zero. For this reason we have not visualized the interaction surfaces.
In this paper we introduced an approach to distributed lag models with multiple exposures that can identify and estimate the effects of environmental exposures over time while allowing for interactions between two exposures experienced at different points in time. We use spike-and-slab prior distributions to perform exposure selection for both the main effect distributed lag functions and the two-dimensional interaction surfaces, while using semiparametric functions to allow the effects of exposures to vary smoothly across time. We showed via simulation that our approach can identify and estimate the effects of time-varying exposures, and we analyzed the health effects of gestational exposure to air pollution on birth weight.
One issue that makes the identification of important exposures and their interactions difficult is the lack of statistical power to detect small-to-moderate effect sizes, which are common in environmental health research. It is a priority to maximize the power to detect these signals with limited data. We have proposed a novel approach to choosing hyperparameters in our model that allows users to maximize power to detect effects while holding constant the posterior inclusion probabilities of exposures that have no effect on health. Simulations demonstrated that this approach can greatly increase the posterior inclusion probabilities of important main or interaction effects. Additionally, in our analysis of the Colorado birth cohort data, we found strong evidence of an association between exposure to temperature at all weeks of gestation and decreased BWGAZ. We also saw moderate evidence of an association between the outcome and each of PM2.5, NO2, and SO2, but little evidence of interactions. This boost in power can be important for future environmental studies, many of which have a limited sample size.
An R package implementing all approaches considered in the manuscript can be found at github.com/jantonelli111/BayesianDLAG
This work was supported by National Institutes of Health grants ES028811, ES000002, and ES030990 and U.S. EPA grant RD-835872. These data were supplied by the Center for Health and Environmental Data Vital Statistics Program of the Colorado Department of Public Health and Environment, which specifically disclaims responsibility for any analyses, interpretations, or conclusions it has not provided. This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder and Colorado State University. This publication's contents are solely the responsibility of the grantee and do not necessarily represent the official views of the USEPA.
- Balocchi and others (2021) Balocchi, Cecilia, Bai, Ray, Liu, Jessica, Canelón, Silvia P., George, Edward I., Chen, Yong and Boland, Mary R. (2021). A Bayesian hierarchical modeling framework for geospatial analysis of adverse pregnancy outcomes.
- Bello and others (2017) Bello, Ghalib A., Arora, Manish, Austin, Christine, Horton, Megan K., Wright, Robert O. and Gennings, Chris. (2017, jul). Extending the Distributed Lag Model framework to handle chemical mixtures. Environmental Research 156(December 2016), 253–264.
- Bose and others (2017) Bose, Sonali, Chiu, Yueh-Hsiu Mathilda, Hsu, Hsiao-Hsien Leon, Di, Qian, Rosa, Maria José, Lee, Alison, Kloog, Itai, Wilson, Ander, Schwartz, Joel, Wright, Robert O., Cohen, Sheldon, Coull, Brent A and others. (2017, dec). Prenatal Nitrate Exposure and Childhood Asthma. Influence of Maternal Prenatal Stress and Fetal Sex. American Journal of Respiratory and Critical Care Medicine 196(11), 1396–1403.
- Carrico and others (2015) Carrico, Caroline, Gennings, Chris, Wheeler, David C. and Factor-Litvak, Pam. (2015, mar). Characterization of Weighted Quantile Sum Regression for Highly Correlated Data in a Risk Analysis Setting. Journal of Agricultural, Biological, and Environmental Statistics 20(1), 100–120.
- Chen and others (2019) Chen, Yin-Hsiu, Mukherjee, Bhramar and Berrocal, Veronica J. (2019). Distributed lag interaction models with two pollutants. Journal of the Royal Statistical Society: Series C (Applied Statistics) 68(1), 79–97.
- Chiu and others (2017) Chiu, Yueh-Hsiu Mathilda, Hsu, Hsiao-Hsien Leon, Wilson, Ander, Coull, Brent A., Pendo, Mathew P., Baccarelli, Andrea, Kloog, Itai, Schwartz, Joel, Wright, Robert O., Taveras, Elsie M. and others. (2017, oct). Prenatal particulate air pollution exposure and body composition in urban preschool children: Examining sensitive windows and sex-specific associations. Environmental Research 158(March), 798–805.
- Darrow and others (2010) Darrow, Lyndsey A, Klein, Mitchel, Strickland, Matthew J, Mulholland, James A and Tolbert, Paige E. (2010). Ambient Air Pollution and Birth Weight in Full-Term Infants in Atlanta, 1994–2004. Environmental Health Perspectives 119(5), 731–737.
- Davalos and others (2017) Davalos, Angel D., Luben, Thomas J., Herring, Amy H. and Sacks, Jason D. (2017, feb). Current approaches used in epidemiologic studies to examine short-term multipollutant air pollution exposures. Annals of Epidemiology 27(2), 145–153.e1.
- Gasparrini (2014) Gasparrini, Antonio. (2014). Modeling exposure–lag–response associations with distributed lag non-linear models. Statistics in medicine 33(5), 881–899.
- Gasparrini and others (2017) Gasparrini, Antonio, Scheipl, Fabian, Armstrong, Ben and Kenward, Michael G. (2017). A penalized framework for distributed lag non-linear models. Biometrics 73(3), 938–948.
- Gibson and others (2019) Gibson, Elizabeth A., Nunez, Yanelli, Abuawad, Ahlam, Zota, Ami R., Renzetti, Stefano, Devick, Katrina L., Gennings, Chris, Goldsmith, Jeff, Coull, Brent A. and Kioumourtzoglou, Marianthi-Anna. (2019, dec). An overview of methods to address distinct research questions on environmental mixtures: an application to persistent organic pollutants and leukocyte telomere length. Environmental Health 18(1), 76.
- Goldsmith and others (2018) Goldsmith, Jeff, Scheipl, Fabian, Huang, Lei, Wrobel, Julia, Gellar, Jonathan, Harezlak, Jaroslaw, McLean, Mathew W., Swihart, Bruce, Xiao, Luo, Crainiceanu, Ciprian and others. (2018). refund: Regression with Functional Data. R package version 0.1-17.
- Lee and others (2018) Lee, Alison, Leon Hsu, Hsiao-Hsien, Mathilda Chiu, Yueh-Hsiu, Bose, Sonali, Rosa, Maria José, Kloog, Itai, Wilson, Ander, Schwartz, Joel, Cohen, Sheldon, Coull, Brent A, Wright, Robert O. and others. (2018, may). Prenatal fine particulate exposure and early childhood asthma: Effect of maternal stress and fetal sex. Journal of Allergy and Clinical Immunology 141(5), 1880–1886.
- Liu and others (2017) Liu, Shelley H, Bobb, Jennifer F, Lee, Kyu Ha, Gennings, Chris, Claus Henn, Birgit, Bellinger, David, Austin, Christine, Schnaas, Lourdes, Tellez-Rojo, Martha M, Hu, Howard and others. (2017). Lagged kernel machine regression for identifying time windows of susceptibility to exposures of complex mixtures. Biostatistics 19(3), 325–341.
- Mitchell and Beauchamp (1988) Mitchell, Toby J and Beauchamp, John J. (1988). Bayesian variable selection in linear regression. Journal of the American Statistical Association 83(404), 1023–1032.
- Mork and Wilson (2021a) Mork, Daniel and Wilson, Ander. (2021a). Estimating Perinatal Critical Windows to Environmental Mixtures via Structured Bayesian Regression Tree Pairs. arXiv preprint arXiv:2102.09071, 1–27.
- Mork and Wilson (2021b) Mork, Daniel and Wilson, Ander. (2021b, 10). Treed distributed lag non-linear models. Biostatistics, in press.
- Muggeo (2007) Muggeo, Vito MR. (2007). Bivariate distributed lag models for the analysis of temperature-by-pollutant interaction effect on mortality. Environmetrics 18(3), 231–243.
- Schwartz (2000) Schwartz, Joel. (2000). The distributed lag between air pollution and daily deaths. Epidemiology 11(3), 320–326.
- Taylor and others (2016) Taylor, Kyla W., Joubert, Bonnie R., Braun, Joe M., Dilworth, Caroline, Gennings, Chris, Hauser, Russ, Heindel, Jerry J., Rider, Cynthia V., Webster, Thomas F. and Carlin, Danielle J. (2016, dec). Statistical Approaches for Assessing Health Effects of Environmental Chemical Mixtures in Epidemiology: Lessons from an Innovative Workshop. Environmental Health Perspectives 124(12), 227–229.
- Warren and others (2012) Warren, Joshua, Fuentes, Montserrat, Herring, Amy and Langlois, Peter. (2012, dec). Spatial-Temporal Modeling of the Association between Air Pollution Exposure and Preterm Birth: Identifying Critical Windows of Exposure. Biometrics 68(4), 1157–1167.
- Warren and others (2021) Warren, Joshua L., Chang, Howard H., Warren, Lauren K., Strickland, Matthew J., Darrow, Lyndsey A. and Mulholland, James A. (2021). Critical window variable selection for mixtures: Estimating the impact of multiple air pollutants on stillbirth.
- Warren and others (2020a) Warren, Joshua L, Kong, Wenjing, Luben, Thomas J and Chang, Howard H. (2020a, oct). Critical window variable selection: estimating the impact of air pollution on very preterm birth. Biostatistics 21(4), 790–806.
- Warren and others (2020b) Warren, Joshua L, Luben, Thomas J and Chang, Howard H. (2020b). A spatially varying distributed lag model with application to an air pollution and term low birth weight study. Journal of the Royal Statistical Society: Series C (Applied Statistics) 69(3), 681–696.
- Wilson and others (2017a) Wilson, Ander, Chiu, Yueh-Hsiu Mathilda, Hsu, Hsiao-Hsien Leon, Wright, Robert O, Wright, Rosalind J and Coull, Brent A. (2017a). Bayesian distributed lag interaction models to identify perinatal windows of vulnerability in children’s health. Biostatistics 18(3), 537–552.
- Wilson and others (2017b) Wilson, Ander, Chiu, Yueh-Hsiu Mathilda, Hsu, Hsiao-Hsien Leon, Wright, Robert O, Wright, Rosalind J and Coull, Brent A. (2017b, dec). Potential for Bias When Estimating Critical Windows for Air Pollution in Children’s Health. American Journal of Epidemiology 186(11), 1281–1289.
- Wilson and others (2020) Wilson, Ander, Hsu, Hsiao-Hsien Leon, Chiu, Yueh-Hsiu Mathilda, Wright, Robert O., Wright, Rosalind J. and Coull, Brent A. (2020). Kernel machine and distributed lag models for assessing windows of susceptibility to environmental mixtures in children’s health studies.
- Wright (2017) Wright, Robert O. (2017, apr). Environment, susceptibility windows, development, and child health. Current Opinion in Pediatrics 29(2), 211–217.
- Xiao and others (2016) Xiao, Luo, Zipunnikov, Vadim, Ruppert, David and Crainiceanu, Ciprian. (2016). Fast covariance estimation for high-dimensional functional data. Statistics and computing 26(1-2), 409–421.
- Zanobetti and others (2000) Zanobetti, A, Wand, MP, Schwartz, J and Ryan, LM. (2000). Generalized additive distributed lag models: quantifying mortality displacement. Biostatistics 1(3), 279–292.
Appendix A Sampling considerations
Sampling from the distributed lag model is straightforward via Gibbs sampling since all of the full conditionals have a recognizable form. Sampling proceeds from the following steps:
When using the FB or FB reduced approaches, update from the following conditional:
When using the FB or FB reduced approaches, update from the following conditional:
For update . If we let represent all of the parameters in our model excluding , then our interest is in updating from . We can calculate this quantity as follows:
Using similar techniques it is easy to see that , and therefore updating just involves normalizing these probabilities and sampling from a Bernoulli distribution. In this final expression, and are defined as follows:
where we define
and is an by design matrix with each column defined by
For update . If , then , otherwise update from a multivariate normal distribution with mean and variance .
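The indicator-then-coefficient updates in the steps above can be sketched generically: normalize the two unnormalized conditional probabilities, draw the inclusion indicator from a Bernoulli distribution, and then either set the coefficients to zero (spike) or draw them from their multivariate normal full conditional (slab). The sketch below is illustrative of this generic spike-and-slab Gibbs step, not the package code; `p0` and `p1` stand in for the unnormalized exclusion and inclusion probabilities derived in the text.

```python
import numpy as np

def spike_slab_update(p0, p1, slab_mean, slab_cov, rng):
    """One Gibbs step for a spike-and-slab coefficient block.
    p0, p1: unnormalized conditional probabilities of exclusion
            (spike) and inclusion (slab).
    slab_mean, slab_cov: mean and covariance of the slab's
            full-conditional multivariate normal."""
    prob_include = p1 / (p0 + p1)                  # normalize the two probabilities
    indicator = rng.binomial(1, prob_include)      # Bernoulli draw of the indicator
    if indicator == 0:
        coef = np.zeros(len(slab_mean))            # spike: coefficients set to zero
    else:
        coef = rng.multivariate_normal(slab_mean, slab_cov)  # slab draw
    return indicator, coef

rng = np.random.default_rng(1)
ind, beta = spike_slab_update(1.0, 3.0, np.zeros(4), np.eye(4), rng)
```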
For all combinations of and update . If we let represent all of the parameters in our model excluding , then we can perform a similar sampling technique as for by calculating
Similarly to before, and are defined as follows:
where we define
and is an by design matrix with elements defined by
and each pair corresponds to one of the columns of the design matrix.
For each combination of and , set if , otherwise update it from a multivariate normal distribution with mean and variance
Update from the following full conditional:
Appendix B Additional simulation studies
Here we run additional simulation studies with different data generating mechanisms from the main manuscript. Specifically we look at simulations that involve different distributed lag curves, and simulations where there are no exposure effects.
B.1 No exposure effects
In this simulation study, we again let and we generate our exposures from a multivariate normal distribution as described in the manuscript. We now fix and simulate data in which there is no association between the exposures and outcomes. For this reason, we focus attention on results involving the posterior inclusion probabilities for the exposures and interactions. Specifically, our approach is designed to limit these to 0.1 and 0.05 for main effects and interactions, respectively. The results can be found in Figure 9; the FD Tau approach performs as expected, obtaining posterior inclusion probabilities near 0.1 for main effects and 0.05 for interactions. The FB and FB reduced approaches have smaller posterior inclusion probabilities, very close to zero in this setting, similar to what was seen in the simulations of the manuscript.
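Simulated weekly exposures of this kind can be drawn from a multivariate normal with correlation that decays across weeks. The sketch below assumes an exponential-decay (AR(1)-type) correlation across 37 weekly measurements; this is an illustrative choice and not necessarily the exact structure used in the manuscript.

```python
import numpy as np

def simulate_weekly_exposure(n_subjects, n_weeks=37, rho=0.9, seed=0):
    """Draw weekly exposure trajectories from a multivariate normal
    with correlation rho**|j - k| between weeks j and k
    (an illustrative AR(1)-type structure)."""
    weeks = np.arange(n_weeks)
    corr = rho ** np.abs(weeks[:, None] - weeks[None, :])
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.zeros(n_weeks), corr, size=n_subjects)

X = simulate_weekly_exposure(500)   # 500 subjects, 37 weekly values each
```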
B.2 Main effects only
Now we simulate data with main effects only and no interactions between the exposures. We again draw the exposures from the same multivariate distribution used in the manuscript and we set in this setting. The two exposures with important main effects are exposures 1 and 7, and their distributed lag curves can be seen in Figure 10.
The results from this simulation study can be found in Figures 11 and 12. Overall the results mirror those seen in the main manuscript. The basis functions again perform well, as the interval coverage for the important main effects is close to the nominal 95% level. The FD Tau approach again has increased power relative to the FB and FB reduced approaches, and its posterior inclusion probabilities for the null main and interaction effects are roughly equal to the 0.1 and 0.05 levels it targets. Figure 12 shows that the averaged exposure approach leads to biased estimates of the overall exposure effects, while the FD Tau approach leads to estimates with little bias and increased power relative to the FB, FB reduced, and averaged exposure approaches.