Three principles of data science: predictability, computability, and stability (PCS)

01/23/2019 ∙ by Bin Yu, et al.

We propose the predictability, computability, and stability (PCS) framework to extract reproducible knowledge from data that can guide scientific hypothesis generation and experimental design. The PCS framework builds on key ideas in machine learning, using predictability as a reality check and evaluating computational considerations in data collection, data storage, and algorithm design. It augments PC with an overarching stability principle, which largely expands traditional statistical uncertainty considerations. In particular, stability assesses how results vary with respect to choices (or perturbations) made across the data science life cycle, including problem formulation, pre-processing, modeling (data and algorithm perturbations), and exploratory data analysis (EDA) before and after modeling. Furthermore, we develop PCS inference to investigate the stability of data results and identify when models are consistent with relatively simple phenomena. We compare PCS inference with existing methods, such as selective inference, in high-dimensional sparse linear model simulations to demonstrate that our methods consistently outperform others in terms of ROC curves over a wide range of simulation settings. Finally, we propose PCS documentation based on R Markdown, iPython, or Jupyter Notebooks, with publicly available, reproducible code and narratives to back up human choices made throughout an analysis. The PCS workflow and documentation are demonstrated in a genomics case study available on Zenodo.


1 Introduction

Data science is a field of evidence seeking that combines data with prior information to generate new knowledge. The data science life cycle begins with a domain question or problem and proceeds through collecting, managing, processing/cleaning, exploring, modeling, and interpreting data to guide new actions (Fig. 1). Given the trans-disciplinary nature of this process, data science requires human involvement from those who understand both the data domain and the tools used to collect, process, and model data. These individuals make implicit and explicit judgment calls throughout the data science life cycle. In order to transparently communicate and evaluate empirical evidence and human judgment calls, data science requires an enriched technical language. Three core principles: predictability, computability, and stability provide the foundation for such a data-driven language, and serve as minimum requirements for extracting reliable and reproducible knowledge from data.

These core principles have been widely acknowledged in various areas of data science. Predictability plays a central role in science through the notion of Popperian falsifiability (1). It has been adopted by the statistical and machine learning communities as a goal in its own right and, more generally, to evaluate the reliability of a model or data result (2). While statistics has always included prediction as a topic, machine learning emphasized its importance. This was in large part powered by computational advances that made it possible to compare models through cross-validation (CV), a technique pioneered by statisticians Stone and Allen (3, 4). CV effectively generates pseudo-replicates from a single data set. This incorporates another important scientific principle, replication, and requires an understanding of the data generating process to justify the validity of CV pseudo-replicates.

The role of computation extends beyond prediction, setting limitations on how data can be collected, stored, and analyzed. Computability has played an integral role in computer science, tracing back to Alan Turing’s seminal work on the computability of sequences (5). Analyses of computational complexity have since been used to evaluate the tractability of statistical machine learning algorithms (6). Kolmogorov built on Turing’s work through the notion of Kolmogorov complexity, which describes the minimum computational resources required to represent an object (7, 8). Since Turing machine based notions of computability are not computable in practice, in this paper we treat computability as an issue of efficiency and scalability of optimization algorithms.

Stability is a common sense principle and a prerequisite for knowledge, as articulated thousands of years ago by Plato in Meno: “That is why knowledge is prized higher than correct opinion, and knowledge differs from correct opinion in being tied down.” In the context of the data science life cycle, stability has been advocated in (9) as a minimum requirement for reproducibility and interpretability at the modeling stage. (We differentiate between the notions of stability and robustness as used in statistics. The latter has traditionally been used to investigate the performance of statistical methods across a range of distributions, while the former captures a much broader range of perturbations throughout the data science life cycle, as discussed in this paper. At a high level, however, stability is about robustness.) To investigate the reproducibility of data results, modeling stage stability unifies numerous previous works, including the jackknife, subsampling, bootstrap sampling, robust statistics, semi-parametric statistics, and Bayesian sensitivity analysis (see (9) and references therein), which have been enabled in practice through computational advances. Econometric models with partial identification can also be viewed as a form of model stability consideration (see the book (10) and references therein). More broadly, stability is related to the notion of scientific reproducibility, which Fisher and Popper argued is a necessary condition for establishing scientific results (1, 11). While reproducibility of results across laboratories has long been an important consideration in science, computational reproducibility has come to play an important role in data science as well. For example, (12) discuss reproducible research in the context of computational harmonic analysis. More broadly, (13) advocates for “preproducibility” to explicitly detail all steps along the data science life cycle and ensure sufficient information for quality control.

Here, we unify and expand on these ideas through the PCS framework. At the conceptual level, the PCS workflow uses predictability as a reality check, computability to ensure that an analysis is tractable, and stability to test the reproducibility of a data result against perturbations throughout the entire data science life cycle. It provides a general framework for evaluating and documenting analyses from problem formulation and data collection to conclusions, including all human judgment calls. The limited acknowledgement of human judgment calls in the data science life cycle, and therefore the limited transparency in reporting human decisions, has blurred the evidence for many data science analyses and resulted in more false discoveries than might otherwise occur. Our proposed PCS framework intends to mitigate these problems by clearly organizing and communicating human judgment calls so that they may be more easily deliberated and evaluated by others. It serves as a beneficial platform for reproducible data science research, providing a narrative (for example, to justify that a bootstrap scheme is “appropriate”) in terms of both language and technical content in the form of R Markdown or iPython (Jupyter) Notebooks.

The rest of the paper is organized as follows. In section 2 we introduce the PCS framework, which integrates the principles of predictability, computability, and stability across the entire data science life cycle. In section 4 we draw connections between the PCS framework and traditional statistical inference and propose a PCS inference framework for transparent instability or perturbation assessment in data science. In section 6 we propose a format for documenting the PCS workflow with narratives and code to justify the human judgment calls made during the data science life cycle. A case study of our proposed PCS workflow, based on the authors’ work studying gene regulation in Drosophila, is available on Zenodo. We conclude by discussing areas for further work, including additional vetting of the workflow and theoretical analysis of the connections between the three principles.

2 The PCS framework

Given a domain problem and data, the data science life cycle generates conclusions and/or actions (Fig. 1). The PCS workflow and documentation aim at ensuring the reliability and quality of this process through the three fundamental principles of data science. Predictability serves as a default reality check, though we note that other metrics, such as experimental validation or domain knowledge, may supplement predictability. Computability ensures that data results are tractable relative to a computing platform and available resources, including storage space, CPU/GPU time, memory, and communication bandwidth. Stability assesses whether results are robust to “appropriate” perturbations of choices made at each step throughout the data science life cycle. These considerations serve as minimum requirements for any data-driven conclusion or action. We formulate the PCS framework more precisely over the next four sections and address PCS documentation in section 6.

Figure 1: The data science life cycle

2.1 Stability assumptions initiate the data science life cycle

The ultimate goal of the data science life cycle is to generate knowledge that is useful for future actions, be it a section in a textbook, biological experiment, business decision, or governmental policy. Stability is a useful concept to address whether an alternative, “appropriate” analysis would generate similar knowledge. At the modeling stage, stability has previously been advocated in (9). In this context, stability refers to acceptable consistency of a data result relative to “reasonable” perturbations of the data or model. For example, jackknife (14, 15, 16), bootstrap (17), and cross validation (3, 4) may be considered reasonable or appropriate perturbations if the data are deemed approximately independent and identically distributed (i.i.d.) based on prior knowledge and an understanding of the data collection process. In addition to modeling stage perturbations, human judgment calls prior to modeling also impact data results. The validity of such decisions relies on implicit stability assumptions that allow data scientists to view their data as an accurate representation of the natural phenomena they originally measured.

Question or problem formulation: The data science life cycle begins with a domain problem or a question. For instance, a biologist may want to discover regulatory factors that control genes associated with a particular disease. Formulating the domain question corresponds to an implicit linguistic stability, or well-definedness, assumption that both domain experts and data scientists understand the problem in the same manner. That is, the stability of meaning for a word, phrase, or sentence across different communicators is a minimum requirement for problem formulation. Since any domain problem must be formulated in terms of a natural language, linguistic stability is implicitly assumed. We note that there are often multiple translations of a domain problem into a data science problem. Stability of data results across these translations is an important consideration.

Data collection: To answer a domain question, data scientists and domain experts collect data based on prior knowledge and available resources. When this data is used to guide future decisions, researchers implicitly assume that the data is relevant for a future time and under future conditions; in other words, that the conditions affecting data collection are stable, at least relative to some aspects of the data. For instance, a biologist could measure DNA binding of regulatory factors across the genome. To identify generalizable regulatory associations, experimental protocols must be comparable across laboratories. These stability considerations are closely related to external validity in medical research, which concerns the similarities between subjects in a study and the subjects that researchers hope to generalize results to. We will discuss this idea more in section 2.2.

Data cleaning and preprocessing: Statistical machine learning models or algorithms help data scientists answer domain questions. In order to use these tools, a domain question must first be translated into a question regarding the outputs of a model or an algorithm. This translation step includes cleaning and/or processing raw data into a suitable format, be it a categorical demographic feature or continuous measurements of biomarker concentrations. For instance, when multiple labs are used to produce data, the biologist must decide how to normalize individual measurements (for example, see (18)). When data scientists clean or preprocess data, they are implicitly assuming that the raw and processed data contain consistent information about the underlying natural phenomena. In other words, they assume that the knowledge derived from a data result is stable with respect to their processing choices. If such an assumption cannot be justified, they should use multiple reasonable processing methods and interpret only the data results that are stable across these methods. Others have advocated evaluating results across alternatively processed datasets under the name “multiverse analysis” (19). Although the stability principle was developed independently of this work, it naturally leads to a multiverse-style analysis.
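A multiverse-style stability check can be sketched in a few lines: run the same analysis through two reasonable preprocessing pipelines and interpret only the results that agree. The measurements, the two normalizations, and the "top sample" target below are hypothetical illustrations, not data or choices from the case study.

```python
# Hedged sketch: compare a data result across two reasonable normalizations
# and flag it as stable only if both pipelines agree.

def zscore(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

def minmax(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def top_sample(xs):
    """Index of the largest value -- a stand-in for a data result of interest."""
    return max(range(len(xs)), key=lambda i: xs[i])

raw = [0.2, 1.5, 0.9, 3.1, 0.4]                    # hypothetical measurements
results = {top_sample(norm(raw)) for norm in (zscore, minmax)}
stable = (len(results) == 1)                       # result agrees across pipelines
```

A target that changes with the normalization choice would leave `results` with more than one element, signaling that the conclusion should not be reported without further justification.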

Exploratory data analysis (EDA): Both before and after modeling, data scientists often engage in exploratory data analysis (EDA) to identify interesting relationships in the data and interpret data results. When visualizations or summaries are used to communicate these analyses, it is implicitly assumed that the relationships or data results are stable with respect to any decisions made by the data scientist. For example, if the biologist believes that clusters in a heatmap represent biologically meaningful groups, they should expect to observe the same clusters with any appropriate choice of distance metric and with respect to appropriate data perturbations and appropriate clustering methods.
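As a toy illustration of this kind of check (with made-up points and centroids, not data from the paper), one can verify that group assignments agree under two different distance metrics:

```python
# Hypothetical EDA stability check: do assignments of points to fixed
# centroids agree under Euclidean and Manhattan distance?

def assign(points, centroids, dist):
    """Label each point with the index of its nearest centroid."""
    return tuple(min(range(len(centroids)),
                     key=lambda k: dist(p, centroids[k])) for p in points)

def euclidean(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def manhattan(a, b):
    return sum(abs(u - v) for u, v in zip(a, b))

points = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (4.9, 5.3)]  # two clear groups
centroids = [(0.0, 0.0), (5.0, 5.0)]

labelings = {assign(points, centroids, d) for d in (euclidean, manhattan)}
stable = (len(labelings) == 1)     # same grouping under both metrics
```

With well-separated groups both metrics produce the same labeling; disagreement would suggest the "clusters" are an artifact of the metric choice rather than a biologically meaningful structure.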

In each of these steps, stability assumptions allow data to be translated into reliable results. When these assumptions do not hold, results or conclusions apply only to the specific data from which they were drawn, with the specific data cleaning and EDA methods used, and there is little justification for using them to guide future actions. This makes it essential to ensure enough stability to guard against unnecessary, costly future actions and false discoveries, particularly in the domains of science, business, and public policy, where data results are often used to guide large scale actions, and in medicine, where people’s lives are at stake.

2.2 Predictability as reality check

After data collection, cleaning, preprocessing, and EDA, models or algorithms (whose choice could itself correspond to different translations of a domain problem) are frequently used to identify more complex relationships in data. Many essential components of the modeling stage rely on the language of mathematics, both in technical papers and in code on computers. A seemingly obvious but often ignored question is why conclusions presented in the language of mathematics depict a reality that exists independently in nature, and to what extent we should trust mathematical conclusions to impact this external reality. (The PCS documentation in section 6 helps users assess the reliability of this connection.) This concern has been articulated by others, including David Freedman: “If the assumptions of a model are not derived from theory, and if predictions are not tested against reality, then deductions from the model must be quite shaky (20),” and more recently Leo Breiman, who championed the essential role of prediction in developing realistic models that yield sound scientific conclusions (2).

2.2.1 Formulating prediction

We describe a general framework for prediction with data D = (x, y), where x ∈ X represents input features and y ∈ Y the prediction target. Prediction targets may be observed responses (e.g. supervised learning) or extracted from data (e.g. unsupervised learning). Predictive accuracy provides a simple, quantitative metric to evaluate how well a model represents relationships in D. It is well-defined relative to a prediction function, testing data, and an evaluation function. We detail each of these elements below.

Prediction function: The prediction function

h : X → Y     (1)

represents relationships between the observed features and the prediction target. For instance, in the case of supervised learning h may be a linear predictor or a decision tree. In this setting, y is typically an observed response, such as a class label. In the case of unsupervised learning, h could map from input features to cluster centroids.

To compare multiple prediction functions, we consider

{h(λ) : λ ∈ Λ},     (2)

where Λ denotes a collection of models/algorithms. For example, Λ may define different tuning parameters in lasso (21) or random forests (22). For algorithms with a randomized component, such as k-means or stochastic gradient descent, Λ can represent repeated runs. More broadly, Λ may describe different architectures for deep neural networks or a set of competing algorithms such as linear models, random forests, and neural networks. We discuss model perturbations in more detail in section 2.4.3.
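A collection of prediction functions indexed by a tuning parameter can be sketched as follows. The one-dimensional ridge-style estimator, the grid Λ, and the toy data are illustrative assumptions, not the paper's method:

```python
# Sketch: a family {h_lambda : lambda in Lambda} of prediction functions,
# here 1-D least squares through the origin with a ridge-style penalty.

def fit_ridge_1d(xs, ys, lam):
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    beta = sxy / (sxx + lam)          # closed-form penalized slope
    return lambda x, b=beta: b * x

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]             # hypothetical data, roughly y = 2x
Lambda = [0.0, 1.0, 10.0]             # candidate tuning parameters
models = {lam: fit_ridge_1d(xs, ys, lam) for lam in Lambda}
```

Larger penalties shrink the fitted slope toward zero; comparing data results across such a collection is exactly what the indexing in equation (2) formalizes.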

Testing (held-out) data: We distinguish between training data that are used to fit a collection of prediction functions, and testing data that are held out to evaluate the accuracy of fitted prediction functions. Testing data are typically assumed to be generated by the same process as the training data. Internal testing data, which are collected at the same time and under the same conditions as the training data, clearly satisfy this assumption. To evaluate how a model performs in new settings, we also consider external testing data gathered under different conditions from the training data. Prediction accuracy on external testing data directly addresses questions of external validity, which describe how well a result will generalize to future observations. We note that domain knowledge from humans who are involved in data generation and analysis is essential to assess the appropriateness of different prediction settings.

Prediction evaluation metric: The prediction evaluation metric

ℓ : Y × Y → [0, ∞)     (3)

quantifies the accuracy of a prediction function h by measuring the similarity between h(x) and y in the testing data. We adopt the convention that ℓ(h(x), y) = 0 implies perfect prediction accuracy, while increasing values imply worse predictive accuracy. When the goal of an analysis is prediction, testing data should only be used in reporting the accuracy of a prediction function h. When the goal of an analysis extends beyond prediction, testing data may be used to filter models from equation (2) based on their predictive accuracy.

Despite its quantitative nature, prediction always requires human input to formulate, including the choice of prediction and evaluation functions, the preferred structure of a model/algorithm, and what it means to achieve predictability. For example, a biologist studying gene regulation may believe that the simple rules learned by decision trees offer an appealing representation of interactions that exhibit thresholding behavior (23). If the biologist is interested in a particular cell-type or developmental stage, she might evaluate prediction accuracy on internal test data measuring only these environments. If her responses are binary with a large proportion of responses in one class, she may choose an evaluation function that handles the class imbalance.
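The two evaluation functions below, a plain misclassification rate and a class-balanced variant (both standard choices, sketched here on made-up labels), illustrate how this judgment call matters: they rank the same predictor very differently under class imbalance.

```python
# Two evaluation metrics following the convention that 0 = perfect accuracy.

def zero_one(yhat, y):
    """Overall misclassification rate."""
    return sum(a != b for a, b in zip(yhat, y)) / len(y)

def balanced_error(yhat, y):
    """Average of per-class error rates, so the rare class counts equally."""
    errs = []
    for c in set(y):
        idx = [i for i, b in enumerate(y) if b == c]
        errs.append(sum(yhat[i] != y[i] for i in idx) / len(idx))
    return sum(errs) / len(errs)

y    = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # imbalanced hypothetical labels
yhat = [0] * 10                          # always predict the majority class
```

Here the majority-class predictor looks good under `zero_one` (error 0.2) but is correctly penalized by `balanced_error` (error 0.5), since it never identifies the rare class.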

All of these decisions and justifications should be documented by researchers and assessed by independent reviewers (see the documentation section) so that others can evaluate the strength of the conclusions based on transparent evidence. The accompanying PCS documentation provides a detailed example of our predictive formulation in practice.

2.2.2 Cross validation

As alluded to earlier, cross-validation (CV) has become a powerful workhorse for selecting regularization parameters by estimating the prediction error through multiple pseudo held-out data sets within a given data set (3, 4). CV works more reliably as a regularization parameter selection tool than as a prediction error estimate, which can incur high variability due to the often positive dependence between the estimated prediction errors in the summation of the CV error.

CV divides the data into K blocks of observations, trains a model on all but one block, and evaluates the prediction error on each held-out block. That is, CV evaluates whether a model accurately predicts the responses of pseudo-replicates. Just as peer reviewers make judgment calls on whether a lab’s experimental conditions are suitable to replicate scientific results, data scientists must determine whether a removed block represents a justifiable pseudo-replicate of the data, which requires information from the data collection process and domain knowledge.
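A minimal K-fold CV sketch makes the block structure explicit. The interleaved fold assignment, the through-the-origin least squares fit, and the exactly linear toy data are all illustrative assumptions:

```python
# Minimal K-fold cross-validation: hold out each block in turn,
# fit on the remaining blocks, and average the held-out prediction error.

def kfold_cv_error(xs, ys, fit, loss, K):
    n = len(xs)
    folds = [list(range(i, n, K)) for i in range(K)]   # interleaved blocks
    errs = []
    for held in folds:
        train = [i for i in range(n) if i not in held]
        h = fit([xs[i] for i in train], [ys[i] for i in train])
        errs.append(sum(loss(h(xs[i]), ys[i]) for i in held) / len(held))
    return sum(errs) / len(errs)                       # CV error estimate

def fit_slope(xs, ys):
    """Least squares through the origin."""
    beta = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x, b=beta: b * x

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ys = [2.0 * x for x in xs]                             # exactly linear toy data
cv_err = kfold_cv_error(xs, ys, fit_slope,
                        lambda a, b: (a - b) ** 2, K=4)
```

Whether these interleaved blocks are justifiable pseudo-replicates is itself a judgment call; for time-ordered or spatially correlated observations, for example, they would not be.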

2.3 Computability

In a broad sense, computability is the gatekeeper of data science. If data cannot be generated, stored, and managed, and analyses cannot be computed efficiently and scalably, there is no data science. Modern science relies heavily on information technology as part of the data science life cycle. Each step, from raw data collection and cleaning to model building and evaluation, relies on computing technology and falls under computability in this broad sense. In a narrow sense, computability refers to the computational feasibility of algorithms or model building.

Here we use the narrow-sense computability that is closely associated with the rise of machine learning over the last three decades. Just as available scientific instruments and technologies determine what processes can be effectively measured, computing resources and technologies determine the types of analyses that can be carried out. Moreover, computational constraints can serve as a device for regularization. For example, stochastic gradient descent has been an effective algorithm for a huge class of machine learning problems (24). Both the stochasticity and early stopping of a stochastic gradient algorithm play the role of implicit regularization.

Increases in computing power also provide an unprecedented opportunity to enhance analytical insights into complex natural phenomena. In particular, we can now store and process massive datasets and use these data to simulate large scale processes about nature and people. These simulations make the data science life cycle more transparent for peers and users to review, aiding in the objectivity of science.

Computational considerations and algorithmic analyses have long been an important part of machine learning. These analyses consider the number of computing operations and required storage space in terms of the number of observations n, the number of features p, and tuning (hyper) parameters. When the computational cost of addressing a domain problem or question exceeds available computational resources, a result is not computable. For instance, the biologist interested in regulatory interactions may want to model interaction effects in a supervised learning setting. However, there are O(p^s) possible order-s interactions among p regulatory factors. For even a moderate number of factors, exhaustively searching for high-order interactions is not computable. In such settings, data scientists must restrict modeling decisions to draw conclusions. Thus it is important to document why certain restrictions were deemed appropriate and the impact they may have on conclusions (see section 6).
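The combinatorial blow-up is easy to quantify with a small arithmetic illustration (p and s here are generic values, not numbers from the case study):

```python
import math

# The number of candidate order-s interactions among p factors is
# p choose s, which quickly exceeds any computational budget.
def n_interactions(p, s):
    return math.comb(p, s)

counts = {s: n_interactions(50, s) for s in (2, 3, 4)}
# Even a moderate p = 50 yields hundreds of thousands of order-4 candidates.
```

An exhaustive search that evaluates each candidate therefore becomes infeasible well before p reaches genome-scale feature counts, which is what forces the modeling restrictions discussed above.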

2.4 Stability after data collection

Computational advances have fueled our ability to analyze the stability of data results in practice. At the modeling stage, stability measures how a data result changes as the data and/or model are perturbed. Stability extends the concept of uncertainty in statistics, which is a measure of instability relative to other data that could be generated from the same distribution. Statistical uncertainty assessments implicitly assume stability in the form of a distribution that generated the data. This assumption highlights the importance of other data sets that could be observed under similar conditions (e.g. by another person in the lab or another lab at a later time).

The concept of a true distribution in traditional statistics is a construct. When randomization is explicitly carried out, the true distribution construct is physical. Otherwise, it is a mental construct whose validity must be justified based on domain knowledge and an understanding of the data generating process. Statistical inference procedures or uncertainty assessments use distributions to draw conclusions about the real world. The relevance of such conclusions is determined squarely by the empirical support for the true distribution, especially when the construct is not physical. In data science and statistical problems, practitioners often do not make much of an attempt to justify or provide support for this mental construct. At the same time, they take the uncertainty conclusions very seriously. This flawed practice is likely related to the high rate of false discoveries in science (25, 26). It is a major impediment to true progress of science and to data-driven knowledge extraction in general.

While the stability principle encapsulates uncertainty quantification (when the true distribution construct is well supported), it is intended to cover a much broader range of perturbations, such as pre-processing, EDA, randomized algorithms, data perturbation, and choice of models/algorithms. A complete consideration of stability across the entire data science life cycle is necessary to ensure the quality and reliability of data results. For example, the biologist studying gene regulation must choose both how to normalize raw data and what algorithm(s) she will use. When there is no principled approach to make these decisions, the knowledge data scientists can extract from analyses is limited to conclusions that are stable across reasonable choices (27, 28). This ensures that another scientist studying the same data will come to similar conclusions, despite slight variation in their independent choices.

2.4.1 Formulating stability at the modeling stage

Stability at the modeling stage is defined with respect to a target of interest, a “reasonable” or “appropriate” perturbation to the data and/or algorithm/model, and a stability metric to measure the change in target that results from perturbation. We describe each of these in detail below.

Stability target: The stability target

T(D, λ)     (4)

corresponds to the data result of interest. It depends on input data D and a specific model/algorithm λ used to analyze the data. For simplicity, we will sometimes suppress the dependence on D and λ in our notation. As an example, T(D, λ) can represent responses predicted by h(λ) when the goal of an analysis is prediction. Other examples of T(D, λ) include features selected by lasso with penalty parameter λ or saliency maps derived from a convolutional neural network with architecture λ.

Data and model perturbations: To evaluate the stability of a data result, we measure the change in target T that results from a perturbation to the input data or learning algorithm. More precisely, we define a collection of data perturbations 𝒟 and model/algorithm perturbations Λ and compute the stability target distribution

{T(D, λ) : D ∈ 𝒟, λ ∈ Λ}.     (5)

The appropriateness of a particular perturbation is an important consideration that should be clearly communicated by data scientists. For example, if observations are approximately i.i.d., bootstrap sampling may be used as a reasonable data perturbation. When different prediction functions are deemed equally appropriate based on domain knowledge, each may be considered as a model perturbation. The case for a “reasonable” perturbation, such as evidence establishing an approximate i.i.d. data generating process, should be documented in a narrative in the publication and in the PCS documentation. We discuss these concerns in greater detail in the next two sections.

Stability evaluation metric: The stability evaluation metric summarizes the stability target distribution in expression (5). For instance, if T(D, λ) indicates features selected by a model λ trained on data D, we may report the proportion of data perturbations D ∈ 𝒟 that recover each feature for each model λ. When a stability analysis reveals that the target is unstable (relative to a threshold meaningful in a domain context), it is advisable to search for another target of interest that achieves the desired level of stability. This creates the possibility of “data-snooping” or overfitting and hence should be viewed as part of the iterative process between data analysis and knowledge generation described by (29). Before defining a new target, it may be necessary to collect new data to avoid overfitting.
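A small sketch of such a metric follows. The data are hypothetical, and the "top inner-product feature" target together with leave-one-out perturbations are illustrative stand-ins for T, the data D, and the perturbation set:

```python
# Sketch of a stability evaluation metric: how often is each feature
# selected as the target across leave-one-out data perturbations?

def top_feature(rows, ys):
    """Index of the column with the largest absolute inner product
    with the response (a crude correlation proxy)."""
    p = len(rows[0])
    score = lambda j: abs(sum(r[j] * y for r, y in zip(rows, ys)))
    return max(range(p), key=score)

rows = [(1.0, 0.2), (2.0, -0.1), (3.0, 0.3), (4.0, 0.0)]  # hypothetical features
ys = [1.1, 2.0, 2.9, 4.2]                                  # hypothetical response

picks = []
for i in range(len(rows)):            # jackknife: drop one observation at a time
    picks.append(top_feature(rows[:i] + rows[i + 1:], ys[:i] + ys[i + 1:]))

stability = {j: picks.count(j) / len(picks) for j in set(picks)}
```

A selection proportion of 1.0 for a feature indicates it is recovered under every perturbation; features selected only sporadically would not be interpreted under the stability principle.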

2.4.2 Data perturbation

The goal of data perturbation under the stability principle is to mimic a pipeline that could have been used to produce final input data but was not. This includes human decisions, such as preprocessing and data cleaning, as well as data generating mechanisms. When we focus on the change in target under possible realizations of the data from a well-supported probabilistic model, we arrive at well-justified sampling variability considerations in statistics. Hence data perturbation under the stability principle includes, but is much broader than, the concept of sampling variability. It formally recognizes many other important steps and considerations in the data science life cycle beyond sampling variability. Furthermore, it provides a framework to build confidence measures for estimates of T when a probabilistic model is not well justified and hence sampling interpretations are not applicable.

Data perturbation can also be employed to reduce variability in the estimated target, which corresponds to a data result of interest. Random Forests incorporate subsampling data perturbations (of both the data units and predictors) to produce predictions with better generalization error (22). Moreover, data perturbation covers perturbations from synthetic data. For example, generative adversarial networks (GANs) use adversarial examples to re-train deep neural networks to produce predictions that are more robust to such adversarial data points (30). Bayesian models based on conjugate priors lead to marginal distributions that can be derived by adding observations to the original data; such Bayesian models can thus be viewed as a form of synthetic data perturbation. Empirically supported generative models, including PDE models, can also be used to produce “good” data points or synthetic data that encourage stability of data results with respect to mechanistic rules based on domain knowledge (see section 2.5 on generative models).

2.4.3 Algorithm or model perturbation

The goal of algorithm or model perturbation is to understand how alternative analyses of the same data affect the target estimate. A classical example of model perturbation is from robust statistics, where one searches for a robust estimator of the mean of a location family by considering alternative models with heavier tails than the Gaussian model. Another example of model perturbation is sensitivity analysis in Bayesian modeling (31, 32). Many of the model conditions used in causal inference are in fact stability concepts that assume away confounding factors by asserting that different conditional distributions are the same (33, 34).

Modern algorithms often have a random component, such as random projections or random initial values in gradient descent and stochastic gradient descent. These random components provide natural model perturbations that can be used to assess the variability or instability of the target estimate. In addition to the random components of a single algorithm, multiple models/algorithms can be used to evaluate stability of the target. This is useful when there are many appropriate choices of model/algorithm and no established criteria or domain knowledge to select among them. The stability principle calls for interpreting only the targets of interest that are stable across these choices of algorithms or models (27).

As with data perturbations, model perturbations can help reduce variability or instability in the target. For instance, (35) selects lasso coefficients that are stable across different regularization parameters. Dropout in neural networks is a form of algorithm perturbation that leverages stability to reduce overfitting (36). Our previous work (28) stabilizes Random Forests to interpret the decision rules in an ensemble of decision trees, which are perturbed using random feature selection.
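The idea of algorithm perturbation can be sketched in a few lines. The example below re-fits a random forest under different random seeds (an algorithm perturbation) and keeps only the features whose importance ranking is stable across all runs; the data, the number of seeds, and the "top 3" rule are illustrative assumptions, not prescriptions from the paper.

```python
# Sketch: assessing stability of a target estimate (here, feature-importance
# rankings) under algorithm perturbations -- different random seeds of a
# random forest. Data and thresholds are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           random_state=0)

# Algorithm perturbation: re-run the same learner with different seeds.
importances = np.array([
    RandomForestClassifier(n_estimators=100, random_state=seed)
    .fit(X, y).feature_importances_
    for seed in range(5)
])

# Interpret only targets stable across perturbations: here, a feature is
# "stable" if it ranks in the top 3 for every perturbed run.
top3 = np.argsort(-importances, axis=1)[:, :3]
stable = [f for f in range(X.shape[1])
          if all(f in run for run in top3)]
```

The same pattern applies to any learner with a random component: vary the seed (or initialization), recompute the target, and report only what persists.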

2.5 Dual roles of generative models in PCS

Generative models include both probabilistic models and partial differential equations (PDEs) with initial or boundary conditions (if the boundary conditions come from observed data, then such PDEs become stochastic as well). These models play dual roles with respect to PCS. On one hand, they can concisely summarize past data and prior knowledge. On the other hand, they can be used as a form of data perturbation or augmentation. The dual roles of generative models are natural ingredients of the iterative nature of the information extraction process from data, or statistical investigation, as George Box eloquently wrote a few decades ago (29).

When a generative model is used as a summary of past data, a common target of interest is the model’s parameters, which may be used for prediction or to advance domain understanding. A generative model with known parameters corresponds, in principle, to an unlimited supply of data, though practical computational constraints keep any realized sample finite. Generative models with unknown parameters can be used to motivate surrogate loss functions for optimization through maximum likelihood and Bayesian modeling methods. In such situations, the generative model’s mechanistic meanings should not be used to draw scientific conclusions, because these models are simply useful starting points for algorithms whose outputs need to be subjected to empirical validation.

When generative models approximate the data generating process (a human judgment call), they can be used as a form of data perturbation or augmentation, serving the purpose of domain-inspired regularization. That is, synthetically generated data enter as prior information either in the structural form of a probabilistic model (e.g. a parametric class or hierarchical Bayesian model) or a PDE (e.g. expressing physical laws). The amount of model-generated synthetic data to combine with the observed data reflects our degree of belief in the models. Using synthetic data for domain-inspired regularization allows the same algorithmic and computing platforms to be applied to the combined data. This provides a modular approach to analysis, where generative models and algorithms can be handled by different teams of people. This is also reminiscent of AdaBoost and its variants, which use the current data and model to modify the data used in the next iteration without changing the base learner (37).
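The mechanics of domain-inspired regularization via synthetic data can be sketched as follows. The linear "generative model" below is a hypothetical stand-in for domain knowledge, and the amount of synthetic data is an explicit knob encoding belief in that model; the same fitting algorithm runs unchanged on the combined data.

```python
# Sketch of domain-inspired regularization: draws from an assumed generative
# model are appended to the observed data before fitting. All names and the
# generative model itself are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_obs = rng.normal(size=(50, 3))
y_obs = X_obs @ np.array([1.0, -2.0, 0.0]) + rng.normal(scale=0.5, size=50)

# Assumed generative model (prior domain knowledge): y = x1 - 2*x2.
def generate_synthetic(n):
    X = rng.normal(size=(n, 3))
    return X, X @ np.array([1.0, -2.0, 0.0])

n_synth = 25  # more synthetic points = stronger pull toward the model
X_syn, y_syn = generate_synthetic(n_synth)

# Same algorithm, run on the combined data -- a modular form of regularization.
model = LinearRegression().fit(np.vstack([X_obs, X_syn]),
                               np.concatenate([y_obs, y_syn]))
```

Because the generative model only contributes rows of data, it can be developed and validated by a separate team from the one running the learning algorithm, as described above.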

3 Connections in the PCS framework

Although we have discussed the components of the PCS framework individually, they share important connections. Computational considerations can limit the predictive models/algorithms that are tractable, particularly for large, high-dimensional datasets. These computability issues are often addressed in practice through scalable optimization methods such as gradient descent (GD) or stochastic gradient descent (SGD). Evaluating predictability on held-out data is a form of stability analysis in which the training/test sample split represents a data perturbation. Other perturbations used to assess stability require multiple runs of similar analyses. Parallel computation is well suited to modeling-stage PCS perturbations. Stability analyses beyond the modeling stage require a streamlined computational platform, which is future work.

4 PCS inference through perturbation analysis

When data results are used to guide future decisions or actions, it is important to assess the quality of the target estimate. For instance, suppose a model predicts that an investment will produce a particular return over one year. Intuitively, this prediction suggests that “similar” investments have returned that amount on average. Whether or not a particular investment will realize a return close to the prediction depends on how tightly or widely the returns for “similar” investments were spread around it. In other words, the variability or instability of a prediction conveys important information about how much one should trust it.

In traditional statistics, confidence measures describe the uncertainty or instability of an estimate due to sample variability under a well-justified probabilistic model. In the PCS framework, we define quality or trustworthiness measures based on how perturbations affect the target estimate. We would ideally like to consider all forms of perturbation throughout the data science life cycle, including different translations of a domain problem into data science problems. As a starting point, we focus on a basic form of PCS inference that generalizes traditional statistical inference. The proposed PCS inference includes a wide range of “appropriate” data and algorithm/model perturbations to encompass the broad range of settings encountered in modern practice.

4.1 PCS perturbation intervals

The reliability of PCS quality measures rests squarely on the appropriateness of each perturbation. Consequently, perturbation choices should be seriously deliberated, clearly communicated, and evaluated by objective reviewers, as alluded to earlier. To obtain quality measures for a target estimate, which we call PCS perturbation intervals, we propose the following steps. (A domain problem can be translated into multiple data science problems with different target estimates; for simplicity, we discuss only one translation here. The PCS perturbation intervals cover different problem translations and are clearly extendable to include perturbations in the pre-processing step.)

  1. Problem formulation: Translate the domain question into a data science problem that specifies how the question can be addressed with available data. Define a prediction target, “appropriate” data and/or model perturbations, prediction function(s), a training/test split, a prediction evaluation metric, a stability metric, and a stability target, which corresponds to a comparable quantity of interest as the data and model/algorithm vary. Document why these choices are appropriate in the context of the domain question.

  2. Prediction screening: For a pre-specified threshold τ, screen out models that do not fit the data, keeping only the prediction functions h whose error under the prediction evaluation metric ℓ satisfies

    ℓ(h(x), y) < τ.     (6)

    Examples of an appropriate threshold include domain-accepted baselines, the accuracy of the top-performing models, or accuracy suitably similar to that of the most accurate model. If the goal of an analysis is prediction, the test data should be held out until the final prediction accuracy is reported; in this setting, equation (6) can be evaluated on the training data using a surrogate sample-splitting approach such as cross-validation. If the goal of an analysis extends beyond prediction, equation (6) may be evaluated on held-out test data.

  3. Target value perturbation distributions: For each model that survives the screening of step 2, compute the stability target under each data perturbation. This results in a joint distribution of the target over the data and model perturbations, as in equation (5).

  4. Perturbation interval (or region) reporting: Summarize the target value perturbation distribution using the stability metric. For instance, this summary could be the 10th and 90th percentiles of the target estimates across the perturbations, or a visualization of the entire target value perturbation distribution.

When only one prediction or estimation method is used, the probabilistic model is well supported by empirical evidence (past or current), and the data perturbation methods are bootstrap or subsampling, the PCS stability intervals specialize to traditional confidence intervals based on bootstrap or subsampling.
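The four steps above can be sketched end-to-end for lasso feature selection. In this sketch the data perturbations are bootstrap resamples, the model perturbations are a grid of penalties, and models are screened by held-out prediction error before the surviving target values are summarized by 10th/90th percentiles. The data, grid sizes, screening threshold (within 20% of the best test error), and choice of target (a single coefficient) are all illustrative assumptions, not prescriptions from the paper.

```python
# Minimal sketch of PCS perturbation intervals for one lasso coefficient.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
beta = np.zeros(20); beta[:3] = 2.0            # 3 active features (assumed)
y = X @ beta + rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

lambdas = np.logspace(-2, 0, 10)               # model perturbations
records = []                                    # (test error, target estimate)
for lam in lambdas:
    for b in range(20):                         # data perturbations (bootstrap)
        idx = rng.integers(0, len(X_tr), len(X_tr))
        fit = Lasso(alpha=lam).fit(X_tr[idx], y_tr[idx])
        err = np.mean((fit.predict(X_te) - y_te) ** 2)
        records.append((err, fit.coef_[0]))    # target: coefficient of feature 0

# Step 2: keep fits whose error is within 20% of the best (screening).
errs = np.array([r[0] for r in records])
kept = [t for e, t in records if e <= 1.2 * errs.min()]

# Step 4: perturbation interval for the target.
interval = np.percentile(kept, [10, 90])
```

When the perturbations are restricted to the bootstrap and a single model, this interval reduces to an ordinary bootstrap confidence interval, matching the specialization noted above.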

4.2 PCS hypothesis testing

Hypothesis testing from traditional statistics assumes empirical validity of probabilistic data generation models and is commonly used in decision making for science and business alike. (Under certain conditions, Freedman (38) showed that some tests can be approximated by permutation tests when data are not generated from a probabilistic model, but such results are not broadly available.) The heart of Fisherian testing (39) lies in calculating the p-value, which represents the probability, under a null hypothesis or distribution, of an event more extreme than the one observed in the data. Smaller p-values correspond to stronger evidence against the null hypothesis, or the scientific theory embedded in it. For example, we may want to determine whether a particular gene is differentially expressed between breast cancer patients and a control group. Given random samples from each population, we could address this question in the classical hypothesis testing framework using a t-test. The resulting p-value describes the probability of seeing a difference in means more extreme than the one observed if the gene were not differentially expressed.
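The two-sample t-test just described can be sketched in a few lines; the expression values below are synthetic and for illustration only.

```python
# Sketch: classical two-sample t-test for differential expression of one gene.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cases = rng.normal(loc=5.5, scale=1.0, size=40)     # breast cancer patients
controls = rng.normal(loc=5.0, scale=1.0, size=40)  # control group

# p-value: probability of a difference in means at least this extreme
# if the gene were not differentially expressed (the null hypothesis).
t_stat, p_value = stats.ttest_ind(cases, controls)
```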

While hypothesis testing is valid philosophically, many issues that arise in practice are often left unaddressed. For instance, when randomization is not carried out explicitly, justification for a particular null distribution comes from domain knowledge of the data generating mechanism. Limited consideration of why data should follow a particular distribution results in settings where the null distribution is far from the observed data. In such situations, there are rarely enough data to reliably calculate the extremely small p-values now common in fields such as genomics, especially when multiple hypotheses (e.g. thousands of genes) are considered. When results lie so far out on the tail of the null distribution, there is no empirical evidence as to why the tail should follow a particular parametric distribution. Moreover, hypothesis testing as practiced today often relies on analytical approximations or Monte Carlo methods, where issues arise for such small probability estimates. In fact, there is a specialized area of importance sampling devoted to simulating small probabilities, but hypothesis testing practice does not seem to have taken advantage of it.

PCS hypothesis testing builds on the perturbation interval formalism to address the cognitively misleading nature of small p-values. It views the null hypothesis as a useful construct for defining perturbations that represent a constrained data generating process. In this setting, the goal is to generate data that are plausible but where some structure (e.g. linearity) is known to hold. This includes probabilistic models when they are well founded. However, PCS testing also includes other data and/or algorithm perturbations (e.g. simulated data, and/or alternative prediction functions) when probabilistic models are not appropriate. Of course, whether a perturbation scheme is appropriate is a human judgment call that should be clearly communicated in PCS documentation and debated by researchers. Much like scientists deliberate over appropriate controls in an experiment, data scientists should debate the reasonable or appropriate perturbations in a PCS analysis.

4.2.1 Formalizing PCS hypothesis testing

Formally, we consider settings with observable input features, a prediction target, and a collection of candidate prediction functions. We consider a null hypothesis that translates the domain problem into a relatively simple structure on the data and corresponds to data perturbations that enforce this structure. As an example, (40) use a maximum entropy based approach to generate data that share primary features with observed single neuron data but are otherwise random. In the accompanying PCS documentation, we form a set of perturbations based on sample splitting in a binary classification problem to identify predictive rules that are enriched among active observations. Below we outline the steps for PCS hypothesis testing.

  1. Problem formulation: Specify how the domain question can be addressed with available data by translating it into a hypothesis testing problem, with the null hypothesis corresponding to a relatively simple structure on the data. (A null hypothesis may also correspond to model/algorithm perturbations; we focus on data perturbations here for simplicity.) Define a prediction target, “appropriate” data and/or model/algorithm perturbations, prediction function(s), a training/test split, a prediction evaluation metric, a stability metric, and a stability target, which corresponds to a comparable quantity of interest as the data and model vary. Define a set of data perturbations corresponding to the null hypothesis. Document reasons and arguments regarding why these choices are appropriate in the context of the domain question.

  2. Prediction screening: For a pre-specified threshold τ, screen out models with low prediction accuracy, keeping only the prediction functions h whose error under the prediction evaluation metric ℓ satisfies

    ℓ(h(x), y) < τ.     (7)

    Examples of an appropriate threshold include domain-accepted baselines, baselines relative to prediction accuracy on the null perturbation data, the accuracy of the top-performing models, or accuracy suitably similar to that of the most accurate model. We note that prediction screening serves as an overall goodness-of-fit check on the models.

  3. Target value perturbation distributions: For each model that survives the screening of step 2, compute the stability target under each data perturbation, both for the observed-data perturbations and for the null perturbations. This results in two joint distributions of the target: one corresponding to the observed data and one to the null hypothesis.

  4. Null hypothesis screening: Compare the target value distributions for the observed data and under the null hypothesis. The observed data and null hypothesis are consistent if these distributions are not sufficiently distinct relative to a threshold. We note that this step depends on how the two distributions are compared and how the difference threshold is set. These choices depend on the domain context and how the problem has been translated. They should be transparently communicated by the researcher in the PCS documentation. For an example, see the accompanying case study.

  5. Perturbation interval (or region) reporting: For each of the results that survives step 4, summarize the target value perturbation distribution using the stability metric.

We note that the stability target may be a traditional test statistic. In this setting, PCS hypothesis testing first adds a prediction-based check to ensure that the model realistically describes the data and then replaces probabilistic assumptions with perturbation analyses.
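Steps 3 and 4 can be sketched on a toy problem. Below, the stability target is a correlation, the observed-data perturbation is the bootstrap, the null perturbation is a permutation of responses (enforcing "no association"), and the comparison rule is non-overlap of central ranges; all of these are illustrative choices of the kind a PCS documentation would need to justify.

```python
# Sketch of PCS hypothesis testing steps 3-4: compare the target distribution
# under observed-data perturbations with its distribution under null
# perturbations. Target, perturbations, and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)              # observed data

def target(xs, ys):                             # stability target: correlation
    return np.corrcoef(xs, ys)[0, 1]

obs, null = [], []
for _ in range(200):
    idx = rng.integers(0, 100, 100)             # bootstrap (observed data)
    obs.append(target(x[idx], y[idx]))
    null.append(target(x, rng.permutation(y)))  # null perturbation

# Step 4: the data are inconsistent with the null if the two target
# distributions are clearly separated (one possible comparison rule).
obs_lo = np.percentile(obs, 5)
null_hi = np.percentile(null, 95)
inconsistent_with_null = obs_lo > null_hi
```

Replacing the correlation with a traditional test statistic recovers the setting noted above, with the perturbation analysis standing in for probabilistic assumptions.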

4.3 Simulation studies of PCS inference in the linear model setting

In this section, we consider the proposed PCS perturbation intervals through data-inspired simulation studies. We focus on feature selection in sparse linear models to demonstrate that PCS inference provides favorable results, in terms of ROC analysis, in a setting that has been intensively investigated by the statistics community in recent years. Despite its favorable performance in this simple setting, we note that the principal advantage of PCS inference is its generalizability to new situations faced by data scientists today. That is, PCS can be applied to any algorithm or analysis where one can define appropriate perturbations. For instance, in the accompanying PCS case study, we demonstrate the ease of applying PCS inference in the problem of selecting high-order, rule-based interactions in a high-throughput genomics problem (whose data the simulation studies below are based upon).

To evaluate feature selection in the context of linear models, we considered data (as in the case study) from genomic assays measuring the binding enrichment of unique TFs along segments of the genome (41, 42, 43). We augmented these data with second-order polynomial terms for all pairwise interactions, substantially enlarging the feature set. For a complete description of the data, see the accompanying PCS documentation. We standardized each feature and randomly selected a set of active features to generate responses

y = Xβ + ε,     (8)

where X denotes the normalized matrix of features, β_j = 1 for any active feature j and β_j = 0 otherwise, and ε represents mean-zero noise drawn from a variety of distributions. In total, we considered 6 distinct settings: 4 noise distributions (i.i.d. Gaussian, Student’s t, multivariate Gaussian with block covariance structure, and Gaussian with heteroskedastic variance) and two misspecified models: i.i.d. Gaussian noise with 12 active features removed prior to fitting the model, and i.i.d. Gaussian noise with responses generated as

y = Xβ + Σ_{(i,j)∈S} x_i x_j + ε,     (9)

where S denotes a set of randomly sampled pairs of active features and x_i denotes the i-th column of X.
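A minimal sketch of this simulation design, with small placeholder dimensions in place of the genomics features described above and the i.i.d. Gaussian noise setting:

```python
# Sketch of the sparse linear simulation: standardized features, a random
# sparse coefficient vector, and responses y = X @ beta + eps.
import numpy as np

rng = np.random.default_rng(0)
n, p, n_active = 100, 50, 10                    # placeholder dimensions
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)                  # standardize each feature

beta = np.zeros(p)
active = rng.choice(p, size=n_active, replace=False)
beta[active] = 1.0                              # beta_j = 1 for active features

y = X @ beta + rng.normal(size=n)               # i.i.d. Gaussian noise setting
```

The other settings vary only the draw of eps (heavy-tailed, block-correlated, or heteroskedastic) or add the interaction terms of equation (9).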

4.3.1 Simple PCS perturbation intervals

We evaluated selected features using the PCS perturbation intervals proposed in section 4.1. Below we outline each step for constructing such intervals in the context of linear model feature selection.

  1. Our prediction target was the simulated response vector, and our stability target was the set of features selected by the lasso when regressing the responses on the features. To evaluate prediction accuracy, we randomly sampled half of the observations as a held-out test set. The default grid of lasso penalty parameters in the R package glmnet represents a collection of model perturbations. To evaluate the stability of the target with respect to data perturbation, we repeated this procedure across bootstrap replicates.

  2. We formed a set of filtered models by keeping the penalty parameters corresponding to the most accurate models in terms of prediction error. Since the goal of our analysis was feature selection, we evaluated prediction accuracy on the held-out test data. We repeated the steps below on each half of the data and averaged the final results.

  3. For each bootstrap sample and each surviving penalty parameter, we recorded the set of features selected by the lasso.

  4. The distribution of selected features across data and model perturbations can be summarized into a range of stability intervals. Since our goal was to compare PCS with classical statistical inference, which produces a single p-value for each feature, we computed a single stability score for each feature: the proportion of fits, across bootstrap samples and surviving penalty parameters, in which that feature was selected.

We note that the stability selection proposed in (35) is similar, but without the prediction error screening.
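The per-feature stability score in step 4 can be sketched as follows. This uses scikit-learn's lasso rather than glmnet, skips the prediction screening for brevity (treating all penalties in the small grid as surviving), and uses placeholder data; all sizes are illustrative.

```python
# Sketch of per-feature stability scores: the fraction of (bootstrap, penalty)
# perturbations in which the lasso selects each feature.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = 2.0              # 3 active features (assumed)
y = X @ beta + rng.normal(size=n)

selected = []                                   # selection indicators per fit
for lam in np.logspace(-2, 0, 5):               # penalty grid (assumed to survive)
    for _ in range(10):                         # bootstrap perturbations
        idx = rng.integers(0, n, n)
        coef = Lasso(alpha=lam).fit(X[idx], y[idx]).coef_
        selected.append(coef != 0)

# Stability score per feature: fraction of perturbed fits selecting it.
stability = np.mean(selected, axis=0)
```

As noted above, this resembles stability selection (35); the PCS version differs in adding the prediction-error screen before the scores are computed.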

4.3.2 Results

We compared the above PCS stability scores with asymptotic normality results applied to features selected by lasso and selective inference (44). We note that asymptotic normality and selective inference both produce p-values for each feature, while PCS produces stability scores.

Fig. 2 shows ROC curves for feature selection averaged across 100 replicates of the above experiments. The ROC curve is a useful evaluation criterion for assessing both false positive and false negative rates when experimental resources dictate how many selected features can be evaluated in further studies, including physical experiments. In particular, ROC curves provide a balanced evaluation of each method’s ability to identify active features while limiting false discoveries. Across all settings, PCS compares favorably to the other methods. The difference is particularly pronounced in settings where the other methods fail to recover a large portion of active features (e.g. the heteroskedastic and misspecified-model settings). In such settings, stability analyses allow PCS to recover more active features while still distinguishing them from inactive features. While its performance in this setting is promising, the principal advantage of PCS is its conceptual simplicity and generalizability. That is, the PCS perturbation intervals described above can be applied in any setting where data or model/algorithm perturbations can be defined, as illustrated in the genomics case study in the accompanying PCS documentation. Traditional inference procedures, in general, cannot easily handle multiple models.

Figure 2: ROC curves for feature selection in the linear model setting at two sample sizes (top two rows and bottom two rows, respectively). Each plot corresponds to a different generative model.

5 PCS recommendation system for scientific hypothesis generation and intervention experiment design

As alluded to earlier, PCS inference can be used to rank target estimates for further studies, including follow-up experiments. In our recent works on DeepTune (27) for neuroscience applications and iterative random forests (iRF) (28) for genomics applications, we use PCS (in the modeling stage) to make recommendations for further studies. In particular, PCS suggested potential relationships between areas of the visual cortex and visual stimuli as well as 3rd and 4th order interactions among biomolecules that are candidates for regulating gene expression.

In general, causality implies predictability and stability over many experimental conditions; but not vice versa. Predictability and stability do not replace physical experiments to prove or disprove causality. However, we hope that computationally tractable analyses that demonstrate high predictability and stability make downstream experiments more efficient by suggesting hypotheses or intervention experiments that have higher yields than otherwise. This hope is supported by the fact that 80% of the 2nd order interactions for the enhancer problem found by iRF had been verified in the literature by other researchers through physical experiments.

6 PCS workflow documentation

The PCS workflow requires an accompanying Rmarkdown or iPython (Jupyter) notebook, which seamlessly integrates analyses, code, and narratives. These narratives are necessary to describe the domain problem and to support the assumptions and choices made by the data scientist regarding the computational platform, data cleaning and preprocessing, data visualization, model/algorithm, prediction metric, prediction evaluation, stability target, data and algorithm/model perturbations, stability metric, and data conclusions in the context of the domain problem. These narratives should be based on referenced prior knowledge and an understanding of the data collection process, including design principles or rationales. In particular, the narratives in the PCS documentation help bridge the two parallel universes of reality and of models/algorithms that exist in the mathematical world (Fig. 3). In addition to narratives justifying human judgment calls (possibly with data evidence), PCS documentation should include all code used to generate data results, with links to sources of data and metadata.

Figure 3: Assumptions made throughout the data science life cycle allow researchers to use models as an approximation of reality. Narratives provided in PCS documentation can help justify assumptions to connect these two worlds.

We propose the following steps in a notebook (this list is reminiscent of the list in the “data wisdom for data science” blog that one of the authors wrote at http://www.odbms.org/2015/04/data-wisdom-for-data-science/):

  1. Domain problem formulation (narrative). Clearly state the real-world question one would like to answer and describe prior work related to this question.

  2. Data collection and relevance to problem (narrative). Describe how the data were generated, including experimental design principles, and reasons why data is relevant to answer the domain question.

  3. Data storage (narrative). Describe where data is stored and how it can be accessed by others.

  4. Data cleaning and preprocessing (narrative, code, visualization). Describe the steps taken to convert raw data into the data used for analysis, and why these preprocessing steps are justified. Ask whether more than one preprocessing method should be used, and examine the impact of each on the final data results.

  5. PCS inference (narrative, code, visualization). Carry out PCS inference in the context of the domain question. Specify appropriate model and data perturbations and, if applicable, the null hypothesis and its associated perturbations. Report the results and any post-hoc analyses of them.

  6. Draw conclusions and/or make recommendations (narrative and visualization) in the context of domain problem.

This documentation of the PCS workflow gives the reader as much information as possible for making informed judgments regarding the evidence and process used to draw a data conclusion in a data science life cycle. A case study of the PCS workflow on a genomics problem in Rmarkdown is available on Zenodo.

7 Conclusion

This paper discusses the importance and connections of three principles of data science: predictability, computability, and stability. Based on these principles, we proposed the PCS framework with corresponding PCS inference procedures (perturbation intervals and PCS hypothesis testing). In the PCS framework, prediction provides a useful reality check, evaluating how well a model/algorithm reflects the physical processes or natural phenomena that generated the data. Computability concerns with respect to data collection, data storage, data cleaning, and algorithm efficiency determine the tractability of an analysis. Stability relative to data and model perturbations provides a minimum requirement for interpretability (for a precise definition of interpretability, we refer to our recent paper (45)) and for reproducibility of data-driven results (9).

We make important conceptual progress on stability by extending it to the entire data science life cycle (including problem formulation and data cleaning and EDA before and after modeling). In addition, we view data and/or model perturbations as a means to stabilize data results and evaluate their variability through PCS inference procedures, for which prediction also plays a central role. The proposed inference procedures are favorably illustrated in a feature selection problem through data-inspired sparse linear model simulation studies and in a genomics case study. To communicate the many human judgment calls in the data science life cycle, we proposed PCS documentation, which integrates narratives justifying judgment calls with reproducible codes. This documentation makes data-driven decisions as transparent as possible so that users of data results can determine their reliability.

In summary, we have offered a new conceptual and practical framework to guide the data science life cycle, but many open problems remain. Integrating stability and computability into PCS beyond the modeling stage requires new computing platforms, which are future work. The PCS inference proposals, even in the modeling stage, need to be vetted in practice well beyond the case studies in this paper and in our previous works, especially by other researchers. Based on feedback from practice, theoretical studies of PCS procedures in the modeling stage are also called for, to gain further insights under stylized models after sufficient empirical vetting. Finally, although there have been some theoretical studies of the simultaneous connections among the three principles (see (46) and references therein), much more work is necessary.

8 Acknowledgements

We would like to thank Reza Abbasi-Asl, Yuansi Chen, Adam Bloniarz, Ben Brown, Sumanta Basu, Jas Sekhon, Peter Bickel, Chris Holmes, Augie Kong, and Avi Feller for many stimulating discussions on related topics over the past many years. Partial support is gratefully acknowledged from ARO grant W911NF1710005, ONR grant N00014-16-1-2664, NSF grants DMS-1613002 and IIS 1741340, and the Center for Science of Information (CSoI), a US NSF Science and Technology Center, under grant agreement CCF-0939370.

References

  • (1) Popper KR (1959) The Logic of Scientific Discovery. (University Press).
  • (2) Breiman L (2001) Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical science 16(3):199–231.
  • (3) Stone M (1974) Cross-validatory choice and assessment of statistical predictions. Journal of the royal statistical society. Series B (Methodological) pp. 111–147.
  • (4) Allen DM (1974) The relationship between variable selection and data augmentation and a method for prediction. Technometrics 16(1):125–127.
  • (5) Turing AM (1937) On computable numbers, with an application to the entscheidungsproblem. Proceedings of the London mathematical society 2(1):230–265.
  • (6) Hartmanis J, Stearns RE (1965) On the computational complexity of algorithms. Transactions of the American Mathematical Society 117:285–306.
  • (7) Li M, Vitányi P (2008) An introduction to Kolmogorov complexity and its applications. Texts in Computer Science. (Springer, New York,) Vol. 9.
  • (8) Kolmogorov AN (1963) On tables of random numbers. Sankhyā: The Indian Journal of Statistics, Series A pp. 369–376.
  • (9) Yu B (2013) Stability. Bernoulli 19(4):1484–1500.
  • (10) Manski CF (2013) Public policy in an uncertain world: analysis and decisions. (Harvard University Press).
  • (11) Fisher RA (1937) The design of experiments. (Oliver And Boyd; Edinburgh; London).
  • (12) Donoho DL, Maleki A, Rahman IU, Shahram M, Stodden V (2009) Reproducible research in computational harmonic analysis. Computing in Science & Engineering 11(1).
  • (13) Stark P (2018) Before reproducibility must come preproducibility. Nature 557(7707):613.
  • (14) Quenouille MH, , et al. (1949) Problems in plane sampling. The Annals of Mathematical Statistics 20(3):355–375.
  • (15) Quenouille MH (1956) Notes on bias in estimation. Biometrika 43(3/4):353–360.
  • (16) Tukey J (1958) Bias and confidence in not quite large samples. Ann. Math. Statist. 29:614.
  • (17) Efron B (1992) Bootstrap methods: another look at the jackknife in Breakthroughs in statistics. (Springer), pp. 569–593.
  • (18) Bolstad BM, Irizarry RA, Åstrand M, Speed TP (2003) A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics 19(2):185–193.
  • (19) Steegen S, Tuerlinckx F, Gelman A, Vanpaemel W (2016) Increasing transparency through a multiverse analysis. Perspectives on Psychological Science 11(5):702–712.
  • (20) Freedman DA (1991) Statistical models and shoe leather. Sociological methodology pp. 291–313.
  • (21) Tibshirani R (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) pp. 267–288.
  • (22) Breiman L (2001) Random forests. Machine learning 45(1):5–32.
  • (23) Wolpert L (1969) Positional information and the spatial pattern of cellular differentiation. Journal of theoretical biology 25(1):1–47.
  • (24) Robbins H, Monro S (1951) A stochastic approximation method. The Annals of Mathematical Statistics 22(3):400–407.
  • (25) Stark PB, Saltelli A (2018) Cargo-cult statistics and scientific crisis. Significance 15(4):40–43.
  • (26) Ioannidis JP (2005) Why most published research findings are false. PLoS medicine 2(8):e124.
  • (27) Abbasi-Asla R, et al. (year?) The deeptune framework for modeling and characterizing neurons in visual cortex area v4.
  • (28) Basu S, Kumbier K, Brown JB, Yu B (2018) Iterative random forests to discover predictive and stable high-order interactions. Proceedings of the National Academy of Sciences p. 201711236.
  • (29) Box GE (1976) Science and statistics. Journal of the American Statistical Association 71(356):791–799.
  • (30) Goodfellow I, et al. (2014) Generative adversarial nets in Advances in Neural Information Processing Systems. pp. 2672–2680.
  • (31) Skene A, Shaw J, Lee T (1986) Bayesian modelling and sensitivity analysis. The Statistician pp. 281–288.
  • (32) Box GE (1980) Sampling and bayes’ inference in scientific modelling and robustness. Journal of the Royal Statistical Society. Series A (General) pp. 383–430.
  • (33) Peters J, Bühlmann P, Meinshausen N (2016) Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78(5):947–1012.
  • (34) Heinze-Deml C, Peters J, Meinshausen N (2018) Invariant causal prediction for nonlinear models. Journal of Causal Inference 6(2).
  • (35) Meinshausen N, Bühlmann P (2010) Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72(4):417–473.
  • (36) Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958.
  • (37) Freund Y, Schapire RE (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1):119–139.
  • (38) Freedman D, Lane D (1983) A nonstochastic interpretation of reported significance levels. Journal of Business & Economic Statistics 1(4):292–298.
  • (39) Fisher RA (1992) Statistical methods for research workers in Breakthroughs in statistics. (Springer), pp. 66–70.
  • (40) Elsayed GF, Cunningham JP (2017) Structure in neural population recordings: an expected byproduct of simpler phenomena? Nature Neuroscience 20(9):1310.
  • (41) MacArthur S, et al. (2009) Developmental roles of 21 Drosophila transcription factors are determined by quantitative differences in binding to an overlapping set of thousands of genomic regions. Genome Biology 10(7):R80.
  • (42) Li XY, et al. (2008) Transcription factors bind thousands of active and inactive regions in the Drosophila blastoderm. PLoS Biology 6(2):e27.
  • (43) Li XY, Harrison MM, Villalta JE, Kaplan T, Eisen MB (2014) Establishment of regions of genomic activity during the Drosophila maternal to zygotic transition. eLife 3:e03737.
  • (44) Taylor J, Tibshirani RJ (2015) Statistical learning and selective inference. Proceedings of the National Academy of Sciences 112(25):7629–7634.
  • (45) Murdoch WJ, Singh C, Kumbier K, Abbasi-Asl R, Yu B (2019) Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592.
  • (46) Chen Y, Jin C, Yu B (2018) Stability and convergence trade-off of iterative optimization algorithms. arXiv preprint arXiv:1804.01619.