The advent of next-generation sequencing technologies such as RNA sequencing (RNA-Seq) has provided a powerful tool to study the transcriptome with unprecedented throughput, scalability, and speed. In recent years, different distributions such as the two-stage Poisson, hierarchical Gaussian-Poisson, and negative binomial (NB) have been used for modeling RNA-Seq read counts [4, 2, 5, 3, 6]. Owing to its ability to capture over-dispersion, the NB distribution has attracted the most attention for modeling gene counts in RNA-Seq [2, 10, 8, 7, 9], and we therefore adopt it in this paper.
In modeling biological systems of any type, uncertainties arise in various forms. For instance, while some studies have used data-driven methodologies to deal with uncertainties [11, 12, 13], stochastic parametrization is often used to address uncertainties in computational models of brain sub-regions and brain connectivity [14, 15, 16]. Different algorithms have been introduced for robust Bayesian network identification for pathway detection in biomedical data [17, 18, 19]. In [20, 21], an experimental design framework for gene regulatory networks has been proposed, which quantifies the model uncertainty based on its induced intervention cost under a Bayesian framework. In order to systematically quantify model uncertainties and utilize prior knowledge in addition to observed data, an optimal Bayesian classifier (OBC) has been introduced in [22, 23],
which possesses minimum expected error across the uncertainty class governing the model. In classification (e.g., phenotypic classification) and regression (e.g., biomarker estimation) based on genomic data, where samples are usually small and labels may even be missing, incorporating knowledge of pathways, regulating functions, and other population statistics to construct prior distributions for optimal Bayesian classification and regression proves helpful [24, 25, 26].
Here, we follow another direction and examine the optimal way to expand the training set in order to obtain the optimal Bayesian classifier under uncertainty. When restricted to small-sample data, random sampling is not optimal for classifier design [30, 31]. In this work, given a sample set consisting of
data points, the goal is to select the class of the next data point in such a way as to minimize the expected error of the optimal Bayesian classifier. Here, with the assumption of NB distributions for the class-conditional probabilities, the challenge is the lack of a conjugate prior-posterior pair for the class-conditional distribution. To overcome this issue, we use MCMC approximation of the posteriors to estimate the classification error and choose the class that minimizes the expected error as the source of the next sample point. While this may sound conceptually similar to active learning or Bayesian experimental design paradigms [32, 33], there are vital differences. Those algorithms control the selection of potential unlabeled training points in the sample space to be labeled and used for further training. Similar to , we instead generate new sample points from a chosen known label with the direct target of reducing classification error; reducing the uncertainty in our class probability distributions is a secondary outcome.
II-A NB Model for RNA-Seq Read Counts in Different Classes
We consider the expression levels (count data) of $G$ genes, sequenced under two conditions (classes) $0$ and $1$. We write the distribution of $x_{gs}^{(c)}$, the expression level of gene $g$ in sample $s$ of class $c$, as follows:

$$x_{gs}^{(c)} \sim \mathrm{NB}(r_g, p_{gc}),$$

where $r_g$ is the dispersion parameter of the NB distribution corresponding to gene $g$, and $p_{gc}$ is the probability parameter of the NB distribution corresponding to gene $g$ and class $c$. The probability mass function (PMF) of $x \sim \mathrm{NB}(r, p)$ is expressed as

$$P(x = k) = \frac{\Gamma(k + r)}{k!\,\Gamma(r)}\, p^{k} (1 - p)^{r}, \qquad k = 0, 1, 2, \dots,$$

where $\Gamma(\cdot)$ is the gamma function.
We complete the model by placing conjugate priors on the dispersion and probability parameters, in order to model the uncertainty we have about the classes. More precisely, the prior distributions are as follows:

$$r_g \sim \mathrm{Gamma}(e_0, 1/f_0), \qquad p_{gc} \sim \mathrm{Beta}(a_0, b_0).$$
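As a concrete illustration, the hierarchical model above can be sampled in a few lines of Python; the hyperparameter values below are illustrative only, and the NB draw uses the standard gamma-Poisson mixture representation.

```python
import math
import random

random.seed(0)

def sample_nb(r, p, rng=random):
    """Draw from NB(r, p) with PMF proportional to p^k (1-p)^r, via the
    gamma-Poisson mixture: x ~ Poisson(lam), lam ~ Gamma(shape=r, scale=p/(1-p))."""
    lam = rng.gammavariate(r, p / (1.0 - p))
    L, k, u = math.exp(-lam), 0, 1.0   # Knuth's Poisson sampler
    while True:
        u *= rng.random()
        if u <= L:
            return k
        k += 1

# hierarchical model for G genes and two classes (hyperparameter values illustrative)
G = 5
e0, f0 = 2.0, 2.0   # gamma prior on each dispersion r_g (shape, rate)
a0, b0 = 2.0, 2.0   # beta prior on each probability p_gc
r = [random.gammavariate(e0, 1.0 / f0) for _ in range(G)]
p = {c: [random.betavariate(a0, b0) for _ in range(G)] for c in (0, 1)}

# one synthetic sample point (read counts for all genes) from each class
x0 = [sample_nb(r[g], p[0][g]) for g in range(G)]
x1 = [sample_nb(r[g], p[1][g]) for g in range(G)]
print(x0, x1)
```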
II-B OBC and Error-Conditioned Sampling
Under the OBC framework, we assume that the actual model belongs to an uncertainty class, $\Theta$, of feature-label distributions parameterized by $\theta = (c, \mathbf{r}, \mathbf{p}_0, \mathbf{p}_1)$, where $c$ is the proportion of class-0 sample points in the population, $\mathbf{r}$ is the vector of dispersion parameters $r_g$ for all genes, shared by both classes, $\mathbf{p}_0$ is the vector of probability parameters $p_{g0}$ for all genes in class $0$, and $\mathbf{p}_1$ is the vector of probability parameters $p_{g1}$ for all genes in class $1$. Denoting a test point (in our case, a set of read counts for all genes) by $\mathbf{x}$, in a two-class classification framework the optimal Bayesian classifier (OBC) is given by

$$\psi_{\mathrm{OBC}}(\mathbf{x}) = \begin{cases} 0 & \text{if } E_{\pi^*}[c]\, f(\mathbf{x} \mid 0) \ge E_{\pi^*}[1 - c]\, f(\mathbf{x} \mid 1), \\ 1 & \text{otherwise,} \end{cases}$$

where $f(\mathbf{x} \mid y) = E_{\pi^*}\!\left[f_{\theta}(\mathbf{x} \mid y)\right]$
is known as the effective density, and $E_{\pi^*}[\cdot]$ stands for the expectation over the posterior distribution.
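As a sketch, the effective density can be approximated by averaging the class-conditional likelihood over posterior samples of the parameters. The single-gene Python fragment below uses hand-picked stand-ins for posterior samples rather than actual MCMC output:

```python
import math

def nb_pmf(k, r, p):
    """NB(r, p) PMF: Gamma(k+r) / (k! Gamma(r)) * p^k * (1-p)^r."""
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + k * math.log(p) + r * math.log(1.0 - p))

def obc_label(x, posterior):
    """Single-gene OBC: posterior is a list of (c, r, p0, p1) samples;
    classify by comparing the Monte Carlo averages of c*f(x|0) and (1-c)*f(x|1)."""
    s0 = sum(c * nb_pmf(x, r, p0) for c, r, p0, p1 in posterior) / len(posterior)
    s1 = sum((1 - c) * nb_pmf(x, r, p1) for c, r, p0, p1 in posterior) / len(posterior)
    return 0 if s0 >= s1 else 1

# synthetic posterior samples: class 0 has a smaller p (lower counts) than class 1
posterior = [(0.5, 2.0, 0.2, 0.7), (0.5, 2.5, 0.25, 0.65), (0.5, 1.8, 0.15, 0.75)]
print(obc_label(1, posterior), obc_label(10, posterior))  # low count -> 0, high count -> 1
```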
We use the following terminology:
$\mathbf{x}$: sample point (random variable). A sample point consists of read counts for all the genes in the model.
$n_0$: number of observed sample points from class 0 among the existing (observed) sample points.
$n_1$: number of observed sample points from class 1 among the existing (observed) sample points.
$\psi_0$: OBC designed with $(n_0 + 1, n_1)$ sample points.
$\psi_1$: OBC designed with $(n_0, n_1 + 1)$ sample points.
$\varepsilon(\psi)$: classification error of classifier $\psi$.
For a discrete classifier $\psi$, the expected classification error is

$$\varepsilon(\psi) = E_{\pi^*}[c] \sum_{\mathbf{x}:\, \psi(\mathbf{x}) = 1} f(\mathbf{x} \mid 0) + E_{\pi^*}[1 - c] \sum_{\mathbf{x}:\, \psi(\mathbf{x}) = 0} f(\mathbf{x} \mid 1).$$
In our algorithm, summarized in Algorithm 1, we first compute the expected cost (expected classification error) incurred if the next sample point comes from each class, and take the next point from the class yielding the smaller value:

$$y^{*} = \arg\min_{y \in \{0, 1\}} E\!\left[\varepsilon(\psi_y)\right]. \tag{7}$$

This notation simply means that among the already observed training sample points, we have seen $n_0$ points from class 0 and $n_1$ points from class 1, and we now add a new training point once from class 0 to have $(n_0 + 1, n_1)$ points, which gives us $\psi_0$, and once from class 1 to have $(n_0, n_1 + 1)$ points, which gives us $\psi_1$.
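A minimal sketch of this selection step is given below, simplified to a single gene with a known dispersion $r$ so that the beta posterior on $p$ is available in closed form (unlike our full model, which lacks conjugacy and requires MCMC); the observed counts and hyperparameter values are illustrative:

```python
import math

def eff_density(k, r, a, b):
    """Effective density f(k | class) when the dispersion r is known and the
    posterior on p is Beta(a, b): the beta-negative-binomial PMF."""
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + math.lgamma(a + k) + math.lgamma(b + r)
                    - math.lgamma(a + b + k + r)
                    - math.lgamma(a) - math.lgamma(b) + math.lgamma(a + b))

def obc_error(r, post0, post1, c=0.5, K=120):
    """Expected error of the OBC built from Beta posteriors post0, post1:
    at each count the OBC picks the larger weighted effective density,
    so the error accumulates the smaller one (support truncated at K)."""
    return sum(min(c * eff_density(k, r, *post0),
                   (1 - c) * eff_density(k, r, *post1)) for k in range(K + 1))

def choose_next_class(r, a0, b0, x0, x1, c=0.5, K=120):
    """Pick the class whose additional training point minimizes the
    expected OBC error; x0, x1 are the observed counts in each class."""
    post = lambda xs: (a0 + sum(xs), b0 + len(xs) * r)
    costs = []
    for y, xs in ((0, x0), (1, x1)):
        a, b = post(xs)
        cost = 0.0
        for k in range(K + 1):            # expectation over the predictive
            pk = eff_density(k, r, a, b)  # distribution of the new point
            new0 = post(x0 + [k]) if y == 0 else post(x0)
            new1 = post(x1) if y == 0 else post(x1 + [k])
            cost += pk * obc_error(r, new0, new1, c, K)
        costs.append(cost)
    return 0 if costs[0] <= costs[1] else 1

x0, x1 = [2, 3, 1], [8]   # illustrative observed read counts for one gene
print("take the next sample point from class", choose_next_class(2.0, 1.0, 1.0, x0, x1))
```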
II-C Inference via Gibbs Sampling
With this choice of model and prior distributions, there is no direct way to calculate (7), but we can approximate it using an MCMC method. In the following, we present our efficient MCMC inference of the model parameters, which takes advantage of a data augmentation technique leading to closed-form parameter updates.
The negative binomial random variable $x \sim \mathrm{NB}(r, p)$ can be generated from a compound Poisson distribution as

$$x = \sum_{t=1}^{l} u_t, \qquad u_t \sim \mathrm{Log}(p), \qquad l \sim \mathrm{Poisson}\!\left(-r \ln(1 - p)\right),$$

where $\mathrm{Log}(p)$ denotes the logarithmic random variable, with the PMF

$$P(u = k) = \frac{-p^{k}}{k \ln(1 - p)}, \qquad k = 1, 2, \dots$$
As shown in , given $x$ and $r$, the conditional distribution of $l$ is a Chinese Restaurant Table (CRT) distribution, $l \sim \mathrm{CRT}(x, r)$, whose random samples can be generated as

$$l = \sum_{t=1}^{x} b_t, \qquad b_t \sim \mathrm{Bernoulli}\!\left(\frac{r}{r + t - 1}\right).$$
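For instance, a CRT draw can be generated and checked against its mean $\sum_{t=1}^{x} r / (r + t - 1)$ with a short simulation:

```python
import random

def sample_crt(x, r, rng=random):
    """Draw l ~ CRT(x, r) as a sum of x independent Bernoulli(r/(r+t-1)) variables."""
    return sum(1 for t in range(1, x + 1) if rng.random() < r / (r + t - 1))

random.seed(1)
draws = [sample_crt(20, 2.0) for _ in range(2000)]
# E[l] = sum_{t=1}^{x} r / (r + t - 1); the empirical mean should be close
expected = sum(2.0 / (2.0 + t - 1) for t in range(1, 21))
print(round(sum(draws) / len(draws), 2), round(expected, 2))
```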
Exploiting the above data augmentation technique, together with the gamma-Poisson and beta-NB conjugacies, we can derive the parameter updates in a Gibbs sampling procedure as follows:

$$l_{gs}^{(c)} \mid - \sim \mathrm{CRT}\!\left(x_{gs}^{(c)}, r_g\right),$$
$$r_g \mid - \sim \mathrm{Gamma}\!\left(e_0 + \sum_{c} \sum_{s} l_{gs}^{(c)},\; \frac{1}{f_0 - \sum_{c} n_c \ln(1 - p_{gc})}\right),$$
$$p_{gc} \mid - \sim \mathrm{Beta}\!\left(a_0 + \sum_{s} x_{gs}^{(c)},\; b_0 + n_c r_g\right),$$

where $\mid -$ denotes conditioning on all the other parameters.
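A minimal single-gene, single-class sketch of such a Gibbs procedure in Python (truth and hyperparameter values illustrative, gamma prior written as shape/rate):

```python
import math
import random

random.seed(2)

def sample_crt(x, r):
    """l ~ CRT(x, r): sum of x independent Bernoulli(r/(r+t-1)) variables."""
    return sum(1 for t in range(1, x + 1) if random.random() < r / (r + t - 1))

def sample_nb(r, p):
    """NB(r, p) draw via the gamma-Poisson mixture and Knuth's Poisson sampler."""
    lam = random.gammavariate(r, p / (1.0 - p))
    L, k, u = math.exp(-lam), 0, 1.0
    while True:
        u *= random.random()
        if u <= L:
            return k
        k += 1

# synthetic data for one gene in one class
r_true, p_true, n = 3.0, 0.4, 300
x = [sample_nb(r_true, p_true) for _ in range(n)]

# Gibbs sampler with Gamma(e0, 1/f0) prior on r and Beta(a0, b0) prior on p
e0, f0, a0, b0 = 1.0, 1.0, 1.0, 1.0
r, p = 1.0, 0.5
trace_r, trace_p = [], []
for it in range(1500):
    l = sum(sample_crt(xi, r) for xi in x)                        # CRT augmentation
    r = random.gammavariate(e0 + l, 1.0 / (f0 - n * math.log(1 - p)))
    p = random.betavariate(a0 + sum(x), b0 + n * r)
    if it >= 500:                                                 # discard burn-in
        trace_r.append(r)
        trace_p.append(p)

print(sum(trace_r) / len(trace_r), sum(trace_p) / len(trace_p))
```

With a moderate amount of data, the posterior means should land near the generating values, illustrating the closed-form updates enabled by the augmentation.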
III Simulations and Results
In order to assess the performance of the proposed sampling method, different simulations are run. In each simulation, a fixed set of values is chosen for the hyperparameters. Across simulations, the parameter values are chosen so as to cover different levels of Bayes error and separability for the two-class classification problem. Class-0 proportions above 0.5 are not considered, due to the symmetry of the problem for the two classes.
In each simulation, after fixing the hyperparameters, an initial set of sample points is generated to populate the class observations as the initial state, and a fixed set of test points with known labels is generated to evaluate classifier performance throughout the simulation. Each simulation consists of a number of repetitions; in each repetition, two scenarios are compared.
In the first scenario, each new training point is randomly generated according to the class proportions, and after the addition of each new point, a new classifier is designed with all the available training points. This continues until the target number of new training sample points is obtained.
In the second scenario, each new training point is generated from the class determined by our method in Section II, and after the addition of each new point, a new classifier is designed with all the available training points. This scenario, too, continues until the target number of new training sample points is obtained. The average performance of the classifiers at a given training-set size is calculated for both methods by averaging over the repetitions. Figure 1 compares the classification errors of the two methods for different values of the beta prior parameters. In our problem, the two degrees of freedom of the beta prior can produce sufficiently different levels of Bayes error to make the study of simultaneous changes in all hyperparameters unnecessary.
It can be seen that in all three graphs of Figure 1, the error curve for the proposed controlled sampling method lies below the curve for random sampling. This means the training set obtained by the proposed method yields an optimal Bayesian classifier with a lower average error than the same classifier trained on a randomly obtained (stratified according to the class proportions) training set, i.e., the proposed sampling method beats random sampling. It can further be noticed that as the beta prior parameters increase, the classes become less separable, yielding higher classification errors on the vertical axis. The variance of a beta distribution with parameters $a$ and $b$ is given by

$$\mathrm{Var} = \frac{ab}{(a + b)^2 (a + b + 1)}.$$

Hence, higher values of $a$ and $b$ result in more similar classes. It can also be observed in Figure 1 that, with the other parameters fixed, the efficacy of the proposed method improves as we go above the lower ranges of Bayes error. Another expected observation is that the gain in classification error appears at small sample sizes: with big training sets, enough information has already been gathered to suppress the uncertainty in the class-conditional distributions, so the performances of the two methods are expected to converge to each other and, with infinitely many training points, to the Bayes error of the problem.
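The shrinking prior variance behind this effect is easy to verify numerically:

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution: ab / ((a+b)^2 (a+b+1))."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# larger (equal) shape parameters concentrate the prior on p, drawing the two
# classes' probability parameters, and hence the classes, closer together
for a in (1, 5, 20):
    print(a, beta_var(a, a))
```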
Figure 2 makes a similar comparison for a more unbalanced case, where the proposed sampling method again beats random sampling.
Figure 3 shows a different case, in the sense that the values of the beta prior parameters are increased and set equal to each other to produce a very low Bayes error. The proposed sampling method again beats random sampling. It can also be seen that, depending on the structure of the problem and the parameter values, the gain can be relatively high compared to the other scenarios even at very low Bayes errors.
Figure 4 shows the simulation results for a more balanced class proportion. While the proposed sampling algorithm still beats random sampling, it can be seen that the gain in classification error relative to the Bayes error shrinks in this case. However, this is not a major concern, as most biological problems involve highly imbalanced data.
This study has further expanded the results of , showing how to utilize prior knowledge about the classes to form training sets for classifiers more efficiently than random sampling, for the negative binomial distributions used in modeling RNA-Seq read counts. The method is described mathematically and its performance is studied using MCMC simulations on synthetic data. While the efficiency of the proposed method appears to be positively correlated with the Bayes error, it is a function of all model parameters and hence can vary at the same level of Bayes error. Future work includes extending this sampling method to the hierarchical Gaussian-Poisson framework introduced in for modeling read-count data, and implementing the proposed sampling method on negative binomial models fit to real RNA-Seq datasets. Employing convolutional neural networks as fully data-driven architectures to merge feature extraction and classification, as in [36, 37], is another area for potential research.
-  Z. Wang, M. Gerstein and M. Snyder, “RNA-Seq: a revolutionary tool for transcriptomics”, Nature Reviews Genetics, vol. 10, no. 1, pp. 57-63, 2009.
-  A. Broumand and T. Hu, “A length bias corrected likelihood ratio test for the detection of differentially expressed pathways in RNA-Seq data”, 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, 2015, pp. 1145-1149.
-  J. M. Knight, I. Ivanov, and E. R. Dougherty, “MCMC implementation of the optimal Bayesian classifier for non-Gaussian models: model-based RNA-Seq classification”, BMC Bioinformatics, vol. 15, no. 1, 2014.
-  P. L. Auer and R. W. Doerge, “A two-stage Poisson model for testing RNA-Seq data”, Statistical Applications in Genetics and Molecular Biology, vol. 10, no. 1, pp. 1-26, 2011.
-  M. Robinson and G. Smyth, “Moderated statistical tests for assessing differences in tag abundance”, Bioinformatics, vol. 23, no. 21, pp. 2881-2887, 2007.
-  S. Anders and W. Huber, “Differential expression analysis for sequence count data”, Genome Biology, vol. 11, no. 10, R106, 2010.
-  S. Z. Dadaneh, X. Qian, and M. Zhou, “Bnp-seq: Bayesian nonparametric differential expression analysis of sequencing count data”, Journal of the American Statistical Association, in-press, 2017.
-  S. Z. Dadaneh, M. Zhou, and X. Qian, “Bayesian negative binomial regression for differential expression with confounding factors”, Bioinformatics, 2018.
-  S. Z. Dadaneh, and X. Qian, “Bayesian module identification from multiple noisy networks”, EURASIP Journal on Bioinformatics and Systems Biology, vol. 5, no. 1, 2016.
-  M. Imani and U. Braga-Neto, “Maximum-Likelihood Adaptive Filtering for Partially-Observed Boolean Dynamical Systems”, IEEE Transactions on Signal Processing, vol. 65, no. 2, pp. 359-371, 2017.
-  F. Mokhtari, W. J. Rejeski, Y. Zhu, G. Wu, S. L. Simpson, J. H. Burdette, and P. J. Laurienti, “Dynamic fMRI networks predict success in a behavioral weight loss program among older adults”, NeuroImage, Vol. 173, pp. 421-433, 2018.
-  F. Mokhtari, R. E. Mayhugh, C. E. Hugenschmidt, W. J. Rejeski, and P. J. Laurienti, “Tensor-based vs. matrix-based rank reduction in dynamic brain connectivity”, Proc. of the SPIE Medical Imaging Conference, Vol. 10574, Houston, TX, 2018.
-  F. Mokhtari, S. K. Bakhtiari, G. A Hossein-Zadeh, and H. Soltanian-Zadeh, “Discriminating between brain rest and attention states using fMRI connectivity graphs and subtree SVM”, Proc. of the SPIE Medical Imaging Conference, Vol. 8314, 8314-7, San Diego, CA, 2012.
-  M. Daneshzand, M. Faezipour and B. D. Barkana, “Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms”, Frontiers in Computational Neuroscience, vol. 11, Aug. 2017.
-  M. Daneshzand, M. Faezipour and B. D. Barkana, “Towards frequency adaptation for delayed feedback deep brain stimulations”, Neural Regeneration Research, vol. 13, no. 3, 2018.
-  M. Daneshzand, M. Faezipour, and B. D. Barkana, “Hyperbolic Modeling of Subthalamic Nucleus Cells to Investigate the Effect of Dopamine Depletion”, Computational Intelligence and Neuroscience, vol. 2017, pp. 1–9, 2017.
-  A. Yazdani, A. Yazdani, A. Samiei and E. Boerwinkle, “Generating a robust statistical causal structure over 13 cardiovascular disease risk factors using genomics data”, Journal of Biomedical Informatics, vol. 60, pp. 114 - 119, 2016.
-  A. Yazdani, A. Yazdani, A. Samiei and E. Boerwinkle, “Identification, analysis, and interpretation of a human serum metabolomics causal network in an observational study”, Journal of Biomedical Informatics, vol. 63, pp. 337-343, 2016.
-  A. Yazdani, A. Yazdani, and E. Boerwinkle, “Conceptual Aspects of Causal Networks in an Applied Context”, Journal of Data Mining in Genomics & Proteomics, vol. 7, no. 2, pp. 1-3, 2016.
-  R. Dehghannasiri, B. Yoon, and E. R. Dougherty, “Optimal experimental design for gene regulatory networks in the presence of uncertainty”, IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 12, no. 4, pp. 938-950, 2015.
-  R. Dehghannasiri, B. Yoon, and E. R. Dougherty, “Efficient experimental design for uncertainty reduction in gene regulatory networks”, BMC Bioinformatics, vol. 16, no. Suppl 13, page S2, 2015.
-  L. A. Dalton and E. R. Dougherty, “Optimal classifiers with minimum expected error within a Bayesian framework Part I: Discrete and Gaussian models”, Pattern Recognition, vol. 46, pp. 1301-1314, 2013.
-  L. A. Dalton and E. R. Dougherty, “Optimal classifiers with minimum expected error within a Bayesian framework Part II: Properties and performance analysis”, Pattern Recognition, vol. 46, pp. 1288-1300, 2013.
-  S. Boluki, M. S. Esfahani, X. Qian, and E.R. Dougherty, “Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors”, BMC Bioinformatics, vol. 18, no. 14, 2017.
-  S. Boluki, M. S. Esfahani, X. Qian and E. R. Dougherty, “Constructing Pathway-based Priors Within a Gaussian Mixture Model for Bayesian Regression and Classification”, IEEE/ACM Transactions on Computational Biology and Bioinformatics, doi 10.1109/TCBB.2017.2778715, 2017.
-  U. Braga-Neto, E. Arslan, U. Banerjee, and A. Bahadorinejad, “Bayesian Classification of Genomic Big Data”, in Signal Processing and Machine Learning for Biomedical Big Data, CRC Press, 2018.
-  S. Motiian, M. Piccirilli, D. A. Adjeroh, and G. Doretto, “Unified deep supervised domain adaptation and generalization”, The IEEE International Conference on Computer Vision (ICCV), Vol. 2, page 3, 2017.
-  S. Motiian, Q. Jones, S. M. Iranmanesh, and G. Doretto, “Few-shot adversarial domain adaptation”, Advances in Neural Information Processing Systems, pp. 6670-6680, 2017.
-  S. M. Iranmanesh, A. Dabouei, H. Kazemi, and N. M. Nasrabadi, “Deep cross polarimetric thermal-to-visible face recognition”, International Conference on Biometrics (ICB), 2018.
-  A. Broumand, M. S. Esfahani, B. Yoon, E. R. Dougherty, “Discrete optimal Bayesian classification with error-conditioned sequential sampling”, Pattern Recognition, vol. 48, Issue 11, pp. 3766-3782, 2015.
-  A. Zollanvari, J. Hua, E. R. Dougherty, “Analytical study of performance of linear discriminant analysis in stochastic settings”, Pattern Recognition, vol.46, Issue 11, pp. 3017-3029, 2013.
-  S. Boluki, X. Qian and E. R. Dougherty, “Experimental Design via Generalized Mean Objective Cost of Uncertainty”, arXiv preprint arXiv:1805.01143, 2018.
-  A. Talapatra, S. Boluki, T. Duong, X. Qian, E. R. Dougherty, and R. Arróyave, “Efficient Experiment Design for Materials Discovery with Bayesian Model Averaging”, arXiv preprint arXiv:1803.05460, 2018.
-  N. L. Johnson, A. W. Kemp, and S. Kotz, “Univariate discrete distributions”, John Wiley & Sons, vol. 444, 2005.
-  M. Zhou and L. Carin, “Negative binomial process count and mixture modeling”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 2, pp. 307-320, 2015.
-  S. Soleymani, A. Dabouei, H. Kazemi, J. Dawson, and N.M. Nasrabadi, “Multi-Level Feature Abstraction from Convolutional Neural Networks for Multimodal Biometric Identification”, 24th International Conference on Pattern Recognition (ICPR), IEEE, 2018.
-  S. Soleymani, A. Torfi, J. Dawson, and N.M. Nasrabadi, “Generalized Bilinear Deep Convolutional Neural Networks for Multimodal Biometric Identification”, IEEE International Conference on Image Processing (ICIP), IEEE, 2018.