Recent technological advancements in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Broadly categorized into three types (i.e., sequences, images, and signals), these data are huge in volume and complex in nature. Mining such enormous amounts of data for pattern recognition is a big challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures, known as deep learning (DL), have been successfully applied to solve many complex pattern recognition problems. Highlighting the role of DL in recognizing patterns in biological data, this article presents a comprehensive survey consisting of: applications of DL to biological sequence, image, and signal data; an overview of open access sources of these data; a description of open source DL tools applicable to these data; and a comparison of these tools from qualitative and quantitative perspectives. At the end, it outlines some open research challenges in mining biological data and puts forward a number of possible future perspectives.
Understanding pathologies, their early diagnosis, and finding cures have driven life sciences research in the last two centuries [Coleman_biology_1977]. This accelerated the development of cutting-edge tools and technologies that allow scientists to study biological systems holistically as well as to dig down, with unprecedented detail, to the molecular level of living organisms [Magner_history_2002, Brenner_history_2012]. Increasing technological sophistication has presented scientists with novel tools for DNA sequencing [shendure_next-generation_2008], gene expression [metzker_sequencing_2010], bioimaging [Vadivambal_bioimaging_2016], neuroimaging [poldrack_progress_2015], and brain-machine interfaces [Lebedev-bmi-2017].
These innovative approaches to studying living organisms produce huge amounts of data [quackenbush_extracting_2007] and create a situation often referred to as a ‘Data Deluge’ [mattmann_computing_2013]. This biological big data can be characterized as hierarchical (i.e., data coming from different levels of a biological system – from molecules to cells to tissues to systems), heterogeneous (i.e., data acquired by different acquisition methods – from genetics to physiology to pathology to imaging), dynamic (i.e., data changing as a function of time), and complex (i.e., data describing nonlinear biological processes) [li_big_2014]. These intrinsic characteristics of biological big data pose an enormous challenge to data scientists, who must identify patterns and analyze them to infer meaningful conclusions from these data [marx_biology_2013]. This has triggered the development of rational, reliable, reusable, rigorous, and robust software tools [li_big_2014] using machine learning (ML) based methods to facilitate the recognition, classification, and prediction of patterns in biological big data [tarca_machine_2007].
The conventional ML techniques can be broadly categorized into two large sets – supervised and unsupervised. Methods pertaining to the supervised learning paradigm classify objects in a pool using a set of known annotations, alternatively called attributes or features, i.e., having learned from a few annotated data samples, the remaining data are classified using those annotations. Instead, the techniques in the unsupervised learning paradigm form groups (or clusters) among the objects in a pool by identifying their similarity, i.e., without any prior annotations, the data are grouped by similarity and the resulting groupings can then be used to characterize the data. Apart from these, there is a special category called reinforcement learning, which allows a system to learn from the experiences it gains through interacting with its environment; it is out of the scope of this work.
Some of the popular supervised methods include: ANN and its variants, Support Vector Machines and other linear classifiers, Bayesian Statistics, k-Nearest Neighbors, Hidden Markov Models, and Decision Trees. On the other hand, a number of popular unsupervised methods are: Autoencoders, Expectation-Maximization, Information Bottleneck, Self-Organizing Maps, Association Rules, Hierarchical Clustering, k-Means, Fuzzy Clustering, and Density-based Clustering. Interested readers may refer to [cheng_nn_review_1994, jain_ann_review_1996, kotsiantis_ml_review_2006] for brief introductory reviews of many of the techniques mentioned above.
The literature abounds with reports of successful applications of the above mentioned popular ML methods and their respective variants to biological data coming from various sources. For the sake of simplicity, in this review the biological data sources have been categorized into a few broad application domains, e.g., Omics (covering data from genetics and [gen/transcript/epigen/prote/metabol]omics [horgan_omic_2011]), Bioimaging (covering data from [sub-]cellular images acquired by diverse imaging techniques), Medical Imaging (covering data from [medical/clinical/health] imaging, mainly through diagnostic imaging techniques), and [Brain/Body]-Machine Interfaces or BMI (covering mostly electrical signals generated by the brain and the muscles and acquired using appropriate sensors). Each of these application domains (i.e., omics [libbrecht_ml_2015], bioimaging [kan_machine_2017], BMI [vidaurre_machine-learning-based_2010, mala_feature_2014, mahmud_processing_2016], medical imaging [lemm_introduction_2011, erickson_machine_2017]) has witnessed major contributions from diverse ML methods (the ones mentioned above) and their variants.
In recent years, Deep Learning (DL), Reinforcement Learning (RL), and deep RL methods have been expected to reshape the future of ML (see the schematic diagram in Fig. 1) [mnih_human-level_2015]. Despite their notable popularity and applicability to diverse disciplines, there exists no comprehensive review in the literature focusing on biological data. To fill this gap, this review provides: a brief overview of DL, RL, and deep RL concepts; the state-of-the-art applications of these techniques to biological data; and a comprehensive list of existing open source libraries and frameworks which can be utilized to harness the power of these techniques. Towards the end, some open issues are identified and some speculative future perspectives are outlined. Finally, working lists of available open access sources of datasets / databases from various application domains are supplied.
As for the organization of the rest of the article, section 1 provides a conceptual overview of the DL technique and introduces the reader to the underlying theory; section 4 presents brief descriptions of the popular open-source tools, software, and frameworks that implement DL techniques; section 6 provides a comparative study of the various tools’ performances in implementing the defined DL architectures; section 7 presents some of the open issues and hints at future perspectives; and finally, the article is concluded in section 8.
1 Overview of Deep Learning
In DL the data representations are learned with increasing abstraction levels, i.e., at each level more abstract representations are learned by defining them in terms of less abstract representations at lower levels [bengio_learning_2009]. Through this hierarchical learning process a system can learn complex representations directly from the raw data [Goodfellow-et-al-2016].
Though many DL architectures have been proposed in the literature for various applications, the ones discussed below are the most often used in mining biological data [mahmud_DL_app_2017].
1.1 Autoencoder
The Autoencoder is a data driven unsupervised NN model mainly used for dimensionality reduction (see Fig. 2a). It projects high dimensional inputs to lower dimensional outputs. In other words, an input $x$ (from the input units) is mapped to a hidden representation $h$ (from the hidden units) using a nonlinear activation function $f$ with $h = f(Wx + b)$, where $W$ is the encoding weight matrix and $b$ is the bias vector. The projected $h$ is then reconstructed through remapping to an approximated value $\hat{x}$ as $\hat{x} = g(W'h + b')$, where $W'$ is the decoding weight matrix and $b'$ is the bias vector. Usually Autoencoders use an equal number of input and output units with fewer hidden units. However, to represent complex relationships among data, more hidden units with a sparsity criterion have also been used. In both cases, the (non)linear transformations incorporated in the hidden units mainly perform the compression [baldi_autoencoder_2012]. In the learning process, the goal of an autoencoder is to minimize the reconstruction error between $x$ and $\hat{x}$ for a given set of parameters $\theta = \{W, b, W', b'\}$. Thus, the objective function is given by:
$$ J(\theta) = \frac{1}{N}\sum_{i=1}^{N} \lVert x_i - \hat{x}_i \rVert^2 + \beta \sum_{j} \mathrm{KL}(\rho \,\|\, \hat{\rho}_j) $$
where $\rho$ is the sparsity parameter, $\mathrm{KL}(\rho \| \hat{\rho}_j)$ is the relative entropy measuring how the $j$th hidden unit’s average activation ($\hat{\rho}_j$) diverges from the target average activation ($\rho$), and $\frac{1}{N}\sum_{i=1}^{N}\lVert x_i - \hat{x}_i \rVert^2$ is the reconstruction error for a training set with $N$ samples.
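The objective above can be illustrated numerically. The following is a minimal NumPy sketch (toy dimensions, random untrained weights, no optimization loop; all variable names are ours) of the encode–decode pass and the sparsity-penalized reconstruction loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy autoencoder: 8 input units compressed to 3 hidden units.
n_in, n_hid, n_samples = 8, 3, 100
W  = rng.normal(scale=0.1, size=(n_hid, n_in))  # encoding weight matrix
b  = np.zeros(n_hid)                            # encoding bias vector
Wp = rng.normal(scale=0.1, size=(n_in, n_hid))  # decoding weight matrix
bp = np.zeros(n_in)                             # decoding bias vector

X = rng.uniform(size=(n_samples, n_in))         # toy training set

def autoencoder_loss(X, rho=0.05, beta=0.1):
    H = sigmoid(X @ W.T + b)        # encode: h = f(Wx + b)
    X_hat = sigmoid(H @ Wp.T + bp)  # decode: x_hat = g(W'h + b')
    recon = np.mean(np.sum((X - X_hat) ** 2, axis=1))  # reconstruction error
    rho_hat = H.mean(axis=0)        # average activation of each hidden unit
    # KL divergence between target (rho) and observed (rho_hat) activations
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl

loss = autoencoder_loss(X)
```

In a training loop this loss would be minimized over $\theta$ by gradient descent; here only a single forward evaluation is shown.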
The Deep Autoencoder (DA) architecture, also known as a ‘Stacked Autoencoder’ (see Fig. 2b), is obtained by stacking several Autoencoders such that the activation values of one autoencoder’s hidden units become the input to the next autoencoder, and backpropagation with a gradient based algorithm is used to obtain the optimal weights. However, this suffers from the poor local minima problem, which is overcome by pretraining the network with greedy layer-wise learning [shen_dl_mia_2017]. Despite the pretraining stage and the vanishing error problem [bengio_vanishing-gradient_1994], DA is a popular data compressing DL architecture with quite a few variants, e.g., the Denoising Autoencoder [vincent_denoising_autoencoders_2010], Sparse Autoencoder [ranzato_sparse_autoencoders_2006], Variational Autoencoder [kingma_variational_auto_encoding_2014], and Contractive Autoencoder [rifai_contractingauto_encoders_2011].
1.2 Deep Belief Network
The Restricted Boltzmann Machine (RBM, Fig. 3a), also considered a nonlinear feature detector, is an undirected probabilistic generative model capable of representing specific probability distributions [Salakhutdinov_dbm_2009]. It contains one visible layer $v$ and one hidden layer $h$ with symmetric connections ($W$) between them, and $b$ and $c$ as bias values for the visible and hidden layers, respectively. Generally, the visible layer contains units for the input observations, and the hidden layer contains units to model their relation with the observations. The symmetric connections make RBMs usable as Autoencoders, and the joint probability of $(v, h)$ is given by [zhou_dl_mia_book_2017]:
$$ p(v, h) = \frac{1}{Z} e^{-E(v, h)} $$
where $Z = \sum_{v, h} e^{-E(v, h)}$ is a partition function derived from all possible $(v, h)$ pairs, and $E(v, h)$ is the energy function which, for the generic case of binary visible and hidden units, is described as:
$$ E(v, h) = -b^{\top}v - c^{\top}h - v^{\top}W h $$
Here the conditional probability distributions of visible given hidden units and hidden given visible units are computed as $p(v_i = 1 \mid h) = \sigma\big(b_i + \sum_j w_{ij} h_j\big)$ and $p(h_j = 1 \mid v) = \sigma\big(c_j + \sum_i w_{ij} v_i\big)$ respectively, with $\sigma(x) = 1/(1 + e^{-x})$ as the logistic sigmoid function. Now, as the hidden units of an RBM are unobservable, the objective function can be defined using the marginal distribution of the visible units only as:
$$ p(v) = \frac{1}{Z} \sum_{h} e^{-E(v, h)} $$
Training of the RBM parameters is done by maximizing the log-likelihood of the observations through the contrastive divergence algorithm. The Gibbs sampling technique [geman_gibbs_sampling_1984] is used to approximate the expected values of the distribution and compute the gradient [zhou_dl_mia_book_2017].
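The contrastive divergence update can be sketched as follows. This is an illustrative single CD-1 step for binary units with toy dimensions (all names and sizes are ours, not from the original text): the positive-phase statistics come from the data, the negative phase from one step of Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid = 6, 4
W = rng.normal(scale=0.1, size=(n_vis, n_hid))  # symmetric connections
b = np.zeros(n_vis)                             # visible bias
c = np.zeros(n_hid)                             # hidden bias

def cd1_step(v0, lr=0.1):
    """One contrastive divergence (CD-1) update for binary units."""
    ph0 = sigmoid(v0 @ W + c)                                # p(h=1 | v0)
    h0 = (rng.uniform(size=ph0.shape) < ph0).astype(float)   # Gibbs sample h
    pv1 = sigmoid(h0 @ W.T + b)                              # p(v=1 | h0)
    v1 = (rng.uniform(size=pv1.shape) < pv1).astype(float)   # Gibbs sample v
    ph1 = sigmoid(v1 @ W + c)                                # p(h=1 | v1)
    # Positive minus negative phase statistics approximate the log-likelihood gradient.
    dW = v0[:, None] * ph0[None, :] - v1[:, None] * ph1[None, :]
    return W + lr * dW, b + lr * (v0 - v1), c + lr * (ph0 - ph1)

v = (rng.uniform(size=n_vis) < 0.5).astype(float)  # one binary observation
W, b, c = cd1_step(v)
```

In practice this update is repeated over mini-batches of observations until the log-likelihood stops improving.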
Stacking multiple RBMs as learning elements leads to a popular DL architecture known as the Deep Belief Network (DBN, Fig. 3b), where one RBM’s latent layer is connected to the subsequent RBM’s visible layer. Therefore, a DBN contains one visible layer $v$ and $L$ hidden layers $h^1, \ldots, h^L$. With downward directed connections, except for the top two layers which remain undirected, a DBN is a hybrid model combining an undirected graphical model and a directed generative model [hinton_dbn_2006]. The joint distribution of the visible units ($v$) and the hidden layers ($h^1, \ldots, h^L$) is given by:
$$ p(v, h^1, \ldots, h^L) = \left(\prod_{l=0}^{L-2} p(h^{l} \mid h^{l+1})\right) p(h^{L-1}, h^{L}) $$
with $h^0 = v$, and $p(h^{L-1}, h^{L})$ denotes the joint distribution between layers $L-1$ and $L$. Individual layers are pretrained in a layerwise greedy fashion using unsupervised learning, followed by generative fine-tuning depending on the required outcome of the model [ravi_dl_2017]. Nonetheless, the training process remains computationally expensive.
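The greedy layerwise stacking can be illustrated schematically. In this sketch (toy sizes, random weights; the CD training of each RBM is deliberately omitted) each RBM's hidden activations become the visible input of the next RBM:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes for a toy DBN: visible (10) -> h1 (6) -> h2 (3).
sizes = [10, 6, 3]
X = (rng.uniform(size=(50, sizes[0])) < 0.5).astype(float)  # binary toy data

layers, inp = [], X
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W = rng.normal(scale=0.1, size=(n_in, n_out))  # this RBM's weights
    c = np.zeros(n_out)                            # this RBM's hidden bias
    # (Here each RBM would be trained with contrastive divergence on `inp`;
    #  the training step is omitted in this sketch.)
    layers.append((W, c))
    inp = sigmoid(inp @ W + c)  # hidden activations feed the next RBM's visible layer

top = inp  # top-layer representation of the 50 samples
```

After such pretraining, the whole stack would be fine-tuned (e.g., generatively, or discriminatively with backpropagation) for the task at hand.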
1.3 Convolutional Neural Network
The CNN (Fig. 4) is a multilayer NN model, comprised of convolutional layers (often interfused with subsampling layers) followed by fully connected layers, that mimics the locally sensitive, orientation selective neurons of the visual system [lecun_cnn_1998]. The CNN is designed to handle multidimensional, locally correlated inputs, e.g., the 2D structure of an image or a speech signal, and to avoid overfitting by sharing weights, which also makes it easier to train, with fewer parameters, compared to a fully connected network with an equal number of hidden units. These properties have facilitated the wide usage of CNNs in problems with large numbers of hidden units and training parameters.
A convolutional layer recognizes local patterns in terms of features from the input feature maps through learnable filter kernels $k_{ij}^{l}$. These convolution filters (CF) mainly represent connection weights between feature maps $i$ and $j$ belonging to layers $l-1$ and $l$, respectively. The activations of a convolutional layer’s units ($x_j^{l}$) are computed by convolving the activations of a vicinal subset of units from the preceding layer’s feature maps ($x_i^{l-1}$) with the filter kernels ($k_{ij}^{l}$) as:
$$ x_j^{l} = f\left(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^{l} + b_j^{l}\right) $$
where $M_j$ is the set of feature maps in layer $l-1$ connected to map $j$, $*$ is the convolution operator, $b_j^{l}$ is the bias at layer $l$, and $f(\cdot)$ is the nonlinear activation function [bouvrie_notes_2006].
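The layer equation above can be sketched directly in NumPy. The following illustrative implementation (toy map sizes; 'valid' convolution with kernel flipping; all names are ours) sums the convolutions over the connected input maps, adds the bias, and applies the nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d_valid(x, k):
    """2D 'valid' convolution of one feature map x with one kernel k."""
    H, Wd = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, Wd - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # kernel is flipped, as in true convolution
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k[::-1, ::-1])
    return out

def conv_layer(feature_maps, kernels, biases, f=np.tanh):
    """x_j = f(sum_i x_i * k_ij + b_j) over input maps i and output maps j."""
    n_in, n_out = len(feature_maps), len(biases)
    out = []
    for j in range(n_out):
        s = sum(conv2d_valid(feature_maps[i], kernels[i][j]) for i in range(n_in))
        out.append(f(s + biases[j]))
    return out

x = [rng.normal(size=(8, 8)) for _ in range(2)]                        # 2 input maps
k = [[rng.normal(size=(3, 3)) for _ in range(4)] for _ in range(2)]    # 2x4 kernels
b = np.zeros(4)                                                        # one bias per output map
maps = conv_layer(x, k, b)  # 4 output maps of size 6x6
```

Production implementations replace the explicit loops with vectorized or FFT-based convolution, but the arithmetic is the same.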
A suitable pooling layer reduces the feature maps at every pooling step between subsequent layers. These interspersed pooling layers thus reduce computational time and make the CNN invariant to small spatial shifts. Also, because of the feature reduction at every pooling step, only a limited number of features are eventually supplied to the fully connected network for classification.
When a convolutional layer $l$ is followed by a pooling layer $l+1$, a block of units in a feature map from layer $l$ is connected to a single unit of a feature map in layer $l+1$. The associated sensitivity map $\delta_j^{l}$ for layer $l$ is calculated as:
$$ \delta_j^{l} = f'\!\left(u_j^{l}\right) \circ \mathrm{up}\!\left(\delta_j^{l+1}\right) $$
where $f'(u_j^{l})$ is the activation function’s derivative evaluated using the preactivations $u_j^{l}$ of the convolutional layer, $\circ$ denotes elementwise multiplication, and $\mathrm{up}(\cdot)$ is the upsampling operation.
When a current layer $l$ (pooling or convolutional) is followed by a convolutional layer $l+1$, it is important to identify the correspondence in the feature maps between the two layers, i.e., the mapping between the current layer’s patches and the next layer’s units in the feature maps. The gradients for the kernel weights are calculated using the chain rule and, as the weights are shared across multiple connections, they are given by:
$$ \frac{\partial E}{\partial k_{ij}^{l}} = \sum_{u, v} \left(\delta_j^{l}\right)_{uv} \left(p_i^{l-1}\right)_{uv} $$
where $(p_i^{l-1})_{uv}$ is the patch in the $i$th feature map ($x_i^{l-1}$) which is elementwise multiplied by the kernel ($k_{ij}^{l}$) during convolution to compute the element at $(u, v)$ in the output convolution feature map $x_j^{l}$ [zhou_dl_mia_book_2017].
Nonetheless, for very large datasets, training even this kind of network can be daunting, a problem which can be mitigated using sparsely connected networks. Some of the popular CNN configurations include: AlexNet [krizhevsky_alexnet_2012], VGGNet [simonyan_vgg_2014], and GoogLeNet [szegedy_googlenet_2015].
1.4 Recurrent Neural Network
The RNN (Fig. 5) is a NN model that detects sequences in streams of data. It computes the current state’s output ($h_t$) for a given input ($x_t$) depending on the outputs of the previous states (captured by $h_{t-1}$) [elman_finding_1990]:
$$ h_t = f\left(W_h h_{t-1} + W_x x_t\right) $$
where $f$ is a nonlinear function (e.g., tanh, ReLU [zeiler_relu_2013]), and $W_h$ and $W_x$ are shared weight matrices. In other words, the RNN learns a distribution over classes for a sequence of inputs (e.g., $x_1, x_2, \ldots, x_T$). As for the classification, generally a softmax, following a few fully connected layers, is added for mapping to the classes:
$$ p(\text{class} \mid x_1, \ldots, x_T) = \mathrm{softmax}\left(W_y h_T\right) $$
where $W_y$ is the output weight matrix, and $\theta = \{W_h, W_x, W_y\}$ is the set of parameters shared across different states.
Due to this ‘memory’-like property, the RNN has gained popularity in many fields involving streaming data (e.g., text mining, time series, genomes, etc.). However, backpropagating gradients from the output through time creates learning problems similar to those of conventional deep NNs (e.g., vanishing and exploding gradients) [lipton_critical_2015]. In recent years, the development of specialized memory units has allowed the expansion of the classical RNN into useful variants including the bidirectional RNN (BRNN) [schuster_bidirectional_rnn_1997], long short-term memory (LSTM) [hochreiter_lstm_1997], and gated recurrent units [cho_grnn_2014]. Though the RNN’s primary application remains sequential data, it is also increasingly applied to other data, e.g., images [litjens_mia_2017].
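The recurrence and classification steps described above can be sketched as a minimal NumPy example (toy dimensions, random untrained weights; the names $W_h$, $W_x$, $W_y$ follow the equations, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

n_in, n_hid, n_cls, T = 5, 8, 3, 12
Wx = rng.normal(scale=0.1, size=(n_hid, n_in))   # input-to-hidden weights (shared over time)
Wh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden-to-hidden weights (shared over time)
Wy = rng.normal(scale=0.1, size=(n_cls, n_hid))  # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def rnn_classify(xs):
    """Run h_t = tanh(Wh h_{t-1} + Wx x_t) over the sequence, then softmax over classes."""
    h = np.zeros(n_hid)
    for x_t in xs:
        h = np.tanh(Wh @ h + Wx @ x_t)
    return softmax(Wy @ h)

seq = [rng.normal(size=n_in) for _ in range(T)]  # a toy input sequence
probs = rnn_classify(seq)                        # class distribution for the sequence
```

Training would unfold this loop and backpropagate through time, which is where the vanishing/exploding gradient problems mentioned above arise.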
Table 1: Applications of DL techniques to open access biological data.

| Data type | Data source | DL technique | Application |
|---|---|---|---|
| Sequences | TCGA database | DA [danaee2016] | Cancer detection & gene identification |
| | Protein Data Bank (PDB) | DA [li_2016] | Protein structure reconstruction |
| | GWH & UCSC datasets | DBN [Lee3045382] | Prediction of splice junctions |
| | SRBCT, Prostate Tumor, and MLL GE | DBN | Gene/miRNA feature selection |
| | sbv IMPROVER | DBN [Chen2015] | Human diseases & drug development |
| | iONMF dataset | DBN [pan_rbm_2017] | RNA–protein binding motifs |
| | ENCODE | CNN [Denas2013DeepMO, Kelley2016] | Gene expression identification |
| | JASPAR database & ENCODE | CNN [DBLPZengELG16] | Predicting DNA–protein binding |
| | ENCODE DGF | CNN [citeulike13721890] | Prediction of noncoding gene variants |
| | UCSC, CGHV Data, SPIDEX database | CNN [Huang069682] | Genetic variant identification |
| | CullPDB, CB513, CASP datasets, CAMEO | CNN [DBLPWang0MX15] | Protein secondary structure prediction |
| | DREAM 5 | CNN [alipanahi_deepbind_2015] | DNA/RNA sequence prediction |
| | miRBoost | RNN [DBLPParkMCY16] | micro-RNA prediction |
| | miRNA-mRNA pairing data repository | RNN (LSTM) [DBLPLeeBPY16] | micro-RNA target prediction |
| Images | MITOS dataset | CNN [Ciresan2013] | Mitosis detection in breast cancer |
| | EM Segmentation Challenge | CNN [NIPS2012_4741], RNN [DBLPStollengaBLS15] | Segmentation of neuronal membranes in biomedical volumetric images |
| | BRATS dataset | CNN [DBLPHavaeiGLJ16] | Brain pathology segmentation |
| | ADNI MRI dataset | CNN [DBLPHosseiniAslGE16], DBN [Suk2014569, Li2014] | AD diagnosis |
| | IBSR, LPBA40 & OASIS datasets | CNN [Kleesiek2016460] | Skull stripping |
| | TBI dataset | CNN [kamnitsas_3dcnn_2017] | Brain lesion segmentation |
| | CT dataset | CNN [Fritscher2016] | Fast segmentation of 3D medical images |
| | PACS dataset | CNN [DBLPChoLSCD15] | Medical image classification |
| | LIDC-IDRI dataset | CNN [DBLPChoLSCD15] | Lung nodule malignancy classification |
| | ADNI dataset | DBN [Suk2014569, Li2014] | AD/MCI diagnosis |
| | ADHD-200 | DBN | ADHD detection |
| | MICCAI 2009 LV | DBN [ngo_dl_hrt_2017] | Heart LV segmentation |
| Signals | MAHNOB-HCI | DA [jirayucharoensak2014] | Emotion detection |
| | BCI Competition IV | DBN [lu_rbm_mi_2016], CNN [yang2015, tabar_cnn_mi_eeg_2017, sakhavi_mi_2015] | Motion action decoding |
| | DEAP dataset | DBN [li_dbn_as_2013, 7033556] | Affective state classification |
| | DEAP dataset | CNN [IAAI1715007] | Emotion classification |
| | Freiburg dataset | CNN [Mirowski20091927] | Seizure prediction |
| | Ninapro database | DBN, RNN [DBLPAtzoriCM16] | Motion action decoding |
| | MIT-BIH arrhythmia database | DBN [wu_ecg_dbn_2016, DBLPYanQWZ0W15] | Classification of ECG arrhythmias |
| | MIT-BIH, INCART, & SVDB | CNN [DBLPAtzoriCM16] | Movement decoding |
2 Deep Learning and Biological Data
Many studies have been reported in the literature which employ diverse DL architectures with related and varied parameter sets (see section 1) to analyze patterns in biological data. A summary of these studies which use open access data is reported in table 1.
A Stacked Denoising DA was employed by Danaee et al. to extract features for cancer diagnosis and classification along with the identification of the related genes from GE data [danaee2016]. A template based DA learning model was proposed by Li et al. to reconstruct protein structures [li_2016]. Lee et al. applied a DBN based unsupervised method to perform automatic prediction of splice junctions at the DNA level [Lee3045382]. Combining DBN with active learning, Ibrahim et al. devised a method to select feature groups from genes or microRNAs (miRNAs) based on expression profiles. For translational research, bimodal DBNs were used by Chen et al. to predict responses of human cells using model organisms [Chen2015]. Pan et al. applied a hybrid CNN-DBN model on RNAs for the prediction of RNA binding protein (RBP) interaction sites and motifs [pan_rbm_2017], and Alipanahi et al. used CNN to predict sequence specificities of [D/R]BPs [alipanahi_deepbind_2015]. Denas and Taylor used CNN to preprocess ChIP-seq data and created gene transcription factor activity profiles [Denas2013DeepMO]. CNN was used by Kelley et al. to predict DNA sequence accessibility [Kelley2016], by Zeng et al. to predict the binding between DNA and proteins [DBLPZengELG16], by Zhou et al. [citeulike13721890] and Huang et al. [Huang069682] to find noncoding genetic variants, and by Wang et al. to predict protein secondary structure [DBLPWang0MX15]. Park et al. used LSTM to predict miRNA precursors [DBLPParkMCY16] and Lee et al. [DBLPLeeBPY16] used it to predict miRNA precursors’ targets.
CNN was used by Ciresan et al. on breast histology images to find mitosis [Ciresan2013], and Stollenga et al. used it to segment neuronal structures in electron microscopy images (EMI) [NIPS2012_4741]. Havaei et al. used CNN to segment brain tumors from MRI [DBLPHavaeiGLJ16], and Hosseini et al. used it for the diagnosis of AD from MRI [DBLPHosseiniAslGE16]. DBM [Suk2014569] and RBM [Li2014] were used in detecting Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI) from MRI and PET scans. Again, CNN was used on MRI for skull stripping [Kleesiek2016460]. CNN’s dual pathway version was used by Kamnitsas et al. to segment lesions related to tumors, traumatic injuries, and ischemic strokes [kamnitsas_3dcnn_2017]. CNN was also used by Fritscher et al. for volume segmentation [Fritscher2016] and by Cho et al. to find anatomical structures and classify lung nodule malignancy [DBLPChoLSCD15] from CT scans. DBN was applied to MRIs to detect Attention Deficit Hyperactivity Disorder and to cardiac MRIs to segment the heart’s left ventricle [ngo_dl_hrt_2017].
Jirayucharoensak et al. used PCA to extract power spectral densities from each EEG channel, which were then corrected by covariate shift adaptation; finally, a stacked DA was used to detect emotion [jirayucharoensak2014]. DBN was applied to decode motor imagery (MoI) by classifying EEG signal frequency information [lu_rbm_mi_2016]. For a similar purpose, CNN was used covering large frequency ranges with augmented common spatial pattern features [yang2015]. In a rather different approach using DA, features based on combined selective location, time, and frequency attributes were classified [tabar_cnn_mi_eeg_2017]. Li et al. used DBN to extract low dimensional latent features and to select critical channels to classify affective states using EEG signals [li_dbn_as_2013]. Also, Jia et al. used active learning to train DBNs and generative RBMs for the same classification task. Tripathi et al. utilized DNN and CNN based models for emotion classification using the DEAP dataset [IAAI1715007]. CNN was employed to predict seizures through the classification of synchronization patterns from the Freiburg dataset [Mirowski20091927]. DBN and CNN [DBLPAtzoriCM16] were used to decode motion actions from the Ninapro database. The latter approach was also used on the MIT-BIH, INCART, & SVDB repositories [DBLPAtzoriCM16]. Moreover, ECG arrhythmias were classified using DBN [wu_ecg_dbn_2016, DBLPYanQWZ0W15] from the data supplied by the MIT-BIH arrhythmia database.
3 Open Access Biological Data Sources
The Saccharomyces Genome Database (SGD) provides complete biological information for the budding yeast Saccharomyces cerevisiae. It also provides an open source tool for searching and analyzing these data, thereby enabling the discovery of functional relationships between sequences and gene products in fungi and higher organisms. The study of genome expression, the transcriptome, and computational biology are the main functions of the SGD.
The PubChem database contains millions of compound structures and descriptive datasets of chemical molecules and their activities against biological assays. Maintained by the National Center for Biotechnology Information of the United States National Institutes of Health, it can be freely accessed through a web user interface and downloaded via FTP. It also provides software services (such as plotting and clustering). It can be used for [Gen/Prote]omics studies and drug design.
The Encyclopedia of DNA Elements (ENCODE) is a whole-genome database curated by the ENCODE Consortium, which is composed primarily of scientists funded by the US National Human Genome Research Institute. It contains human and mouse genome datasets (including metadata).
Molecular Biology Databases (MBD) at the UCI contains three molecular biology databases: i) Protein Secondary Structure [GE5], a benchmark repository that classifies the secondary structure of certain globular proteins; ii) Splice-Junction Gene Sequences [GE6], which contains primate splice-junction gene sequences (DNA) with associated imperfect domain theory; and iii) Promoter Gene Sequences [GE7], which contains E. coli promoter gene sequences (DNA) with partial domain theory. Its objectives are: i) sequencing and predicting the secondary structure of certain proteins; ii) studying primate splice-junction gene sequences; and iii) studying E. coli promoter gene sequences.
The International Nucleotide Sequence Database Collaboration [GE11], popularly known as INSDC, consolidates biological data from three major sources: i) the DNA Databank of Japan [GE11-1], ii) the European Nucleotide Archive [GE11-2], and iii) GenBank [GE11-3]. These sources provide the full spectrum of data, from raw reads, through alignments and assemblies, to functional annotation, enriched with contextual information relating to samples and experimental configurations.
Nature Scientific Data (NSD) includes omics data; taxonomy and species diversity; mathematical and modelling resources; cytometry; organism-focused resources; and health science data. These can be used for studying and modelling different aspects of genomics.
The Small Molecule Pathway Database (SMPDB) includes 618 small molecule pathways found in humans. These data are used for drug design and for understanding gene, metabolite, and protein complex concentrations.
3.1.8 TCGA database
The Cancer Genome Atlas (TCGA) contains more than two petabytes of genomic data comprising multidimensional maps of key genomic variations in 33 categories of cancer. These data are generated by the National Cancer Institute (NCI) and the National Human Genome Research Institute (NHGRI). This database is used to study genomic information for improving the prevention, diagnosis, and treatment of cancer.
The Protein Data Bank (PDB) contains more than 135 thousand structures of proteins, nucleic acids, and complex assemblies. These can be used to understand all aspects of biomedicine and agriculture.
The Gene Expression Model Selector (GEMS) includes microarray GeEx data. Cancer diagnosis and biomarker discovery are the two key objectives of this dataset.
Cancer Program Datasets (CPD) includes Nearest Template Prediction (NTP), Parallel sequencing, Subclass Mapping (PSSM), DNA Microarray, gene sequence and different disease datasets.
3.1.12 Cancer GeEx
Cancer gene expression (GeEx) contains different cancer datasets which can be employed for designing tools and algorithms for disease detection.
3.1.13 iONMF dataset
The iONMF dataset contains yeast RPR and RNA binding protein datasets. These datasets are used for analyzing multiple RNA binding proteins.
3.1.14 JASPAR database
The JASPAR database is a database of transcription factor DNA binding profiles.
SysGenSim includes bioinformatics tools, and the Pula-Magdeburg single-gene knockout, StatSeq, and DREAM 5 benchmark datasets for studying gene sequences.
The miRBoost dataset covers the genomes of eukaryotes containing at least 100 miRNAs. This dataset is used for studying post-transcriptional gene regulation (PTGeR) and miRNA-related pathology.
The Indian Genetic Disease Database (IGDD) tracks mutations in the normal genes for genetic diseases reported in India. Retrieving and studying genetic disorders is the main objective of this database.
| Ref | Source | Description | Objective |
|---|---|---|---|
| [GE1] | SGD | Provides biological data for budding yeast and analysis tools | Genome expression & transcriptome |
| [GE2] | PubChem | Contains compound structures, molecular datasets, and tools | [Gen/Prote]omics study & drug design |
| [GE3] | ENCODE | Encyclopedia of DNA Elements | Genome study and their functions |
| [noauthor_uci_nodate] | MBD-UCI | Contains three molecular biology databases | Sequencing and study of gene sequences |
| [noauthor_international_nodate] | INSDC | Includes nucleotide sequence data | Study and analysis of nucleotide sequences |
| [GE12] | NSD | Includes omics and health science data | Study of different aspects of genomics |
| [GE13] | SMPDB | Includes 618 small molecule pathways found in humans | Drug design and understanding GeEx |
| [noauthor_cancer_nodate] | TCGA database | Contains more than two petabytes of genomic data | Genomic study for cancer treatment |
| [noauthor_rcsb_nodate] | PDB | Protein, nucleic acid, and complex assembly data | Understanding all aspects of biomedicine |
| [noauthor_gems:_nodate] | GEMS | Microarray gene expression data | Cancer diagnosis & biomarker discovery |
| [noauthor_cancerds_nodate] | CPD | Contains sequence and different disease datasets | Disease detection |
| [noauthor_bioinformatics_nodate] | Cancer GeEx | Contains different cancer datasets | Disease detection |
| [mstrazar_ionmf:_2017] | iONMF dataset | Contains yeast RPR and RNA binding protein datasets | Analysis of RNA binding proteins |
| [noauthor_jaspar_nodate] | JASPAR database | A database of transcription factor binding profiles | Study of binding profiles |
| [noauthor_sysgensim_nodate] | SysGenSim | Bioinformatics tools and gene sequence datasets | Gene sequence study |
| [noauthor_mirboost_nodate] | miRBoost | Genomes of eukaryotes containing at least 100 miRNAs | Study of PTGeR/miRNA pathology |
| [GE14] | IGDD | Tracks mutations in normal genes | Retrieval and study of genetic disorders |
3.2.1 Image Science Database
This database indexes different biological imaging databases, including the acute lymphoblastic leukemia image database, the Cell Centered Image Library, the Euro-BioImaging BioSharing Collection, Hematology Images, the Microscopic World Image Gallery, and the Molecular Expressions Photo Gallery, as well as medical imaging databases such as Chest Radiograph, Mammography, MedPix Medical, and Retinal Image.
It presents cell image datasets and the Cell Library app. The aim of this dataset is to support the study of cell biology.
The Berkeley Drosophila Transcription Network Project (BDTNP) contains 3D gene expression data, in-vivo DNA binding data, and chromatin accessibility data (ChAcD). Research on gene expression and anomaly detection are the key applications of this dataset.
It provides biological and biomedical imaging data. The analysis of image data in bioimaging is the prime objective of this dataset.
The Cell Centered Database (CCDB) provides an API for high resolution 2/3/4D data from electron microscopes and software tools to analyze the images.
3.2.7 JCB Data Viewer
The JCB Data Viewer facilitates viewing, analysis, and sharing of multidimensional image data for analyzing cell biology.
3.2.8 MITOS dataset
The MITOS dataset contains breast cancer histological images (haematoxylin and eosin stained slides). The detection of mitosis and the evaluation of nuclear atypia are its key uses.
The Internet Brain Segmentation Repository (IBSR) provides segmentation results of MRI data. The development of segmentation methods is the main application of the IBSR.
The LONI Probabilistic Brain Atlas (LPBA40) contains maps of brain anatomic regions of 40 human volunteers. Each map is generated from a set of whole-head MRI scans, where each MRI is delineated to identify 56 brain structures, most of them lying in the cortex. The study of skull-stripped MRI volumes, the classification of native-space MRI, and probabilistic maps are key uses of LPBA40.
3.2.11 ADHD-200
The Attention Deficit Hyperactivity Disorder (ADHD-200) dataset includes 776 resting-state fMRI and anatomical datasets aggregated across 8 independent imaging sites. The phenotypic information includes age, sex, diagnostic status, measured ADHD symptoms, intelligence quotient (IQ), and medication status. Imaging-based diagnostic classification is the main aim of the ADHD-200 dataset.
3.2.12 OpenfMRI
OpenfMRI contains MRI and EEG datasets for studying brain regions and their functions.
3.2.13 OASIS
The Open Access Series of Imaging Studies (OASIS) contains MRI datasets and an open source data management platform (XNAT) to study and analyze Alzheimer’s disease.
3.2.14 Neurosynth
Neurosynth includes fMRI literature (with some datasets) and a synthesis platform to study brain structure, function, and disease.
3.2.15 UK Data Service
The UK Data Service hosts fMRI datasets which can be useful for brain tumour surgical planning.
3.2.16 ABIDE
The Autism Brain Imaging Data Exchange (ABIDE) includes autism brain imaging datasets for studying the autism spectrum.
3.2.17 Open NI
The Open Neuroimaging dataset contains imaging modalities and brain disease data which can be used to develop decision support systems for disease identification.
3.2.18 NITRC
The Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) contains a range of imaging data, from MRI to PET, SPECT, CT, MEG/EEG, and optical imaging, for analyzing functional and structural neuroimages.
3.2.19 ADNI
The Alzheimer’s Disease Neuroimaging Initiative (ADNI) includes diagnosis data of mild cognitive impairment (MCI), early AD, and elderly control subjects for detecting and tracking Alzheimer’s disease.
3.2.20 Brain development
It provides neuroimaging data and toolkit software to identify normal, healthy subjects.
3.2.21 NeuroVault.org
NeuroVault.org is a web-based repository (with an API) for collecting and sharing statistical maps of the human brain to study human brain regions.
3.2.22 TCIA
The Cancer Imaging Archive (TCIA) contains CT, MRI, and nuclear medicine (e.g., PET) images for clinical diagnostic, biomarker, and cross-disciplinary investigations.
|[Bio2]||Image Science||Biological and medical imaging databases||Cellular image analysis & visualization|
|[Bio3]||Bioimaging||Genetic/chemical and cell/tissue phenotypes databases||Feature extraction & anomaly detection|
|[Bio4]||CellImageLibrary||Cell image datasets and Cell Library app||Study cell biology|
|[Bio5]||BDTNP||3D gene expression, DNA binding data & ChAcD||Gene expression and anomaly detection|
|[Bio6]||EuroBioimaging||biological and biomedical imaging data||Analyze bioimaging|
|[Bio7]||CCDB||API for high resolution 2/3/4D EM data||Analyzes bioimage|
|[Bio8]||JCB Data Viewer||viewing, analysis, and sharing of multi-D image data.||Analyse cell biology|
|[noauthor_mitos-atypia-14_nodate]||MITOS dataset||Breast cancer histological images||Evaluation of nuclear atypia|
|[noauthor_nitrc:_nodate]||IBSR||Segmentation results of MRI data.||Development of segmentation methods.|
|[shattuck_construction_2008]||LPBA40||Maps of brain regions and a set of whole-head MRI.||Study MRI and map brain region|
|[noauthor_adhd200_nodate]||ADHD-200||fMRI/anatomical datasets fused over the 8 imaging sites||Imaging-Based Diagnostic Classification|
|[NM2]||OpenfMRI||MRI & EEG datasets||Study brain regions and functions|
|[NM3]||OASIS||MRI datasets and XNAT data management platform||Alzheimer’s Disease Research|
|[NM4]||Neurosynth||fMRI datasets and synthesis platform||Brain structure, functions and disease|
|[NM5]||UK data service||fMRI dataset||Brain tumour surgical planning|
|[NM6]||ABIDE||Autism brain imaging datasets||Study autism spectrum|
|[NM7]||Open NI||Imaging Modalities and brain diseases data||study DSS for disease identification|
|[Nm8]||NITRC||MRI, PET, SPECT, CT, MEG/EEG and optical imaging||Functional/structural neuroimage analysis|
|[NM9]||ADNI||MCI, early AD and elderly control subjects’ diagnosis data.||Early detection of Alzheimer’s disease|
|[NM10]||Brain development||It provides neuroimaging data and toolkit software||Identify normal, healthy subjects|
|[NM11]||NeuroVault.org||API for collecting and sharing statistical maps of brain||Study human brain regions|
|[NM12]||TCIA||CT, MRI, and PET images||Diagnoses and biomarker investigation|
3.3 [Brain/Body]-Machine Interfaces (BMI)
3.3.1 BCI Competition Dataset
The BCI Competition datasets include EEG datasets (such as cortical negativity or positivity, feedback test trials, self-paced key typing, the P300 speller paradigm, motor/mental imagery data, continuous EEG, and EEG with eye movement), ECoG datasets (such as finger movement and motor/mental imagery data in EEG/ECoG), and a MEG dataset (wrist movement). These datasets can be used to develop signal processing and classification methods for BMI.
3.3.2 DEAP
The Database for Emotion Analysis using Physiological Signals (DEAP) provides various datasets for analyzing human affective states. EEG and sEMG of 32 volunteers were recorded while they watched music videos. The volunteers also rated the videos, and frontal face video was recorded for 22 volunteers with consent.
3.3.3 NinaPro
The NinaPro database includes kinematic as well as sEMG data of 27 subjects recorded during finger, hand, and wrist movements. These data can be employed to study biorobotics.
3.3.4 UCI ML repository
This repository contains datasets such as 2-lead ECG (mHealth), ECG of heart-attack patients, arrhythmia data, 64-electrode EEG, mental state (relaxation) EEG, lower-limb EMG, and sEMG. Brain decoding and anomaly detection are the focus applications of these datasets.
3.3.5 PhysioNet
This site contains neuroelectric and myoelectric databases (EEG, EHG, and ECG databases), waveform databases, multi-parameter databases, the CHB-MIT Scalp EEG Database, EOG datasets, EEG motor movement/imagery datasets, and ERP-based BCI recordings. The MIT-BIH Supraventricular Arrhythmia Database, the PhysioNet Normal Sinus Rhythm Database (NSRDB), and the PhysioNet Supraventricular Arrhythmia Database (SVDB) are also part of PhysioNet. Epileptic seizure onset detection and treatment, and modelling and development of BMI instrumentation, are some of the targeted applications of this database.
3.3.6 BNCI Horizon 2020
This database contains more than 25 datasets, such as stimulated EEG datasets, ECoG-based BCI datasets, ERP-based BCI datasets, mental arithmetic and motor imagery datasets (extracted from EEG, EOG, fNIRS, and EMG), neuroprosthetic control EEG/EOG datasets, speller datasets, and so on. Modelling and designing BMI devices are the key applications of this database.
3.3.7 MAHNOB-HCI
The MAHNOB-HCI dataset provides an ECG and EEG database for affect recognition and implicit tagging (stimulated by fragments of movies and pictures).
3.3.8 DECAF
DECAF is a multimodal dataset for decoding user physiological responses to affective multimedia content. It contains magnetoencephalogram (MEG), horizontal electrooculogram (hEOG), ECG, trapezius muscle EMG, and near-infrared face video data to study physiological and mental states.
3.3.9 Brain signals data
These datasets include event-related potential (ERP), event-related desynchronization (ERD), epileptic seizure studies, and brain mapping (including fMRI) data.
3.3.10 TELE-ECG
The TELE-ECG dataset includes 250 ECG records with annotated QRS and artifact masks. It also includes QRS and artifact detection algorithms for studying QRS and artifact detection in the ECG signal.
3.3.11 LIMO EEG
This dataset includes raw EEG data, group-level covariates describing subjects’ ages, and channel locations describing all electrodes.
3.3.12 ESSMN
This is a 128-channel EEG dataset which can be used to detect anomalies in the EEG signal.
3.3.13 EEG Database
The EEG database contains invasive EEG recordings of 21 patients with intractable focal epilepsy.
3.3.14 Facial s-EMG
This is 128-channel EEG data from a single subject, which can be used to study muscle potentials.
|[BCI01]||BCI Competition||EEG, ECoG and MEG dataset||Signal processing/ classification|
|[BMI5]||DEAP||EMG/EEG data (while watching music/videos)||Database for Emotion Analysis|
|[noauthor_ninapro_nodate]||Ninapro database||Kinematic as well as the sEMG data of 27 subjects||Study Biorobotics|
|[BMI1]||UCI ML repository||Various ECG, EEG, EMG, sEMG datasets||Brain decoding and anomaly detection|
|[BMI2]||Physionet||Various recorded physiologic signals||seizure detection; Study BMI|
|[BMI3]||BNCIHorizon2020||Various BMI signals datasets||Designing BMI devices|
|[BMI4]||MAHNOB-HCI||ECG/EEG database for affect recognition/implicit tagging||Affect Recognition study|
|[BMI6]||DECAF||MEG, hEOG, ECG, Trapezius muscle-EMG, face video data||Study Physiological and mental states|
|[BMI7]||Brain signals data||ERP, ERD, Epileptic seizure studies, Brain mapping||Seizure studies & Brain mapping|
|[BMI8]||TELE ECG||250 ECG records with annotated QRS and artifact masks.||Study QRS and artifact detection.|
|[BMI9]||LIMO EEG||Raw EEG data of different age group||BMI study|
|[BMI10]||ESSMN||A 128-channel EEG dataset||Anomaly detection|
|[BMI11]||EEG||Invasive EEG recordings of 21 intractable epilepsy patients||Study epilepsy|
|[BMI12]||Facial s-EMG||A 128-channel EEG dataset of a single subject||Muscle potential study|
4 Open Source Deep Learning Tools
Due to the surging interest and concurrent multidisciplinary efforts towards DL in recent years, several open source libraries, frameworks, and platforms have been made available to the community. In the following sections, the popular open source tools, which aim to facilitate technological developments for the community, are reviewed and summarized. This comprehensive list contains tools (including those developed by individuals) which are well maintained and implement a reasonable number of algorithms. For the sake of brevity, the individual publication references of the tools are omitted; interested readers may consult them at the tools’ respective websites through the provided URLs.
Table 5 summarizes the main features and differences of the various tools. To measure the impact and acceptability of a tool in the community, we provide GitHub based measures, such as the numbers of Stars, Forks, and Contributors. These numbers are indicative of the popularity, maturity, and diffusion of a tool in the community.
|Sl. No.||Tool||Platform||Language(s)||Stars*||Forks*||Contrib.*||Supported DL Architecture|
|1||Apache Singa||L, M, W||Py, C++, Ja||1117||258||30||CNN, RNN, RBM, DBM|
|2||Caffe||L, M, W, A||Py, C++, Ma||20858||12802||249||CNN, RNN|
|3||Chainer||L||Py||3063||814||140||DA, CNN, RNN|
|4||DeepLearning4j||L, M, W||Ja||7465||3717||127||DA, CNN, RNN, RBM, LSTM|
|5||DyNet||L||C++||1856||461||85||CNN, RNN, LSTM|
|6||H2O||L, M, W||Ja, Py, R||2530||1027||94||CNN, RNN|
|7||Keras||L, M, W||Py||20822||7578||548||CNN, RNN, DBN|
|8||Lasagne||L, M||Py||3266||906||62||CNN, RNN, LSTM|
|9||MCT||W||C++||12817||3326||150||CNN, DBN, RNN, LSTM|
|10||MXNet||L, M, W, A, I||C++||11727||4325||437||DA, CNN, RNN, LSTM|
|11||Neon||L, M||Py||3271||723||69||DA, CNN, RNN, LSTM|
|12||PyTorch||L, M||Py||8464||1762||330||CNN, RNN, LSTM|
|13||TensorFlow||L, M, W||Py, C++||74463||36781||1100||CNN, RNN, RBM, LSTM|
|14||TF.Learn||L, M||Py, C++||6916||1513||107||CNN, BRNN, RNN, LSTM|
|15||Theano||L, M, W||Py||7171||2319||323||CNN, RNN, RBM, LSTM|
|16||Torch||L, M, W, A, I||Lu, C, C++||7387||2174||134||CNN, RNN, RBM, LSTM|
|17||Veles||L, M, W, A||Py||840||185||8||DA, CNN, RNN, RBM, LSTM|
*GitHub parameters (as of 25 Oct. 2017); Apache2 License; BSD License; MIT License;
Legends: L–Linux/Unix; M–MacOSX; W–Windows; A–Android; I–iOS; CP–Cross-platform; Py–Python; Ja–Java; Lu–Lua; Ma–Matlab.
4.1 Apache Singa
Known as Singa (https://singa.incubator.apache.org/), it is a distributed DL platform written in C++, Java, and Python. Its flexible architecture allows synchronous, asynchronous, and hybrid training frameworks to run. It supports a wide range of DL architectures including CNN, RNN, RBM, and DBM.
4.2 Caffe
Caffe (http://caffe.berkeleyvision.org/) is scalable, written in C++, and provides bindings for Python as well as Matlab. Dedicated to experimentation with, training of, and deployment of general purpose DL models, this framework allows switching between development and deployment platforms. Targeting computer vision applications, it is considered the fastest implementation of the CNN.
4.3 Chainer
Chainer (http://chainer.org/) is a DL framework provided as a Python library. Besides the availability of popular optimization techniques and NN related computations (e.g., convolution, loss, and activation functions), dynamic creation of graphs makes Chainer powerful. It supports a wide range of DL architectures including CNN, RNN, and DA.
4.4 DeepLearning4j
Deeplearning4j (DL4J, https://deeplearning4j.org/), written in Java with core libraries in C/C++, is a distributed framework for quick prototyping that mainly targets non-researchers. Compatible with JVM supported languages (e.g., Scala and Clojure), it works on distributed processing frameworks (e.g., Hadoop and Spark). Through Keras (section 4.7) as a Python API, it allows importing existing DL models from other frameworks. It also allows the creation of NN architectures by combining available shallow NN architectures.
4.5 DyNet
The DyNet library (https://dynet.readthedocs.io/), written in C++ with Python bindings, is the successor of the ‘C++ neural network library’. In DyNet, computational graphs are dynamically created for each training example, making it computationally efficient and flexible. Targeting NLP applications, its specialty lies in CNN, RNN, and LSTM.
4.6 H2O
H2O (www.h2o.ai) is ML software that includes DL and data analysis. It provides a unified interface to other DL frameworks such as TensorFlow, MXNet, and Caffe. It also supports training of DL models (CNN and RNN) designed in R, Python, Java, and Scala.
4.8 Lasagne
The Lasagne (http://lasagne.readthedocs.io) DL library is built on top of Theano. It allows multiple inputs, outputs, and auxiliary classifiers. It supports user defined cost functions and provides many optimization functions. Lasagne supports CNN, RNN, and LSTM.
4.9 Microsoft Cognitive Toolkit
Formerly known as CNTK, the Microsoft Cognitive Toolkit (MCT, https://cntk.ai/) is mainly coded in C++. It provides implementations of various learning rules and supports different DL architectures including DNN, CNN, RNN, and LSTM.
4.11 Neon
Neon (www.nervanasys.com/technology/neon/) is a DL framework written in Python. It provides implementations of various learning rules, along with functions for optimization and activation. Its supported DL architectures include CNN, RNN, LSTM, and DA.
4.12 PyTorch
PyTorch (http://pytorch.org/) provides Torch modules in Python. More than a wrapper, its deep integration allows exploiting the powerful features of Python. Inspired by Chainer, it allows dynamic network creation for variable workloads, and supports CNN, RNN, and LSTM.
4.13 TensorFlow
TensorFlow (www.tensorflow.org), written in C++ and Python, is developed by Google and supports very-large-scale deep NNs. Recently extended as ‘TensorFlow Fold’, its capability to dynamically create graphs makes the architecture flexible, allowing deployment to a wide range of devices (e.g., multi-CPU/GPU desktops, servers, and mobile devices) without code rewriting. It also contains a data visualization tool named TensorBoard and supports many DL architectures including CNN, RNN, LSTM, and RBM.
4.14 TF.Learn
TF.Learn (www.tflearn.org) is a TensorFlow (section 4.13) based high level Python API. It supports fast prototyping with modular NN layers and multiple optimizers, inputs, and outputs. Supported DL architectures include CNN, BRNN, and LSTM.
4.15 Theano
Theano (http://deeplearning.net/software/theano/) is a Python library that builds on core packages like NumPy and SymPy. It defines, optimizes, and evaluates mathematical expressions with tensors, and has served as the foundation for many DL libraries.
4.16 Torch
Started in 2000, Torch (http://torch.ch/), an ML library and scientific computing framework, has evolved into a powerful DL library. Its core functions are implemented in C, with the rest written in the LuaJIT scripting language, which makes Torch very fast. Software giants like Facebook and Google use Torch extensively. Recently, Facebook’s DL modules (fbcunn), focusing on CNN, have been open-sourced as a plug-in to Torch.
4.17 Veles
Veles (https://velesnet.ml/) is a Python based distributed platform for rapid DL application development. It provides machine learning and data processing services and supports IPython notebooks. Developed by Samsung, one of its advantages is its support for OpenCL for cross-platform parallel programming, allowing execution across heterogeneous platforms (e.g., servers, PCs, mobile, and embedded devices). The supported DL architectures include DA, CNN, RNN, LSTM, and RBM.
5 Relative Comparison of DL Tools
To perform a relative comparison among the available open-source DL tools, we selected four assessment measures, which are detailed below: trends in their usage, community participation in their development, interoperability among themselves, and their scalability (see Fig. 6).
To assess the popularity and trends of the various DL tools among DL consumers, we looked into two different sources. Firstly, we extracted globally generated search data from Google Trends (https://trends.google.com/) for two years (July 2015 to June 2017) related to search terms consisting of ⟨[tool name] + Deep Learning⟩. The data showed a progressive increase of searches about TensorFlow since its release, followed by Keras (see Fig. 6a). Secondly, we mined the content of around 2,000 papers submitted to arXiv’s cs.[CV|CL|LG|AI|NE] and stat.ML categories during the month of March 2017 for the presence of the tool names [karpathy_peek_2017]. As seen in Fig. 6b, which shows a weighted percentage of each individual tool’s mentions in the papers, the top six tools were identified as: TensorFlow, PyTorch, Caffe, Keras, Torch, and Theano.
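The mention-counting step of such a survey can be sketched in a few lines of Python. The paper texts and tool list below are illustrative placeholders, not the actual arXiv corpus or methodology used in [karpathy_peek_2017]:

```python
from collections import Counter

def tool_mention_shares(papers, tools):
    """Count papers mentioning each tool and return percentage shares."""
    counts = Counter()
    for text in papers:
        lowered = text.lower()
        for tool in tools:
            if tool.lower() in lowered:
                counts[tool] += 1
    total = sum(counts.values()) or 1  # avoid division by zero on empty corpora
    return {t: 100.0 * counts[t] / total for t in tools}

# Toy corpus standing in for the ~2,000 arXiv submissions.
papers = [
    "We implement our model in TensorFlow and compare against Caffe.",
    "Experiments were run with PyTorch.",
    "A Keras implementation on the TensorFlow backend.",
]
shares = tool_mention_shares(papers, ["TensorFlow", "PyTorch", "Caffe", "Keras"])
```

Shares computed this way are relative to the total number of mentions, which is one plausible way to obtain the "weighted percentage" plotted in Fig. 6b.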
The community based development score for each tool discussed in Section 4 was calculated from GitHub (https://github.com/) repository popularity parameters (i.e., stars, forks, and contributors). The bubble plot shown in Fig. 6c depicts community involvement in the development of the tools, indicating the year of each tool’s initial stable release. Each bubble’s size represents the normalized combined effect of that tool’s forks and contributors. It is clearly seen that a very large part of the community effort is concentrated on TensorFlow, followed by Keras and Caffe.
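A bubble size of this kind can be obtained with a simple min-max normalisation of the combined fork and contributor counts. The exact combination formula is our illustrative assumption (the survey does not spell it out); the numbers are taken from Table 5 (as of 25 Oct. 2017):

```python
def community_scores(repos):
    """Min-max normalise the summed fork and contributor counts per tool."""
    combined = {name: forks + contribs for name, (forks, contribs) in repos.items()}
    lo, hi = min(combined.values()), max(combined.values())
    span = (hi - lo) or 1  # guard against all-equal inputs
    return {name: (value - lo) / span for name, value in combined.items()}

# (forks, contributors) per tool, from Table 5.
repos = {
    "TensorFlow": (36781, 1100),
    "Keras": (7578, 548),
    "Caffe": (12802, 249),
    "Theano": (2319, 323),
}
scores = community_scores(repos)
```

With this normalisation the largest repository maps to 1.0 and the smallest to 0.0, which matches the visual dominance of TensorFlow in Fig. 6c.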
In today’s cross-platform development environments, an important measure of a tool’s flexibility is its interoperability with other tools. In this respect, Keras is the most flexible one: its high-level neural networks are capable of running on top of either TensorFlow or Theano. Additionally, DL4J can import neural network models originally configured and trained using Keras, which provides abstraction layers on top of TensorFlow, Theano, Caffe, and CNTK backends (see Fig. 6d).
Hardware based scalability is an important feature of the individual tools (see Fig. 6e). Today’s computing hardware is dominated by graphics processing units (GPUs) and central processing units (CPUs). But considering increased computing capacity and energy efficiency, the coming years are expected to witness an expanded role for other chipset types, including application specific integrated circuits (ASICs) and field programmable gate arrays (FPGAs). So far DL has been used predominantly through software; requirements for hardware acceleration, energy efficiency, and higher performance have driven the development of chipset based DL systems.
6 Performance of Tools and Benchmark
The power of DL methods lies in their capability to recognize patterns for which they are trained. Despite the availability of accelerating hardware (e.g., multicore [C/G]PUs), this training phase is very time consuming, cumbersome, and computationally challenging. Moreover, as each tool provides implementations of several DL architectures, often emphasizing different components of them on different hardware platforms, selecting a tool suitable for an application is getting increasingly difficult. Besides, different DL tools have different targets; e.g., Caffe targets applications, whereas Torch and Theano are geared more towards DL research. To help scientists pick the right tool for their application, several groups have benchmarked the performances of the popular tools in terms of their training times [bahrampour_dl_fws_2016, shi_benchmarking_dl_2016]. Moreover, to the best of our knowledge, there exist two main efforts that publicly provide benchmarking details of the various DL tools and frameworks [deepmark_benchmark_2017, hk_benchmark_2017]. Summarizing those seminal works, below we report the time required to complete the training process as a performance measure for four different DL architectures (i.e., FCN, CNN, RNN, and DA) across the popular tools (i.e., Caffe, CNTK, MXNet, Theano, TensorFlow, and Torch) on multicore [C/G]PU platforms.
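A minimal timing harness of the kind used in such benchmarks simply wall-clocks a number of training iterations after a warm-up phase. The `train_step` callable below is a placeholder for any framework's forward-plus-backward pass, not the benchmark suites' actual instrumentation:

```python
import time

def time_per_iteration(train_step, n_iters=10, warmup=2):
    """Average wall-clock seconds per training iteration, after warm-up runs."""
    for _ in range(warmup):          # warm-up: exclude one-off setup/JIT costs
        train_step()
    start = time.perf_counter()
    for _ in range(n_iters):
        train_step()
    return (time.perf_counter() - start) / n_iters

# Example with a stand-in CPU workload instead of a real DL training step.
t = time_per_iteration(lambda: sum(i * i for i in range(10000)))
```

Warm-up iterations matter in practice because the first few calls often include graph construction or memory allocation that would otherwise skew the average.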
|1||CPU: E5-1650 @ 3.50 GHz||32 GB|
|GPU: Nvidia GeForce GTX Titan X|
|2||CPU: E5-2630 @ 2.20 GHz||128 GB|
|GPU: Nvidia GeForce GTX 980|
|GPU: Nvidia GeForce GTX 1080|
|GPU: Tesla K80 accelerator with GK210 GPUs|
|3||CPU: E5-2690 @ 2.60 GHz||256 GB|
|GPU: Tesla P100 accelerator|
|GPU: Tesla M40 accelerator|
|GPU: Tesla K80 accelerator with GK210 GPUs|
Legends: ESN: Experimental Setup Numbers; : Intel Xeon CPU v2; : 3072 cores, 1000 MHz base clock, 12 GB memory; : Intel Xeon CPU v4; : 2048 cores, 1126 MHz base clock, 4 GB memory; : 2560 cores, 1607 MHz base clock, 8 GB memory; : Tesla K80 accelerator has two Tesla GK210 GPUs with 2496 cores, 560 MHz base clock, 12 GB memory; : 3584 cores, 1189 MHz base clock, 16 GB memory; : 3072 cores, 948 MHz base clock, 12 GB memory.
Table 6 lists the experimental setups used in benchmarking the specified tools. Mainly three different setups, each with an Intel Xeon E5 CPU, were utilized. Though the CPUs were similar, the GPU hardware differed: GeForce GTX Titan X, GTX 980, GTX 1080, Tesla K80, M40, and P100.
Stacked autoencoders (DA) were benchmarked using experimental setup number 1 in Table 6. To estimate the performance of the various tools on implementing DA, three autoencoders (with 400, 200, and 100 hidden units, respectively) were stacked with tied weights and sigmoid activation functions. A two step network training was performed on the MNIST dataset [lecun_mnist_1998]. As reported in Fig. 7 (a, b), the performances of the DL tools are evaluated using forward runtime and training time. The forward runtime refers to the time required to evaluate the information flow through the full network to produce the intended output for a given input batch, dataset, and network. In contrast, the gradient computation time measures the time required to train the network. The results suggest that, regardless of the number of CPU threads used or the GPU, Theano and Torch outperform TensorFlow in both gradient and forward times (see Fig. 7 a, b).
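The benchmarked encoder stack can be sketched in NumPy as follows. The 400/200/100 hidden sizes, tied weights, and sigmoid activations follow the description above, while the 784-dimensional input (flattened MNIST), batch size, and random initialisation are our assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
sizes = [784, 400, 200, 100]          # MNIST input + the three stacked encoders
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def encode(x):
    """Forward pass through the stacked encoders (sigmoid activations)."""
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return x

def decode(h):
    """Tied-weight decoder: reuse the transposed encoder weights in reverse."""
    for W in reversed(weights):
        h = sigmoid(h @ W.T)
    return h

batch = rng.random((64, 784))          # a batch of 64 flattened MNIST-like images
code = encode(batch)                   # shape (64, 100)
recon = decode(code)                   # shape (64, 784)
```

The forward runtime discussed above corresponds to timing `decode(encode(batch))`; the gradient time additionally covers backpropagation, which is omitted here.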
Experimental setup number 2 (see Table 6) was used in benchmarking RNN. The adapted LSTM network [zaremba_rnn_2014] was designed with 10,000 input and output units, two layers, and 13 million parameters. As the performance of an RNN depends on the input length, an input length of 32 was used for the experiment. As the results indicate (see Fig. 7 c-f), MCT outperforms the other tools on both the CPU and all three GPU platforms. On CPUs, TensorFlow performs slightly better than Torch (see Fig. 7 c). On GPUs, Torch is the slowest, with TensorFlow and MXNet performing similarly (see Fig. 7 d-f).
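The per-step computation dominating such an LSTM benchmark can be written out in NumPy. The dimensions below are deliberately tiny toy values, not the 10,000-unit benchmark network; only the input length of 32 matches the experiment above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM time step: four gates from [x, h], then the state update."""
    z = np.concatenate([x, h]) @ W + b           # all four gates in one matmul
    H = h.size
    i, f, o, g = (sigmoid(z[:H]), sigmoid(z[H:2*H]),
                  sigmoid(z[2*H:3*H]), np.tanh(z[3*H:]))
    c = f * c + i * g                            # new cell state
    h = o * np.tanh(c)                           # new hidden state
    return h, c

rng = np.random.default_rng(1)
X, H = 8, 16                                     # toy input/hidden sizes
W = rng.standard_normal((X + H, 4 * H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(32):                              # input length 32, as in the benchmark
    h, c = lstm_step(rng.standard_normal(X), h, c, W, b)
```

Since one step per time index must be executed sequentially, input length directly multiplies the runtime, which is why the benchmark fixes it at 32.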
Since a large portion of pattern analysis is still done using CNN, we further focused on CNN and investigated how the leading tools perform and scale when training different CNN networks on different GPU platforms. The speedup of GPU time over CPU time is the metric used for this purpose. The individual values are calculated using the benchmark scripts of DeepMark [deepmark_benchmark_2017] on experimental setup number 3 (see Table 6) for one training iteration per batch. The time needed to execute a training iteration per batch equals the time taken to complete a forward propagation operation followed by a backpropagation operation. Figure 8 summarizes the training time per iteration per batch for both CPU and GPUs (left y-axis), and the corresponding GPU speedup over CPU (right y-axis).
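The speedup metric reduces to a ratio of the two measured per-iteration times. The numbers below are made-up placeholders, not the benchmark's measurements:

```python
def gpu_speedup(cpu_time_s, gpu_time_s):
    """Speedup of GPU over CPU for one training iteration per batch
    (each time = forward propagation + backpropagation)."""
    return cpu_time_s / gpu_time_s

# Hypothetical per-iteration times (seconds) for one network/tool pair.
speedup = gpu_speedup(cpu_time_s=2.4, gpu_time_s=0.08)
```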
These findings for four different CNN network models (i.e., AlexNet [krizhevsky_alexnet_2012], GoogLeNet [szegedy_googlenet_2015], Overfeat [sermanet_overfeat_2013], and VGG [simonyan_vgg_2014]) available in four tools (i.e., Caffe, TensorFlow, Theano, and Torch) [murphy_benchmark_2017] clearly suggest that network training is much faster on GPUs than on CPUs. Another important message is that not all GPUs are the same, and not all tools scale up at the same rate; the time required to train a neural network strongly depends on the DL framework being used. As for the hardware platform, the Tesla P100 accelerator provides the best speedup, with the Tesla M40 second and the Tesla K80 last among the three. On CPUs, TensorFlow achieves the least training time, indicating quicker training of the network. On GPUs, Caffe usually provides the best speedup over CPU, but TensorFlow and Torch train faster than Caffe. Though TensorFlow and Torch have similar performances (indicated by the height of the lines), Torch slightly outperforms TensorFlow in most of the networks. Finally, most of the tools outperform Theano.
7 Open Issues and Future Perspectives
The brain has the capability to recognize and understand patterns almost instantaneously. For the last several decades, scientists have been trying to decode the biological mechanism by which natural pattern recognition takes place in the brain and to translate that principle into AI systems. The increasing knowledge about the brain’s information processing policies has enabled this analogy to be adopted and implemented in computing systems. Recent technological breakthroughs, seamless integration of diverse techniques, better understanding of learning systems, declining computing costs, and expansion of computational power have empowered computing systems to reach human level intelligence in certain scenarios [mnih_human-level_2015]. Nonetheless, many of these methods require improvements in order not to fall short in situations where they fail at present. In this line, we identify below the shortcomings and bottlenecks of the popular methods, open research questions and challenges, and outline possible directions which require attention in the near future.
First of all, DL methods usually require large datasets. Though the computing cost is declining with increasing computational power and speed, it is not worthwhile to apply DL methods to small or moderately sized datasets. Besides, many DL methods perform continuous geometric transformations of one data manifold into another, with the assumption that there exist learnable transfer functions which can perform the mapping [chollet_dl_lim_2017]. However, in cases where the relationships among the data are causal or too complex to be learned by geometric transformations, DL methods fail regardless of the dataset size [zenil_causal_reprogramming_2017]. Also, interpreting the high level outcomes of DL methods is difficult due to an inadequate in-depth understanding of DL theory, which causes many such models to be considered ‘black boxes’ [shwartz-ziv_bb_dnn_2017]. Moreover, like many other ML techniques, DL is also susceptible to misclassification [nguyen_dl_fool_2015] and over-classification [szegedy_ipnn_2014].
Additionally, harnessing the full benefits offered by the open access data repositories, in terms of data sharing and re-use, is often hampered by the lack of unified data reporting standards and the non-uniformity of reported information [baker_standardizing_data_2013]. Data provenance, curation, and annotation of these biological big data are a huge challenge too [wittig_data_2017].
Furthermore, except for a very few large enterprises, the power of distributed and parallel computation through cloud computing has remained unexplored for DL techniques. Because DL techniques require retraining for different datasets, repeated training becomes a bottleneck in cloud computing environments. Also, in such distributed environments, data privacy and security concerns still prevail [mahmud_soa_2012], and the real-time processing capability of experimental data is underdeveloped [mahmud_webqst_2014].
To mitigate the shortcomings and address the open issues, the existing theoretical foundations of the DL methods need to be improved. The DL models are required not only to be able to describe specific data, but also to generalize on the basis of experimental data, which is crucial to quantify the performances of individual NN models [angelov_dl_challenges_2016]. These improvements should take place in several directions and address issues such as the quantitative assessment of an individual model’s learning efficiency and associated computational complexity in relation to well defined parameter tuning strategies, and the ability to generalize and topologically self-organize based on data-driven properties. Also, to facilitate intuitive and less cumbersome interpretation of analysis results, novel tools for data visualization should be incorporated into the DL frameworks.
Recent developments in combined methods pertaining to deep reinforcement learning (deep RL) have been applied to many application domains (for a review on deep RL, see [arulkumaran_deep_rl_2017]). However, deep RL methods have not yet been applied to biological pattern recognition problems. For example, analyzing and aggregating dynamically changing patterns in biological data coming from multiple levels could help remove data redundancy and discover novel biomarkers for disease detection and prevention. Also, novel deep RL methods are needed to reduce the currently required large sets of labeled training data.
Renewed efforts are required for the standardization, annotation, curation, and provenance of data and their sources, along with ensuring uniformity of information among the different repositories. Additionally, to keep up with the rapidly growing big data, powerful and secure computational infrastructures in terms of distributed, cloud, and parallel computing, tailored to such well-understood learning mechanisms, are badly needed. Lastly, there are many other popular DL tools (e.g., Keras, Chainer, Lasagne) and architectures (e.g., DBN) which need to be benchmarked to provide users with a more comprehensive list to choose from. Also, the currently available benchmarks are mostly performed on non-biological data and do not transfer well to biological data; thus, specialized benchmarking on biological data is needed.
The biological big data coming from different application domains are multimodal, multidimensional, and complex in nature. At present, a great deal of such big data is publicly available. The affordable access to these data comes with a huge challenge: analyzing the patterns in them requires sophisticated ML tools. As a result, many ML based analytical tools have been developed and reported over the last decades, a process greatly facilitated by the decrease of computational costs, increase of computing power, and availability of cheap storage. With the help of these learning techniques, machines are being trained to understand and decipher complex patterns and interactions of variables in biological data. To facilitate a wider dissemination of DL techniques applied to biological big data and to serve as a reference point, this article provides a comprehensive survey of the literature on those techniques’ application to biological data and of the relevant open access data repositories. It also lists existing open source tools and frameworks implementing various DL methods, and compares these tools in terms of popularity and performance. Finally, it concludes by pointing out some open issues and proposing some future perspectives.