DeepIso: A Deep Learning Model for Peptide Feature Detection

12/09/2017
by   Fatema Tuz Zohora, et al.
University of Waterloo

Liquid chromatography with tandem mass spectrometry (LC-MS/MS) based proteomics is a well-established research field with major applications such as identification of disease biomarkers, drug discovery, and drug design and development. In proteomics, protein identification and quantification is a fundamental task, which is done by first enzymatically digesting proteins into peptides, and then analyzing the peptides with LC-MS/MS instruments. Peptide feature detection and quantification from an LC-MS map is the first step in typical analysis workflows. In this paper we propose a novel deep learning based model, DeepIso, that uses Convolutional Neural Networks (CNNs) to scan an LC-MS map to detect peptide features and estimate their abundance. Existing tools are often designed with limited engineered features based on domain knowledge, and depend on pretrained parameters which are hardly updated despite the huge amount of newly arriving proteomic data. Our proposed model, on the other hand, is capable of learning multiple levels of representation of high dimensional data through its many layers of neurons and of continuously evolving with newly acquired data. To evaluate our proposed model, we use an antibody dataset including a heavy and a light chain, each digested by Asp-N, Chymotrypsin, and Trypsin, thus giving six LC-MS maps for the experiment. Our model achieves 93.21% sensitivity on the features with identified peptide sequences. Our results demonstrate that novel deep learning tools are desirable to advance the state-of-the-art in protein identification and quantification.


1 Introduction

The outstanding performance of deep learning on object recognition opens a new frontier in the domain of bioinformatics. As a continuation of that trend, to the best of our knowledge, our work is the first attempt to solve the peptide feature detection problem using a Convolutional Neural Network (CNN). The use of CNNs in image processing was pioneered by Yann LeCun et al. [1] in 1998, for handwritten digit recognition. However, CNNs became popular after the revolutionary breakthrough in the 2012 ImageNet [2] object recognition competition (https://www.technologyreview.com/s/530561/the-revolutionary-technique-that-quietly-changed-machine-vision-forever/).

On the other hand, proteomics based on liquid chromatography with tandem mass spectrometry (LC-MS/MS) is a well-established technology for the discovery of disease biomarkers, drug target identification, mode of action (MOA) studies, and safety marker identification in drug research [3]. Protein identification and quantification are fundamental tasks in proteomics, and peptides are the building blocks of proteins. Therefore, typical analysis workflows for LC-MS/MS data include peptide feature detection and quantification from an LC-MS map, peptide identification from MS/MS spectra, and protein profiling [4, 5, 6]. The first step, peptide feature detection and quantification from an LC-MS map, is our target problem. The LC-MS map of a protein sample is a 3D plot whose three dimensions are: mass-to-charge ratio (m/z) in Da, retention time (RT), and intensity of the peptide ions in that sample. A peptide feature is a multi-isotope pattern formed by different molecular isotopes, e.g. carbon-12 and carbon-13, of the same peptide. Detecting multi-isotope patterns in an LC-MS map is a challenging task due to overlapping peptides, multiple charge states of the same molecule, and intensity variation. Moreover, a single LC-MS map may have gigapixel dimensions containing thousands to millions of peptide features. However, CNNs have been found effective in similar pattern recognition problems, for example, in detecting cancer metastases on gigapixel pathology images by Liu et al. [7]. Therefore, to address our target problem, we propose a new CNN-based model, DeepIso, that slides a window detector over the LC-MS map to spot multi-isotope patterns. The goal is to detect peptide features along with their charge states, and to estimate their intensities.

The latest LC-MS technologies generate huge amounts of analytical data with high scan speed, accuracy, and resolution, which is almost impossible to interpret manually. Existing methods to automate this data handling apply different heuristics, and none of them relies on deep learning to learn the appropriate parameters automatically from the available LC-MS data. For example, in MaxQuant [8], peaks (the components of a peptide feature) are detected by fitting a Gaussian peak shape, and the peptide feature is then assembled using a graph-theoretical data structure. AB3D [3] first roughly picks all local maxima whose intensity is larger than a given threshold as candidate peaks from the entire LC-MS map, then applies an iterative algorithm that processes the neighboring peaks of each candidate peak to form peptide features. Its recall varies from 0.35 to 0.85, and its precision from 0.14 to 0.53, depending on the dataset. MSight [9] generates images from the raw MS data file to adapt image-based peak detection. CentWave [10] uses a pre-scan to first identify regions of interest composed of centroids; the centroids are then collapsed into a one-dimensional chromatogram, and wavelet-based curve fitting is performed to separate closely eluting peaks. Its F-score varies from 55% to 85% depending on the dataset. TracMass [11] and Massifquant [12] use a 2D Kalman Filter (KF) to find peaks in highly complex samples. Massifquant's sensitivity varies from 75% to 90% and its specificity from 80% to 100%, depending on the dataset.


In most of the existing algorithms, many parameters are set based on experience with empirical experiments, and different settings may have a large impact on the outcomes. In contrast to these existing works, our research aims at systematically training a CNN on real datasets to automatically learn all the characteristics of the data, without human intervention. Last but not least, even if the model makes wrong predictions, the corrected results can be fed back as new training data so that the model can learn from its own mistakes. We believe that such models shall have superior performance over existing techniques and shall become the method of choice in the near future.

2 Method

We explain our method using the block diagram shown in Figure 1. The method can be divided into three steps. The first step is to train the CNN; the second step is to scan the test LC-MS map to detect peptide features using the model trained in the first step; and the third step is to process those detections to produce a list of peptide features. The first two steps involve deep learning, and the third step applies heuristics. The testing phase involves Step 2 and Step 3. In the following sections we discuss each of the steps in detail.

Figure 1: Block diagram of our proposed method to detect peptide features from LC-MS map of protein sample

2.1 Step 1: Training The CNN

The intuition of the first step is that the CNN takes a [15 x 211]-pixel image (cut from the training LC-MS maps) as input and outputs 'Yes' or 'No', based on whether it sees a peptide feature in the image or not. If the output is 'Yes', then the CNN also needs to identify the charge of that feature. The charge can have values from 1 to 9; we therefore use the output 0 as an indication that 'No' feature is seen in the input image. So this training step is basically a supervised 10-category classification problem, where each category is a charge from 0 to 9. In this step, the CNN is supposed to learn the following basic properties of peptide features [13], besides many other hidden characteristics from the training data.

  1. In the LC-MS map, the isotopes in a peptide feature are equidistant along the m/z axis. For charge 1 to 9, the isotopes are respectively about 1.00, 0.50, 0.33, 0.25, 0.20, 0.17, 0.14, 0.13, and 0.11 m/z apart from each other.

  2. The intensities of the isotopes form a bell shape over their retention time (RT) range.

  3. The isotope having the highest intensity in a peptide feature is called the precursor ion, and usually it is the first isotope in the feature. For instance, see isotope 1 listed in the feature table shown in Figure 1.

  4. Peptide features often overlap with each other.
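The equidistance property in item 1 follows from the fact that consecutive isotopes differ by roughly one neutron mass, so the spacing on the m/z axis is approximately 1/charge. A minimal sketch (the constant and function are ours, not the paper's code):

```python
# Expected m/z spacing between consecutive isotopes of a peptide feature.
# Consecutive isotopes differ by ~1 neutron mass, so at charge z the
# spacing along the m/z axis is roughly 1.00335 / z.

NEUTRON_MASS = 1.00335  # Da, approximate C13 - C12 mass difference

def isotope_spacing(charge: int) -> float:
    """Approximate m/z distance between consecutive isotopes."""
    if charge < 1:
        raise ValueError("charge must be >= 1")
    return NEUTRON_MASS / charge

for z in range(1, 10):
    print(f"charge {z}: {isotope_spacing(z):.2f} m/z")
```

Rounded to two decimals, this reproduces the spacings listed above (1.00, 0.50, 0.33, ..., 0.11 m/z).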

Training Data Generation

We use the dataset WIgG1, a monoclonal antibody sequence including a light chain and a heavy chain [14], to perform the experiment. It was generated from the LC-MS/MS analysis of the Intact mAb Mass Check Standard purchased from Waters: an intact mouse antibody purified by Protein-A, with known molecular weights and amino acid sequences for both the light and heavy chains. Since each chain is digested by Asp-N, Chymotrypsin, and Trypsin, we have in total six LC-MS maps for the experiment (the RAW files of the antibody dataset can be downloaded from the MassIVE database with accession number MSV000079801). We produce a list of peptide features from each map using PEAKS Studio (http://www.bioinfor.com/peaks-studio/), and consider that as our ground truth.

We apply 6-fold cross validation on these six LC-MS maps. Each time we keep one map for testing and the remaining maps for training. Each map holds about 20,000 peptide features. We consider a resolution of 0.01 m/z along the horizontal axis, and 0.01 minute along the vertical axis. At this resolution, each LC-MS map has gigapixel-scale dimensions. We scale the pixel intensities in each map from 0 to 255. For clarification please refer to the LC-MS map shown in Step 2 in Figure 1. We consider the features having charge 1 to 9 as positive samples, and blocks containing no feature (charge 0) as negative samples. We cut the features from the training LC-MS maps using a block/window size of [15 x 211] pixels. In Appendix A1 we discuss why this block size is good enough to cover a feature so that the CNN can take a decision about the existence and charge of a feature.

We cut the positive samples so that the first isotope of the feature is centered at the [0, 6] pixel of the block, as shown in Figure 2(a), because some features have wider isotopes, for instance, the bottom right feature shown in the figure. Features with lower charges are far more abundant than other positive samples in the dataset. Therefore, to make the dataset balanced, we add some synthetic data for charges 5 to 9. The procedure of synthetic data generation is explained in Appendix A4. To generate negative samples, we select some features and cut blocks from their surrounding regions, satisfying the condition that NO feature starts within the anchor region of the block (where a positive sample's first isotope would appear), as shown in Figure 2(b).

(a)
(b)
Figure 2: Generation of training data: (a) Generate positive samples by placing a [15 x 211] window over the feature, such that the first isotope of the feature is centered at the [0, 6] pixel of the window; (b) Generate negative samples by translating the window around the features, such that NO feature starts within the anchor region of the window

In this way we cut about 55,000 positive samples and about 90,000 negative samples for each fold, giving about 1.4 million samples for training, where the percentage of samples having charge 0 to 9 is about 62%, 7%, 10%, 9%, 4%, 5%, 2%, 0.5%, 0.2%, and 0.1% respectively. We select 20% of them for validation, such that the validation dataset does not contain synthetic positive samples. The ratio of negative samples is kept higher because the LC-MS map is very sparse and most of the space holds no feature.
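The block-cutting procedure can be sketched as follows. This is a hypothetical illustration of the description above, not the authors' released code; `lcms_map`, `cut_block`, and the toy map are our own names, and the [0, 6] anchor follows the text.

```python
# Cut a [15 x 211] training block from a 2-D LC-MS intensity map so that
# the feature's first isotope lands on the block's [0, 6] pixel.
import numpy as np

BLOCK_RT, BLOCK_MZ = 15, 211  # window size in pixels (RT x m/z)
ANCHOR_RT, ANCHOR_MZ = 0, 6   # pixel where the first isotope is placed

def cut_block(lcms_map, rt_px, mz_px):
    """Cut a block whose [0, 6] pixel lands on map pixel (rt_px, mz_px)."""
    r0, c0 = rt_px - ANCHOR_RT, mz_px - ANCHOR_MZ
    block = lcms_map[r0:r0 + BLOCK_RT, c0:c0 + BLOCK_MZ]
    if block.shape != (BLOCK_RT, BLOCK_MZ):
        return None  # feature too close to the map border
    return block

# toy map: 100 x 500 pixels, one bright pixel at (40, 60)
demo = np.zeros((100, 500), dtype=np.uint8)
demo[40, 60] = 255
sample = cut_block(demo, 40, 60)
print(sample.shape, sample[0, 6])  # (15, 211) 255
```

Negative samples would be produced the same way, translated so that no feature starts in the anchor region.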

Deep Learning Model Design

The architecture of our convolution neural network is shown in Figure 3.

Figure 3: Architecture of our proposed Convolutional Neural Network

In order to detect the sharp boundary and location of the peptide features, we want the CNN to be 'equivariant to translation' (ensured by the CNN filters), which generalizes edge, texture, and shape detection across different locations, but not 'invariant to translation' (ensured by pooling layers), which causes the precise location of the detected features to matter less. Therefore we avoid using pooling layers. We implement our model using the TensorFlow library developed by Google. We apply stochastic optimization using TensorFlow's 'AdamOptimizer' [15]. We use the rectified linear unit (ReLU) as the activation function of the neurons and sparse softmax cross entropy as the error function at the output layer (https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). We add dropout after the final convolution layer and the fully connected layer with a rate of 0.50, which increases the validation accuracy by 1.5%. The minibatch size is set to 128 to ensure enough weight updates in each epoch. We check the accuracy on the validation set after training on every 10 minibatches. We shuffle the data after each epoch, which helps to achieve faster convergence. Our CNN converges within 30 epochs.
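The design choices above can be sketched in modern Keras. This is a minimal sketch only: the layer counts and filter sizes below are our assumptions, not the paper's exact architecture; what it does reproduce from the text is the [15 x 211] input, the absence of pooling, ReLU activations, dropout of 0.50, a 10-way output (charge 0 to 9) trained with sparse softmax cross entropy, and the Adam optimizer.

```python
# Sketch of a DeepIso-like classifier: convolution only (no pooling, so
# precise feature location still matters), dropout 0.50, 10-way output.
import tensorflow as tf

def build_deepiso_like_cnn():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(15, 211, 1)),
        # filter counts/sizes here are illustrative assumptions
        tf.keras.layers.Conv2D(16, (3, 5), activation="relu"),
        tf.keras.layers.Conv2D(32, (3, 5), activation="relu"),
        tf.keras.layers.Dropout(0.50),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.50),
        tf.keras.layers.Dense(10),  # logits for charge classes 0-9
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model

model = build_deepiso_like_cnn()
print(model.output_shape)  # (None, 10)
```

Emitting logits and using the `from_logits=True` sparse cross-entropy loss mirrors the paper's use of `sparse_softmax_cross_entropy_with_logits`.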

2.2 Step 2: Scan the Test LC-MS Map

The second step starts the testing phase, where the CNN trained in Step 1 is given an LC-MS map of some sample. As shown in Figure 1, it scans the whole map in a sliding window fashion, pixel by pixel, in column-major order from bottom to top. At each coordinate of the map, a window or block of dimension [15 x 211] starting at that coordinate is fetched and fed as an input image to the CNN. The CNN produces an output from 0 to 9 indicating whether it sees any feature starting at that coordinate, and if so, its charge. We keep nine hash tables for recording the detection coordinates of the nine classes of features during the scan. The m/z values of features (represented by the horizontal coordinate) are used as the keys of these hash tables, and the RT ranges of the isotopes in a feature (represented by the vertical coordinate) are inserted as values under these keys. For example, in Step 2 of Figure 1, a feature is detected, and its starting m/z value and the RT range of its first isotope are recorded. Interested readers are requested to have a look at the details of the scanning procedure in Appendix A2.
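The bookkeeping of this scan can be sketched as follows. This is a hedged illustration of the description above, not the authors' code: `classify` is a stand-in for the trained CNN, and plain dicts play the role of the per-charge hash tables.

```python
# Slide a [15 x 211] window over the map in column-major order, bottom to
# top; record every positive detection in one dict per charge, keyed by
# the m/z column where the feature starts.
import numpy as np

def scan_map(lcms_map, classify, block=(15, 211)):
    """Return {charge: {mz_col: [rt_rows, ...]}} for charges 1-9."""
    tables = {z: {} for z in range(1, 10)}
    rows, cols = lcms_map.shape
    h, w = block
    for c in range(cols - w + 1):            # column-major order
        for r in range(rows - h, -1, -1):    # bottom to top
            z = classify(lcms_map[r:r + h, c:c + w])
            if z > 0:  # 0 means "no feature starts here"
                tables[z].setdefault(c, []).append(r)
    return tables

# toy demo with a fake classifier that reports charge 2 whenever the
# window's anchor pixel is bright
demo = np.zeros((30, 300), dtype=np.uint8)
demo[10, 50] = 255
fake_cnn = lambda win: 2 if win[0, 0] == 255 else 0
tables = scan_map(demo, fake_cnn)
print(tables[2])  # {50: [10]}
```

A real run would replace `fake_cnn` with a forward pass of the trained model and would collapse consecutive detections into RT ranges, as Step 3 does.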

2.3 Step 3: Produce a List of Detected Peptide Features

In this step, we process the hash tables (resulting from Step 2) using some heuristics designed based on common peptide feature properties, to produce a complete list of peptide features showing the charge, the m/z and RT range of each isotope, and the intensity of the feature, as shown in Step 3 of Figure 1. For simplicity, we skip explaining the detailed procedure of processing the hash tables here and include it in Appendix A3.

To clarify the intuition of the test phase (Step 2 and Step 3), we show a worked example in Figure 4, where a small region of an LC-MS map holding a feature is shown (in (A)), along with the result of CNN detection after scanning this region (in (B)) and the corresponding records in the hash table (in (C)). The records representing the feature are further shown using an RT vs. m/z plot (in (D)). After applying Step 3, the peptide feature is listed as shown in the feature table (in (E)).

Figure 4: Visualization of Step 2 and Step 3: (A) Sample LC-MS map, (B) CNN detection in Step 2 by scanning over the sample LC-MS map, (C) Corresponding records in the hash table after the scanning, (D) RT vs. m/z plot for the detected multi-isotope pattern, (E) In Step 3, the pattern is listed as a peptide feature in the feature table

3 Result

Unlike other related research works, we do not use any kind of filter to remove noise from the LC-MS map before detecting features, because we want to see how well the CNN learns to avoid noise itself while detecting the peptide features. Therefore, we perform all our experiments on raw MS data.

We apply a 6-fold cross validation on the six LC-MS maps. Each time we keep one map for testing and use the remaining maps for training. In each fold we run the training three times and choose the model giving the best validation accuracy on the 10-category classification problem. The average training and validation accuracy over the six folds is 94.44% and 96.08% respectively. The per-class accuracy is presented in Table 1.

charge Training Accuracy (%) Validation Accuracy (%)
0 99.87 99.82
1 86.37 81.80
2 93.76 90.73
3 96.99 94.58
4 97.12 94.01
5 56.00 95.27
6 47.87 78.02
7 53.11 87.02
8 46.04 46.07
9 45.02 45.32
Table 1: Average training and validation accuracy of six folds for each class

Although we use all the peptide features in the training LC-MS maps for training, we may not find a De Novo or Database sequence for all of them. The existence of a peptide feature is supposed to be 100% accurate if the peptide feature has a De Novo or Database sequence. The existence of the other features is not necessarily wrong; however, since there is no ground truth for them, we do not know how accurate their existence is. Therefore, for the evaluation of the testing phase, we arrange the features present in the test LC-MS map into the following two types:

  • Type A: All features

  • Type B: Only those features for which we can find a De Novo or Database sequence.

In the testing phase, we say a feature is detected by our proposed model DeepIso if the feature profile reported by our method matches the feature profile provided by PEAKS. To be specific, we compare the following three criteria to decide whether a feature is detected.

  1. The charge of the feature matches.

  2. The m/z value of the starting isotope of the feature reported by DeepIso matches that of the PEAKS result within a tolerance level of 0.05 m/z. If the first isotope matches, then the remaining isotopes' m/z values match as well, since they are equidistant.

  3. The RT range of the feature reported by our method overlaps with that of the PEAKS result, and the RT value giving the highest peak intensity of the feature matches within a tolerance level of 0.5 minute.
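The three criteria can be expressed as a small predicate. This is a sketch of the matching rules above: the tolerance values come from the text, while the feature-record layout (a dict of charge, first-isotope m/z, RT range, and peak RT) is our assumption.

```python
# Decide whether a DeepIso feature matches a PEAKS feature, using the
# three criteria: charge, first-isotope m/z, and RT overlap/peak.
def is_detected(ours, peaks, mz_tol=0.05, rt_peak_tol=0.5):
    """`ours` and `peaks` are dicts with keys:
    charge, first_mz, rt_start, rt_end, rt_peak (RT in minutes)."""
    # 1. charge must match
    if ours["charge"] != peaks["charge"]:
        return False
    # 2. m/z of the first isotope within tolerance (later isotopes are
    #    equidistant, so only the first needs checking)
    if abs(ours["first_mz"] - peaks["first_mz"]) > mz_tol:
        return False
    # 3. RT ranges overlap, and the highest-intensity RT is within tolerance
    overlap = (ours["rt_start"] <= peaks["rt_end"]
               and peaks["rt_start"] <= ours["rt_end"])
    return overlap and abs(ours["rt_peak"] - peaks["rt_peak"]) <= rt_peak_tol

a = dict(charge=2, first_mz=400.25, rt_start=12.0, rt_end=12.6, rt_peak=12.3)
b = dict(charge=2, first_mz=400.27, rt_start=11.9, rt_end=12.5, rt_peak=12.2)
print(is_detected(a, b))  # True
```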

Sensitivity and Specificity of DeepIso on Test LC-MS Map

Just like other existing works in the literature, we evaluate the True Positive rate of our model on the test LC-MS map using the metric sensitivity, or recall. We denote the test LC-MS maps of the six folds as 'A', 'B', 'C', 'D', 'E', and 'F' respectively. We report the sensitivity for each fold, as well as the average sensitivity, for data Type A in Table 2 and for data Type B in Table 3.

charge A B C D E F Average
1 85.65 81.56 87.35 85.07 83.68 85.01 84.72
2 90.84 88.68 88.85 90.80 89.17 88.10 89.41
3 90.09 88.17 86.29 88.73 87.34 85.81 87.74
4 88.53 86.66 84.90 89.66 86.80 83.70 86.71
5 91.04 91.74 89.79 90.16 94.40 85.51 90.44
6 71.79 78.49 73.08 82.99 78.71 64.89 74.99
7 70.97 76.00 74.29 90.00 76.67 75.00 77.15
8 25.81 28.57 42.86 37.08 15.38 16.67 27.73
9 31.58 50.00 25.00 2.56 16.67 0 20.97
over all charges 88.89 87.09 87.38 88.36 87.55 86.38 87.61
Table 2: Sensitivity (%) on the test LC-MS map in six folds for Type A data
charge A B C D E F Average
1 No data 50 100 75.00 100 75 80.00
2 95.37 95.00 93.21 94.68 94.58 93.80 94.44
3 92.13 94.53 88.24 92.59 92.68 92.47 92.11
4 91.11 92.31 84.48 91.14 91.88 84.73 89.28
5 95.74 100 91.67 92.31 95.96 84.43 93.35
6 90.48 100 50 90.57 95.74 87.50 85.71
7 No data No data No data No data No data No data No data
8 No data No data No data No data No data No data No data
9 No data No data No data No data No data No data No data
over all charges 93.98 94.73 91.06 93.48 93.75 92.26 93.21
Table 3: Sensitivity (%) on the test LC-MS map in six folds for Type B data

Detection of features having higher intensity is important in the workflow of LC-MS/MS analysis [10]. We prepare statistics of the model's sensitivity under different intensity ranges (in terms of Area Under Curve (AUC)), starting from 0 up to the maximum intensity observed in all six LC-MS maps. The average result of the 6-fold cross validation is presented in Table 4 for data Type A and data Type B. Our CNN performs well when the intensity is higher, as expected.

Intensity Range (AUC) Type A (%) Type B (%)
73 77.25
73.61 80.06
81.63 83.82
88.52 93.04
92.24 94.03
94.02 95.13
96.04 95.93
99.24 100.00
Table 4: The average sensitivity (%) of 6-fold cross validation under different AUC range

In order to evaluate the model in terms of False Positives, some of the related works in the literature use the metric precision instead of specificity, because in their methods there is no way of defining True Negative cases. However, in some works True Negative cases are defined based on the processing strategy. For example, Conley et al. [12] define True Negative cases in terms of unused centroids, and report their model's specificity. Since our model finds the features in the LC-MS map by scanning it with a CNN, the evaluation metric specificity seems more appropriate to measure the performance of the CNN. We use a worked example to explain how we define True Negatives in terms of pixels. Let us consider the CNN detections after scanning over a small area of the LC-MS map having dimension [10 x 20], as shown in Figure 5. Here the CNN says 'Yes' in the pixels shown in Black and Green; in all other pixels the CNN says 'No'. The Black pixels belong to two True Positive features. The Green pixels represent a False Positive feature, and the Red pixels represent a False Negative feature. The remaining White pixels represent True Negative cases. According to the figure, we can calculate the specificity in this region as follows:

specificity = TN / (TN + FP)
Figure 5: Calculation of specificity

In this way, the average specificity over the 6-fold cross validation of our model is 99.44%.
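The pixel-level definition above amounts to counting True Negative and False Positive pixels in the scanned region. A minimal sketch, with illustrative pixel counts of our own (not the actual counts behind Figure 5):

```python
# Pixel-level specificity: TN = pixels where the CNN correctly said 'No',
# FP = pixels where it said 'Yes' with no real feature present.
def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# e.g. a [10 x 20] region with 6 false-positive (Green) pixels and 180
# true-negative (White) pixels -- counts chosen for illustration only
tn, fp = 180, 6
print(round(specificity(tn, fp), 4))  # 0.9677
```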

The Correlation of Peptide Intensity Between PEAKS and DeepIso

For the statistical analysis of biological experiments, the feature intensity is of interest and has to be calculated from the raw data [10]. The technique is to first apply curve fitting over the bell-shaped intensities of the isotopes in a feature. Then the Area Under Curve (AUC) of each isotope in the feature is calculated, and these are added up to get the intensity, or AUC, of that feature. The Pearson correlation between the feature intensity calculated by our model and that of PEAKS is 0.78 for Type A data, and 0.83 for Type B data (average over the 6-fold cross validation).
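The AUC computation described above can be sketched numerically. This is a simplified illustration: we approximate each isotope's area with a rectangle-rule sum over its sampled RT trace rather than reproducing the paper's curve-fitting step, and the toy Gaussian traces are our own.

```python
# Feature intensity (AUC): sum the area under each isotope's bell-shaped
# RT trace, then add the per-isotope areas together.
import numpy as np

def feature_intensity(isotopes, rt_step=0.01):
    """isotopes: 1-D intensity traces sampled every rt_step minutes."""
    # rectangle-rule approximation of the area under each bell curve
    return sum(float(np.sum(trace)) * rt_step for trace in isotopes)

# toy feature: two bell-shaped isotope traces over a 1-minute RT window
rt = np.arange(0.0, 1.0, 0.01)
iso1 = 100 * np.exp(-((rt - 0.5) ** 2) / 0.01)
iso2 = 60 * np.exp(-((rt - 0.5) ** 2) / 0.01)
print(round(feature_intensity([iso1, iso2]), 2))
```

For a Gaussian trace of height h and variance parameter 0.005, the exact area is h·sqrt(0.01π) ≈ 0.177·h, so the sum here is close to 28.4.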

4 Discussion

First, we would like to discuss our observations on model sensitivity as listed below.

  • The per-class sensitivities differ from each other, as shown in Table 2 and Table 3, due to the imbalanced dataset, as reported in Table 5. In this table we see the distribution of features having different charges in LC-MS map 'D', for both Type A and Type B.

  • The proportion of Type B features having charge 1 in the test LC-MS map is usually very low, as shown in Table 5. As a result, the sensitivity for charge 1 shown in Table 3 does not reflect how well the CNN learns to detect them. We plan to test on LC-MS maps having more Type B charge-1 features to evaluate the model in the future.

  • The sensitivity for Type B data is usually higher than for Type A, as reported in Table 2 and Table 3. The sensitivity for Type B data is of more significance, because Type A data may contain some wrongly detected features (since PEAKS' accuracy is around 95%). However, the Type B data is supposed to be 100% accurate, since these features have an identified De Novo or Database sequence. Therefore, the higher sensitivity of our model for Type B data indicates proper learning by the CNN model. So the average sensitivity over the six folds for Type B data, 93.21%, is considered our model's sensitivity.

charge Type A Type B
1 17.46 0.18
2 39.36 53.43
3 27.78 30.53
4 7.67 9.76
5 3.43 3.75
6 2.31 2.35
7 1.10 0
8 0.61 0
9 0.27 0
Table 5: Distribution (%) of features of different charges in LC-MS map ‘D’, for both data types

Next, we point out some limitations of our DeepIso model and propose potential solutions. Based on our observations, there are two limitations of the scanning process in Step 2. The first is that the last isotope in a feature may not be recorded at all. The second is that the ending RT values of the isotopes in a feature may not be recorded correctly. The reasons for these limitations are discussed in Appendix A2. Since the intensity of the last isotope, and around the ending trail of an isotope, is usually very low, these limitations do not significantly affect the intensity (AUC) calculation. Nevertheless, we propose some potential solutions to overcome these problems:

  • To solve the first problem, we can use another CNN that is trained to scan the LC-MS map from right to left. As a result, the CNN would enter each feature from the last isotope, and would be able to detect the last isotope first.

  • The second problem can be solved by flipping the LC-MS map along the horizontal axis and letting the CNN scan again using the usual technique. As a result, it can detect the ending RT values of the isotopes first.

We would then need to combine these new detections with the old ones to produce the final list of features. Another problem, the high tolerance level, might be reduced by considering a resolution of 0.001 along both the m/z and RT axes.

Finally, we discuss some potential future directions. In our current method, we are using a very basic CNN that can only output the existence of a feature along with its charge; it cannot directly output the region where the feature lies. Thus we have to process the CNN detections with heuristics to produce the final list of features. Therefore, it will be interesting to investigate whether we can find a more suitable deep neural network that not only detects a feature but also outputs its boundary, letting us avoid heuristics completely. Some probable solutions are R-CNN [16] and Mask-CNN [17]. Besides using a CNN, we intend to experiment with integrating a CNN and a Recurrent Neural Network (RNN) into a dynamic detector [18], which is more data-driven and can predict variable-size patterns more accurately than a fixed-size filter. This type of model may also solve the limitations discussed above. Therefore, our next concern is to design a more suitable deep neural network that learns to detect the peptide features and also their boundaries correctly in the LC-MS map without applying heuristics. After designing such a model, we would like to perform a comparison with other existing software tools to evaluate the performance gain due to deep learning in terms of accuracy and efficiency. However, we believe that our current research reveals the capability of CNNs in this domain more clearly and lets us understand their limitations as well, which will help us design more powerful deep learning models to automate peptide feature detection in the future.

References

  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • Aoshima et al. [2014] Ken Aoshima, Kentaro Takahashi, Masayuki Ikawa, Takayuki Kimura, Mitsuru Fukuda, Satoshi Tanaka, Howell E Parry, Yuichiro Fujita, Akiyasu C Yoshizawa, Shin-ichi Utsunomiya, et al. A simple peak detection and label-free quantitation algorithm for chromatography-mass spectrometry. BMC bioinformatics, 15(1):376, 2014.
  • Steen and Mann [2004] Hanno Steen and Matthias Mann. The abc’s (and xyz’s) of peptide sequencing. Nature reviews. Molecular cell biology, 5(9):699, 2004.
  • Zhang et al. [2012] Jing Zhang, Lei Xin, Baozhen Shan, Weiwu Chen, Mingjie Xie, Denis Yuen, Weiming Zhang, Zefeng Zhang, Gilles A Lajoie, and Bin Ma. Peaks db: de novo sequencing assisted database search for sensitive and accurate peptide identification. Molecular & Cellular Proteomics, 11(4):M111–010587, 2012.
  • Sturm et al. [2008] Marc Sturm, Andreas Bertsch, Clemens Gröpl, Andreas Hildebrandt, Rene Hussong, Eva Lange, Nico Pfeifer, Ole Schulz-Trieglaff, Alexandra Zerck, Knut Reinert, et al. Openms–an open-source software framework for mass spectrometry. BMC bioinformatics, 9(1):163, 2008.
  • Liu et al. [2017] Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q Nelson, Greg S Corrado, et al. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442, 2017.
  • Cox and Mann [2008] Jürgen Cox and Matthias Mann. Maxquant enables high peptide identification rates, individualized ppb-range mass accuracies and proteome-wide protein quantification. Nature biotechnology, 26(12):1367–1372, 2008.
  • Palagi et al. [2005] Patricia M Palagi, Daniel Walther, Manfredo Quadroni, Sébastien Catherinet, Jennifer Burgess, Catherine G Zimmermann-Ivol, Jean-Charles Sanchez, Pierre-Alain Binz, Denis F Hochstrasser, and Ron D Appel. Msight: An image analysis software for liquid chromatography-mass spectrometry. Proteomics, 5(9):2381–2384, 2005.
  • Tautenhahn et al. [2008] Ralf Tautenhahn, Christoph Boettcher, and Steffen Neumann. Highly sensitive feature detection for high resolution lc/ms. BMC bioinformatics, 9(1):504, 2008.
  • Tengstrand et al. [2014] Erik Tengstrand, Johan Lindberg, and K Magnus Åberg. Tracmass 2 a modular suite of tools for processing chromatography-full scan mass spectrometry data. Analytical chemistry, 86(7):3435–3442, 2014.
  • Conley et al. [2014] Christopher J Conley, Rob Smith, Ralf JO Torgrip, Ryan M Taylor, Ralf Tautenhahn, and John T Prince. Massifquant: open-source kalman filter-based xc-ms isotope trace feature detection. Bioinformatics, 30(18):2636–2643, 2014.
  • Cappadona et al. [2012] Salvatore Cappadona, Peter R Baker, Pedro R Cutillas, Albert JR Heck, and Bas van Breukelen. Current challenges in software solutions for mass spectrometry-based quantitative proteomics. Amino acids, 43(3):1087–1108, 2012.
  • Tran et al. [2016] Ngoc Hieu Tran, M Ziaur Rahman, Lin He, Lei Xin, Baozhen Shan, and Ming Li. Complete de novo assembly of monoclonal antibody sequences. Scientific reports, 6, 2016.
  • Kingma and Ba [2014] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Girshick et al. [2014] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
  • Wei et al. [2016] Xiu-Shen Wei, Chen-Wei Xie, and Jianxin Wu. Mask-cnn: Localizing parts and selecting descriptors for fine-grained image recognition. arXiv preprint arXiv:1605.06878, 2016.
  • [18] Jinwei Gu, Xiaodong Yang, Shalini De Mello, and Jan Kautz. Dynamic facial analysis: From Bayesian filtering to recurrent neural network.

Appendix

A1. Explanation of Chosen Block/Window Size

The height of the block is chosen to be 15 pixels, since that seems enough to cover the bell-shaped intensity of the isotopes. On the other hand, the width of the block is chosen to be 211 pixels = 2.11 m/z, because this is sufficient to detect the equidistance property for all the charges. The isotopes in features with charge 1 are 1.0 m/z (100 pixels) apart from each other. Usually peptide features have more than two isotopes. To look over three consecutive isotopes of a feature having charge 1, a window width of 211 pixels is enough. For all other charges, the isotopes are closer to each other. Therefore, this block size lets the CNN look over a sufficient area of a peptide feature to take a decision about its existence and charge.

A2. Supplementary material for Step 2

The model trained in Step 1 is used to scan the given test LC-MS map pixel by pixel, in column-major order, from left to right and bottom to top. During scanning, the CNN produces an output value from 0 to 9, indicating either the absence of a feature (output 0) or the presence of a feature with charge 1 to 9.

To clarify the scanning procedure, please refer to Figure 7, where a small region of a demo LC-MS map containing two features (charge 2 and 4) is shown. The CNN outputs 0 in the area indicated by the arrow sign, since there is no feature there. Then, in Figure 7, we see the position where the CNN starts detecting the feature with charge 2. We keep nine hash tables, one per charge, for recording the coordinates of CNN detections, so we record this starting point in the hash table for charge 2. Note that we use the m/z value of 400.25 as the key and the RT value of 12.00 as a value under that key. The scanning outputs for the successive scans along that edge of the feature (shown by the curly brace) are also 2, indicating the continuation of the same feature. So we do not insert these successive values; instead, we keep track of the most recent RT value and wait until the CNN output changes. The last point where the CNN still outputs 2 marks the end of the detection, because after that position the bell-shape property and/or the equidistant property of the isotopes is lost; its RT is inserted as the ending RT value of this feature. As a result, the ending RT values of the isotopes in a feature may not be recorded exactly. The CNN then continues to scan, and another feature, with charge 4, is detected; its m/z and RT range are inserted into the corresponding hash table as before. This process continues, and the entries in the hash tables after the scanning over this area is done are shown in Figure 8. Please note that the last isotope in a feature may not be detected at all, because when the scanning window moves along the edge of the last isotope, as shown in Figure 9, the window does not see any further isotopes following, so it cannot decide about the charge. To visualize the CNN detections, we use the colors red, green, blue, indigo, lavender, brown, orange, yellow, and maroon, respectively, for charges 1 to 9. Some detections are presented in Figure 9.
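The per-column bookkeeping described above can be sketched in a few lines. This is a hypothetical reconstruction (function and variable names are ours, not the paper's); the trained CNN is abstracted as a grid of labels, 0 for no feature or a charge 1 to 9 at each window position.

```python
# Hedged sketch of the Step 2 hash-table bookkeeping: for each m/z column,
# a run of identical nonzero CNN labels becomes one (start RT, end RT)
# entry in the hash table of the corresponding charge.
from collections import defaultdict

def scan_map(cnn_labels, mz_values, rt_values):
    """cnn_labels[i][j]: CNN output at m/z column i, RT row j (bottom to top).
    Returns nine hash tables keyed by charge, each mapping a starting m/z
    to a list of detected RT extents."""
    tables = {z: defaultdict(list) for z in range(1, 10)}
    for i, column in enumerate(cnn_labels):
        current, start_rt = 0, None
        for j, label in enumerate(column):
            if label != current:
                if current != 0:   # a run of detections just ended
                    tables[current][mz_values[i]].append((start_rt, rt_values[j - 1]))
                if label != 0:     # a new feature edge starts here
                    start_rt = rt_values[j]
                current = label
        if current != 0:           # run reached the top of the column
            tables[current][mz_values[i]].append((start_rt, rt_values[-1]))
    return tables
```

On the demo region of Figure 7, a column at m/z 400.25 whose CNN outputs run 0, 2, 2, 2, 0 would yield a single RT extent under the key 400.25 in the charge-2 table, matching the insertion rule described above.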

Figure 6: No feature detected
Figure 7: Detection of feature starts
Figure 8: Two features are recorded in the hash tables
Figure 9: Feature detection by CNN

A3. Supplementary Material for Step 3

After the CNN scans the test LC-MS map, the detections recorded in the hash tables are processed using some heuristics based on common feature properties, to produce a complete list of peptide features showing the m/z, charge, and RT range of each isotope and the intensity of the feature. Please refer to Figure 10, where a small region of an LC-MS map holding a feature is shown (in (A)), along with the result of CNN detection after scanning this region (in (B)) and the corresponding records in the hash table (in (C)). The records representing the feature are further shown using an RT vs. m/z plot (in (D)).

Figure 10: CNN detections recorded in hash table

The following three steps are performed while processing the hash tables:

  1. Merging the RT extents: Noise during data acquisition causes breaks in a feature, as shown in Figure 11.

    Figure 11: Break within a peptide feature

    Because of this, the CNN detection may produce traces with small gaps, as shown by the arrow sign in Figure 12. We merge such gaps if they are smaller than a threshold of a few pixels (equivalently, a fraction of a minute in RT). This value is chosen by experiment.

    Figure 12: Merging of RT Extents
    Figure 13: Combining adjacent traces that overlap along the RT axis
  2. Select the center m/z for a wider isotope: Although PEAKS reports just a single m/z value for an isotope, each isotope actually has width; that is, each isotope spans multiple pixels along the m/z axis. Therefore the CNN also produces a wide detection, as visible in Figure 13. However, we have to pick one m/z value for each isotope. For a set of adjacent traces representing one isotope, we calculate the intensity in terms of the Area Under the Curve (AUC) for each of them, and select the one that gives the highest AUC, as shown in Figure 14.

    Figure 14: Selection of single m/z value for each isotope
  3. Combine potential isotopes into one feature: In this step we apply some heuristics, as explained below.

    • We focus on the equidistant isotope property and the usual shape of features, as shown in Figure 15.

      Figure 15: The leftmost three shapes represent peptide features, but the last shape is probably noise and is therefore ignored in our method
      Figure 16: Condition between consecutive isotopes in a feature
      Figure 17: List of detected peptide features

      Please refer to Figure 16, where we see a feature having charge z whose first isotope/trace is at some m/z with a certain RT extent. Its next isotope should appear 1/z m/z away, with an overlapping RT extent, satisfying the condition shown in the figure. Our experiments show that in 99% of cases this relation holds between two consecutive isotopes within the same feature. We allow ±2 pixels of distortion at the RT extreme points; this tolerance is also chosen by experiment.

    • Another property is that, for two consecutive isotopes A and B in a feature, the intensity of A should not differ drastically from the intensity of B (in either direction). Otherwise, we consider them as belonging to two different features.

    The isotopes satisfying these conditions are grouped into one feature and inserted into the final list of detected peptide features. For example, the isotopes shown in Figure 14 are grouped into one feature and listed as shown in Figure 17.
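    The heuristics above can be sketched as two small helpers. This is a hedged reconstruction, not the paper's code: the threshold constants are our assumptions (the paper's exact values are not given here), and a trace is represented as a tuple (m/z, RT start, RT end).

```python
# Hedged sketch of the Step 3 heuristics; helper names and thresholds are
# ours. merge_rt_extents implements Step 1 (closing small RT gaps) and
# is_next_isotope implements the spacing/overlap condition of Step 3.

GAP_TOLERANCE = 0.03   # assumed max RT gap to merge, in minutes
RT_SLACK = 0.02        # assumed slack at RT extremes (~2 pixels)
MZ_TOLERANCE = 0.02    # assumed tolerance around the 1/z spacing, in m/z

def merge_rt_extents(extents, gap=GAP_TOLERANCE):
    """Merge sorted (start, end) RT runs separated by gaps smaller than `gap`."""
    merged = [list(extents[0])]
    for start, end in extents[1:]:
        if start - merged[-1][1] <= gap:
            merged[-1][1] = end        # close the small break
        else:
            merged.append([start, end])
    return [tuple(x) for x in merged]

def is_next_isotope(a, b, z):
    """True if trace b = (mz, rt_start, rt_end) can follow trace a at charge z:
    spaced ~1/z m/z apart with overlapping RT extents (within the slack)."""
    spacing_ok = abs((b[0] - a[0]) - 1.0 / z) <= MZ_TOLERANCE
    overlap_ok = b[1] <= a[2] + RT_SLACK and b[2] >= a[1] - RT_SLACK
    return spacing_ok and overlap_ok
```

    Grouping then amounts to walking the traces in increasing m/z and chaining every trace that satisfies `is_next_isotope` onto the current feature, consistent with the 1/z spacing rule stated above.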

A4. Generation of Synthetic Positive Samples for Charge 5 to 9

Please refer to Figure 18 to understand the process of synthetic data generation. The window position shown by the arrow sign is the usual position for cutting positive samples; here we consider a feature having one of the higher charges (5 to 9). We assume the CNN can detect a feature from this window by observing the bell-shaped intensity and equidistant properties of the isotopes (besides other possible hidden properties). Therefore, if we slide the window upward (along the RT axis) along the edge of the first isotope of the feature and cut the images associated with those window positions, the CNN should be able to detect the same feature from these additional images, as long as the bell-shape and equidistant properties are not lost (this is also apparent from the scanning process shown in Figure 7). So, to generate synthetic positive samples, starting from the usual window position, we slide the window upward by a small number of pixels (the range of shifts chosen by experiment), cut the associated images from the LC-MS map, label them with the feature's charge, and add them to the training dataset. Please note that the validation set does not contain any synthetic images.
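The sliding-window augmentation can be sketched as follows. This is a minimal illustration under our own assumptions (the shift range and array layout are not specified in the paper); only the 15 × 211 block size comes from A1.

```python
# Hypothetical sketch of synthetic positive-sample generation for high
# charges: cut the usual window plus copies shifted upward along the RT
# axis, all labeled with the same charge.
import numpy as np

BLOCK_H, BLOCK_W = 15, 211   # window size from A1 (RT pixels x m/z pixels)

def synthetic_samples(lc_ms_map, row, col, charge, max_shift=5):
    """Return (patch, charge) pairs: the usual window at (row, col) and
    copies shifted upward along RT by 1..max_shift pixels. max_shift is
    an assumed parameter; the paper chooses the shift range by experiment."""
    samples = []
    for d in range(0, max_shift + 1):
        patch = lc_ms_map[row + d : row + d + BLOCK_H, col : col + BLOCK_W]
        if patch.shape == (BLOCK_H, BLOCK_W):   # skip windows past the map edge
            samples.append((patch, charge))
    return samples
```

Each shifted crop still shows the bell-shape and equidistant properties near the isotope edge, which is the stated justification for labeling it with the same charge as the original window.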

Figure 18: Synthetic positive sample generation for charge 5 to 9