1 Introduction
Learning from biological sequences is important for a variety of scientific fields such as evolution [8] or human health [15]. In order to use classical statistical models, a first step is often to map sequences to vectors of fixed size, while retaining relevant features for the considered learning task. For a long time, such features were extracted from sequence alignment, either against a reference or among each other
[3]. The resulting features are appropriate for sequences that are similar enough, but they become ill-defined when sequences are not suited to alignment. This includes important cases such as microbial genomes, distant species, or human diseases, and calls for alternative representations [7].

String kernels provide generic representations for biological sequences, most of which do not require global alignment [33]. In particular, a classical approach maps sequences to a huge-dimensional feature space by enumerating statistics about all occurring subsequences. These subsequences may be classical k-mers, leading to the spectrum kernel [20], k-mers up to mismatches [21], or gap-allowing subsequences [23]. Other approaches involve kernels based on a generative model [16, 34], or based on local alignments between sequences [35] inspired by convolution kernels [11, 36].
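To make the subsequence-enumeration idea concrete, here is a minimal sketch (ours, not from the cited papers) of a spectrum-kernel-style feature map that counts exact k-mer occurrences and compares two sequences through the inner product of their count vectors:

```python
from collections import Counter

def spectrum_features(seq, k):
    """Map a sequence to its k-mer count vector, stored sparsely."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(x, y, k):
    """Inner product between the k-mer count vectors of x and y."""
    fx, fy = spectrum_features(x, k), spectrum_features(y, k)
    return sum(c * fy[u] for u, c in fx.items())

# "GATTACA" and "ATTAC" share the 3-mers ATT, TTA and TAC, once each.
print(spectrum_kernel("GATTACA", "ATTAC", 3))  # 3
```

The presence/absence variant of [20] is obtained by thresholding the counts at 1; the mismatch kernel [21] additionally credits k-mers within a small Hamming distance.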
The goal of kernel design is then to encode prior knowledge in the learning process. For instance, modeling gaps in biological sequences is important since it allows taking into account short insertion and deletion events, a common source of genetic variation. However, even though kernel methods are good at encoding prior knowledge, they provide fixed, task-independent representations. When large amounts of data are available, approaches that optimize the data representation for the prediction task are now often preferred. For instance, convolutional neural networks
[18] are commonly used for DNA sequence modeling [1, 2, 40], and have been successful for natural language processing
[17]. While convolution filters learned over images are interpreted as image patches, those learned over sequences are viewed as sequence motifs. Recurrent neural networks (RNNs) such as long short-term memory networks (LSTMs)
[13] are also commonly used in both biological [14] and natural language processing contexts [5, 25].

Motivated by the regularization mechanisms of kernel methods, which are useful when the amount of data is small and are yet imperfect in neural networks, hybrid approaches have been developed between the kernel and neural network paradigms [6, 26, 39]. Closely related to our work, the convolutional kernel network (CKN) model originally developed for images [24] was successfully adapted to biological sequences in [4]. CKNs for sequences consist in a continuous relaxation of the mismatch kernel: while the latter represents a sequence by its content in k-mers up to a few discrete errors, the former considers a continuous relaxation, leading to an infinite-dimensional sequence representation. Finally, a kernel approximation relying on the Nyström method [37]
projects the mapped sequences onto a linear subspace of the RKHS, spanned by a finite number of motifs. When these motifs are learned end-to-end with backpropagation, learning with CKNs can also be thought of as performing feature selection in the—infinite-dimensional—RKHS.
In this paper, we generalize CKNs for sequences by allowing gaps in motifs, motivated by genomics applications. The kernel map retains the convolutional structure of CKNs, but the kernel approximation that we introduce can be computed with a recurrent network, which we call a recurrent kernel network (RKN). This RNN arises from the dynamic programming structure used to compute the substring kernel of [23] efficiently, a link already exploited by [19] to derive their sequence neural network. Both our kernel and theirs rely on an RNN to build a representation of an input sequence by computing a string kernel between this sequence and a set of learnable filters. Yet, our model exhibits major differences with [19], who use the regular substring kernel of [23] and compose this representation with another nonlinear map—by applying an activation function to the output of the RNN. By contrast, we obtain our RKHS by relaxing the substring kernel to allow inexact matching at the compared positions. The resulting feature space can be interpreted as a continuous neighborhood around all substrings (with gaps) of the described sequence. Furthermore, our RNN provides a finite-dimensional approximation of the relaxed kernel, relying on the Nyström approximation method [37]. As a consequence, RKNs may be learned in an unsupervised manner (in such a case, the goal is to approximate the kernel map), or with supervision, which may be interpreted as performing feature selection in the RKHS.
Contributions.
In this paper, we make the following contributions:
We generalize convolutional kernel networks for
sequences [4] to allow gaps, an important option for biological data.
As in [4], we observe that the kernel formulation brings practical benefits over traditional CNNs or RNNs [14] when the amount of labeled data is small or moderate.
We provide a kernel point of view on recurrent neural networks, with new unsupervised and supervised learning algorithms. The resulting RKHS can be interpreted in terms of gappy motifs, and end-to-end learning amounts to performing feature selection in this RKHS.
Based on [27], we propose a new way to simulate max pooling in RKHSs, thus solving a classical discrepancy between theory and practice in the literature of string kernels, where sums are often replaced by a maximum operator that does not ensure positive definiteness [35].

2 Background on Kernel Methods and String Kernels
Kernel methods consist in mapping data points living in a set $\mathcal{X}$ to a possibly infinite-dimensional Hilbert space $\mathcal{H}$, through a mapping function $\varphi: \mathcal{X} \to \mathcal{H}$, before learning a simple predictive model in $\mathcal{H}$ [32]. The so-called kernel trick makes it possible to perform learning without explicitly computing this mapping, as long as the inner product $K(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \rangle_{\mathcal{H}}$ between two points can be efficiently computed. Whereas kernel methods traditionally lack scalability since they require computing an $n \times n$ Gram matrix, where $n$ is the amount of training data, recent approaches based on approximations have managed to make kernel methods work at large scale in many cases [29, 37].
For sequences in $\mathcal{X} = \bigcup_{k \ge 0} \mathcal{A}^k$, which is the set of sequences of any possible length over an alphabet $\mathcal{A}$, the mapping $\varphi$ often enumerates subsequence content. For instance, the spectrum kernel maps sequences to a fixed-length vector $\varphi(\mathbf{x}) = (\varphi_u(\mathbf{x}))_{u \in \mathcal{A}^k}$, where $\mathcal{A}^k$ is the set of $k$-mers—length-$k$ sequences of characters in $\mathcal{A}$ for some $k$ in $\mathbb{N}$—and $\varphi_u(\mathbf{x}) = 1$ if $\mathbf{x}$ contains an exact occurrence of $u$ and $0$ otherwise [20]. The mismatch kernel [21] operates similarly, but $\varphi_u(\mathbf{x}) = 1$ if $\mathbf{x}$ contains an occurrence of $u$ up to a few mismatched letters, which is useful when $k$ is large and exact occurrences are rare.
2.1 Substring kernels
As in [19], we consider the substring kernel introduced in [23], which makes it possible to model the presence of gaps when trying to match a substring $u$ to a sequence $\mathbf{x}$. Modeling gaps requires introducing the following notation: $\mathcal{I}(\mathbf{x}, k)$ denotes the set of index tuples $\mathbf{i} = (i_1, \dots, i_k)$ of sequence $\mathbf{x}$ with $k$ elements satisfying $1 \le i_1 < \dots < i_k \le |\mathbf{x}|$, where $|\mathbf{x}|$ is the length of $\mathbf{x}$. For an index set $\mathbf{i}$ in $\mathcal{I}(\mathbf{x}, k)$, we may now consider the subsequence $\mathbf{x}_{\mathbf{i}} = (\mathbf{x}_{i_1}, \dots, \mathbf{x}_{i_k})$ of $\mathbf{x}$ indexed by $\mathbf{i}$. Then, the substring kernel takes the same form as the mismatch and spectrum kernels, but $\varphi_u(\mathbf{x})$ counts all—consecutive or not—subsequences of $\mathbf{x}$ equal to $u$, and weights them by the number of gaps. Formally, we consider a parameter $\lambda$ in $[0, 1]$, and $\varphi_u(\mathbf{x}) = \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \lambda^{\mathrm{gaps}(\mathbf{i})} \delta(u, \mathbf{x}_{\mathbf{i}})$, where $\delta(u, v) = 1$ if and only if $u = v$ (and $0$ otherwise), and $\mathrm{gaps}(\mathbf{i}) = i_k - i_1 - k + 1$ is the number of gaps in the index set $\mathbf{i}$. When $\lambda$ is small, gaps are heavily penalized, whereas a value close to $1$ gives similar weights to all occurrences. Ultimately, the resulting kernel between two sequences $\mathbf{x}$ and $\mathbf{x}'$ is
$$K_k(\mathbf{x}, \mathbf{x}') = \sum_{u \in \mathcal{A}^k} \varphi_u(\mathbf{x})\, \varphi_u(\mathbf{x}') = \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \sum_{\mathbf{j} \in \mathcal{I}(\mathbf{x}',k)} \lambda^{\mathrm{gaps}(\mathbf{i})} \lambda^{\mathrm{gaps}(\mathbf{j})}\, \delta(\mathbf{x}_{\mathbf{i}}, \mathbf{x}'_{\mathbf{j}}). \quad (1)$$
As we will see in Section 3, our RKN model relies on (1), but unlike [19], we replace the quantity $\delta(\mathbf{x}_{\mathbf{i}}, \mathbf{x}'_{\mathbf{j}})$ that matches exact occurrences by a relaxation, allowing more subtle comparisons. Then, we will show that the model can be interpreted as a gap-allowing extension of CKNs for sequences. We also note that even though (1) seems computationally expensive at first sight, it was shown in [23] that it admits a dynamic programming structure leading to efficient computations.
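For intuition, the gap-weighted feature map and kernel can be evaluated by brute-force enumeration on short sequences (a sketch of ours with illustrative helper names; the efficient dynamic program appears in Section 3.3):

```python
from itertools import combinations

def gapped_features(seq, k, lam):
    # phi_u(x) = sum over ordered index sets i of size k of lam**gaps(i),
    # where gaps(i) = i_k - i_1 - k + 1 counts only internal gaps.
    phi = {}
    for idx in combinations(range(len(seq)), k):
        u = "".join(seq[i] for i in idx)
        gaps = idx[-1] - idx[0] - k + 1
        phi[u] = phi.get(u, 0.0) + lam ** gaps
    return phi

def substring_kernel(x, y, k, lam):
    # K_k(x, y) = sum_u phi_u(x) * phi_u(y)
    px, py = gapped_features(x, k, lam), gapped_features(y, k, lam)
    return sum(v * py.get(u, 0.0) for u, v in px.items())

# In "cat", the 2-mer "ct" occurs once with one gap: phi_ct = lam.
print(gapped_features("cat", 2, 0.5)["ct"])   # 0.5
print(substring_kernel("cat", "ct", 2, 0.5))  # 0.5
```

With $\lambda$ close to 1 the gapped occurrence of "ct" counts almost as much as a contiguous one; with $\lambda$ close to 0 it vanishes, recovering spectrum-like behavior.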
2.2 The Nyström method
When computing the Gram matrix is infeasible, it is typical to use kernel approximations [29, 37], consisting in finding a $q$-dimensional mapping $\psi: \mathcal{X} \to \mathbb{R}^q$ such that the kernel value $K(\mathbf{x}, \mathbf{x}')$ can be approximated by a Euclidean inner product $\langle \psi(\mathbf{x}), \psi(\mathbf{x}') \rangle$. Then, kernel methods can be simulated by a linear model operating on $\psi(\mathbf{x})$, which does not raise scalability issues if $q$ is reasonably small. Among kernel approximations, the Nyström method consists in projecting points of the RKHS onto a $q$-dimensional subspace, allowing one to represent points in a $q$-dimensional coordinate system.
Specifically, consider a collection $Z = \{\mathbf{z}_1, \dots, \mathbf{z}_q\}$ of points in $\mathcal{X}$ and consider the subspace $\mathcal{E} = \mathrm{Span}(\varphi(\mathbf{z}_1), \dots, \varphi(\mathbf{z}_q))$. The orthogonal projection of $\varphi(\mathbf{x})$ onto $\mathcal{E}$ admits the $q$-dimensional coordinates $\psi(\mathbf{x}) = K_{ZZ}^{-1/2} K_Z(\mathbf{x})$, where $K_{ZZ}$ is the Gram matrix of $K$ restricted to the samples $\mathbf{z}_1, \dots, \mathbf{z}_q$ and $K_Z(\mathbf{x})$ in $\mathbb{R}^q$ carries the kernel values $K(\mathbf{x}, \mathbf{z}_i)$ for $i = 1, \dots, q$. This approximation only requires $q$ kernel evaluations per point and often retains good performance for learning. Interestingly, as noted in [24], $\langle \psi(\mathbf{x}), \psi(\mathbf{x}') \rangle$ is exactly the inner product in $\mathcal{H}$ between the projections of $\varphi(\mathbf{x})$ and $\varphi(\mathbf{x}')$ onto $\mathcal{E}$, which remain in $\mathcal{H}$.
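As an illustration (our sketch, not the paper's code), the Nyström embedding for a Gaussian kernel takes a few lines of NumPy; on the anchor points themselves the approximation is exact:

```python
import numpy as np

def gaussian_kernel(A, B, alpha=1.0):
    # K(a, b) = exp(-alpha/2 * ||a - b||^2) for all pairs of rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-alpha / 2 * d2)

def nystrom_embedding(X, Z, alpha=1.0, eps=1e-10):
    # psi(x) = K_ZZ^{-1/2} K_Z(x): coordinates of the projection of phi(x)
    # onto the span of phi(z_1), ..., phi(z_q).
    Kzz = gaussian_kernel(Z, Z, alpha)
    w, V = np.linalg.eigh(Kzz)   # inverse square root via eigendecomposition
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T
    return gaussian_kernel(X, Z, alpha) @ inv_sqrt   # rows are psi(x_i)

rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 3))   # q = 5 anchor points in R^3
psi_Z = nystrom_embedding(Z, Z)
# On the anchors the approximation is exact: <psi(z_i), psi(z_j)> = K(z_i, z_j).
print(np.allclose(psi_Z @ psi_Z.T, gaussian_kernel(Z, Z)))  # True
```

The eigendecomposition-based inverse square root is also the form that is differentiated when anchor points are learned end-to-end later in the paper.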
When $\mathcal{X}$ is a Euclidean space—this can be the case for sequences when using a one-hot encoding representation, as discussed later—a good set of anchor points $Z$ can be obtained by simply clustering the data and choosing the centroids as anchor points [38]. The goal is then to obtain a subspace that spans the data as well as possible. Otherwise, previous work on kernel networks [4, 24] has also developed procedures to learn the set of anchor points end-to-end by optimizing over the learning objective. This approach can then be seen as performing feature selection in the RKHS.

3 Recurrent Kernel Networks
With the previous tools in hand, we now introduce RKNs. We show that they admit variants of CKNs, substring kernels, and local alignment kernels as special cases, and we discuss their relation with RNNs.
3.1 A continuous relaxation of the substring kernel allowing mismatches
From now on, and with an abuse of notation, we represent characters in $\mathcal{A}$ as vectors in $\mathbb{R}^d$. For instance, when using one-hot encoding, a DNA sequence of length $t$ can be seen as a 4-dimensional sequence $\mathbf{x} = (\mathbf{x}_1, \dots, \mathbf{x}_t)$ where each $\mathbf{x}_i$ in $\{0,1\}^4$ has a unique nonzero entry indicating which of $\{A, C, G, T\}$ is present at the $i$-th position, and we denote by $\mathcal{X}$ the set of such sequences. We now define the single-layer RKN as a generalized substring kernel (1) in which the indicator function is replaced by a kernel for $k$-mers (which is a first difference with [19]):
$$K_k(\mathbf{x}, \mathbf{x}') = \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \sum_{\mathbf{j} \in \mathcal{I}(\mathbf{x}',k)} \lambda^{\mathrm{gaps}(\mathbf{i})} \lambda^{\mathrm{gaps}(\mathbf{j})}\, e^{-\frac{\alpha}{2} \|\mathbf{x}_{\mathbf{i}} - \mathbf{x}'_{\mathbf{j}}\|^2}, \quad (2)$$
where we assume that the vectors representing characters have unit norm, such that $e^{-\frac{\alpha}{2}\|\mathbf{x}_{\mathbf{i}} - \mathbf{x}'_{\mathbf{j}}\|^2} = e^{\alpha(\langle \mathbf{x}_{\mathbf{i}}, \mathbf{x}'_{\mathbf{j}} \rangle - k)}$ is a dot-product kernel, and such that it recovers the exact-matching term $\delta(\mathbf{x}_{\mathbf{i}}, \mathbf{x}'_{\mathbf{j}})$ of (1) in the limit $\alpha \to +\infty$.
For $\lambda = 0$ and using the convention $0^0 = 1$, all the terms in these sums are zero except those with no gap, and we recover the kernel of the CKN model of [4] with a convolutional structure—up to the normalization, which is done $k$-mer-wise in CKNs instead of position-wise.
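This special case can be checked numerically (our sketch, with illustrative helper names): with $\lambda = 0$ and the convention $0^0 = 1$, brute-force evaluation of the relaxed kernel coincides with a sum of Gaussian comparisons between contiguous k-mers only:

```python
import numpy as np
from itertools import combinations

def relaxed_kernel(x, y, k, lam, alpha):
    # Brute-force evaluation of the relaxed substring kernel:
    # sum over all index-set pairs of lam**gaps times the Gaussian term.
    def weighted_kmers(s):
        for idx in combinations(range(len(s)), k):
            gaps = idx[-1] - idx[0] - k + 1
            yield lam ** gaps, np.stack([s[i] for i in idx])
    return sum(wx * wy * np.exp(-alpha / 2 * ((xi - yj) ** 2).sum())
               for wx, xi in weighted_kmers(x) for wy, yj in weighted_kmers(y))

def contiguous_kernel(x, y, k, alpha):
    # Sum of Gaussian comparisons between contiguous k-mers (CKN-like case).
    return sum(np.exp(-alpha / 2 * ((x[i:i + k] - y[j:j + k]) ** 2).sum())
               for i in range(len(x) - k + 1) for j in range(len(y) - k + 1))

rng = np.random.default_rng(0)
x, y = rng.standard_normal((5, 4)), rng.standard_normal((6, 4))
print(np.isclose(relaxed_kernel(x, y, 3, 0.0, 0.5),
                 contiguous_kernel(x, y, 3, 0.5)))  # True
```

In Python, `0.0 ** 0` evaluates to `1.0`, which conveniently implements the $0^0 = 1$ convention used in the text.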
Compared to (1), the relaxed version (2) accommodates inexact $k$-mer matching. This is important for protein sequences, where it is common to consider different similarities between amino acids in terms of substitution frequency along evolution [12]. This is also reflected in the underlying sequence representation in the RKHS illustrated in Figure 1: by considering the kernel mapping $\varphi$ and RKHS $\mathcal{H}$ such that $e^{-\frac{\alpha}{2}\|\mathbf{x}_{\mathbf{i}} - \mathbf{x}'_{\mathbf{j}}\|^2} = \langle \varphi(\mathbf{x}_{\mathbf{i}}), \varphi(\mathbf{x}'_{\mathbf{j}}) \rangle_{\mathcal{H}}$, we have
$$K_k(\mathbf{x}, \mathbf{x}') = \Big\langle \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \lambda^{\mathrm{gaps}(\mathbf{i})} \varphi(\mathbf{x}_{\mathbf{i}}),\; \sum_{\mathbf{j} \in \mathcal{I}(\mathbf{x}',k)} \lambda^{\mathrm{gaps}(\mathbf{j})} \varphi(\mathbf{x}'_{\mathbf{j}}) \Big\rangle_{\mathcal{H}}. \quad (3)$$
A natural feature map for a sequence $\mathbf{x}$ is therefore $\Phi_k(\mathbf{x}) = \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \lambda^{\mathrm{gaps}(\mathbf{i})} \varphi(\mathbf{x}_{\mathbf{i}})$: using the RKN amounts to representing $\mathbf{x}$ by a mixture of continuous neighborhoods centered on all its subsequences $\mathbf{x}_{\mathbf{i}}$, each weighted by the corresponding gap penalty $\lambda^{\mathrm{gaps}(\mathbf{i})}$.
3.2 Extension to all $k$-mers and relation to the local alignment kernel
Dependency on the hyperparameter $k$ can be removed by summing over all possible values, $K_\Sigma(\mathbf{x}, \mathbf{x}') = \sum_{k \ge 1} K_k(\mathbf{x}, \mathbf{x}')$. Interestingly, we note that $K_\Sigma$ admits the local alignment kernel of [35] as a special case. More precisely, local alignments are defined via the tensor product set $\Pi(\mathbf{x}, \mathbf{x}')$, which contains all possible alignments of positions between a pair of sequences $(\mathbf{x}, \mathbf{x}')$. The local alignment score $s(\mathbf{x}, \mathbf{x}', \pi)$ of each such alignment $\pi$ in $\Pi(\mathbf{x}, \mathbf{x}')$ is defined, by [35], as the sum of substitution scores $S(a, b)$ over the aligned characters minus penalties $g(n)$ for the gaps of length $n$ it contains, where $S$ is a symmetric substitution function and $g$ is a gap penalty function. The local alignment kernel in [35] can then be expressed in terms of the above local alignment scores (Thm. 1.7 in [35]):

$$K_{LA}^\beta(\mathbf{x}, \mathbf{x}') = \sum_{\pi \in \Pi(\mathbf{x}, \mathbf{x}')} e^{\beta\, s(\mathbf{x}, \mathbf{x}', \pi)}. \quad (4)$$
When the gap penalty function is linear—that is, $g(n) = cn$ with $c > 0$—the gap contribution $e^{-\beta g(n)}$ becomes $\lambda^n$ with $\lambda = e^{-\beta c}$. When $e^{\beta S(a,b)}$ can be written as an inner product between normalized vectors, we see that $K_{LA}^\beta$ becomes a special case of (2)—up to a constant factor—summed over all values of $k$.
This observation sheds new light on the relation between the substring and local alignment kernels, which will inspire new algorithms in the sequel. To the best of our knowledge, the link we provide between RNNs and local alignment kernels is also new.
3.3 Nyström approximation and recurrent neural networks
As in CKNs, we now use the Nyström approximation method as a building block to make the above kernels tractable. According to (3), we may first use the Nyström method described in Section 2.2 to find an approximate embedding for the quantities $\varphi(\mathbf{x}_{\mathbf{i}})$, where $\mathbf{x}_{\mathbf{i}}$ is one of the $k$-mers of $\mathbf{x}$ represented as a matrix in $\mathbb{R}^{d \times k}$. This is achieved by choosing a set $Z = \{\mathbf{z}^1, \dots, \mathbf{z}^q\}$ of anchor points in $\mathbb{R}^{d \times k}$, and by encoding $\mathbf{x}_{\mathbf{i}}$ as $K_{ZZ}^{-1/2} K_Z(\mathbf{x}_{\mathbf{i}})$—where $K_{ZZ}$ and $K_Z$ are defined as in Section 2.2 for the Gaussian kernel of $\mathcal{H}$. Such an approximation for $k$-mers yields the $q$-dimensional embedding for the sequence $\mathbf{x}$:
$$\psi_k(\mathbf{x}) = K_{ZZ}^{-1/2} \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \lambda^{\mathrm{gaps}(\mathbf{i})}\, K_Z(\mathbf{x}_{\mathbf{i}}). \quad (5)$$
Then, an approximate feature map for the kernel summed over all $k$ can be obtained by concatenating the embeddings $\psi_1(\mathbf{x}), \dots, \psi_{k_{\max}}(\mathbf{x})$ for $k_{\max}$ large enough.
The anchor points as motifs.
The continuous relaxation of the substring kernel presented in (2) allows us to learn anchor points that can be interpreted as sequence motifs, in which each position can encode a mixture of letters. This can lead to more relevant representations than $k$-mers for learning on biological sequences. For example, the fact that a DNA sequence is bound by a particular transcription factor can be associated with the presence of a T, followed by either a G or an A, followed by another T: encoding this pattern would require two $k$-mers but a single motif [4]. Our kernel is able to perform such comparisons.
Efficient computation of $\psi_k(\mathbf{x})$ and its approximation via RNNs.
A naive computation of $\psi_k(\mathbf{x})$ would require enumerating all substrings present in the sequence, whose number may be exponentially large when allowing gaps. For this reason, we use the classical dynamic programming approach of substring kernels [23]. Consider then the computation of $\psi_k(\mathbf{x})$ defined in (5) for a sequence $\mathbf{x}$ of length $t$, as well as a set of anchor points $Z = \{\mathbf{z}^1, \dots, \mathbf{z}^q\}$ with the $\mathbf{z}^l$'s in $\mathbb{R}^{d \times k}$. We also denote by $Z_j$ the set obtained when keeping only the $j$ first positions (columns) of the $\mathbf{z}^l$'s, leading to $Z_j = \{\mathbf{z}^1_{1:j}, \dots, \mathbf{z}^q_{1:j}\}$, which will serve as anchor points for the kernel used to compute $\psi_j(\mathbf{x})$. Finally, we denote by $\mathbf{z}^l_j$ in $\mathbb{R}^d$ the $j$-th column of $\mathbf{z}^l$, such that $\mathbf{z}^l = (\mathbf{z}^l_1, \dots, \mathbf{z}^l_k)$. Then, the embeddings can be computed recursively by using the following theorem:
Theorem 1.
For any $k \ge 1$ and $\mathbf{x}$ in $\mathcal{X}$ of length $t$,

$$\psi_k(\mathbf{x}) = K_{ZZ}^{-1/2}\, \mathbf{c}_k[t], \quad (6)$$

where $\mathbf{c}_j[i]$ and $\mathbf{h}_j[i]$ form sequences of vectors in $\mathbb{R}^q$ indexed by $j = 0, \dots, k$ and $i = 0, \dots, t$ such that $\mathbf{c}_j[0] = \mathbf{h}_j[0] = 0$ for $j \ge 1$, and $\mathbf{c}_0[i] = \mathbf{h}_0[i] = \mathbf{1}$ is a vector that contains only ones, while the sequence obeys the recursion

$$\mathbf{h}_j[i] = \lambda\, \mathbf{h}_j[i-1] + \mathbf{h}_{j-1}[i-1] \odot \mathbf{b}_j[i], \qquad \mathbf{c}_j[i] = \mathbf{c}_j[i-1] + \mathbf{h}_{j-1}[i-1] \odot \mathbf{b}_j[i], \quad (7)$$

where $\odot$ is the elementwise multiplication operator and $\mathbf{b}_j[i]$ is a vector in $\mathbb{R}^q$ whose entry $l$ is $e^{-\frac{\alpha}{2}\|\mathbf{z}^l_j - \mathbf{x}_i\|^2}$ and $\mathbf{x}_i$ is the $i$-th character of $\mathbf{x}$.
A proof is provided in Appendix A and is based on classical recursions for computing the substring kernel, which were interpreted as RNNs by [19]. The main difference in the RNN structure we obtain is that their nonlinearity is applied to the outcome of the network, leading to a feature map formed by composing the feature map of the substring kernel of [23] with another one from a RKHS that contains their nonlinearity. By contrast, our nonlinearities are built explicitly into the substring kernel, by relaxing the indicator function used to compare characters. The resulting feature map is a continuous neighborhood around all substrings of the described sequence. In addition, the Nyström method applies an orthogonalization factor $K_{ZZ}^{-1/2}$ to the output of the network to compute our approximation, which is perhaps the only nonstandard component of our RNN. This factor provides an interpretation of the embedding as a kernel approximation. As discussed next, it makes it possible to learn the anchor points by k-means, see [4], which also makes the initialization of the supervised learning procedure simple, without having to deal with the scaling of the initial motifs/filters.
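The dynamic program can be sketched and sanity-checked against brute-force enumeration in a few lines (our illustration, with hypothetical variable names, under the gap convention of Section 2.1; the final embedding additionally multiplies the output by the inverse square root of the anchor Gram matrix):

```python
import numpy as np
from itertools import combinations

def rkn_forward(x, Z, lam, alpha):
    # x: (t, d) sequence of character vectors; Z: (q, k, d) anchor motifs.
    # Returns the pre-projection output: for each anchor z^l, the value
    # sum over index sets i of lam**gaps(i) * prod_s K(z^l_s, x_{i_s}).
    t = x.shape[0]
    q, k, _ = Z.shape
    h_prev = np.ones((t + 1, q))             # level 0: all ones
    for j in range(1, k + 1):
        h = np.zeros((t + 1, q))
        c = np.zeros((t + 1, q))
        for i in range(1, t + 1):
            # b_j[i], entry l: exp(-alpha/2 * ||z^l_j - x_i||^2)
            b = np.exp(-alpha / 2 * ((Z[:, j - 1, :] - x[i - 1]) ** 2).sum(axis=1))
            h[i] = lam * h[i - 1] + h_prev[i - 1] * b  # one more gap, or match at i
            c[i] = c[i - 1] + h_prev[i - 1] * b        # no penalty for trailing gaps
        h_prev = h
    return c[t]

def brute_force(x, Z, lam, alpha):
    # Direct enumeration of all index sets, for checking the recursion.
    t = x.shape[0]
    q, k, _ = Z.shape
    out = np.zeros(q)
    for idx in combinations(range(t), k):
        gaps = idx[-1] - idx[0] - k + 1
        w = np.ones(q)
        for s, i in enumerate(idx):
            w *= np.exp(-alpha / 2 * ((Z[:, s, :] - x[i]) ** 2).sum(axis=1))
        out += lam ** gaps * w
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))      # length-6 sequence, d = 4
Z = rng.standard_normal((3, 2, 4))   # q = 3 anchors of length k = 2
print(np.allclose(rkn_forward(x, Z, 0.5, 0.7), brute_force(x, Z, 0.5, 0.7)))  # True
```

The recursion costs O(t k q) kernel evaluations, versus the combinatorial cost of the enumeration.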
Learning the anchor points $Z$.
We now turn to the application of RKNs to supervised learning. Given sequences $\mathbf{x}^1, \dots, \mathbf{x}^n$ in $\mathcal{X}$ and their associated labels $y^1, \dots, y^n$ in $\mathcal{Y}$, e.g., $\mathcal{Y} = \{-1, +1\}$ for binary classification or $\mathcal{Y} = \mathbb{R}$ for regression, our objective is to learn a function in the RKHS by minimizing

$$\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} L(y^i, f(\mathbf{x}^i)) + \frac{\mu}{2} \|f\|_{\mathcal{H}}^2,$$

where $L$ is a convex loss function that measures the fitness of a prediction to the true label and $\mu$ controls the smoothness of the predictive function. After injecting our kernel approximation $K(\mathbf{x}, \mathbf{x}') \approx \langle \psi(\mathbf{x}), \psi(\mathbf{x}') \rangle$, the problem becomes

$$\min_{\mathbf{w} \in \mathbb{R}^q} \frac{1}{n} \sum_{i=1}^{n} L(y^i, \langle \mathbf{w}, \psi(\mathbf{x}^i) \rangle) + \frac{\mu}{2} \|\mathbf{w}\|^2. \quad (8)$$
Following [4, 24], we can learn the anchor points $Z$ without exploiting training labels, by applying a k-means algorithm to all (or a subset of) the $k$-mers extracted from the database and using the obtained centroids as anchor points. Importantly, once $Z$ has been obtained, the linear function parametrized by $\mathbf{w}$ is still optimized with respect to the supervised objective (8). This procedure can be thought of as learning a general representation of the sequences disregarding the supervised task, which can lead to a relevant description while limiting overfitting.
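A minimal sketch of this unsupervised initialization (ours; the alphabet, sequences, and sizes are illustrative): extract one-hot encoded k-mers and run plain Lloyd iterations, keeping the centroids as anchor points:

```python
import numpy as np

def extract_kmers(seqs, alphabet, k):
    # Each k-mer becomes the concatenation of its one-hot character vectors.
    onehot = {c: np.eye(len(alphabet))[i] for i, c in enumerate(alphabet)}
    return np.array([np.concatenate([onehot[c] for c in s[i:i + k]])
                     for s in seqs for i in range(len(s) - k + 1)])

def kmeans(X, q, n_iter=20, seed=0):
    # Plain Lloyd iterations; the centroids will serve as anchor points.
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), size=q, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(q):
            if (labels == j).any():
                Z[j] = X[labels == j].mean(axis=0)
    return Z

seqs = ["GATTACA", "ACGTACGT", "TTGACCA"]
X = extract_kmers(seqs, "ACGT", k=3)   # 16 k-mers of dimension 3 * 4 = 12
Z = kmeans(X, q=4)
print(Z.shape)  # (4, 12)
```

Each centroid is a "soft" motif: a position may mix several letters instead of committing to one, matching the motif interpretation of the anchor points above.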
Another strategy consists in optimizing (8) jointly over $(\mathbf{w}, Z)$, after observing that $\psi(\mathbf{x})$ is a smooth function of $Z$. Learning can be achieved by using backpropagation over $(\mathbf{w}, Z)$, or by using an alternating minimization strategy between $\mathbf{w}$ and $Z$. This leads to an end-to-end scheme where both the representation and the function defined over this representation are learned with respect to the supervised objective (8). Backpropagation rules for most operations are classical, except for the matrix inverse square root function, which is detailed in Appendix B. Initialization is also parameter-free, since the unsupervised learning approach may be used for that purpose.
3.4 Extensions
Multilayer construction.
In order to account for long-range dependencies, it is possible to construct a multilayer model based on kernel compositions, similar to [19]. Assume that $K^{(n)}$ is the $n$-th layer kernel and $\Phi^{(n)}$ its mapping function. The corresponding $(n+1)$-th layer kernel is defined as
$$K^{(n+1)}(\mathbf{x}, \mathbf{x}') = \sum_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \sum_{\mathbf{j} \in \mathcal{I}(\mathbf{x}',k)} w(\mathbf{i})\, w(\mathbf{j}) \prod_{s=1}^{k} \tilde{K}\big(\Phi^{(n)}(\mathbf{x}_{1:i_s}), \Phi^{(n)}(\mathbf{x}'_{1:j_s})\big), \quad (9)$$
where $\tilde{K}$ will be defined in the sequel and the choice of weights $w$ slightly differs from the single-layer model. Indeed, we choose $w(\mathbf{i}) = \lambda^{\mathrm{gaps}(\mathbf{i})}$ only for the last layer of the kernel, a weight which depends on the number of gaps in the index set but not on the index positions. Since (9) involves a kernel $\tilde{K}$ operating on the representations of prefix sequences from layer $n$, the representation of the prefix $\mathbf{x}_{1:i}$ makes sense only if it carries mostly local information close to position $i$. Otherwise, information from the beginning of the sequence would be overrepresented. Ideally, we would like the range-dependency of this representation (the size of the window of indices before $i$ that influences the representation, akin to receptive fields in CNNs) to grow with the number of layers in a controllable manner. This can be achieved by choosing, for the intermediate layers, weights that additionally penalize the distance between the indices and the end of the prefix, which assigns exponentially more weight to the $k$-mers close to the end of the sequence.
For the first layer, we recover the single-layer network defined in (2) by taking the (one-hot encoded) characters themselves as the layer-0 representations and $\tilde{K}$ the Gaussian kernel. For the subsequent layers, it remains to define $\tilde{K}$ to be a homogeneous dot-product kernel, as used for instance in CKNs [24]:

$$\tilde{K}(\mathbf{u}, \mathbf{u}') = \|\mathbf{u}\|\, \|\mathbf{u}'\|\, \kappa\!\left(\left\langle \frac{\mathbf{u}}{\|\mathbf{u}\|}, \frac{\mathbf{u}'}{\|\mathbf{u}'\|} \right\rangle\right). \quad (10)$$
Note that the Gaussian kernel used for the first layer may also be written as (10) since characters are normalized. As for CKNs, the goal of homogenization is to prevent norms from growing or vanishing exponentially fast with the number of layers, while dot-product kernels lend themselves well to neural network interpretations.
As detailed in Appendix C, extending the Nyström approximation scheme to the multilayer construction may be achieved in the same manner as with CKNs—that is, we learn one approximate embedding at each layer, allowing us to replace the inner products $\tilde{K}$ by their finite-dimensional approximations—and it is easy to show that the interpretation in terms of RNNs remains valid since (9) has the same sum structure as (2).
Max pooling in RKHS.
Alignment scores (e.g., Smith–Waterman) in molecular biology rely on a max operation—over the scores of all possible alignments—to compute similarities between sequences. However, using a max in a string kernel usually breaks positive definiteness, even though it seems to perform well in practice. To circumvent this issue, a sum of exponentials is used as a proxy in [31], but it leads to an unexpected diagonal-dominance issue that makes SVM solvers unable to learn. For RKNs, the sum in (3) can also be replaced by a max,

$$\Phi_k^{\max}(\mathbf{x}) = \max_{\mathbf{i} \in \mathcal{I}(\mathbf{x},k)} \lambda^{\mathrm{gaps}(\mathbf{i})}\, \varphi(\mathbf{x}_{\mathbf{i}}), \quad (11)$$

which empirically seems to perform well but breaks the kernel interpretation, as in [31]. The corresponding recursion amounts to replacing all the sums in (7) by a max.
An alternative way to aggregate local features is the generalized max pooling (GMP) introduced in [27], which can be adapted to the context of RKHSs. Assuming that $\mathbf{x}$ before pooling is embedded as a set of local features $(\varphi_1, \dots, \varphi_p)$, GMP builds a representation $\varphi^{\mathrm{gmp}}$ whose inner product with all the local features is one: $\langle \varphi_i, \varphi^{\mathrm{gmp}} \rangle = 1$ for $i = 1, \dots, p$. This $\varphi^{\mathrm{gmp}}$ coincides with the regular max when each $\varphi_i$ is an element of the canonical basis of a finite-dimensional representation—i.e., assuming that at each position, a single feature has value 1 and all others are 0.
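In a finite-dimensional embedding, GMP reduces to a small ridge-regularized linear system (a sketch under illustrative names; eps is a hypothetical stabilizer for the solve). Stacking the p local features as rows of a matrix, we solve for the vector whose inner product with every row is approximately one:

```python
import numpy as np

def generalized_max_pooling(phi, eps=1e-6):
    # phi: (p, q) local features; find xi with <phi_i, xi> = 1 for all i via
    # the regularized least squares xi = (phi^T phi + eps I)^{-1} phi^T 1.
    p, q = phi.shape
    return np.linalg.solve(phi.T @ phi + eps * np.eye(q), phi.T @ np.ones(p))

# With canonical-basis local features, GMP behaves like a max and ignores
# how often each feature fires, whereas mean pooling does not.
phi = np.array([[1., 0., 0.],
                [1., 0., 0.],
                [0., 1., 0.]])
print(generalized_max_pooling(phi))  # close to [1, 1, 0]
print(phi.mean(axis=0))              # [2/3, 1/3, 0]
```

The repeated first feature is counted once by GMP but dominates the mean, which is exactly the max-like behavior described above.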
Since GMP is defined by a set of inner-product constraints, it can be applied to our approximate kernel embeddings by solving a linear system. This is compatible with CKNs but becomes intractable for RKNs, which pool across a combinatorial number of index sets. Instead, we heuristically apply GMP over the set of representations of all the prefixes of $\mathbf{x}$, which can be obtained from the RNN described in Theorem 1. This amounts to composing GMP with the mean poolings obtained over each prefix of $\mathbf{x}$. We observe that it performs well in our experiments. More details are provided in Appendix D.

4 Experiments
We evaluate RKNs and compare them to typical string kernels and RNNs for protein fold recognition. PyTorch code is provided with the submission, and additional details are given in Appendix E.

4.1 Datasets and implementation details
Sequencing technologies provide access to gene and, indirectly, protein sequences for yet poorly studied species. In order to predict the 3D structure and function from the linear sequence of these proteins, it is common to search for evolutionarily related ones, a problem known as homology detection. When no evolutionarily related protein with known structure is available, a—more difficult—alternative is to resort to protein fold recognition. We evaluate our RKN on such a task, where the objective is to predict which proteins share a 3D structure with the query [30].
Here we consider the Structural Classification Of Proteins (SCOP) version 1.67 [28]. We follow the preprocessing procedures of [10] and remove the sequences that are more than 95% similar, yielding 85 fold recognition tasks. Each positive training set is then extended with Uniref50 to make the dataset more balanced, as proposed in [14]. The resulting dataset can be downloaded from http://www.bioinf.jku.at/software/LSTM_protein. The number of training samples for each task is typically around 9,000 proteins, whose lengths vary from tens to thousands of amino acids. In all our experiments we use the logistic loss. We measure classification performance using auROC and auROC50 scores (area under the ROC curve, and area under the ROC curve up to 50% false positives).
For CKN and RKN, we evaluate both a one-hot encoding of amino acids by 20-dimensional binary vectors and an alternative representation relying on the BLOSUM62 substitution matrix [12]. Specifically, in the latter case, we represent each amino acid by the centered and normalized vector of its corresponding substitution probabilities with other amino acids. The local alignment kernel (4), which we include in our comparison, natively uses BLOSUM62.

Hyperparameters.
We follow the training procedure of CKNs presented in [4]. Specifically, for each of the tasks, we hold out one quarter of the training samples as a validation set, and use it to tune the bandwidth of the Gaussian kernel, the gap penalty $\lambda$, and the regularization parameter $\mu$ in the prediction layer. These parameters are then fixed across datasets. RKN training also relies on the alternating strategy used for CKNs: we use the Adam algorithm to update the anchor points, and the L-BFGS algorithm to optimize the prediction layer. We train for 100 epochs on each dataset: the initial learning rate for Adam is fixed to 0.05 and is halved as long as there is no decrease of the validation loss for 5 successive epochs. We fix $k$ to 10 and the number of anchor points $q$ to 128, and use single-layer CKNs and RKNs throughout the experiments.

Implementation details for unsupervised models.
The anchor points for CKN and RKN are learned by k-means on 30,000 $k$-mers extracted from each dataset. The resulting sequence representations are standardized by removing the mean and dividing by the standard deviation, and are used within a logistic regression classifier. The bandwidth of the Gaussian kernel and the gap penalty $\lambda$ are chosen based on the validation loss and are fixed across the datasets. The regularization parameter $\mu$ is chosen by 5-fold cross-validation on each dataset. As before, we fix $k$ to 10 and the number of anchor points to 1024. Note that the performance could be improved with a larger number of anchor points, as observed in [4], at a higher computational cost.

Comparisons and results.
The results are shown in Table 1. The BLOSUM62 versions of CKN and RKN outperform all other methods. The improvement over the mismatch and LA kernels is likely caused by end-to-end trained kernel networks learning a task-specific representation in the form of a sparse set of motifs, whereas data-independent kernels lead to learning a dense function over the set of descriptors. This difference can have a regularizing effect akin to the $\ell_1$ norm in the parametric world, by reducing the dimension of the learned linear function while retaining relevant features for the prediction task. GP-kernel also learns motifs, but relies on the exact presence of discrete motifs. Finally, both LSTM and [19] are based on RNNs but are outperformed by kernel networks. The latter was designed and optimized for NLP tasks and yields a substantially lower auROC50 on this task.
RKNs outperform CKNs, albeit not by a large margin. Interestingly, as the two kernels only differ by their allowing gaps when comparing sequences, this result suggests that this aspect is not the most important for identifying common folds. In particular, the advantage of the LA kernel over its mismatch counterpart is more likely caused by other differences than gap modeling, namely using a max rather than a mean pooling of $k$-mer similarities across the sequence, and a general substitution matrix rather than a Dirac function to quantify mismatches. Consistently, within kernel networks, GMP systematically outperforms mean pooling, while being slightly behind max pooling.
Additional details and results, scatter plots, and pairwise tests between methods to assess the statistical significance of our conclusions are provided in Appendix E.
Table 1: auROC and auROC50 scores for protein fold recognition on SCOP 1.67.

Method  pooling  auROC (one-hot)  auROC50 (one-hot)  auROC (BLOSUM62)  auROC50 (BLOSUM62)
GP-kernel [10]  0.844  0.514  –  –  
SVM-pairwise [22]  0.724  0.359  –  –  
Mismatch [21]  0.814  0.467  –  –  
LA-kernel [31]  –  –  0.834  0.504  
LSTM [14]  0.830  0.566  –  –  
CKN-seq [4]  mean  0.827  0.536  0.843  0.563 
CKN-seq [4]  max  0.837  0.572  0.866  0.621 
CKN-seq  GMP  0.838  0.561  0.856  0.608 
CKN-seq (unsup) [4]  mean  0.804  0.493  0.827  0.548 
RKN ()  mean  0.829  0.542  0.838  0.563 
RKN  mean  0.829  0.541  0.840  0.571 
RKN ()  max  0.840  0.575  0.862  0.618 
RKN  max  0.844  0.587  0.871  0.629 
RKN ()  GMP  0.840  0.563  0.855  0.598 
RKN (unsup)  mean  0.805  0.504  0.833  0.570 
Acknowledgements
This work has been supported by the grants from ANR (FASTBIG project ANR17CE23001101) and by the ERC grant number 714381 (SOLARIS).
References

Alipanahi et al. [2015]
B. Alipanahi, A. Delong, M. T. Weirauch, and B. J. Frey.
Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning.
Nature Biotechnology, 33(8):831–838, 2015.  Angermueller et al. [2016] C. Angermueller, T. Pärnamaa, L. Parts, and O. Stegle. Deep learning for computational biology. Molecular Systems Biology, 12(7):878, 2016.
 Auton et al. [2015] A. Auton, L. D. Brooks, R. M. Durbin, E. Garrison, H. M. Kang, J. O. Korbel, J. Marchini, S. McCarthy, G. McVean, and G. R. Abecasis. A global reference for human genetic variation. Nature, 526:68–74, 2015.
 Chen et al. [2019] D. Chen, L. Jacob, and J. Mairal. Biological sequence modeling with convolutional kernel networks. Bioinformatics, February 2019. doi: 10.1093/bioinformatics/btz094. URL https://dx.doi.org/10.1093/bioinformatics/btz094.
 Cho et al. [2014] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.
 Cho and Saul [2009] Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems (NIPS), pages 342–350, 2009.
 Consortium [2016] T. C. P.G. Consortium. Computational pangenomics: status, promises and challenges. Briefings in Bioinformatics, 19(1):118–135, 10 2016. ISSN 14774054. doi: 10.1093/bib/bbw089. URL https://doi.org/10.1093/bib/bbw089.
 Flagel et al. [2018] L. Flagel, Y. Brandvain, and D. R. Schrider. The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference. Molecular Biology and Evolution, 36(2):220–238, 12 2018. ISSN 07374038. doi: 10.1093/molbev/msy224. URL https://doi.org/10.1093/molbev/msy224.
 Giles [2008] M. B. Giles. Collected matrix derivative results for forward and reverse mode algorithmic differentiation. In Advances in Automatic Differentiation, pages 35–44. Springer, 2008.

Håndstad et al. [2007]
T. Håndstad, A. J. Hestnes, and P. Sætrom.
Motif kernel generated by genetic programming improves remote homology and fold detection.
BMC Bioinformatics, 8(1):23, 2007.  Haussler [1999] D. Haussler. Convolution kernels on discrete structures. Technical report, Department of Computer Science, University of California, 1999.
 Henikoff and Henikoff [1992] S. Henikoff and J. G. Henikoff. Amino acid substitution matrices from protein blocks. Proceedings of the National Academy of Sciences, 89(22):10915–10919, 1992.
 Hochreiter and Schmidhuber [1997] S. Hochreiter and J. Schmidhuber. Long shortterm memory. Neural computation, 9(8):1735–1780, 1997.
 Hochreiter et al. [2007] S. Hochreiter, M. Heusel, and K. Obermayer. Fast modelbased protein homology detection without alignment. Bioinformatics, 23(14):1728–1736, 2007.

Topol [2019]
E. J. Topol.
High-performance medicine: the convergence of human and artificial intelligence.
Nature Medicine, 25, 01 2019. doi: 10.1038/s4159101803007.  Jaakkola et al. [1999] T. S. Jaakkola, M. Diekhans, and D. Haussler. Using the fisher kernel method to detect remote protein homologies. In Conference on Intelligent Systems for Molecular Biology (ISMB), 1999.
 Kalchbrenner et al. [2014] N. Kalchbrenner, E. Grefenstette, and P. Blunsom. A convolutional neural network for modelling sentences. In Association for Computational Linguistics (ACL), 2014.
 LeCun et al. [1989] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.

Lei et al. [2017]
T. Lei, W. Jin, R. Barzilay, and T. Jaakkola.
Deriving neural architectures from sequence and graph kernels.
In
International Conference on Machine Learning (ICML)
, 2017.  Leslie et al. [2001] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In Biocomputing, pages 564–575. World Scientific, 2001.
 Leslie et al. [2004] C. S. Leslie, E. Eskin, A. Cohen, J. Weston, and W. S. Noble. Mismatch string kernels for discriminative protein classification. Bioinformatics, 20(4):467–476, 2004.

Liao and Noble [2003]
L. Liao and W. S. Noble.
Combining pairwise sequence similarity and support vector machines for detecting remote protein evolutionary and structural relationships.
Journal of Computational Biology, 10(6):857–868, 2003.  Lodhi et al. [2002] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Journal of Machine Learning Research (JMLR), 2:419–444, 2002.
Mairal [2016] J. Mairal. End-to-End Kernel Learning with Supervised Convolutional Kernel Networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
Merity et al. [2018] S. Merity, N. S. Keskar, and R. Socher. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations (ICLR), 2018.
 Morrow et al. [2017] A. Morrow, V. Shankar, D. Petersohn, A. Joseph, B. Recht, and N. Yosef. Convolutional kitchen sinks for transcription factor binding site prediction. arXiv preprint arXiv:1706.00125, 2017.

Murray and Perronnin [2014] N. Murray and F. Perronnin. Generalized max pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
Murzin et al. [1995] A. G. Murzin, S. E. Brenner, T. Hubbard, and C. Chothia. SCOP: a structural classification of proteins database for the investigation of sequences and structures. Journal of Molecular Biology, 247(4):536–540, 1995.
Rahimi and Recht [2008] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), pages 1177–1184, 2008.
Rangwala and Karypis [2005] H. Rangwala and G. Karypis. Profile-based direct kernels for remote homology detection and fold recognition. Bioinformatics, 21(23):4239–4247, 2005.
Saigo et al. [2004] H. Saigo, J.-P. Vert, N. Ueda, and T. Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20(11):1682–1689, 2004.
 Schölkopf and Smola [2002] B. Schölkopf and A. J. Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2002.
Schölkopf et al. [2004] B. Schölkopf, K. Tsuda, and J.-P. Vert. Kernel methods in computational biology. MIT Press, Cambridge, Mass., 2004.
 Tsuda et al. [2002] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 18(suppl_1):S268–S275, 2002.
Vert et al. [2004] J.-P. Vert, H. Saigo, and T. Akutsu. Convolution and local alignment kernels. Kernel methods in computational biology, pages 131–154, 2004.
 Watkins [1999] C. Watkins. Dynamic alignment kernels. In Advances in Neural Information Processing Systems (NIPS), 1999.
 Williams and Seeger [2001] C. K. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems (NIPS), pages 682–688, 2001.
Zhang et al. [2008] K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nyström low-rank approximation and error analysis. In International Conference on Machine Learning (ICML), 2008.
 Zhang et al. [2017] Y. Zhang, P. Liang, and M. J. Wainwright. Convexified convolutional neural networks. In International Conference on Machine Learning (ICML), 2017.
Zhou and Troyanskaya [2015] J. Zhou and O. Troyanskaya. Predicting effects of non-coding variants with deep learning-based sequence model. Nature Methods, 12(10):931–934, 2015.
Appendix A Nyström Approximation for Single-Layer RKN
We detail here the Nyström approximation presented in Section 3.3, which we recall for a sequence :
(12) 
Consider then the computation of defined in (12), given a set of anchor points with the ’s in . With the notations introduced in Section 3.3, we are now in a position to prove Theorem 1.
Proof.
The proof is based on Theorem 1 of [19] and Definition 2 of [23]. For , let us denote by the first entries of . We first notice that for the Gaussian kernel , we have the following factorization relation for
Thus
with defined as in the theorem.
Let us denote by if and by if . We want to prove that and . First, it is clear that for any . We show by induction on that . When , we have
and have the same recursion and initial state, and are thus identical. When , suppose that ; then we have
and have the same recursion and initial state. We have thus proved that . Let us now prove by showing that they have the same initial state and recursion. It is straightforward that ; then for we have
Therefore . ∎
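For concreteness, the generic single-layer Nyström construction underlying this appendix, which embeds a point as $\psi(x) = K_{ZZ}^{-1/2} k_Z(x)$ for a kernel with anchor points $z_1, \dots, z_p$, can be sketched as follows. This is a minimal illustration with a plain Gaussian kernel; the random anchors, the bandwidth `alpha`, and the regularization `eps` are placeholder choices, not the trained quantities of the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, alpha=0.5):
    """Pairwise Gaussian kernel k(x, z) = exp(-alpha/2 * ||x - z||^2)."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * alpha * sq)

def nystrom_feature_map(X, Z, alpha=0.5, eps=1e-6):
    """psi(x) = (K_ZZ + eps*I)^{-1/2} k_Z(x): a p-dimensional embedding
    whose inner products <psi(x), psi(x')> approximate k(x, x')."""
    KZZ = gaussian_kernel(Z, Z, alpha) + eps * np.eye(len(Z))
    lam, U = np.linalg.eigh(KZZ)                 # KZZ is symmetric p.d.
    KZZ_inv_sqrt = U @ np.diag(lam ** -0.5) @ U.T
    return gaussian_kernel(X, Z, alpha) @ KZZ_inv_sqrt

rng = np.random.default_rng(0)
Z = rng.standard_normal((30, 8))   # p = 30 anchor points in R^8
X = rng.standard_normal((10, 8))

psi = nystrom_feature_map(X, Z)    # shape (10, 30)

# The approximation is exact on the anchor points themselves
# (up to the eps regularization); it is approximate elsewhere.
psi_Z = nystrom_feature_map(Z, Z)
err = np.max(np.abs(psi_Z @ psi_Z.T - gaussian_kernel(Z, Z)))
print(err)
```

The quality of the approximation away from the anchors depends on how well the anchors cover the data, which is why the anchors are learned in practice rather than drawn at random as above.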
Appendix B Backpropagation for Matrix Inverse Square Root
In Section 3.3, we have described an end-to-end scheme to jointly optimize and . The backpropagation of requires computing that of the matrix inverse square root operation, as it is involved in the approximate feature map of shown in (12). The backpropagation formula is given by the following proposition, which is based on an erratum of [24]; we include it here for completeness.
Proposition 1.
Let $A$ be a symmetric positive definite matrix in $\mathbb{R}^{k \times k}$ with eigendecomposition $A = U \Delta U^\top$, where $U$ is orthogonal and $\Delta = \operatorname{diag}(\lambda_1, \dots, \lambda_k)$ is diagonal with eigenvalues $\lambda_1, \dots, \lambda_k > 0$. Then
(13) $\quad dA^{-1/2} = U \left( K \circ \left( U^\top \, dA \, U \right) \right) U^\top, \qquad K_{ij} = \frac{-1}{\sqrt{\lambda_i}\,\sqrt{\lambda_j}\,\big(\sqrt{\lambda_i} + \sqrt{\lambda_j}\big)}.$
Proof.
First, let us differentiate with respect to the inverse matrix $A^{-1}$:
$$dA^{-1} = -A^{-1} \, dA \, A^{-1}.$$
Then, by applying the same (classical) trick to $A^{-1} = A^{-1/2} A^{-1/2}$,
$$dA^{-1} = dA^{-1/2} \, A^{-1/2} + A^{-1/2} \, dA^{-1/2}.$$
By multiplying the last relation by $U^\top$ on the left and by $U$ on the right, and writing $G = U^\top \, dA^{-1/2} \, U$, we obtain
$$G \, \Delta^{-1/2} + \Delta^{-1/2} \, G = -\Delta^{-1} \left( U^\top \, dA \, U \right) \Delta^{-1}.$$
Note that $\Delta^{-1/2}$ is diagonal. By introducing the matrix $K$ such that $K_{ij} = -1 / \big( \sqrt{\lambda_i}\,\sqrt{\lambda_j}\,(\sqrt{\lambda_i} + \sqrt{\lambda_j}) \big)$, it is then easy to show that
$$G = K \circ \left( U^\top \, dA \, U \right),$$
where $\circ$ is the Hadamard product between matrices. Then, we are left with (13). ∎
When doing backpropagation, one is usually interested in computing a quantity $\bar{A}$ such that, given the gradient $\bar{B}$ with respect to $A^{-1/2}$ (with appropriate dimensions), we have
$$\langle \bar{A}, dA \rangle = \langle \bar{B}, dA^{-1/2} \rangle;$$
see [9], for instance. Here, $\langle \cdot, \cdot \rangle$ denotes the Frobenius inner product. Then, it is easy to show that
$$\bar{A} = U \left( K \circ \left( U^\top \bar{B} \, U \right) \right) U^\top.$$
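As a sanity check, the backward pass of Proposition 1 can be verified numerically against central finite differences. The matrix size, the random test matrices, and the toy loss $L(A) = \langle G, A^{-1/2} \rangle$ below are arbitrary choices for the check, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inv_sqrt(A):
    """A^{-1/2} for a symmetric positive definite matrix A."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(lam ** -0.5) @ U.T

def inv_sqrt_backward(A, grad_out):
    """Backward pass of A -> A^{-1/2}: given grad_out = dL/d(A^{-1/2}),
    return dL/dA using the eigendecomposition formula of Proposition 1."""
    lam, U = np.linalg.eigh(A)
    s = np.sqrt(lam)
    K = -1.0 / (s[:, None] * s[None, :] * (s[:, None] + s[None, :]))
    return U @ (K * (U.T @ grad_out @ U)) @ U.T

# Random symmetric p.d. matrix and symmetric co-gradient G.
k = 5
M = rng.standard_normal((k, k))
A = M @ M.T + k * np.eye(k)          # well-conditioned s.p.d. matrix
G = rng.standard_normal((k, k))
G = (G + G.T) / 2

analytic = inv_sqrt_backward(A, G)

# Directional derivative of L(A) = <G, A^{-1/2}> along a symmetric E,
# estimated by central finite differences.
E = rng.standard_normal((k, k)); E = (E + E.T) / 2
eps = 1e-6
fd = (np.sum(G * inv_sqrt(A + eps * E))
      - np.sum(G * inv_sqrt(A - eps * E))) / (2 * eps)
print(abs(fd - np.sum(analytic * E)))  # should be tiny
```

The check compares the analytic directional derivative $\langle \bar{A}, E \rangle$ with a finite-difference estimate; both should agree to roughly single-precision accuracy for a well-conditioned input.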
Appendix C Multilayer Construction of RKN
For multilayer RKN, assume that we have defined the th layer kernel. To simplify the notation below, we consider that an input sequence is encoded at layer as , where the feature map at position is . The layer kernel is defined by induction as
(14) 
where is defined in (10). With the choice of weights described in Section 3.4, the construction scheme for an -layer RKN is illustrated in Figure 2.
The Nyström approximation scheme for multilayer RKN is obtained by inductively applying the Nyström method to the kernels from bottom to top layers. Specifically, assume that is approximated by such that the approximate feature map of at position is . Now consider a set of anchor points with the ’s in , which have unit norm at each column. We use the same notations as in the single-layer construction. Then, very similarly to the single-layer RKN, the embeddings are given by the following recursion
Theorem 2.
For any and ,
where and form a sequence of vectors in
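The inductive scheme above (apply the Nyström method layer by layer, each layer embedding the previous layer's per-position feature map) can be sketched as follows. This is a simplified illustration only: it uses a plain Gaussian kernel per layer, random unit-norm anchors, and no gap-aware recursion, so the layer sizes and kernel choice are assumptions rather than the paper's construction.

```python
import numpy as np

def nystrom_layer(H, Z, alpha=0.5, eps=1e-6):
    """One Nyström embedding layer: maps each position t of a sequence
    H (length x dim) to psi(h_t) = (K_ZZ + eps*I)^{-1/2} k_Z(h_t)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * alpha * sq)
    KZZ = k(Z, Z) + eps * np.eye(len(Z))
    lam, U = np.linalg.eigh(KZZ)
    out = k(H, Z) @ U @ np.diag(lam ** -0.5) @ U.T
    # Normalize each position to unit norm, so the next layer's anchors
    # can also be taken with unit norm, as in the construction above.
    return out / np.maximum(np.linalg.norm(out, axis=1, keepdims=True), 1e-12)

rng = np.random.default_rng(0)
seq = rng.standard_normal((20, 4))   # an encoded sequence of length 20

H = seq
for p in (16, 8):                    # two stacked layers, p anchors each
    Z = rng.standard_normal((p, H.shape[1]))
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # unit-norm anchors
    H = nystrom_layer(H, Z)

print(H.shape)                       # one p-dimensional vector per position
```

Each layer thus produces a finite-dimensional sequence that plays the role of the input sequence for the next layer, which is exactly the bottom-to-top induction used in the approximation scheme.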