Molecular Sparse Representation by 3D Ellipsoid Radial Basis Function Neural Networks via L_1 Regularization

05/09/2020 ∙ by Sheng Gui, et al. ∙ Chinese Academy of Sciences

In this paper, we develop an ellipsoid radial basis function neural network (ERBFNN) and an algorithm for the sparse representation of molecular shape. To evaluate a sparse representation of the molecular shape model, the Gaussian density map of the molecule is approximated by an ERBFNN with a relatively small number of neurons. The network is trained by optimizing a nonlinear loss function with L_1 regularization. Experimental results demonstrate that the original molecular shape can be represented with good accuracy by a much smaller ERBFNN using our algorithm. In principle, our network can also be applied to multi-resolution sparse representation of molecular shape and to coarse-grained molecular modeling.


1 Introduction

With recent advances in the field of deep learning, neural networks have been widely and successfully used for different tasks in computer vision, including object detection Sermanet et al. (2013); Girshick (2015), image classification He et al. (2016); Szegedy et al. (2016) and semantic segmentation Ronneberger et al. (2015); Krähenbühl and Koltun (2012). As a special class of feedforward neural networks (FNNs), radial basis function (RBF) networks have certain advantages over other types of FNNs, such as simpler network structures and a faster training process. Owing to their good approximation capabilities, single-output RBF networks are often used to model nonlinear functions in engineering applications. In practice, learning an RBF network involves two tasks: determining the network structure and optimizing the adaptable parameters (such as the centers and radii of the RBF neurons and the linear output weights). As Peng et al. (2007) described, if the two tasks are considered simultaneously, the learning problem becomes a hard mixed-integer programming problem. Owing to the lack of promising methods for this integrated problem, the two tasks are solved separately in many learning algorithms for RBF networks Schwenker et al. (2001). In this case, the network structure is determined in advance, and the parameters are then trained by supervised learning algorithms. It is well known that directly optimizing the empirical risk may lead to overfitting, which causes poor generalization capability. To tackle this issue, regularization techniques, such as L_1 and L_2 regularization Andrew and Gao (2007); Schölkopf et al. (2007); Perkins et al. (2003); Shen et al. (2013); Vidaurre et al. (2010), are widely used in modern machine learning. Fundamentally, a regularization term is added to the empirical risk to penalize over-complicated solutions. L_1 regularization is implemented by appending a weighted L_1 norm of the parameter vector to the loss function, which encourages the sum of the absolute values of the parameters to be small, while L_2 regularization uses the L_2 norm, which encourages the sum of the squares of the parameters to be small. There has been increasing interest in L_1 regularization because of its advantages over L_2 regularization Qian et al. (2017). For example, L_1 regularization usually produces sparse parameter vectors in which many parameters are close to zero, so a sparser solution can be obtained. In particular, if one could use deep learning Girosi (1998) to represent the shape of biomolecules sparsely, it would enable faster and more efficient approaches to tasks such as molecular docking, alignment, drug design and multiscale modeling.
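To make the difference concrete, the following minimal PyTorch sketch (not from the paper) shows how a weighted L_1 or L_2 penalty is appended to an empirical risk; the model, data and coefficient names are illustrative placeholders.

```python
import torch

def regularized_loss(model, x, y, lam=1e-3, kind="l1"):
    """Empirical risk plus a weighted L1 or L2 penalty on all model parameters."""
    risk = torch.mean((model(x) - y) ** 2)               # empirical risk (mean squared error)
    params = torch.cat([p.view(-1) for p in model.parameters()])
    if kind == "l1":
        penalty = params.abs().sum()                      # L1: drives many parameters to zero
    else:
        penalty = (params ** 2).sum()                     # L2: shrinks parameters, rarely zeroes them
    return risk + lam * penalty
```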

Biomolecules such as proteins are the fundamental functional units of life activities. Geometric modeling of biomolecules plays an important role in the fields of computer-aided drug design and computational biology Liao et al. (2015). In the computer-aided drug design field, biomolecular shape has been a vital issue of considerable interest for many years, for instance in shape-based docking problems McGann et al. (2003), molecular shape comparisons Grant et al. (1996), calculating molecular surface areas Liu et al. (2015); Weiser et al. (1999), coarse-grained molecular dynamics Wang et al. (2019), and the generalized Born models Yu et al. (2006). The biomolecular geometric shape (especially the molecular surface) is also a prerequisite for using the boundary element method (BEM) and the finite element method (FEM) in implicit solvent models Lu et al. (2008). Considering the highly complex and irregular shape of a molecule, new challenges arise in simulations involving extremely large biomolecules Chen and Lu (2013) (e.g., viruses, biomolecular complexes, etc.). The efficient representation of the molecular shape (as well as the "molecular surface" or "molecular volume") for large real biomolecules with high quality remains a critical topic Lu et al. (2008).

The molecular shape is defined in various senses Connolly (1983); Gerstein et al. (2001); Duncan and Olson (1993). For the molecular volume, the Gaussian density map is a suitable representation of the molecular shape, since Gaussian density maps provide a realistic representation of the volumetric synthetic electron density maps of biomolecules Duncan and Olson (1993). For the molecular surface, there are four important biomolecular surfaces: the van der Waals (VDW) surface, the solvent accessible surface (SAS) Lee and Richards (1971), the solvent excluded surface (SES) Richards (1977) and the Gaussian surface. The van der Waals surface is the smallest envelope enclosing a collection of spheres representing all the atoms in the system with their van der Waals radii. The SAS Lee and Richards (1971) is the trace of the centers of probe spheres rolling over the van der Waals surface. The SES Richards (1977) is the surface traced by the inward-facing surface of the probe sphere. The Gaussian surface Weiser et al. (1999); Zhang et al. (2006) is a level set of the summation of spherically symmetric volumetric Gaussian density distributions centered at each atom of the biomolecular system. In 2015, Liu et al. showed that the VDW surface, SAS and SES can be approximated well by the Gaussian surface with proper parameter selection Liu et al. (2015). Compared with the VDW surface, SAS and SES, the Gaussian surface is smooth and has been widely used in many problems in computational biology McGann et al. (2003); Grant et al. (1996); Weiser et al. (1999); Wang et al. (2019); Yu et al. (2006). Thus, in this paper, we adopt an ellipsoid RBF neural network to approximate the Gaussian density map of the molecular shape. The specific forms of the Gaussian density map and the Gaussian surface will be given in the next section.

For Gaussian density maps, the volumetric Gaussian function is constructed as a summation of Gaussian kernel functions, whose number depends on the total number of atoms in the molecule. Thus, the computational cost of biomolecular surface construction increases as the atom number (the number of Gaussian kernel functions) becomes progressively larger, which poses a significant challenge for analysis and recognition. For large biomolecules, the number of kernels in the definition of the Gaussian molecular surface may reach millions. In 2015, Liao et al. (2015) put forward an atom simplification method for the biomolecular structure based on the Gaussian molecular surface. This method contains two main steps: the first step eliminates low-contributing atoms, and the second step optimizes the center location, the radius and the decay rate of the remaining atoms based on a gradient flow method.

In the area of computer-aided geometric design, the Gaussian surface is a classical implicit surface representation. Over the last few decades, many works have focused on the implicit surface reconstruction problem, and various approaches have been presented. Carr et al. (2001) proposed a method to reconstruct an implicit surface with RBFs and used a greedy algorithm that appends centers with large residuals in order to decrease the number of basis functions; however, the result of this method is not sparse enough. Samozino et al. (2006) presented a strategy that places the RBF centers on Voronoi vertices: it first picks a user-specified number of centers by filtering and clustering a subset of the Voronoi vertices, and then obtains the reconstructed surface by solving a least-squares problem. However, because it treats the surface and the center points equally in the approximation, it leads to a larger approximation error on the surface. In 2016, Li et al. (2016) proposed a sparse RBF surface representation model. They constructed the implicit surface with Gaussian RBFs initialized on the medial axis of the input model and solved for the RBF surface with a sparse optimization technique. Sparse optimization has become a very popular technique in many active fields, for instance signal processing and computer vision Elad (2010). It has been applied in linear regression Maatta et al. (2016), deconvolution Taylor et al. (1979), signal modeling Rissanen (1978), preconditioning Grote and Huckle (1997), machine learning Girosi (1998), denoising Chen et al. (1998), and regularization Daubechies et al. (2004). In the last few years, sparse optimization has also been applied to geometric modeling and graphics problems (refer to the review Xu et al. (2015)).

In this paper, based on the structure of RBF networks, we propose an ellipsoid RBF neural network for reducing the number of kernels in the definition of the Gaussian surface while preserving the shape of the molecular surface. We highlight several differences between our method and previous shape-representation optimization methods, as well as our main contributions:

  1. Compared with other works, our focus is mainly on reducing the number of kernels in the Gaussian density map by pruning useless ellipsoid RBF neurons through L_1 regularization;

  2. The loss function of our model is a complicated nonlinear function with respect to the locations, sizes, shapes and orientations of RBFs;

  3. Different initializations and training network algorithms are proposed for solving the corresponding optimization problem in our model.

The remainder of this paper is organized as follows. Section 2 reviews some preliminary knowledge about volumetric electron density maps, the Gaussian surface, the ellipsoid Gaussian RBF and the ellipsoid RBF network, then presents our model together with an algorithm for representing Gaussian density maps sparsely. The experimental results and comparisons are presented in Section 3. We conclude the paper in Section 4.

2 Methods

2.1 Brief review of volumetric electron density maps, Gaussian surface, ellipsoid Gaussian RBF and ellipsoid RBF network

2.1.1 Volumetric electron density maps

Volumetric electron density maps are often modelled as volumetric Gaussian density maps, defined as follows,

\phi(\mathbf{x}) = \sum_{i=1}^{N} e^{-d\,(\|\mathbf{x}-\mathbf{x}_{i}\|^{2}/r_{i}^{2} - 1)},     (1)

where the parameter d > 0 controls the decay rate of the kernel functions, \mathbf{x}_{i} and r_{i} are the location and radius of atom i, and N is the number of atoms.
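For concreteness, the following NumPy sketch evaluates the density map of Eq. 1 (as reconstructed above) at a set of query points, assuming the atom centers and radii have already been read from a PQR file; the array names and the default decay rate are illustrative.

```python
import numpy as np

def gaussian_density(points, centers, radii, d=0.5):
    """phi(x) = sum_i exp(-d * (||x - x_i||^2 / r_i^2 - 1)) evaluated at each query point.

    points:  (K, 3) query points; centers: (N, 3) atom centers; radii: (N,) atom radii.
    """
    diff = points[:, None, :] - centers[None, :, :]           # (K, N, 3) pairwise differences
    sq_dist = np.sum(diff ** 2, axis=-1)                      # (K, N) squared distances
    return np.exp(-d * (sq_dist / radii[None, :] ** 2 - 1.0)).sum(axis=1)
```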

2.1.2 Gaussian surface

The Gaussian surface is defined as a level set of the volumetric synthetic electron density map,

S = \{\mathbf{x} \in \mathbb{R}^{3} : \phi(\mathbf{x}) = c\},     (2)

where c is the isovalue, which controls the volume enclosed by the Gaussian surface. Fig. 1 shows an example of a Gaussian surface. This molecule contains the entire 70S ribosome, including the 30S subunit (16S rRNA and small subunit proteins), the 50S subunit (23S rRNA, 5S rRNA, and large subunit proteins), P- and E-site tRNA, and messenger RNA. The structure is obtained from 70S ribosome3.7A model140.pdb.gz at http://rna.ucsc.edu/rnacenter/ribosomedownloads.html. Fig. 1(a) shows all the atoms in the molecule, and Fig. 1(b) shows the corresponding Gaussian surface.

Figure 1: An example of a Gaussian molecular surface visualized via VCMM Bai and Lu (2014). (a) shows the VDW surface, and (b) shows the Gaussian molecular surface generated by TMSmesh Chen et al. (2012); Chen and Lu (2011); Liu et al. (2018) with the decay rate d and isovalue c set to 0.9 and 1.0, respectively. All coordinates and corresponding radii are drawn from the PQR file that is converted from the PDB file using the PDB2PQR tool Dolinsky et al. (2004).

2.1.3 Ellipsoid Gaussian RBF

The RBF is written as \varphi(\|\mathbf{x}-\mathbf{c}_{i}\|), where \varphi is a nonnegative function defined on [0, +\infty) and \mathbf{c}_{i} is the center location of the i-th basis function. The RBF has the following basic properties: \varphi(r) \ge 0 and \varphi(r) \to 0 as r \to \infty. A typical choice of RBF is the Gaussian function

\varphi(r) = e^{-r^{2}/(2\sigma^{2})}.     (3)

In addition, there are other RBFs, including the thin plate spline RBF, e.g., \varphi(r) = r^{2}\ln r for r > 0.

Compared with other RBFs, we put forward an ellipsoid RBF with parameters describing the location, size, shape and orientation. The ellipsoid Gaussian RBF can be written as

p(\mathbf{x}) = e^{-\|\Lambda R (\mathbf{x}-\mathbf{c})\|^{2}},     (4)

where \mathbf{c} is the center of the ellipsoid Gaussian RBF, \Lambda = \mathrm{diag}(1/a_{1}, 1/a_{2}, 1/a_{3}), where (a_{1}, a_{2}, a_{3}) defines the lengths of the ellipsoid along its three main axes, and R is the total rotation matrix, equal to the product of the rotation matrices about the three coordinate axes

R = R_{x}(\alpha)\, R_{y}(\beta)\, R_{z}(\gamma),     (5)

where R_{x}(\alpha) is the rotation matrix about the x-axis:

R_{x}(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix},     (6)

R_{y}(\beta) is the rotation matrix about the y-axis:

R_{y}(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix},     (7)

R_{z}(\gamma) is the rotation matrix about the z-axis:

R_{z}(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix},     (8)

so that R is equal to

R = \begin{pmatrix} \cos\beta\cos\gamma & -\cos\beta\sin\gamma & \sin\beta \\ \cos\alpha\sin\gamma + \sin\alpha\sin\beta\cos\gamma & \cos\alpha\cos\gamma - \sin\alpha\sin\beta\sin\gamma & -\sin\alpha\cos\beta \\ \sin\alpha\sin\gamma - \cos\alpha\sin\beta\cos\gamma & \sin\alpha\cos\gamma + \cos\alpha\sin\beta\sin\gamma & \cos\alpha\cos\beta \end{pmatrix}.     (9)

2.1.4 Ellipsoid RBF Networks

The RBF network is a special FNN consisting of three layers: an input layer, a hidden layer of RBF neurons, and a linear output layer.

The activation function is chosen as the ellipsoid Gaussian function. For an input \mathbf{x} \in \mathbb{R}^{3}, the output of the ellipsoid RBF network is calculated by

s(\mathbf{x}) = \sum_{i=1}^{M} w_{i}\, p_{i}(\mathbf{x}) = \sum_{i=1}^{M} w_{i}\, e^{-\|\Lambda_{i} R_{i} (\mathbf{x}-\mathbf{c}_{i})\|^{2}},     (10)

where \mathbf{c}_{i} is the i-th ellipsoid RBF center of the hidden layer, \Lambda_{i} = \mathrm{diag}(1/a_{i1}, 1/a_{i2}, 1/a_{i3}) represents the lengths (a_{i1}, a_{i2}, a_{i3}) of the corresponding ellipsoid RBF along its three main axes, R_{i} is the rotation matrix of the i-th neuron, w_{i} is the output weight between the i-th hidden neuron and the output node, and \|\cdot\| is the L_2 norm of a vector.

Denote the parameters (i.e., the weight connecting the neuron to the output layer, the lengths, the center coordinates and the rotation angles) of the i-th hidden neuron by \theta_{i} = (w_{i}, a_{i1}, a_{i2}, a_{i3}, \mathbf{c}_{i}, \alpha_{i}, \beta_{i}, \gamma_{i}). The descriptions of the specific forms of these parameters will be given in the following section. Assume the training data set is given by T = \{(\mathbf{x}_{j}, y_{j})\}_{j=1}^{K}, where \mathbf{x}_{j} is the j-th input pattern and y_{j} is the desired output value for the j-th input pattern. The actual output vector can be calculated by

\mathbf{s} = \Phi\,\mathbf{w},     (11)

where

\Phi = \big(p_{i}(\mathbf{x}_{j})\big)_{K \times M}, \qquad \mathbf{w} = (w_{1}, \ldots, w_{M})^{T},     (12)

\mathbf{s} = (s(\mathbf{x}_{1}), \ldots, s(\mathbf{x}_{K}))^{T} is the output value vector for the K input patterns, \mathbf{w} is an M \times 1 vector, and w_{i} is the weight connecting the i-th hidden neuron to the output layer. The error vector is defined as

\mathbf{e} = \mathbf{y} - \mathbf{s},     (13)

with \mathbf{y} = (y_{1}, \ldots, y_{K})^{T}.


Figure 2: The structure of ellipsoid RBF neural network.
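As a reading aid, the following PyTorch sketch implements the forward pass of Eq. 10 as reconstructed above, with one trainable tensor each for the centers, lengths, rotation angles and output weights; the class and attribute names are our own and are not taken from the paper's code.

```python
import torch
import torch.nn as nn

class ERBFNN(nn.Module):
    """Ellipsoid RBF network of Eq. 10: M ellipsoid Gaussian neurons, one linear output."""

    def __init__(self, centers, lengths, angles, weights):
        super().__init__()
        self.centers = nn.Parameter(torch.as_tensor(centers, dtype=torch.float32))  # (M, 3)
        self.lengths = nn.Parameter(torch.as_tensor(lengths, dtype=torch.float32))  # (M, 3)
        self.angles  = nn.Parameter(torch.as_tensor(angles,  dtype=torch.float32))  # (M, 3)
        self.weights = nn.Parameter(torch.as_tensor(weights, dtype=torch.float32))  # (M,)

    def rotations(self):
        """Batch of rotation matrices R_i = R_x R_y R_z, one per neuron (Eqs. 5-9)."""
        a, b, g = self.angles[:, 0], self.angles[:, 1], self.angles[:, 2]
        ca, sa, cb, sb, cg, sg = a.cos(), a.sin(), b.cos(), b.sin(), g.cos(), g.sin()
        zeros, ones = torch.zeros_like(a), torch.ones_like(a)
        Rx = torch.stack([ones, zeros, zeros, zeros, ca, -sa, zeros, sa, ca], -1).view(-1, 3, 3)
        Ry = torch.stack([cb, zeros, sb, zeros, ones, zeros, -sb, zeros, cb], -1).view(-1, 3, 3)
        Rz = torch.stack([cg, -sg, zeros, sg, cg, zeros, zeros, zeros, ones], -1).view(-1, 3, 3)
        return Rx @ Ry @ Rz                                           # (M, 3, 3)

    def forward(self, x):                                             # x: (K, 3)
        R = self.rotations()
        diff = x[None, :, :] - self.centers[:, None, :]               # (M, K, 3)
        local = torch.einsum("mij,mkj->mki", R, diff)                 # rotate into each neuron frame
        scaled = local / self.lengths[:, None, :]                     # divide by the axis lengths
        p = torch.exp(-(scaled ** 2).sum(-1))                         # (M, K) ellipsoid RBF values
        return (self.weights[:, None] * p).sum(0)                     # (K,) network output s(x)
```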

2.2 Model and algorithm

2.2.1 Modeling with ellipsoid RBF network

The major goal of this study is to create a sparse representation of the Gaussian molecular model by the ellipsoid RBF neural network. According to the definition of the volumetric electron density map and the structure of the ellipsoid RBF network, the loss function for sparsely representing a Gaussian molecular model is as follows,

\min_{\theta}\; L(\theta) = \lambda\, E_{1}(\theta) + \mu\, E_{2}(\theta),     (14)

and the corresponding constrained condition is

w_{i} \ge 0, \qquad a_{i1}, a_{i2}, a_{i3} > 0, \qquad i = 1, \ldots, M.     (15)

The first term in Eq. 14 is a regularization term that reduces both network complexity and overfitting. Its formula is

E_{1}(\theta) = \|\mathbf{w}\|_{1} = \sum_{i=1}^{M} |w_{i}|,     (16)

where \mathbf{w} = (w_{1}, \ldots, w_{M})^{T} and \|\cdot\|_{1} denotes the L_1 norm.

The second term is the density error between the sparsely represented molecule and the original molecule at the training points set T. We have

E_{2}(\theta) = \sum_{j=1}^{K} \big( s(\mathbf{x}_{j}; \theta) - \phi(\mathbf{x}_{j}) \big)^{2},     (17)

where s is the ellipsoid RBF neural network (Eq. 10) and \phi is the volumetric electron density map (Eq. 1), which is to be approximated by s. \mathbf{x}_{j} is the j-th training point, \mathbf{c}_{i} is the center of the i-th ellipsoid RBF activation function, (a_{i1}, a_{i2}, a_{i3}) define the lengths of the ellipsoid along its three main axes, and R_{i} is a rotation matrix determined by the rotation angles (\alpha_{i}, \beta_{i}, \gamma_{i}) of the i-th ellipsoid RBF neuron. M is the number of ellipsoid RBF neurons and K is the number of training points. \theta is the network parameter vector,

\theta = (\mathbf{w}, \mathbf{a}_{1}, \ldots, \mathbf{a}_{M}, \mathbf{c}_{1}, \ldots, \mathbf{c}_{M}, \alpha_{1}, \beta_{1}, \gamma_{1}, \ldots, \alpha_{M}, \beta_{M}, \gamma_{M}),     (18)

where \mathbf{w} = (w_{1}, \ldots, w_{M})^{T} collects the output weights, \mathbf{a}_{i} = (a_{i1}, a_{i2}, a_{i3}) collects the lengths of the i-th neuron, \mathbf{c}_{i} is its center and (\alpha_{i}, \beta_{i}, \gamma_{i}) are its rotation angles.

The two parameters \lambda and \mu are used to balance the two targets: accuracy (E_{2}) and sparsity (L_1 regularization, E_{1}). The constrained conditions are explained as follows: (i) w_{i} \ge 0 indicates that the corresponding ellipsoid Gaussian RBF is nonnegative, which means each RBF in the sum can be seen as a new real physical atom with ellipsoid shape; (ii) a_{i1}, a_{i2}, a_{i3} > 0 implies that the activation function vanishes at infinity, which is consistent with the fitted function \phi. In order to transform Eq. 14 and Eq. 15 into an unconstrained loss function, we apply the substitution

w_{i} = \hat{w}_{i}^{2}, \qquad a_{ik} = \hat{a}_{ik}^{2} \; (k = 1, 2, 3), \qquad i = 1, \ldots, M,     (19)

and correspondingly rewrite p_{i} in terms of the new variables. For simplicity, we still use \theta to denote the substituted parameter vector \hat{\theta}.

Thus, the loss function of the ellipsoid RBF network for sparsely representing a molecule is:

L(\theta) = \lambda \sum_{i=1}^{M} |w_{i}| + \mu \sum_{j=1}^{K} \big( s(\mathbf{x}_{j}; \theta) - \phi(\mathbf{x}_{j}) \big)^{2}.     (20)
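Under the reconstruction of Eq. 20 above, and reusing the ERBFNN sketch from Section 2.1.4, the loss can be written in a few lines; lam and mu correspond to the coefficients λ and μ.

```python
import torch

def erbfnn_loss(model, x_train, y_train, lam, mu):
    """L(theta) = lam * sum_i |w_i| + mu * sum_j (s(x_j) - phi(x_j))^2 (Eq. 20, reconstructed)."""
    err = ((model(x_train) - y_train) ** 2).sum()   # density error term E_2 on the training points
    sparsity = model.weights.abs().sum()            # L_1 sparsity term E_1 on the output weights
    return lam * sparsity + mu * err
```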

2.2.2 Overview

In this section, we describe the algorithm used to construct the sparse representation of the Gaussian molecular density by the ellipsoid RBF neural network. The input of our method is a PQR file, which includes a list of atom centers and radii. The outputs of our method are the network parameters, which contain the centers, the lengths and the rotation angles of the ellipsoid RBF neurons and the weights connecting the hidden neurons to the output layer. The algorithm outline is as follows: first, set the training points \mathbf{x}_{j} and compute the corresponding labels y_{j}; second, initialize the ellipsoid RBF network (i.e., the number of neurons and the parameters of the ellipsoid RBF neural network); third, optimize the loss function in Eq. 20 using the ADAM algorithm Kingma and Ba (2014), minimizing the sparsity and error terms in Eq. 20 alternately. Fig. 3 demonstrates the process of our algorithm. The result shows that, using our method, the original Gaussian surface is approximated well by a summation of far fewer ellipsoid Gaussian RBFs.


Figure 3: The pipeline of our algorithm and the results of each step.

2.2.3 Training points set initialization and labelling

In order to train the network, in the first part, the training points set is initialized. The molecule is placed in a bounding box (Fig. 4) in \mathbb{R}^{3}, whose range is chosen large enough to enclose all atoms with an extra margin. The bounding box is discretized into a set of uniform grid points as shown in Fig. 4. The candidate training points are the grid points defined as follows,

\mathbf{x}_{ijk} = (x_{\min} + i\,\Delta x,\; y_{\min} + j\,\Delta y,\; z_{\min} + k\,\Delta z), \qquad 0 \le i \le N_{x},\; 0 \le j \le N_{y},\; 0 \le k \le N_{z},     (21)

where \Delta x, \Delta y, \Delta z are the grid spacings and N_{x}, N_{y}, N_{z} are, respectively, the total numbers of grid points along each direction.

Figure 4: Training points set initialization. (a) shows a real molecule (PDB ID: 1GNA) within a bounding box. (b) shows a set of uniform grid points of the bounding box. (c) shows the initial training points.

In the second part, the points of the training set are labelled for training the network parameters; the label of \mathbf{x}_{j} is calculated as y_{j} = \phi(\mathbf{x}_{j}). A set of training points T is chosen from the set of uniform grid points, and to achieve good preservation of the molecular shape the selected points are close to the Gaussian surface defined in Eq. 2. In this paper, the grid points whose density values lie within a prescribed band around the isovalue c are selected as the training points set.
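A NumPy sketch of the training-point construction described above: build a uniform grid around the molecule, evaluate the density map, and keep the points whose values fall in a band around the isovalue. The margin, grid spacing and band width are illustrative choices, since the paper's exact selection thresholds are not reproduced here; gaussian_density is the sketch from Section 2.1.1.

```python
import numpy as np

def select_training_points(centers, radii, d=0.5, c=1.0, h=1.0, margin=3.0, band=(0.1, 10.0)):
    """Return training points x_j near the Gaussian surface and their labels y_j = phi(x_j)."""
    lo, hi = centers.min(axis=0) - margin, centers.max(axis=0) + margin   # bounding box
    axes = [np.arange(lo[k], hi[k] + h, h) for k in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    phi = gaussian_density(grid, centers, radii, d)                       # density at grid points
    mask = (phi > band[0] * c) & (phi < band[1] * c)                      # keep points near phi = c
    return grid[mask], phi[mask]
```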

2.2.4 Parameter initialization of ellipsoid RBF neural network

In this section, we initialize the ellipsoid RBF neural network parameters defined in Eq. 18. Since the spherical Gaussian RBF in Eq. 1 is a degenerate case of the ellipsoid Gaussian RBF, each activation function can be initialized to coincide with the corresponding atomic kernel in Eq. 1. Thus, the initialization strategy is as follows (a code sketch of this initialization is given after the list):

  1. The lengths of the ellipsoid RBF neurons are set to constant vectors, so that each initial ellipsoid is a sphere. In this paper, they are set as follows,

    a_{i1} = a_{i2} = a_{i3} = \frac{r_{i}}{\sqrt{d}}, \qquad i = 1, \ldots, N,     (22)

    where N is the number of atoms and r_{i} is the radius of the i-th atom.

  2. The rotation angles of the ellipsoid RBF neurons are set to zero,

    \alpha_{i} = \beta_{i} = \gamma_{i} = 0, \qquad i = 1, \ldots, N.     (23)
  3. The center coordinates of the ellipsoid RBF neurons are given by the centers of the atoms as follows,

    \mathbf{c}_{i} = (x_{i}, y_{i}, z_{i}), \qquad i = 1, \ldots, N,     (24)

    where (x_{i}, y_{i}, z_{i}) are the coordinates of the i-th atom.

  4. Once the lengths have been chosen and the atom radii are given, to make each initial ellipsoid RBF activation function identical to the corresponding spherical Gaussian kernel in Eq. 1, we set the weights of the ellipsoid Gaussian RBF neural network as follows,

    w_{i} = e^{d}, \qquad i = 1, \ldots, N.     (25)
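A sketch of this initialization under the reconstructed Eqs. 22-25: each atom spawns one neuron that starts as a sphere reproducing its kernel in Eq. 1. The helper name and the exact formulas for the lengths and weights follow our reconstruction and should be read as assumptions rather than the paper's verified settings.

```python
import numpy as np

def init_erbfnn_params(centers, radii, d=0.5):
    """One neuron per atom, initialized so that w_i * p_i(x) equals the i-th kernel of Eq. 1."""
    n = len(radii)
    lengths = np.repeat((radii / np.sqrt(d))[:, None], 3, axis=1)  # a_i1 = a_i2 = a_i3 = r_i / sqrt(d)
    angles = np.zeros((n, 3))                                      # alpha_i = beta_i = gamma_i = 0
    weights = np.full(n, np.exp(d))                                # w_i = e^d
    return centers.copy(), lengths, angles, weights                # c_i = atom centers
```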

2.2.5 Sparse optimization

After initialization of the ellipsoid RBF neural network, the sparse ellipsoid RBF representation is computed by minimizing the loss function (Eq. 20). Algorithm 1 lists the main modules of our sparse optimization method, which is described below.


Algorithm 1 Sparse optimization
1:  Input: PQR file including coordinates of centers and radii of atoms.
2:  Output: The list of parameters of the ellipsoid RBF neural network, i.e., the solution \theta^{*} minimizing L(\theta).
3:  Step 1. Initialize the network parameters \theta as shown in Section 2.2.4.
4:  Step 2. Select the training points set T as shown in Section 2.2.3.
5:  Step 3. Set the maximum number of iterations n_{\max} and the number of sparse optimization iterations n_{s}; set the coefficients \lambda, \mu in Eq. 14.
6:  Step 4. Initialize the iteration counter k = 0 and set the tolerances: the pruning threshold \varepsilon and the error tolerance \delta.
7:  Step 5. Select the batch size for the optimization algorithm.
8:  Step 6. Optimize the two terms of the loss function in Eq. 20 alternately.
9:  while k < n_{\max} do
10:     k \leftarrow k + 1
11:     Step 6.1. Prune the useless ellipsoid RBF neurons every n_{p} steps.
12:     Step 6.2. Calculate s(\mathbf{x}_{j}; \theta) for the training points set by a forward pass of the network.
13:     Step 6.3. Check the maximum error between s and \phi at the training points set and update the coefficients \lambda and \mu.
14:     if the maximum error exceeds the tolerance \delta then
15:        update \lambda and \mu
16:     end if
17:     Step 6.4. Accuracy optimization for E_{2} by adjusting the coefficients \lambda and \mu.
18:     if k > n_{s} then
19:        set \lambda = 0 and keep the number of effective neurons fixed
20:     end if
21:     Step 6.5. Optimize the loss function by the batch ADAM algorithm.
22:  end while

Step 1 initializes the parameters of the ellipsoid RBF neural network. Step 2 selects the training points set T. Step 3 and Step 4 initialize some variables, i.e., the number of total iterations, the number of sparse optimization iterations, the error tolerance and the coefficients \lambda, \mu. Step 5 sets the batch size for the optimization algorithm. Step 6 is the numerical optimization of the loss function (Eq. 20). Step 6.1 prunes a useless ellipsoid RBF neuron if the corresponding weight w_{i} connecting the i-th hidden neuron to the output layer is less than the threshold \varepsilon, checked every n_{p} steps; in this paper, \varepsilon and n_{p} are fixed in advance. Step 6.2 calculates the prediction value s(\mathbf{x}_{j}; \theta) for all training points. Step 6.3 checks the maximum error between s and \phi at the training points set and updates the coefficients \lambda and \mu accordingly. Step 6.4, after n_{s} iterations, with the number of effective neurons fixed, keeps doing some steps of minimization of E_{2} to achieve better accuracy of the approximation on the training points set. Step 6.5 updates the network parameter \theta and optimizes the loss function by the batch ADAM method. The update rule of Step 6.5 is as follows,

\mathbf{m}_{k} = \beta_{1}\mathbf{m}_{k-1} + (1-\beta_{1})\,\nabla_{\theta}L(\theta_{k-1}),
\mathbf{v}_{k} = \beta_{2}\mathbf{v}_{k-1} + (1-\beta_{2})\,\big(\nabla_{\theta}L(\theta_{k-1})\big)^{2},
\hat{\mathbf{m}}_{k} = \mathbf{m}_{k}/(1-\beta_{1}^{k}), \qquad \hat{\mathbf{v}}_{k} = \mathbf{v}_{k}/(1-\beta_{2}^{k}),
\theta_{k} = \theta_{k-1} - \eta\,\hat{\mathbf{m}}_{k}\big/\big(\sqrt{\hat{\mathbf{v}}_{k}} + \epsilon\big),     (26)

where \eta is the learning rate, \beta_{1}, \beta_{2} and \epsilon are set to the default values (\beta_{1} = 0.9, \beta_{2} = 0.999, \epsilon = 10^{-8}), \mathbf{m}_{k} is the biased first moment estimate and \mathbf{v}_{k} is the biased second raw moment estimate at iteration k.
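The following PyTorch sketch strings the pieces together into a simplified version of Step 6 of Algorithm 1: ADAM updates of the reconstructed loss, periodic pruning of neurons with tiny weights, and switching off the L_1 term after n_s iterations. The pruning threshold, schedule and the use of full batches (rather than the mini-batches of Step 6.5) are illustrative simplifications, not the paper's exact settings.

```python
import torch

def train(model, x_train, y_train, lam, mu, n_max=10000, n_s=6000,
          prune_every=100, prune_eps=1e-3, lr=1e-2):
    """Simplified sparse optimization loop for the ERBFNN sketch (cf. Algorithm 1, Step 6)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for k in range(1, n_max + 1):
        if k % prune_every == 0:                                  # Step 6.1: prune useless neurons
            with torch.no_grad():
                keep = model.weights.abs() > prune_eps
                for name in ("weights", "centers", "lengths", "angles"):
                    p = getattr(model, name)
                    setattr(model, name, torch.nn.Parameter(p.data[keep]))
            opt = torch.optim.Adam(model.parameters(), lr=lr)     # rebuild optimizer state
        if k > n_s:                                               # Step 6.4: accuracy-only phase
            lam = 0.0
        opt.zero_grad()
        loss = lam * model.weights.abs().sum() + \
               mu * ((model(x_train) - y_train) ** 2).sum()       # Eq. 20 (Step 6.5, full batch)
        loss.backward()
        opt.step()
    return model
```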

3 Results and discussion

In this section, we present some numerical experimental examples to illustrate the effectiveness of our network and method for representing the Gaussian surface sparsely. Comparisons are made among our network, the original definition of the Gaussian density map, and the sparse RBF method Li et al. (2016). A set of biomolecules taken from the RCSB Protein Data Bank is chosen as a benchmark set. The number of atoms in these biomolecules ranges from hundreds to thousands. These molecules are chosen randomly from the RCSB Protein Data Bank, and no particular structure is specified. The implementation of the algorithms is based on PyTorch. All computations were run on an Nvidia Tesla P40 GPU. Further quantitative analysis of the results is given in the following subsections.

3.1 Sparse optimization results

Twenty biomolecules are chosen to be sparsely represented by the ellipsoid RBF neural network and by the sparse RBF method Li et al. (2016). For a fair comparison, the initial centers of the RBFs are set to the atom center coordinates for both methods. Table 1 shows the final number of effective basis functions obtained by our method and by the sparse RBF method.

INDEX PDBID NATOM Sparse RBF Our method
1 ADP 39 13 5
2 2LWC 75 51 6
3 3SGS 94 56 9
4 1GNA 163 108 17
5 1V4Z 266 266 22
6 1BTQ 307 252 25
7 2FLY 355 267 28
8 6BST 478 315 49
9 1MAG 552 502 46
10 2JM0 569 424 52
11 1BWX 643 537 54
12 2O3M 714 566 62
13 FAS2 906 722 76
14 2IJI 929 742 72
15 3SJ4 1283 953 132
16 3LOD 2315 1810 180
17 1RMP 3514 2871 271
18 1IF4 4251 3288 301
19 1BL8 5892 3491 452
20 AChE 8280 4438 748
Table 1: Sparse representation results for the 20 test molecules. The third column shows the number of atoms, the fourth column shows the number of RBFs obtained by the sparse RBF method, and the last column shows the number of ellipsoid RBF neurons obtained by our method. The decay rate d in Eq. 1 is uniformly taken as 0.5.

Fig. 5 presents the relation between the number of ellipsoid RBF neurons in the final sparse representation and the number of atoms in the corresponding molecule. The number of atoms for the original Gaussian molecular surfaces is shown by green lines with pentagram markers. To present the sparsity of the final results of our method, we define the sparse ratio as \rho = M/N, where M is the number of ellipsoid RBF neurons and N is the number of atoms. In Fig. 5, the changes of the sparse ratio with respect to the number of atoms for different decay rates (d in Eq. 1 equal to 0.3, 0.5 and 0.7) are shown by solid lines with square, circle and triangle markers. The slope of the dashed line is the lower bound of the observed sparse ratios, and the slope of the dotted line is their upper bound; the sparse ratios in all of our numerical experiments lie between these two bounds. The results show that the larger the decay rate (leading to a more rugged and complex molecular surface), the larger the sparse ratio. The sparse ratios for the Gaussian molecular surface with the smaller decay rate are smaller than those for the Gaussian molecular surface with the larger decay rate, as shown in Fig. 5.


Figure 5: Relationship between the number of atoms and the number of ellipsoid RBF neurons after sparse representation.

Fig. 6 shows that the loss function and the number of ellipsoid RBF neurons decrease as the number of iterations increases in the experiment for molecule 1MAG. In this experiment, n_{\max} and n_{s} are set to 10000 and 6000, respectively. After 6000 iterations, \lambda is set to zero to minimize the error term E_{2} solely, so the value of the loss function changes abruptly. The number of ellipsoid RBF neurons decreases dramatically during the iteration process. As shown in Fig. 6, the model with 46 ellipsoid RBF neurons achieves the minimum error with a relatively small number of ellipsoid RBF neurons.

Figure 6: One test of our algorithm on molecule 1MAG. The blue curve is the objective function trajectory during the 10000 iterations. The red line represents the number of basis functions. The number of initial ellipsoid RBF neurons for this trial is 552 and the number of final ellipsoid RBF neurons is 46.

The empirical computational complexity of the training algorithm for our network with respect to the number of atoms is shown in Fig. 7.


Figure 7: Computational performance of training algorithm for our network.

3.2 Shape preservation and further results analysis

In this subsection, we first check whether the Gaussian surface is preserved after the process of sparse representation through our method. The surface area, the enclosed volume and the Hausdorff distance are the three criteria used to judge whether two surfaces are close enough. These criteria can be calculated on a triangular mesh of the surface. The triangular meshes of the molecular surfaces before and after sparse representation are generated through an isosurface extraction function in MATLAB. For a triangular surface mesh, the surface area is determined using the following equation:

A = \sum_{i=1}^{n_{t}} \frac{1}{2}\,\big\| (\mathbf{v}_{i}^{2} - \mathbf{v}_{i}^{1}) \times (\mathbf{v}_{i}^{3} - \mathbf{v}_{i}^{1}) \big\|,     (27)

where n_{t} is the number of triangle elements and \mathbf{v}_{i}^{1}, \mathbf{v}_{i}^{2}, \mathbf{v}_{i}^{3} denote the coordinates of the three vertices of the i-th triangle.

The volume V enclosed by the surface mesh is determined using the following equation:

V = \frac{1}{3}\left| \sum_{i=1}^{n_{t}} A_{i}\, \mathbf{n}_{i} \cdot \mathbf{d}_{i} \right|,     (28)

where A_{i} is the area of the i-th triangle, \mathbf{n}_{i} is its unit normal and \mathbf{d}_{i} is the vector from the center of the i-th triangle to the origin.

The relative errors of area and volume and the Hausdorff distance are used to characterize the difference between the surfaces before and after sparse representation. The relative errors of area and volume are calculated using the following formulas:

E_{A} = \frac{|A_{\mathrm{our}} - A_{\mathrm{orig}}|}{A_{\mathrm{orig}}},     (29)
E_{V} = \frac{|V_{\mathrm{our}} - V_{\mathrm{orig}}|}{V_{\mathrm{orig}}},     (30)

where A_{\mathrm{orig}} and A_{\mathrm{our}} denote the surface areas of the original and our sparsely represented surfaces, respectively, and V_{\mathrm{orig}} and V_{\mathrm{our}} denote the corresponding enclosed volumes.
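For reference, a NumPy sketch of the area computation of Eq. 27, the enclosed volume (cf. Eq. 28), and the relative errors of Eqs. 29-30 on a closed triangular mesh; the volume is computed with the standard signed-tetrahedron form of the divergence theorem, which is equivalent to Eq. 28 for a closed mesh.

```python
import numpy as np

def mesh_area_volume(vertices, triangles):
    """Surface area (Eq. 27) and enclosed volume (cf. Eq. 28) of a closed triangular mesh.

    vertices: (n, 3) vertex coordinates; triangles: (m, 3) integer vertex indices.
    """
    v1, v2, v3 = (vertices[triangles[:, k]] for k in range(3))
    cross = np.cross(v2 - v1, v3 - v1)                              # 2 * area vector per triangle
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = abs(np.einsum("ij,ij->i", v1, cross).sum()) / 6.0      # signed tetrahedra to the origin
    return area, volume

def relative_error(original, ours):
    """Relative error of Eqs. 29-30."""
    return abs(ours - original) / original
```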

The Hausdorff distance between two surface meshes is defined as follows,

d_{H}(S_{1}, S_{2}) = \max\big\{ d(S_{1}, S_{2}),\; d(S_{2}, S_{1}) \big\},     (31)

where

d(S_{1}, S_{2}) = \max_{\mathbf{p} \in S_{1}} \min_{\mathbf{q} \in S_{2}} \|\mathbf{p} - \mathbf{q}\|,     (32)

S_{1} and S_{2} are the two piecewise-linear surfaces spanned by the two corresponding meshes, and \|\mathbf{p} - \mathbf{q}\| is the Euclidean distance between the points \mathbf{p} and \mathbf{q}. In our work, we use Metro Cignoni et al. (1998) to compute the Hausdorff distance.

The areas and the volumes enclosed by the surface before and after the sparse representation for each of the molecules are listed in Table 2. The Hausdorff distances between the original surface and the final surface for the biomolecules are also listed in Table 2.

Molecule  Area (Original)  Area (Ours)  Rel. error  Volume (Original)  Volume (Ours)  Rel. error  Hausdorff distance
ADP 367.9334 358.4047 0.0259 458.0317 454.5578 0.0076 0.6605
2LWC 504.8863 494.5004 0.0206 856.9564 850.5393 0.0075 0.4497
1GNA 1006.1213 995.4826 0.0106 1862.7883 1855.4815 0.0039 0.4764
1BTQ 1782.1843 1749.4445 0.0184 3412.7345 3406.8652 0.0017 0.6027
1MAG 2479.4398 2438.4246 0.0165 5732.9858 5700.8069 0.0056 0.5441
1BWX 2925.0557 2890.7706 0.0117 6638.2112 6609.3813 0.0043 0.7311
FAS2 3771.6093 3690.4698 0.0215 9198.0803 9168.8722 0.0032 0.7484
2IJI 3783.6502 3731.7199 0.0137 9537.9781 9502.8469 0.0037 0.6187
3SJ4 5887.9106 5797.6176 0.0153 13208.3953 13175.7877 0.0025 0.7209
Table 2: The areas, volumes and Hausdorff distances obtained with the original and the final surfaces for ten biomolecules. Note: the isovalue c in Eq. 2 and the initial decay rate d in Eq. 1 are fixed for all molecules.

Fig. 8 illustrates some fitted results of the sparse optimization model. The first column shows the original Gaussian surface for five molecules. The second column is the final ellipsoid Gaussian surface obtained by our method, where the blue points represent the locations of the Gaussian RBF centers; it indicates that our method needs far fewer ellipsoid RBF neurons to represent the surface. The third column is the original surface overlapped with the final surface, which shows that the final surface is close to the original surface. The last column shows the configurations of the ellipsoid RBF neurons in the sparse representations of the five molecules obtained by our method. It demonstrates that after the process of sparse representation, the ellipsoid RBF neurons are much sparser than the RBFs in the original definition of the Gaussian surface, and each ellipsoid RBF acts as a local shape descriptor of the molecular shape.

3.3 Electrostatic solvation energy calculation based on the sparsely represented surface

The algorithms introduced in the method section are used to generate the sparse surface. We also test the applicability of the original and the sparse surfaces in computations of Poisson-Boltzmann (PB) electrostatics. The boundary element method software used is a publicly available PB solver, AFMPB Zhang et al. (2015). Table 3 shows that AFMPB runs successfully and produces converged results using the sparsely represented surface, and the calculated solvation energies are close to the results obtained using the original surface. Fig. 9, rendered with VISIM (www.xyzgate.com), shows the computed electrostatic potentials mapped on the molecular surfaces. In the future, we can consider adding charge information to the sparse representation.

Molecule  Solvation energy (Original)  Solvation energy (Sparse)  Relative error
ADP -2.25992e+02 -2.30075e+02 0.0181
2FLY -2.38927e+02 -2.42670e+02 0.0157
6BST -9.16715e+02 -9.20137e+02 0.0037
2O3M -3.03482e+03 -3.05604e+03 0.0070
2IJI -6.59502e+02 -6.65894e+02 0.0097
Table 3: The solvation energies obtained with the original surface and the sparsely represented surface for five biomolecules. Note: the isovalue c in Eq. 2 and the initial decay rate d in Eq. 1 are fixed for all molecules.
Figure 8: Fitting results of our optimization algorithm. Left to right: Original surface (left column), Final surface (middle left column) and Original surface overlapped with Final surface (middle right column), the ellipsoid Gaussian RBFs in the sparse representation from our method (last column). From top to bottom: 1MAG (first row), FAS2 (second row), 3LOD (third row), 1BL8 (fourth row) and AChE (fifth row). The blue points represent the locations of Gaussian RBF centers.
Figure 9: Electrostatic potential on molecular surfaces, calculated with AFMPB. From left to right: ADP, 2FLY, 6BST, 2O3M and 2IJI.

4 Conclusion

In this paper, a sparse Gaussian molecular shape representation based on an ellipsoid RBF neural network is proposed for arbitrary molecules. The original Gaussian density map is approximated with the ellipsoid RBF neural network, and the sparsity of the network is obtained by solving an L_1 regularization optimization problem. Comparisons and experimental results indicate that our network needs far fewer ellipsoid RBF neurons to represent the original Gaussian density map.

Acknowledgements

This work was supported by the National Key Research and Development Program of China (grant 2016YFB0201304) and the China NSF (NSFC 11771435, NSFC 21573274).

References

  • Andrew and Gao (2007) Andrew, G., Gao, J., 2007. Scalable training of l1-regularized log-linear models. In: In ICML ’07.
  • Bai and Lu (2014) Bai, S., Lu, B., 2014. VCMM: A visual tool for continuum molecular modeling. J. Mol. Graph. Model. 50, 44–49.
  • Carr et al. (2001) Carr, J. C., Beatson, R. K., Cherrie, J. B., Mitchell, T. J., Fright, W. R., McCallum, B. C., Evans, T. R., 2001. Reconstruction and representation of 3d objects with radial basis functions. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’01. pp. 67–76.
  • Chen and Lu (2011) Chen, M., Lu, B., JAN 2011. TMSmesh: A robust method for molecular surface mesh generation using a trace technique. J. Chem. Theory Comput. 7 (1), 203–212.
  • Chen and Lu (2013) Chen, M., Lu, B., JUN 2013. Advances in biomolecular surface meshing and its applications to mathematical modeling. Chin. Sci. Bull. 58 (16), 1843–1849.
  • Chen et al. (2012) Chen, M., Tu, B., Lu, B., SEP 2012. Triangulated manifold meshing method preserving molecular surface topology. J. Mol. Graph. Model. 38, 411–418.
  • Chen et al. (1998) Chen, S., Donoho, D., Saunders, M., 1998. Atomic decomposition by basis pursuit. SIAM J. Sci. Computing. 20 (1), 33–61.
  • Cignoni et al. (1998) Cignoni, P., Rocchini, C., Scopigno, R., JUN 1998. Metro: Measuring error on simplified surfaces. Comput Graph Forum 17 (2), 167–174.
  • Connolly (1983) Connolly, M., 1983. Analytical molecular-surface calculation. J. Appl. Crystallogr 16 (OCT), 548–558.
  • Daubechies et al. (2004) Daubechies, I., Defrise, M., De Mol, C., NOV 2004. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57 (11), 1413–1457.
  • Dolinsky et al. (2004) Dolinsky, T., Nielsen, J., McCammon, J., Baker, N., JUL 1 2004. PDB2PQR: an automated pipeline for the setup of Poisson-Boltzmann electrostatics calculations. Nucleic Acids Res. 32 (2), W665–W667.
  • Duncan and Olson (1993) Duncan, B. S., Olson, A. J., FEB 1993. Shape-analysis of molecular-surfaces. Biopolymers 33 (2), 231–238.
  • Elad (2010) Elad, M., 2010. Sparse and Redundant representations: From theory to applications in signal and image processing, 1st Edition. Springer Publishing Company.
  • Gerstein et al. (2001) Gerstein, M., Richards, F. M., Chapman, M. S., Connolly, M. L., 2001. Protein surfaces and volumes: measurement and use. 531–545.
  • Girosi (1998)

    Girosi, F., AUG 15 1998. An equivalence between sparse approximation and support vector machines. Neural Computation 10 (6), 1455–1480.

  • Girshick (2015) Girshick, R., 2015. Fast R-CNN. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV). ICCV ’15. IEEE Computer Society, Washington, DC, USA, pp. 1440–1448.
  • Grant et al. (1996) Grant, J., Gallardo, M., Pickup, B., NOV 15 1996. A fast method of molecular shape comparison: A simple application of a Gaussian description of molecular shape. J. Comput. Chem. 17 (14), 1653–1666.
  • Grote and Huckle (1997) Grote, M., Huckle, T., MAY 1997. Parallel preconditioning with sparse approximate inverses. SIAM J. Sci. Computing. 18 (3), 838–853.
  • He et al. (2016)

    He, K., Zhang, X., Ren, S., Sun, J., jun 2016. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, Los Alamitos, CA, USA, pp. 770–778.

  • Kingma and Ba (2014) Kingma, D. P., Ba, J., Dec 2014. Adam: A Method for Stochastic Optimization. arXiv e-prints, arXiv:1412.6980.
  • Krähenbühl and Koltun (2012) Krähenbühl, P., Koltun, V., Oct 2012. Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. arXiv e-prints, arXiv:1210.5644.
  • Lee and Richards (1971) Lee, B., Richards, F. M., 1971. The interpretation of protein structures: Estimation of static accessibility. J. Mol. Biol. 55 (3), 379–400.
  • Li et al. (2016) Li, M., Chen, F., Wang, W., Tu, C., NOV 2016. Sparse RBF surface representations. Comput. Aided Geom. Des. 48, 49–59.
  • Liao et al. (2015) Liao, T., Xu, G., Zhang, Y. J., 2015. Atom simplification and quality T-mesh generation for multi-resolution biomolecular surfaces. In: Isogeometric Analysis and Applications 2014. Lecture Notes in Computational Science and Engineering. pp. 157–182.
  • Liu et al. (2015) Liu, T., Chen, M., Lu, B., MAY 2015. Parameterization for molecular Gaussian surface and a comparison study of surface mesh generation. J. Molecular Model. 21 (5), 113.
  • Liu et al. (2018) Liu, T., Chen, M., Lu, B., 2018. Efficient and qualified mesh generation for gaussian molecular surface using adaptive partition and piecewise polynomial approximation. SIAM J. Sci. Computing. 40 (2), B507–B527.
  • Lu et al. (2008) Lu, B. Z., Zhou, Y. C., Holst, M. J., McCammon, J. A., 2008. Recent progress in numerical methods for the Poisson-Boltzmann equation in biophysical applications. Commun. Comput. Phys. 3 (5), 973–1009.
  • Maatta et al. (2016) Maatta, J., Schmidt, D. F., Roos, T., JUN 2016. Subset Selection in Linear Regression using Sequentially Normalized Least Squares: Asymptotic Theory. Scand. J. Stat. 43 (2), 382–395.
  • McGann et al. (2003) McGann, M., Almond, H., Nicholls, A., Grant, J., Brown, F., JAN 2003. Gaussian docking functions. Biopolymers 68 (1), 76–90.
  • Peng et al. (2007) Peng, J., Li, K., Irwin, G. W., Jan 2007. A novel continuous forward algorithm for rbf neural modelling. IEEE Transactions on Automatic Control 52 (1), 117–122.
  • Perkins et al. (2003)

    Perkins, S., Lacker, K., Theiler, J., Mar. 2003. Grafting: Fast, incremental feature selection by gradient descent in function space. J. Mach. Learn. Res. 3, 1333–1356.

  • Qian et al. (2017) Qian, X., Huang, H., Chen, X., Huang, T., OCT 2017. Efficient construction of sparse radial basis function neural networks using L1-regularization. Neural Networks 94, 239–254.
  • Richards (1977) Richards, F., 1977. Areas, volumes, packing, and protein-structure. Ann. Rev. Biophys. Bioengineering 6, 151–176.
  • Rissanen (1978) Rissanen, J., 1978. Modeling by shortest data description. Automatica 14 (5), 465–471.
  • Ronneberger et al. (2015) Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer International Publishing, Cham, pp. 234–241.
  • Samozino et al. (2006) Samozino, M., Alexa, M., Alliez, P., Yvinec, M., 2006. Reconstruction with voronoi centered radial basis functions. In: Proceedings of the Fourth Eurographics Symposium on Geometry Processing. SGP ’06. pp. 51–60.
  • Schölkopf et al. (2007) Schölkopf, B., Platt, J., Hofmann, T., 2007. Efficient Structure Learning of Markov Networks using L1-Regularization. MITP.
  • Schwenker et al. (2001)

    Schwenker, F., Kestler, H., Palm, G., MAY 2001. Three learning phases for radial-basis-function networks. Neural Networks 14 (4-5), 439–458.

  • Sermanet et al. (2013) Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y., 11 2013. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv e-prints, arXiv:1312.6229.
  • Shen et al. (2013) Shen, C., Li, H., van den Hengel, A., DEC 2013. Fully corrective boosting with arbitrary loss and regularization. Neural Networks 48, 44–58.
  • Szegedy et al. (2016) Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A., 2 2016. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv e-prints, arXiv:1602.07261.
  • Taylor et al. (1979) Taylor, H., Banks, S., McCoy, J., 1979. Deconvolution with the L1 norm. Geophysics 44 (1), 39–52.
  • Vidaurre et al. (2010)

    Vidaurre, D., Bielza, C., Larranaga, P., Oct 2010. Learning an l1-regularized gaussian bayesian network in the equivalence class space. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 40 (5), 1231–1242.

  • Wang et al. (2019) Wang, J., Olsson, S., Wehmeyer, C., Perez, A., Charron, N. E., de Fabritiis, G., Noe, F., Clementi, C., MAY 22 2019. Machine Learning of Coarse-Grained Molecular Dynamics Force Fields. ACS Cent. Sci. 5 (5), 755–767.
  • Weiser et al. (1999) Weiser, J., Shenkin, P., Still, W., MAY 1999. Optimization of Gaussian surface calculations and extension to solvent-accessible surface areas. J. Comput. Chem. 20 (7), 688–703.
  • Xu et al. (2015) Xu, L., Wang, R., Zhang, J., Yang, Z., Deng, J., Chen, F., Liu, L., 2015. Survey on sparsity in geometric modeling and processing. Graph. Models 82 (C), 160–180.
  • Yu et al. (2006) Yu, Z., Jacobson, M. P., Friesner, R. A., 2006. What role do surfaces play in gb models? a new-generation of surface-generalized born model based on a novel gaussian surface for biomolecules. J. Comput. Chem. 27 (1), 72–89.
  • Zhang et al. (2015) Zhang, B., Peng, B., Huang, J., Pitsianis, N. P., Sun, X., Lu, B., MAY 2015. Parallel AFMPB solver with automatic surface meshing for calculation of molecular solvation free energy. Computer Physics Communications 190, 173–181.
  • Zhang et al. (2006) Zhang, Y., Xu, G., Bajaj, C., AUG 2006. Quality meshing of implicit solvation models of biomolecular structures. Comput. Aided Geom. Des. 23 (6), 510–530.